Every time a major Internet-connected product is released, we keep coming back to the debate over security vs. convenience. The course of arguments goes something like this:
- One group expresses outrage/skepticism/ridicule about how this product doesn’t need to be connected to the Internet;
- Another group argues how the benefits outweigh the risks and/or how the risks are overblown;
- There are news stories on both sides of the issue, and the debate soon dies down as people move on to the next thing; and
- Most users are left wondering what to believe.
As a security researcher, I often wonder whether the conveniences offered by these Internet-connected devices are worth the potential security risks. To meaningfully understand the nuances of this ecosystem, I have consciously made these devices a part of my daily life over the past year. One thing immediately stood out to me: there seems to be no proper resource to help users understand the ramifications of the risk/reward tradeoffs around these commonly used “personal” Internet-connected devices, which makes it difficult for users to have any sort of effective understanding of their risks. I pointed out the same in a recent CNN Tech article about Amazon Key, where I also said:
A simple rule of thumb here could be to visualize the best case, normal case, and worst case scenarios, see how each of those impacts you, and take a call on whether you are equipped to deal with the fallout, and whether the tradeoffs are worth the convenience.
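To make that rule of thumb concrete, here is a minimal sketch of it as a decision procedure. Everything here is illustrative: the `Scenario` type, the 0–10 impact scale, and the smart-lock numbers are all invented for the example, not part of any real methodology.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str                  # "best", "normal", or "worst"
    impact: int                # subjective impact on you, 0 (none) to 10 (severe)
    can_handle_fallout: bool   # could you realistically deal with this outcome?

def worth_the_convenience(scenarios: list[Scenario], convenience: int) -> bool:
    """Adopt the device only if every scenario's fallout is manageable
    and the perceived convenience at least matches the worst-case impact."""
    if not all(s.can_handle_fallout for s in scenarios):
        return False
    worst = max(s.impact for s in scenarios)
    return convenience >= worst

# Example: a hypothetical smart lock, with made-up numbers.
smart_lock = [
    Scenario("best", impact=0, can_handle_fallout=True),
    Scenario("normal", impact=2, can_handle_fallout=True),
    Scenario("worst", impact=9, can_handle_fallout=False),  # lock remotely compromised
]
print(worth_the_convenience(smart_lock, convenience=6))  # False
```

The point of the sketch is not the arithmetic but the structure: the hard part, as the rest of this post argues, is filling in those impact numbers meaningfully in the first place.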
Without knowing a user’s specific needs, this is probably as close as it gets to any sort of “useful advice” any security professional could give. But this is still only a semi-useful platitude, since it doesn’t answer a very important question:
How can users meaningfully determine what the best case, normal case, and worst case scenarios are, without truly understanding the ramifications of the security/convenience tradeoffs they make?
It turns out that we need to answer a few other questions before we can even get to this seemingly obvious question. And these other questions are mostly not quite obvious themselves. So until we figure out what these other questions are and what their answers could be, I’m afraid the best any security professional can do is give semi-useful platitudes like the one I gave.
Well, semi-useful platitudes suck. But this is also a broad and difficult question. Given its scope and complexity, I’ll address the question in three parts: in the first part, I define what exactly we are trying to solve for, and how Personal Threat Models are pertinent to the solution. In the second part, I show how Personal Threat Models currently work, why they are insufficient to solve the (now clearly defined) problem, and what needs to change. In the third part, I discuss how we could rethink the approach toward Personal Threat Models so that we can perhaps offer something more than semi-useful platitudes.
IoT Risk and an everlasting debate
Irrespective of how they are marketed, smart devices like Amazon Echo, Amazon Key, Google Home, etc. are “Lifestyle Products” aimed at improving convenience; how essential these products are depends on how meaningfully they integrate into one’s lifestyle.
Hence, whether it is “worth” compromising some security/privacy to reap the conveniences offered by these products is a very personal and subjective decision. In some cases there is genuine improvement to one’s quality of life (e.g. voice assistants are quite useful for people with certain disabilities, and for many people in this context the convenience outweighs the privacy concerns), but in other cases, these Internet-connected products just add to the number of avenues that could be used to compromise one’s security (these “avenues” are formally called attack vectors).
So how do we decide which products are “safe?” In other words, what is “acceptable risk” in the tradeoff between security and convenience? Also, “safe,” “trust,” “risk,” etc. mean different things to different people. How do we even define/formalize these terms?
Clearly, there are no “right” (or standard) definitions here, but until we decide what these terms should mean in this context, we will keep coming back to the same debate every time a new Internet-connected product is released.
Further, the Internet of Things (IoT) ecosystem includes a broad variety of devices and device systems such as power plants, vehicles, home appliances, etc. Risk assessment in the IoT ecosystem is fairly difficult due to, among other things, the non-homogeneity of the underlying platforms (giving rise to ecosystem-specific challenges w.r.t. data management, authentication/authorization protocols, etc.).
Given this scenario, there is little value in defining/adopting the same terminology and risk assessment metrics for… say, an Internet-connected speaker for domestic use, and a wireless sensor for crop monitoring. In other words, although there is the unifying theme of all these IoT devices being connected to the Internet, threats associated with “Internet Connected Lifestyle Products” need to be visualized differently.
Further, given the fragmented nature of this Internet Connected Lifestyle Products ecosystem (different types of users, lifestyles, requirements, hardware, protocols, data storage, etc.), there is no objective, universal way to definitively determine what level of risk is “acceptable” except to examine each case where security would be compromised for convenience and determine what tradeoffs would be acceptable for each user in each of these cases. At best, we could group similar cases and give some general best practices, but this is not nearly enough given how some of these devices can catastrophically compromise one’s security (often due to suboptimal/erroneous risk assessment).
Thus, in the context of security vs. convenience, a lot boils down to one’s personal definitions of “safe” and “trust,” and then one’s Personal Threat Model (and consequently, risk assessment) resulting from these definitions. Unless we define the scope clearly, come up with a meaningful way to formalize some of these terms, address the underlying assumptions, and assess/quantify risk in a way that makes sense in this specific ecosystem, we will keep having variants of the same debate.
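To see why the Personal Threat Model matters so much here, consider a deliberately toy sketch in which the same attack vectors score differently for different users. This is not a real risk methodology; the vectors, likelihoods, impacts, and per-user weights are all made up purely to show how personal priorities change the outcome.

```python
# Toy scoring: risk = likelihood x impact, weighted by what a given
# user's Personal Threat Model says they actually care about.
ATTACK_VECTORS = {
    "voice assistant: recordings leaked": {"likelihood": 0.3, "impact": 4},
    "smart lock: remote unlock abused":   {"likelihood": 0.1, "impact": 9},
}

# Hypothetical per-user weights expressing personal priorities.
THREAT_MODELS = {
    "privacy-sensitive user": {
        "voice assistant: recordings leaked": 2.0,
        "smart lock: remote unlock abused": 1.0,
    },
    "physical-safety-sensitive user": {
        "voice assistant: recordings leaked": 0.5,
        "smart lock: remote unlock abused": 2.0,
    },
}

def personal_risk(user: str) -> dict[str, float]:
    """Score each attack vector under one user's Personal Threat Model."""
    weights = THREAT_MODELS[user]
    return {
        vector: round(v["likelihood"] * v["impact"] * weights[vector], 2)
        for vector, v in ATTACK_VECTORS.items()
    }

for user in THREAT_MODELS:
    print(user, personal_risk(user))
```

Under these made-up numbers, the voice assistant is the bigger risk for the privacy-sensitive user while the smart lock dominates for the safety-sensitive one, even though the underlying vectors are identical; that divergence is exactly what a generic, one-size-fits-all risk assessment cannot capture.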
Further, even if we assessed the potential attack vectors (and risk) associated with whatever Internet-connected device is the flavor of the week, the benefits of doing so might not matter if there is no meaningful way to assess those risks within the scope of the user’s Personal Threat Model.