If you build it they will come — but they won’t necessarily set up two-factor authentication.
That’s because hacks and other online invasions are almost always framed as avoidable. Users are lectured that they should have chosen long, unique passwords and used a password manager; that they should have set up two-factor authentication; that they should never use public Wi-Fi. In other words, they got burned because they didn’t do what they were told.
It shouldn’t be this way.
“The problem with all of these kinds of solutions is they put the onus of security responsibility on the user,” Marc Rogers, VP of cybersecurity at access management company Okta, told Recode. “And the user is the least equipped person to do anything about that. They don’t understand the risks well, and they don’t want the complexity.”
Take Amazon Ring, a video security device consumers are increasingly using to give themselves a sense of security and peace of mind in their homes. Ironically, these devices have left people feeling less secure after a spate of high-profile hacks in late 2019, in which strangers commandeered Ring cameras to surveil and harass people in their own homes. In one instance, a strange man talked to and terrified an 8-year-old girl in her own bedroom, where her parents had placed a Ring camera as a communication and security measure.
In response, Ring said it hadn’t done anything wrong and blamed the hacks on customers. It said the hacked customers had made their devices vulnerable by reusing old, compromised passwords. Some of these users have disputed that claim, but either way, the point is clear: The tech company faulted its customers, rather than acknowledging its own role in the situation.
Months after news of these hacks went public, Ring introduced a number of standard security measures for users, like default two-factor authentication — a feature that requires that users provide a second piece of information, like a code from their phone, before they can access an account — and a dashboard through which they can monitor who else might be accessing their video feeds. Ring had stopped short of mandating two-factor for existing users, saying that doing so could cause mass logouts, but after sustained pressure, including an article I published last year calling for this change, Ring finally made two-factor a requirement for all users last week.
But the fact remains that the company first sold insecure devices with inadequate safety protocols to an untold number of consumers.
Tech companies tend to put the onus of security on users in part because they want as many people as possible to use their devices, and they see extra security measures as friction that might turn those users away. It’s also no coincidence that good security practices, like any other layer of oversight, cost these companies time and money to develop.
“At Ring, our top priority is the safety and security of our customers. We understand that Ring users put their trust in our products, and we strive to maintain that trust so our customers can feel confident that their homes and personal information are safe with Ring,” Ring said in a statement to Recode. “We reinforced that commitment with the addition of mandatory two-step verification for all users, and we will continue to add additional features related to user privacy and account security while maintaining the convenience and ease-of-use our customers have come to expect.”
Security and ease of use are often positioned as being diametrically opposed, with one coming at the expense of the other. They don’t have to be. Reconciling them will require a lot of effort, and no tech company will get everything right. There will also be some trade-offs between ease of use and security. But none of that should prevent tech companies from aiming for a reasonable balance and meeting basic standards.
“We have to convince all the big companies that it is not the user’s responsibility to make their stuff secure,” Rogers said. “Security should be seen but not heard. It should be something that’s simple. It shouldn’t get in the way. But it should be there when it’s needed, shouldn’t force the users to do complicated things or memorize huge strings of numbers that they’re just going to write down.”
“I don’t think you have to completely trade off one for another,” Jen King, director of consumer privacy at the Center for Internet and Society at Stanford Law School, told Recode. “And I think that people who are still making that argument are kind of in a mindset of 10-plus years ago.”
Rather, she says, it’s a design issue.
“There is a lot of work that’s been done in this area, both in the academic field, followed by corporate leaders in this space like Apple, to really try to understand human limitations and how we design products to minimize or anticipate those limitations so that people don’t have to work as hard,” King said. Instituting these best practices requires investments in people who do user experience research, which considers “how people think, [and] what their priorities and incentives are” in order to develop products and features that will ensure their security.
It also requires looking at how others have solved these issues.
“Certainly, there’s no excuse not to look around at your competition and see what other people are doing,” King said.
What hardware and software companies need to do to make us all more secure
Ultimately, it’s every tech company’s responsibility to make sure their products are secure in a way that’s accessible to regular users.
Apple’s Face ID and Touch ID, which let you unlock your iPhone with tech that recognizes your face or your fingerprint, are a move in the right direction. The process is faster and often easier than entering a passcode, all while maintaining security.
“When Touch ID came out, I think it was something like less than one in five people even had a PIN code on their iPhone. And the reason why is they found it inconvenient,” Rogers said. “Apple brought out Touch ID. And that went up to like 80 or 90 percent of people had security on their phone. It wasn’t because they suddenly woke up and decided they needed security, it was because security suddenly met their lifestyle — it became convenient.”
Other companies have devised creative security solutions of their own. Google offers a version of two-factor authentication in which an approval prompt simply pops up on your phone when a nearby device requests access, which is much easier than retrieving a text code or opening an authenticator app. The different methods of two-factor authentication vary in their relative security — dynamically generated codes in an app have historically been more secure than codes sent via text, for example — but all of them are better than no two-factor at all.
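Those app-generated codes typically follow the TOTP standard (RFC 6238): the authenticator app and the server share a secret, and each independently derives a short code from the current time, so nothing secret travels over the network at login. A minimal sketch of that derivation in Python, using only the standard library (the secret below is the RFC’s published test value, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6, now=None) -> str:
    """Derive a time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32.upper())
    # The moving factor is the number of 30-second intervals since the epoch.
    counter = int((time.time() if now is None else now) // interval)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation: the last nibble picks a 4-byte window.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: at t=59s the 6-digit SHA-1 code is "287082".
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))  # -> 287082
```

Because both sides compute the code locally from a shared secret and the clock, an attacker intercepting a single code has only seconds to use it — one reason app-based codes have held up better than SMS delivery.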
At the very least, big tech companies should institute some basic best practices.
These include suggesting or requiring stronger passwords, as well as shipping each device with its own unique default password. Device and software makers should make sure their default settings — which are what most people end up using — are the most secure options they have, rather than reserving security for the privacy savvy. They should also mandate two-factor authentication, although that can be trickier for the less tech-savvy among us. In fact, they should take that as a challenge and explore inventing an easier alternative. Rogers suggests that using biometrics — like a thumbprint or face scan — to prove who you are can be both secure and easy to use.
This isn’t to say keeping up with security challenges is easy.
The security issues companies have to contend with are getting more difficult as hackers become more savvy and as we desire our apps and devices to become more connected with one another. We like the convenience of effortlessly sharing a photo from our phones to a social network; we expect to seamlessly upload our contact lists to new accounts. We just want to be in control of the process.
“In the old days, if you wanted to compromise a phone, you would have to break into the phone,” Rogers said. “Now, the application you target has all these permissions. And every permission that an app has is something that can be exploited.”
To combat these added difficulties, he suggests looking for software and device makers that engage in a concept called “zero trust,” a model that assumes you can’t trust anyone, even people within your own company. It continually verifies that an app or device or person connecting to your account actually should have access. More and more companies are testing this model, including Google, the pharmaceutical company Allergan, and Okta, though it’s far from mainstream.
“We should automatically assume that any connection that we see coming from the internet into a phone or from an app to another app or an app to data could be untrustworthy, and then take every step we can do to dynamically assess it, and treat it as untrusted until we can prove that it is trusted,” Rogers said. “We start off protecting things from that kind of model and then you’re going to have a much more strong system.”
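In code, that “untrusted until proven trusted” posture comes down to a default: deny every request, and grant access only when each signal — identity, device, and permission — checks out. A hypothetical sketch of that logic (the names and checks here are illustrative, not any vendor’s actual API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    user_verified: bool        # identity proven this session (e.g. via MFA)
    device_known: bool         # device is registered and its posture is healthy
    granted_scopes: frozenset  # permissions carried by the request's token
    needed_scope: str          # permission this specific action requires

def allow(req: AccessRequest) -> bool:
    """Zero-trust check: deny by default; grant only if every signal passes,
    regardless of where on the network the request came from."""
    return (
        req.user_verified
        and req.device_known
        and req.needed_scope in req.granted_scopes
    )

# A verified user on a known device, with the right scope, gets in;
# the same user on an unregistered device does not.
print(allow(AccessRequest(True, True, frozenset({"read"}), "read")))   # True
print(allow(AccessRequest(True, False, frozenset({"read"}), "read")))  # False
```

The design choice is that trust is never inherited from the network: each connection is re-evaluated on its own evidence, which is what distinguishes zero trust from the old castle-and-moat perimeter model.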
While there have been numerous government attempts to regulate basic digital security practices, none have gotten off the ground. The Federal Trade Commission can fine companies for egregious data breaches and issue reports, creating a rough outline of guidelines, but those efforts fall short of binding rules requiring companies to meet such standards. So for now, in the absence of regulation mandating security and privacy best practices, consumers must rely on tech companies to act in our best interests — but as we’ve seen so far, those companies tend to hurl blame at us first.
Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.