One of the dangers of expressing opinions on the internet for a living is that you sometimes express opinions that turn out to be totally incorrect. After I published a piece about the failure of home 3D printing on Monday, the American Conservative's Robert VerBruggen reminded me that I had a very different perspective on the topic four years ago:
@RAVerBruggen People in 1975 couldn't think of many programs they'd want to run on a personal computer.
— Timothy B. Lee (@binarybits) September 21, 2012
@RAVerBruggen I think you're underestimating hindsight bias. Those are obviously useful on today's powerful computers.
— Timothy B. Lee (@binarybits) September 21, 2012
Obviously my thinking on this topic has changed. And I think there's a broader lesson for how we think about technology as it becomes increasingly intertwined with the physical world.
As I say in these tweets, people underestimated the first PCs in the 1970s. They were so underpowered that you could hardly do anything useful with them. So lots of smart, sophisticated, thoughtful people dismissed them as overpriced toys. Then, as everyone now knows, PCs took over the world.
The same thing happened with the internet. In the 1980s it was hard to use and couldn't do very much. People mocked the idea that it could eventually support billion-dollar businesses. Then we got Amazon, Google, and Facebook, and people stopped laughing.
It happened again with mobile phones. People mocked the concept of using phones to check email or take photos. And then ... you get the idea.
A new generation of smart devices has been struggling
By 2010, these stories had become the default way technology pundits like me looked at the world. "New technologies always look overly complex and underpowered at the outset," we'd say. "But they don't stay that way."
But in this decade, we've been seeing more and more examples where the PC analogy doesn't seem to be working.
When Google Glass was introduced in 2012, supporters saw it as the next great computing platform. But normal people weren't actually that enthusiastic about having computers on their faces, and after several years of mockery, Google has put Glass on the back burner.
In 2014, Google spent $3.2 billion to acquire the smart thermostat company Nest. Its CEO, Tony Fadell, quit last week after struggling to expand to other "smart home" products. Other companies have rolled out "internet of things" products like smart lightbulbs and wifi-connected slow cookers, but consumers haven't seemed very interested.
Though iRobot has had modest success with its Roomba robotic vacuum cleaners, it has struggled to produce follow-on products, and the Roomba remains a niche product 14 years after its introduction.
Home 3D printing was introduced with great fanfare in 2012. But so far there's been no sign that consumers want 3D printers in their homes. Instead, 3D printer companies have pivoted to selling their wares to commercial customers.
In each of these cases, optimists a few years ago argued that we needed to give the products more time to mature. They often drew explicit parallels (as I did with 3D printers) to the early days of the PC.
But there is a big difference between these products and the famous examples of the PC and the internet.
There's no Moore's law for the physical world
People underestimated early PCs because they were drastically inferior to their successors. A modern PC isn't two, 10, or 100 times better than an Apple II circa 1977 — it has 100,000 times more computing power.
As a consequence it can do a lot of things — like editing large graphic and video files, playing sophisticated video games, and rendering complex webpages — that would have been far beyond the capabilities of the first PCs. The first PCs were slow, expensive, and bulky, but people just had to wait a few years for Moore's law to produce computer chips that were faster, smaller, and more affordable.
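The scale of that improvement is easy to sanity-check: compounding at roughly Moore's-law rates over the four decades since the Apple II does get you into six-figure multiples. Here's a quick back-of-the-envelope sketch (the 100,000× figure is from the comparison above; the dates are approximate):

```python
import math

# Back-of-the-envelope check on the 100,000x figure.
# Approximate dates: the Apple II shipped in 1977; this piece dates to 2016.
years = 2016 - 1977                      # ~39 years
speedup = 100_000                        # claimed improvement in computing power

doublings = math.log2(speedup)           # ~16.6 doublings needed
years_per_doubling = years / doublings   # ~2.3 years per doubling

print(f"{doublings:.1f} doublings, one every {years_per_doubling:.1f} years")
```

A doubling every ~2.3 years is right in line with the classic Moore's-law cadence of 18 to 24 months, which is why the gap between an Apple II and a modern PC is measured in factors of 100,000 rather than 100.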
But not all problems can be solved with more computing power. If an errant cat toy jams your Roomba, no algorithm is going to get it unstuck. More sophisticated software won't necessarily make people interested in having computers on their faces. A faster computer chip isn't going to bring down the cost of the fairly expensive plastic used by most entry-level 3D printers — nor will it make consumers interested in having a lot of plastic junk lying around the house.
So when trying to predict whether a new digital product will get better over time, it's helpful to ask whether its big problems stem from a lack of computing power or from something else. For example, I'm bullish about self-driving cars because the challenges there are mostly software-related. It seems likely that collecting enough data and throwing enough computing power at it will eventually produce cars that drive themselves more safely than human beings do.
But a lot of other futuristic gadgets are being held back by physical complexity, consumer inconvenience, or a simple lack of value for consumers. These are not problems that more computing power can fix.