According to a witty saying usually attributed to Albert Einstein, “Everything should be made as simple as possible, but not simpler.”
And when engineers try to solve a problem, says MIT’s Joi Ito, they often veer over that line.
“Tech people tend to want to just solve the problem,” Ito said on the latest episode of Recode Decode with Kara Swisher. “But the problem with the problem is it’s not like previous problems, where you just solve it. You actually have to keep asking the question, ‘Is this even the right question?’ And this is why it’s much more difficult.”
Ito has presided over MIT’s famous “antidisciplinary” research laboratory, the Media Lab, for the last eight years. Originally founded in an era of techno-utopianism, the MIT Media Lab now finds itself responding to the ways technology and engineering culture are accelerating and creating problems like climate change, inequality, and obesity.
“You know that little girl in The Exorcist? That’s what the internet feels like to me,” Ito said. “You have this little girl and you think she’s going to become this wonderful kid and then she gets possessed and starts becoming this demon. And we have to exorcize her and we have to kind of bring her back.”
Although some in the tech world may believe we’re living in a simulation, Ito told Swisher, that’s only because they’re looking at the world as an equation to be optimized and not the messy place that it is.
“A lot of decisions that we make about how we feel about civil rights or gay marriage, whatever it is, those aren’t decisions that are optimizations,” he said. “Those are these cultural transformations that occur in society. And I think what’s missing right now in the conversation in AI is that that’s kind of the point of life, in my own personal opinion.”
Below, we’ve shared a lightly edited full transcript of Kara’s conversation with Joi.
Kara Swisher: Hi, I’m Kara Swisher, editor-at-large at Recode. You may know me as the author of an existential play narrated by the 20th letter of the alphabet, it’s called “Am I T?” But in my spare time I talk tech, and you’re listening to Recode Decode from the Vox Media Podcast Network.
Today in the red chair is one of my favorite people I’ve known for a long time, Joi Ito. He’s the director of the MIT Media Lab, which is separate from MIT itself, which is a university. He’s also an activist, entrepreneur, venture capitalist, and the editor of an upcoming essay collection called Resisting Reduction: Designing Our Complex Future With Machines.
Joi, welcome to Recode Decode.
Joichi Ito: Can I amend your introduction a little bit?
Because it is part of MIT.
Yes, it is. Yes, it is. But it operates in its own, the Media Lab...
It’s a peculiar part of MIT.
It’s a peculiar part of MIT. And that’s what I meant to say. So why don’t you explain that? Explain what the MIT Media Lab is and how ... I want to talk about your getting to it, how you got to it, but explain what it is so people can get ...
So the Media Lab is a weird part of MIT because normally you have labs, which do the research, and you have academic programs, which do the degrees and the tenure and things. They're usually in tension, like church and state. And the Media Lab is both an academic program, the program in media arts and sciences, as well as a lab. It was created over 30 years ago by the then-president of MIT, Jerry Wiesner, Nicholas Negroponte, and a few founding faculty. Jerry Wiesner sort of finally called it the "Department of None of the Above." Seymour Papert, who was an educator, was one of the founders, and he had this idea called constructionism, which is learning through doing. So we often say learning through construction rather than instruction.
The reason we were able to get the lab and the academic program together is instead of learning in classes we learn through doing projects. So the projects are funded by a consortium of over 80 companies.
And it’s a very tight link between businesses, and making things that can be made, essentially, that can be used and deployed.
That's right. All of the students have to build the thing they're talking about. And because it's so interdisciplinary (we use the term antidisciplinary), the building of the thing is the way you sort of explain what it is you're doing, because it's hard to explain it when you have three different disciplines trying to work together. And then also, MIT was traditionally more of a government-funded institution and the Media Lab kind of pioneered this consortium model of corporate funding. We're corporate funded, but the other part about the Media Lab is we don't do what they tell us to do. We're sort of the ...
The company is the funder...
Yeah, we’re kind of a hedge against being wrong, so we often discover things that ... We do things that they wouldn’t otherwise do.
And you’re there to do ... The reason they invest in you is because they want to be on the cutting edge of whatever’s next, right? With kind of crazy wacky ideas, versus stuff they don’t want to do for tomorrow. Maybe it’s 20 years out.
And sometimes we tell them things they don’t want to hear. In the early days of high definition in Japan, the Media Lab said, “You’re doing it all wrong. It’s going to be progressive digital. It’s not going to be this analog thing.” And they were very unhappy, but we turned out to be right.
These days, I think a lot of times it's not even about technology. We have Hollywood companies that are members, but when they did the SOPA-PIPA bills, which were these anti-piracy copyright bills, we formally went against it. These days I think more and more ... I mean, I think the Media Lab was very techno-utopian when it first started but I think there's a huge chunk of the Media Lab that's critical of technology these days. So we're often criticizing the companies that support us.
All right, so explain your journey, because I think that’s how I would explain you. You’ve been very tech forward — and at the same time you’ve always been someone very early who’s been talking about the downsides and upsides. I remember that from when we met 20 years ago. I think almost 20 years ago. Because Louie was ... You met me after I had my baby, so ... I always time things based on him. But talk a little bit about your journey of where you started and how you got to this job.
I joined a little over eight years ago. And this is when they were doing a search. Initially Nicholas Negroponte, who was the founding director, and actually Megan, reached out to me, and they were doing a search ...
Megan Smith. This is my ex-wife.
Megan Smith, yes. Yeah. And I was on a bus and she said, “Hey, would you want to be director of the Media Lab?” I said, “Yes,” but then they looked at my resume which says that I didn’t even get an undergraduate degree and they said, “Well, this probably isn’t going to work.” So they went through a whole process for months, and I guess they went through lots of candidates, but they didn’t find one that worked. So they scraped the bottom of the barrel and picked up my resume and said, “Come in for some interviews.”
Let’s talk about your bottom-of-the-barrel resume, then. Talk about your journey of where you went to as an entrepreneur.
I went to college twice. I went to Tufts and I went to University of Chicago. I was interested in computer networking.
You grew up in ...
I was born in Japan, grew up in Michigan, went back to Japan for junior high, and then came to the United States for college. This is sort of early ’80s, so this is when computer networking was just starting, when the internet just started. I was interested in communities and in music and in networks. And there just weren’t programs like that.
And the other thing that was interesting is, this is the early days of the internet, so if you went online and chatted with a sysadmin and you said you were some kid from Japan, they'd give you access to everything. And if you emailed professors they would reply. So I found that I could meet just about anybody I wanted to meet on the internet and I could sort of kluge together my own education.
In fact there weren’t very many classes about online communities and things like that. So I dropped out of college and I just started doing startups, internet startups. I started doing a lot of internet governance, too. So I did things like ICANN and Creative Commons. So I was really interested in the sort of nonprofit structure of how the internet and communities came together.
And why? Why was that?
I've always been interested in parties and communities and just subcultures. When I dropped out of University of Chicago, I became a disc jockey in a nightclub. And the reason I dropped out was I realized that there was kind of a monoculture at the university, at least in the department that I was in. And then when I was going to the nightclubs, this is early '80s when AIDS was kind of rampant, and at the nightclub, you had skinheads and drug dealers and drag queens, and everybody was coming together around this crisis. I realized that this working-class community was much more loving, and much more sophisticated in the way the community dynamics were resilient, than what I was learning in school.
I felt like I needed to understand how communities worked in order to understand things. And then I realized that a lot of the stuff that was going on in these communities ... And also just being a DJ. You change the music and you change the way the community behaves. So I was trying to map my experiences in the nightclub onto what was going to happen online. And then I saw things ... I was still pretty optimistic at the time, but I’ve always been kind of trying to figure out that intersection.
And when it got online, you thought that was even more people that you could reach out to and communicate with.
Yeah. And in fact, when I was in high school in Tokyo and I was 16, people didn't know I was a kid. I remember when I was I think 15, I went to — this is before the internet, but on The Source and these online services — I went to a wedding and we had people who had met for the first time showing up. And the best man had met for the first time. So I thought in the early days of online communities that this was going to change the world and if we could just connect everybody we'd have world peace or something like that.
Utopian, the techno-utopian idea. I remember when I met you, you had all kinds of ideas. You were the first person who really did talk about this idea of online communities. You introduced me, I remember when I was in Japan, to a company called ImaHima, if you remember.
Which was “I am here,” right? That’s what it means?
No, “I am free.” Or, “I have free time now.”
I remember it blowing my mind. I remember thinking there were two. There was Six Apart and ImaHima, and one of them was "I am here": it would locate people who you could hang out with. Right?
Yeah. And it was also for jobs. So it was like the first kind of gig economy site that was around.
Yeah. It had a lot of concepts of “I am here, I want to have a beer in this part of Tokyo, do you want to meet me?” And it wasn’t just friends. It was just anybody.
It was suspending disbelief, that you could meet anybody just anywhere. There was a lot of trust to it. And then Six Apart was that you can be linked to anybody, which was really interesting. It was in Cambridge. Amazon ended up buying it, which was interesting. I think it was Amazon. Or one of them. Or was it Mark Pincus? Or Reid Hoffman?
Six Apart — well, Reid and I invested, and Six Apart was in San Francisco, but I can’t even remember the name of the company that bought it!
But anyway, these were two massive concepts of what was about to come. They were precursors to Facebook, to Tinder, to everything that was going to happen. But this was the idea of community and connection and using computers to do so. And linking people by their groups and stuff like that. So that was the concept.
In your early days, you were an investor, right? You went around ... You were kind of a gadfly in a lot of ways. You wrote a lot?
In the early days, I was actually an entrepreneur, so I helped set up PSINet in Japan, and I did Infoseek in Japan.
This was an early internet service provider.
Yeah. And when blogging came out was around when I started pivoting a little bit towards investing, so I invested in Flickr, and later Twitter. I was mostly interested in social media. I unfortunately passed on YouTube and didn’t get into Facebook.
It was interesting because when blogging started, if you remember, a lot of it was that you had all these unemployed smart engineers who had a lot to say. And they just wrote their own software. Even the founders of Six Apart had been laid off and they said, “Well, we’re going to do our own thing.” Right after the bubble, so it’s like early 2000s, I had raised a fund and I started investing in Silicon Valley. So the gadfly part of it was that there really weren’t ...
I didn’t mean that in an insulting way. You were an idea person more than most investors.
But also, it wasn’t very crowded. All the carpetbaggers were gone because they thought the party was over. And there were very few investors. Ev was there, a bunch of us were hanging out. You were there. And we just started blogging. I remember when I ... It was the early days of Google, too. So bloggers got the best search results. I remember writing a silly blog post saying, “Is Diet Coke Bad For You?” And when you searched for “Diet Coke,” it was the first result. Or when I quit drinking for a while, I wrote a blog post that said, “I quit drinking,” and when you search for “quit drinking,” it was the first result. And what was funny was that in the comments, people started sharing all their things. The comment section of my blog became a support group for people who needed to quit drinking, really.
And then one of my friends created a website called We Quit Drinking, which became sort of an online AA, and it was this whole community of people who were supporting each other, sort of pseudonymously sharing journals. And then the FAQ, it says, “The origin of this site was this guy who quit drinking and wrote about it. We all found each other.” And so I, still to this day, although now with this podcast I’m out again, I now occasionally drink, but I don’t blog about it because I don’t want to sort of shatter this genesis story.
You’re like the founder of bitcoin. We found you again! We’re very disappointed.
But you were investing, you were doing all kinds of things, and again, were prominently talking about the concepts of what was happening. How did you look at the internet then? Because I want to sort of talk a little bit about how it is now, and then what you’re doing, obviously, at the Media Lab.
I was concerned about certain things. Like I was very concerned about privacy. You’ll be proud of me. I won the lifetime achievement award yesterday from the Electronic Privacy Information Center.
I’m getting to that age when I’m getting lifetime achievement awards. So I was concerned about things like privacy, but I was generally optimistic. I remember in 2000 and ... I’m trying to remember the year now, 2004 or so. A bunch of us who were just talking on the blogs thought about this idea. We called it emergent democracy. So I wrote this paper together with a bunch of other people called “Emergent Democracy.”
And this was sort of before Arab Spring. We had this idea that instead of just mainstream media and polling, we could have conversations online, and that we could talk directly to government and that there would be a future public sphere and that democracy was going to change. We thought it was going to change for the better.
When Arab Spring started, we said here it comes. I remember going to Tunisia like a week after the revolution, kind of as CEO of Creative Commons at the time, just sort of celebrating social media's rise. So those were the good times.
Right. Right. So you were the head of Creative Commons, which ... explain to people what that is.
Yeah, so Creative Commons was created by Lawrence Lessig and a bunch of other really smart people before I joined. It was this idea that with the internet, you had the ability to share content and build on the work of others, but because of a lot of the copyright restrictions that big media companies in Hollywood were putting on, even though people wanted to share, the technology was preventing it. So the idea of Creative Commons was to mark your work with the permissions you wanted to grant. So you can use this as much as you want as long as you provide attribution. And to build that into the code so that people could share freely.
Creative Commons is now more widely used on Flickr and all these places you can tag your work. So this is an organization that coordinated an international effort to create the licenses.
So you’re doing that, and then doing investing, entrepreneur. How did you decide you wanted to run MIT? This was ...
Well, I’m kind of ...
What were you doing previous just before it? You were writing ...
I had a place in Dubai. I was still investing. I was running around the world investing. I was going around the world literally twice.
You had a place in Dubai?
Yeah, yeah. An apartment in Dubai. So this is a good example. I went to Bahrain and I was so confused. I didn’t understand anything. I said, “I have to understand this part of the world.” So I got a place and just started hanging out. So for me, and this is similar to the Media Lab, my sister’s an academic and until recently when I said something was “academic” it was meant as a pejorative. And so when they asked if I was interested, my answer to just about anything like that is, “Sure, I’m interested.” So I went to visit. And when I visited I realized this is really different from what I thought of as academia. It was through that first meeting that I realized that this was possibly a really interesting place for me to hang out.
Mm-hmm. So you've been there eight years. We're going to talk a little bit about what's going on at the Media Lab in the next section. When you joined ... What is the difference between when you joined and now? How do you look at that?
I don't want to take responsibility for the changes, really. I think that the Media Lab had wanted to do a lot of things and I have facilitated them. I think one of them is really that it was created in an explosion of techno-utopianism, and the students and a lot of the faculty, not all of them, have wanted the Media Lab to become more reflective and to think about social consequences and bring the social sciences in, things like that. So that's one.
I’ve grown in ... so a lot of it was just blocking and tackling some of the issues around accounting and dealing with members. It’s doubled in size and things like that. But we also — and this wasn’t me. I mean, the previous director had started to go into life sciences, but I’d say a pretty healthy chunk, maybe 30 or more percent — or more than that, maybe 40 percent — of the projects have something to do with life sciences. We’ve launched a space initiative. We’ve created a digital currency initiative. We’ve created a bunch of different things like that.
And we, I think, probably integrated it more into MIT than it was before. I think it’s also personality because Nicholas came from MIT so he sort of made the identity of the Media Lab “not MIT.” Whereas I came from the outside and I saw all these really wonderful people and resources inside of MIT, so I’ve been sort of pushing the Media Lab back closer to MIT, I think.
Give us a little overview of what's going on at the Media Lab, because media has always been ... you know, I went there, again, 20 years ago, and the stuff they were talking about has now been commercialized. Everyone's a billionaire from running it and everything else.
But some of the ideas didn’t happen. A lot of them ... I remember haptic touch was one that everyone thought was going to be big. Last time I was there there was a lot of self-driving cars. There was an accordion fold-up car. Like, there were robots that self-care things. It’s, sort of, an explosion of just very creative ideas and what the professors and students want to do. Explain what’s going on there now. What do you think is the most exciting stuff?
Yeah, I think first, I just ... I’ll say that the key for the Media Lab is to always try to do the stuff that other people wouldn’t do.
And, if other people start doing it, we stop it.
Stop doing it. Right.
So, we were really into wearables in the '80s and we're [doing] less and less of it. And so, when we do a search, we're always looking for, often, things that we haven't yet thought of. The really interesting stuff that I think we're going to do is stuff that we haven't really thought of yet.
So, let's see. In the life sciences, for instance, there's Kevin Esvelt, who's one of the inventors of CRISPR gene drive, which is the technology that allows an edited gene to be inherited by all of the offspring, so that you can use it for doing things like fighting Zika or trying to create mice that are resistant to ticks, or things like that. But it also has a lot of risks because you're, you know, editing the genes of whole populations. What could go wrong?
And so, a lot of his efforts have been in trying to figure out how to do community consent, building safety systems and things like that. And that's something, for instance, that couldn't really happen in a traditional genetics lab because a lot of it is social. He joined after I joined. We're also doing, you know, conformable decoders and implantables. So instead of wearables, these are things that power themselves by the beating of the heart or ...
Explain what that would do?
These are devices that go inside your body.
Implant? What is it? Implantable decoders?
Well, the name of her group, Canan's group, is conformable decoders.
Yes, so these are things like a pacemaker powered by something that goes around your heart, or something ... your brain is actually really hard to implant things in. I mean, we’ve wanted to do it for a long time, but they don’t ... yeah, it’s tricky. And then, we have people who are working on how to power these things, how to get signal out of these things. And so a lot of the work is also around the interfaces between, sort of, the human body and electronics.
Is it to make people robots, or to add things to improve the human body via machines?
I think if you look at the history of the Media Lab, it's always been somewhere around the interface between humans and machines, and a lot of the early social media and internet stuff came from the Media Lab. It was networks and society, and more recently it's data and AI in society. But there's always something about human beings and technology. And in the life sciences, it's often around the interface between the inside of your body and machines, or Hugh Herr's group, biomechatronics.
Yeah. And we have a thing called the Center for Extreme Bionics. We like to take words ...
As in, like, The Bionic Man?
It is, it is, it is. We ... words are really important.
Yes, they are. I like “bionics.”
And sometimes people throw away words and then we revive them. It's really about robotic prostheses. How do you communicate with these prostheses? It's interesting because there are cases where if you have a bad ankle and you're constantly having surgery, actually amputating it and replacing it with a robotic prosthesis will increase your quality of life. It's just that these things are somewhat taboo. And so, some of these things are about poking at things that are taboo that might actually make you better.
But to answer your question, I think generally, everybody’s working on trying to make the human condition better. But I think “better” is a really tricky word. It’s better for who, at what time scale?
That’s exactly right.
And, I think every faculty member has a slightly different view, but I think that we’re more and more asking the question, “Are we solving the right problem?” And I think one of the problems with just straight engineering is you sort of assume that the person who gave you the problem knows what the problem is and your job is just to solve it. And I think that ...
Which is what you get when you have an engineering point of view. “Let’s just solve it.”
Not any of the repercussions or what you did to get there or anything else.
Yeah, and there is kind of science and engineering about understanding things like system dynamics and things like that, but the framing of why you’re doing this and for who ... I mean, so questioning is usually the artist and scientist, and the designers and the engineers are the ones that usually try to just kind of make it, create the utility. At the Media Lab, what we’re trying to do is bring all of those together: arts, science, engineering, and design, on a good day.
Which is sort of like Apple, or there are some companies that did that.
Yeah, yeah, although ... yeah, that’s true. Yeah.
Yeah. All right. Talk about some more stuff you’re doing there. So, biomechanics?
Biomechatronics. That must sound good to a big company. “We’re giving money to mechatronics.”
We have Ed Boyden, who is doing synthetic neurobiology. So, he's trying to develop all the different ways to understand the brain and perturb the brain, to tackle some of this, because the brain's really complicated.
Yes, I’ve heard.
And it’s connected to a bunch of things, and I think that we have a very simplistic view of how the brain works. Medicine is really about, kind of, trial and error, and not ... there isn’t as much understanding of underlying mechanisms as you think.
I mean, a lot of the pills that you'd take, when it says mechanism of action, it says "unknown." And so a lot of what Ed is trying to do is understand the brain, because most of the diseases that we have these days, now that we live longer and longer, are diseases that involve the brain. And so that's a whole group, and in that group, it's not just neuroscience; we have AI, we've got robotics, we've got nanotech, and it's a very integrated group. We have 25 groups and each one is very, very different. I won't go through the whole list.
That’s amazing. How do you decide which ones to pick?
So, it's really the faculty search process. I remember there was a faculty search process that the founder, Nicholas, was running, and we called it the Professor of Other; the candidate had to be proficient in at least two orthogonal fields and be antidisciplinary. But we were looking for this faculty member and we had a faculty meeting and we had this candidate everybody loved, and in typical Nicholas form, he said, "That's not 'other,' that's 'another.'" He's like, "No."
I think what really is a key thing about the Media Lab which I love is that we really do embrace the other. We are always trying to bring in things that make us uncomfortable, things that we don’t understand sometimes. And I think that’s what’s very different from a traditional academic department, where you’re sort of extending but mostly going into areas that are either adjacent to or connected somehow to the thing that you’re doing.
So, what is that to you right now? What’s something that makes you uncomfortable that’s being developed at the lab?
Well, I think for me, the privacy — the surveillance capitalism conversation.
This is Shoshana Zuboff at Harvard.
As Shoshana's book ... yeah. I mean, the Media Lab, I'd say, plays a big role in her book. You know? She calls out a lot of the work at the Media Lab as the tools of the trade of surveillance capitalism. And I think that thinking about our role in the world that she describes and thinking about how ...
Explain what Shoshana’s saying. She’s saying that capitalism is built to track us and sell us and sell to us.
Yeah, yeah. I think she’s basically saying that the capitalist system that converged with the Silicon Valley big tech companies that realized that taking user information and behavior and turning that into money was the currency of the day. And all of the technology for collecting information, understanding the information, tracking that information, are the tools of the trade. And a lot of that was developed at the Media Lab.
I think it's an interesting book. I do think there are some things that I ... for instance, when I was fighting the copyright battles, the constant thing was ... So there are rivalrous goods and non-rivalrous goods. If somebody steals your car, you don't have it anymore. But if somebody takes your picture from Flickr, you still have it. And so, one of, I think, the problems with her book is she treats data as if it were a rivalrous good, like oil. It's different. I mean, some of the key points she points out I think are okay, but some of the ...
You get to keep it even if someone takes it.
Yeah, and it’s not exactly surveillance. I mean, “surveillance” is something that is a very loaded word that has a very ...
Well, I think that’s why she used it.
Yeah. But, I guess my point is that the word surveillance is usually used by law enforcement, and it's used to catch bad people, in the eyes of the surveillor. Whereas, I think the surveillance capitalists today, I would say, are not using the data to catch these people and put them in jail. They're using it to do stuff to you that may ...
Sell you cornflakes, or whatever. Behavioral or otherwise.
Yeah. And again, I think it's a really important conversation to have, and this is, sort of, a longer conversation, but I also think, like, Cambridge Analytica, their sales pitch sounded a lot scarier than ... When you actually look at the data, it's not nearly as effective. And so, one of the other meta questions is, how effective are these ads in making you buy cornflakes? And, I think, again, it's probably more effective than the average person thinks, but I don't think it's nearly as effective as some people sell it.
Well, I think people are especially warned by the sci-fi that they see, and everything else that they've taken in over the course of their lives, that it will be. You can iterate it out to the end and if it gets good, it gets really scary.
Yeah, and I think my hypothesis is that it gets scary in a different way because I think people are more complicated and I don’t think machines can solve the complexity of human beings as easily as the technology people might think. I think we’re going to have intervention pretty soon.
And again, books like that help the intervention happen, but I don’t think that ... I think that a lot of people who really believe in machines feel that the machines will get so smart that all of the messy complexity of human existence will no longer be a problem. I think that’s impossible. So I think that the worst-case scenario isn’t as scary, but I also think the best-case scenario isn’t as optimistic.
Optimistic. All right. Finishing up on MIT, what would you like to be putting in there now? You know, you’ve been there eight years. What’s an area you would really like to see work on?
So, since I got there, I got there as an administrator, but over time they made me a professor of practice and I started teaching. I’m taking students now. So, my own area of research is based on what I’ve observed being there. It’s really on understanding automated decision-making in criminal justice, or trying to understand the way that academic publishing, tenure, and funding creates silos and creates the way that science and knowledge get created, and to try and turn it on its head.
What I’m trying to do is take what we’ve learned at the Media Lab and to try to push it out, to fix some of the problems that I see. And also, bring things like social sciences and history and other things that didn’t feel applied enough to the Media Lab into the Media Lab, because I think that’s really, really important. So that’s what I’m teaching, that’s what I’m working on.
And, the idea of automated ... explain automated decision making for people that don’t know that.
Yeah. So, for example, right now we have these risk scores that we use in criminal justice for setting bail, sentencing, and probation.
If they’re going to do it again.
Yes. Basically what it does is it predicts how likely you are to recommit a crime. But, for example, they use rearrests as a proxy for your likelihood to recommit a crime, and it turns out that rearrests have a lot more to do with policing practice than the inherent criminality of the individual. What it does is it tends to reinforce social biases.
So, if you look at data ... For instance, there’s a famous study that shows that in Oakland, when you do a health study, everybody across Oakland does drugs. But when you look at the police data, only poor neighborhoods do drugs, because all they’re looking at are drug arrests, and they only arrest people where they go. But when you look at the police data and you put it into the algorithm, the algorithm thinks that only poor people do drugs.
Yes, poor black people.
Right? So what happens with these risk scores, it reinforces the bias, it also takes the agency ... so, it says, “No, no, no. We’re just trying to predict the criminality of the poor people, and we’ll just make policing more efficient and make jails more efficient.” Instead of saying, “Wait. Is there something we can change in the system?”
“Should we arrest more people?”
No, I’m kidding.
Automated decision making, the problem is the judge gets this risk score, and with some of the risk scores, the difference between a five and a six might be a few percentage points, but they say, “Oh. It’s a six. We should sentence this person to a longer time in prison or not give them bail.” What happens is it’s taking ... And again, the argument is that humans are biased and these things are less biased, but the problem is that the data is biased.
Right. It’s called dirty data.
It’s dirty data, and even if it’s perfectly accurate ...
It’s still dirty.
Society’s unfair. Right?
Yeah. It’s still dirty.
So, society’s dirty. And so, what you’re doing is you’re reinforcing, sort of, backward-looking status quo. It doesn’t allow us to be progressive.
So, you’re trying to figure out how to stop that, which is really hard.
Well, it’s legally hard. There was a Reuters article about Amazon using its historical data to create an HR engine that just wanted to hire white men. Well, the engineers will say, “Oh. Well, we should just ... Okay. Got it. That’s bad. So, let’s just create a constant to give increased scores for women and minorities.” Well, it turns out Title VII, which is antidiscrimination, a part of the Civil Rights Act, says that over the years now, because of all this lobbying, affirmative action is illegal. It’s illegal to put your thumb on the scale.
So today we can still push for diversity by having the HR person lean towards minorities, but the problem is, if your data says we want to hire white people, it’s illegal to put your thumb on the scale. So what we’re seeing now is the law doesn’t protect us, the code isn’t protecting us, and all of these issues are just suddenly becoming ... And, they’re just ... and the problem is you need to understand both the law and technology.
We’re here with Joi Ito, he’s the director of the MIT Media Lab. He’s someone I’ve known a long time and is always talking about interesting things, as usual. But this book, Resisting Reduction: Designing Our Complex Future With Machines, you were just talking about this idea of the ability to intersect humans with tech, with law, with other things. Talk about this book. What was your thinking of wanting to do these essays?
What was happening is, as I was starting to get involved in this conversation about the ethics and governance of artificial intelligence ... I teach a course that’s cross-listed between MIT and the Harvard Law School called “The Ethics and Governance of Artificial Intelligence.” And what I realized was that a lot of the engineers who work in AI felt that you could reduce the whole world to a function, and that life, human life, was just optimizing and that the world could be simulated in a computer. And that if we could just ... and that world ... life was just a game that you could win.
Right. There are a lot of sites like this. There were tons of them.
Yeah. Yeah, yeah, but the ... they feel, though, a lot of them feel that the world can just be reduced to a very simple formula, so therefore then if you could create an AI that was smart enough, it would figure out how to win at life, and that we would either lose or this would save the world. And to me ...
This is like that Tom Cruise movie, Minority Report. Pre-crime.
Exactly, yup. The thing is that ... and again, this is almost religious because I think that there are people who have the kind of thinking where they look at their life as a game. Where they say, “Okay. I’m optimizing for money, and this, and how many minutes do I have to do this.” So, for them ... I tweeted out the other day, “Those people who think that we live in a computer simulation are the kinds of people who are most likely to be simulations.”
Wait, explain the computer simulation ... people like Elon, I think Tony Hsieh, lots of people think we live ... this is a game.
Well, yeah. A lot of people approach life like an engineering problem. For them, I could imagine that they could see their whole life being in a computer. But there are a lot of people, and I think they sort of sort themselves out, if you go into the humanities or the East Coast, there are a lot of people who don’t think like a computer. They live life through experience, and only things that happen actually matter.
I think what’s interesting is that a lot of decisions that we make about how we feel about civil rights or gay marriage, whatever it is, those aren’t decisions that are optimizations. Those are these cultural transformations that occur in society. And I think what’s missing right now in the conversation in AI is that, first of all, I think that that’s kind of the point of life, in my own personal opinion. But also, you take things like the notion of fairness. There’s a conference called FATML: Fairness, Accountability, Transparency [in Machine Learning]. I think it’s just kind of FAT* now.
But a lot of the papers that you see by the engineers say, “We’ll just define fairness as accuracy,” or something like that. And this is what I call reductionist, because fairness is really complex, and it’s always contextual. So, the risk ... and so reduction is important. Reduction is Newton’s laws. Reduction is what allows the engineering that created the Industrial Revolution and brought us all this stuff.
But my concern is the stuff that we have, which is efficiency, productivity ... but that’s the stuff that makes us obese. That’s the stuff that creates climate change. That’s the stuff that ... The efficiency that creates income inequality sometimes. And this is an old thing. I’m not the first person to say this, but the problems that we have today are caused by the tools that we created, but I think there’s a lot of people who believe that more efficiency and more productivity will fix everything.
Would change that.
And so this is sort of an almost religious kind of difference. And I think right now there’s a lot of power in the hands of the reductionists. And I would put economists and sort of neoclassic economics sort of in this, which is just reducing everything to just measuring GDP.
So, you’re saying they think they have it figured out.
Yeah, and I think it’s dangerous.
As they have now found. Your point is being proven right now. Look at YouTube shifting back and forth on these things. They can’t figure any of it out, because it’s messy. And in fact, I was talking to a pretty big tech person and they were very frustrated with what was going on with all the platforms and I said, “Oh, life is messy.” They’re not ones and zeroes, I guess.
That’s right. That’s a very Kara Swisher way of saying what I was trying to say, which is life is messy. And that if you ignore that, it’s going to get worse. And the reason that it’s a collection of essays, which I was excited, so I wrote this little manifesto-like thing ...
Called “Resisting Reduction.” And then it triggered a whole bunch of responses from Silicon Valley and all over the place. So then we decided to collect these essays and put them together into a collection. So, I have a group of indigenous people. I have somebody who does the philosophy of science. It’s a really interesting response to ...
So take the side of the reductionists. What is good about that? Because it’s been pushed at us a lot that we’ll get you cars that work, we’ll get you this that works, we’ll get you full communications. None of it has turned out the way they predicted.
And again, for me ...
We have good mapping.
Yeah. And I was, and sometimes still am, one of them. And I felt like if we connected everybody together ...
That’s how I feel. I was one of them.
... we would have world peace. It just turned out to be more complicated.
I thought the same thing.
And you know, it’s kind of an old movie now, but you know The Exorcist?
You know that little girl in The Exorcist? That’s what the internet feels like to me. I had this little girl and now ...
Explain that. Take that one out. Draw that one out.
Because you have this little girl and you think she’s going to become this wonderful kid and then she gets possessed and starts becoming this demon.
And we have to exorcize her and we have to kind of bring her back. I’m actually long-term optimistic.
Who gets to do that? You know what happened to the priest in that one.
I know. He had to jump out the window, right?
Several. Several priests went down.
I actually feel kind of responsible for having been involved in some of this. I don’t feel innocent. And I feel that having been part of all of this, I have to do something about it. So, I’m not like a third-party observer in this. I think, though, that ... I do think that the changes that we make have to be technically well-informed, because I think that uninformed laws, whether they’re gun control laws or privacy laws ...
Which I wrote about this week. That’s what I wrote about in the Times this week. That’s exactly what I was saying.
You’re going to whiffle on it.
It’s interesting, because everybody’s like, “Oh, you’re being pro tech.” I’m like, “No, I’m saying we’ve got to be informed or else we’re screwed even worse than we are now with no laws,” that kind of thing.
And they have to be proactively forward-looking versus looking backwards, because it’s hard to understand where you need to go if you don’t understand where you’re going.
That’s right. I think GDPR is well-intentioned. It’s better than nothing. But it completely doesn’t hit the target yet. I think that the tech people have to get involved. I do think this public interest technologist idea, kind of like we had public interest lawyers, getting tech people more involved in coming up with how we move forward.
The problem is also, tech people tend to want to just solve the problem. But the problem with the problem is it’s not like previous problems, where you just solve it. You actually have to keep asking the question, “Is this even the right question?” And this is why it’s much more difficult. I think it’s actually, to be honest, it’s a value shift. I think it’s about the younger generation, your kids, being disgusted with us, being so disgusted that they don’t want to work for that company.
They don’t want to get on a private plane. They don’t want more stuff. And I think the problem is we’ve just started measuring our success with how much stuff we have. “How could he be smart? He’s not rich.” That kind of notion. And I think that the next generation’s at least in some part ...
It’s an interesting concept, because this idea, reduction is actually an excellent word when I think about it because they do, they always have answers. And I’m always wary of people who always have answers to things. And one of the things that has happened is, because they reinforce each other in the society they’re in, they’re in violent agreement with each other, you know what I mean? And then they feel victimized if you start to go at them.
And what’s interesting to me is there’s a lot of people like you, like me, Roger McNamee, who are in, who are like, “Wait a minute.” They know the inside and are saying, “You guys have to stop, because you don’t understand what you’re doing.” And to the outsider saying, we look like attackers when we’re not attackers.
Someone the other day was like, “Why do you hate Facebook so much?” I said, “I don’t hate it. I love it. That’s the problem.” I don’t love Facebook particularly, but you know what I mean, the concept. And so, what’s really interesting is how do you educate these people who reduce into a new way, how do you think about that?
First of all, it’s really hard ...
You’re saying resisting it, so resisting it and then what?
Because reduction is important to get anything done. You have to reduce a dish to a recipe to give it to somebody, so reduction is important. But it’s resisting over-reduction. It’s the old “make it as simple as possible, but not simpler.” But it’s hard. I was just yesterday arguing with one of my friends in Silicon Valley about the role of advertising. In that community, since it’s so powerful, since it’s so reinforced by success, it’s almost impossible to have that conversation.
Explain that argument to me. Explain what happened. You were arguing. They said, the advertising business by its nature is going to create these mutated situations.
Yeah. And also this idea that things like fairness just can’t be reduced to a formula, and moved on. And that it’s messy. And the problem is that they’ve just been told over and over again that, “Oh, you’ll never be able to do this,” and then you can. So, when you go to Marc Andreessen’s office, it’s “Software will eat the world.” Because everybody kept saying, “Oh, you can’t do that in software. Only humans can do that.” And then they do it. So the experience of being in Silicon Valley over the last few decades is everybody kept saying you couldn’t do it and then they can do it. There’s really this interesting thing of, “There they go again, just telling us we can’t do it, but we can.”
Although being dragged in front of Congress and stuff like that, that wakes you up a little bit. But it’s difficult. I think the solution is not going to be to go into Silicon Valley and convince them through argument. I think it really is about coming up with a different narrative outside.
I do think, though, that there is definitely a movement. You see the “tech won’t build it” movement, which is the employees saying, “We don’t want our companies doing this.” You see consumers starting to say, “I like Lyft because of this.” So, I think values will change, even in Silicon Valley. And it’s interesting, when you look at the companies, the senior people are more oblivious to this than the younger people or the people who are more in the system. And so I think it’s going to change kind of bottom-up, and I think it’s going to change from things like the Parkland kids, who wake up and tell people what they think.
It is interesting. Defensiveness is fascinating. The defensiveness is just one of my ... I can’t believe the richest and most powerful people in the world are victims. I just won’t listen to them talk about themselves like that. But they tend to do that as a group. Not everybody. A lot of people, it’s interesting, we’re having a panel at Code this year called “Inside Out,” the people who have left, all of a sudden are becoming quite critical in a way because now they’re like, “Wait a minute.”
It’s sort of a born-again kind of thing. It’s more like, “Oh, I see the damage. I understand the damage.” And it’s a really interesting question whether it sticks or we do get these draconian rules from Congress or whoever, or abroad, the GDPR, or something happens somewhere where we get too much of a correction to it all, so that you don’t get the innovation that they desire.
Yeah. And I guess the one thing I will say, though, is that there are a bunch of people who are making a business out of fear-mongering. And I do think that, for example, I recently wrote in Wired, there are great places online for kids to learn. And overreacting to the scary stories about kids having trouble online I think could destroy these public spaces that helped me learn when I was a kid. We don’t shut down churches and schools just because kids have problems, and they probably have more problems in churches than they do online.
I don’t think we should be shutting these things down. I don’t think that we should be shutting even Facebook down. I think what we need to do is go in and make them better, just like we try to get institutions to be better. And I do worry that there’s kind of a whole generation of people who flipped out to the outside and now kind of go around sort of making money scaring people, which I think is also-
Should I be nicer, Joi? I don’t think so. I think I’m being appropriately ...
I think you should poke, but the problem is that you overshoot. One of the worst laws that was ever created, the Computer Fraud and Abuse Act that was created the year after WarGames, it makes it a felony to breach a terms of service. When Julia Angwin shows that you can buy ads for “Jew hater” on Facebook, she’s probably a felon for doing that. And that’s stupid. And that’s because lawmakers overreacted to some dumb movie that scared the pants off of them. So that’s what I’m worried about.
Right. I want to finish up, last thing. When you say “our complex future with machines,” right now, what is our future with machines?
We already have a complex relationship with machines.
Yeah. And Norbert Wiener, who was an MIT mathematician, in 1950 wrote a book called The Human Use of Human Beings. And he said that organizations are machines of flesh and blood. We already have machines, they’re called corporations. And our future is just supercharging what we already have. We can’t even control companies. To me, it’s like putting jetpacks and blindfolds on and just supercharging in whatever direction we’re headed.
So what I think we need to do is we need to get our house in order before we put these jetpacks on. And it reminds me, I was just talking to my historian friends, and in ’67 there were 158 race riots across America. Fifty years ago, we had Vietnam protests on the MIT campus. There’s a lot of similarities. And I think one of the concerns that I have is that we have a horrible relationship with ourselves right now, and adding more power to it isn’t going to help.
So, the complex ... It really is a complex system. And regulating complex systems ... Our bodies are complex systems, the earth is a complex system. We’re somehow able to keep our body temperature relatively stable, and it’s not because there’s somebody in charge. It’s because we have a resilient, complex, adaptive system. And I think that’s like having a very good culture and very good norms.
I think we need to create a society that is self-regulating and isn’t so centralized, and I think that, again, I think ... I keep going back to culture. This goes all the way back to the beginning of our conversation. I think it’s a lot like how to build strong, resilient communities and it’s less like trying to engineer a program.
What do you think is key to that?
I think what we’ve done is we have subordinated the humanities. If you go to, especially the places like MIT, you’ve got ... the engineers have all the power, all the money, and everything looks like an engineering problem. And we’ve made liberal arts sort of this sideshow. I think that we need the historians, the social scientists, the anthropologists, the qualitative people involved in setting ... asking the questions. Why are we here? What are we doing?
Mark Cuban just did an interview like this, I was sort of shocked by it.
And that’s, I think, the key. And it’s not just about ... And the problem is, again, vocational education is great, but again, economists are part of that thing, which is ... the reason you’re here isn’t to work. The reason you’re here is to live and ask questions and enjoy. So I think universities shouldn’t just be to get jobs. That should be something that’s important. But that’s like just cranking out widgets for the factory.
That’s a famous saying, do you live to work or work to live?
That’s a good saying.
There you go.
Those are the things that I think we really need to do. I think there’s a new version of the hippie movement that’s going to happen where people are going to say, “You know what? I don’t want to play that game anymore. I’m playing a different game. And the game that I’m playing and the thing that I’m solving for is something really different than what you want me to solve for.” And I think it’s going to be a kind of values rebellion. I think it’s already started.
Sounds fantastic. I can’t wait. I’m hoping to be part of it.
Last question, what is the most ... You always have the most interesting technology idea. What do you think is the craziest thing that you would like to see?
The craziest thing?
What are you like, “This would be so cool”? Or something you’ve seen or you’re doing at Media Lab that’s really ...
I don’t think of it as a specific technology, but one of the things that we’re working on or we see is a democratization of space. We’re seeing the cost of shooting up a satellite becoming the cost of a personal computer, thousands of dollars. Which has good things and bad things, but there are a lot of parallels.
Space trash. Billboards in space. But it’s really similar to the early days of the internet, where we had this government system that became commercialized and we had good things and bad things. So, what I want to see is I want to get space right in the way we got the internet wrong. Whether it’s institution ...
Space right in the way ... That’s nice. That would be great.
Because I think we might screw it up like we screwed up the internet.
I’m sure we will. I’m sure we will, Joi. Anyway, this is great. Thank you for coming on the show.
Recode and Vox have joined forces to uncover and explain how our digital world is changing — and changing us. Subscribe to Recode podcasts to hear Kara Swisher and Peter Kafka lead the tough conversations the technology industry needs today.