
Full transcript: Bloomberg Beta partner James Cham on Too Embarrassed to Ask

“A core part of artificial intelligence is simulating personalities ... And we’re very, very good at that.”

Vjeran Pavic for Recode

On a recent episode of Too Embarrassed to Ask, Bloomberg Beta partner James Cham spoke with Recode’s Kara Swisher and Mark Bergen about the future of artificial intelligence and machine learning.

You can read some of the highlights from Cham's discussion with Kara and Mark, or listen to the full conversation in the audio player above. Below, we've posted a lightly edited complete transcript of their conversation.

If you like this, be sure to subscribe to Too Embarrassed to Ask on iTunes, Google Play Music, TuneIn or Stitcher.

Transcript by Maya Goldberg-Safir.


Kara Swisher: Lauren is again off this week, still cavorting in Hawaii, but in her place I am delighted to welcome Recode’s expert on all things Alphabet — it used to be called Google — Mark Bergen. Hey, Mark, how you doing?

Mark Bergen: I'm doing really well.

KS: Good. So how's that Alphabet going? They changed the name in the middle of your tenure here.

MB: They changed their name. [laughs]

KS: We know who they are!

MB: It's still pretty heavy on the Google.

KS: What's the new story that's going on?

MB: The story is that they're trying to prove that Google is where all the innovation is, which is probably true.

KS: Yeah, and that just a little startup could crush anybody.

MB: Just a little startup. But they are doing the virtual reality — all the virtual reality efforts are still in Google, and most of the AI and machine learning are still inside Google.

KS: Yeah, and in some of your recent stories, you wrote about Nest, Tony Fadell and Nest, their home automation efforts, all linked into the same idea.

MB: Yeah, I mean, you know, I think they're still ... when they formed it, we learned there really wasn't much of a plan, and that was intentional. They sort of went, "We're gonna just do this."

KS: That's what they tell you! Just so you know. [laughs]

MB: But I mean, there was "we're gonna evolve this as it goes," and most people inside the company had no idea it was coming, and they're still trying to figure out what it means. And they've had a fair number of big hiccups and messes and had some issues — Verily, their health care company, and maybe ... it's unclear. Two years from now, the whole Alphabet structure will look different.

KS: And the moonshots and everything else.

MB: And all the moon ... they're not calling it moonshots anymore.

KS: What are they calling it?

MB: The "other bets." The bets.

KS: The bets, really? They've gone more earthly on us? They're not going to the moon, you know, maybe?

MB: Eh, maybe.

KS: And maybe like Asia Minor. [laughter] Asia Minor shots.

MB: Asia Minor shots, that's good — they're pinching pennies a little bit. At least that's kind of what's happening, and that's sort of the spin they're putting on it.

KS: But you did write about Google Fiber, which is their attempt to get in on the cable biz.

MB: Yeah, Google Fiber is, I think, their most interesting. It's not self-driving cars, it's not drones, but it's possibly the one business that could — that could — be the next big thing after search, because that's what this whole big thing is about: finding something else besides ...

KS: Sure, sure, right. Well, it is nice, they are ambitious — someone called it narcissistic, but not me. The ambitions are nice! For Silicon Valley. And one of the areas and themes at Google I/O this year, which you covered so beautifully, was artificial intelligence, and that's what we're talking about today on Too Embarrassed to Ask. It's also a topic that came up a lot at the Code Conference, with Facebook and many others talking about it — half of them talking about a happy, shiny future of artificial intelligence. The single voice who didn't was Elon Musk, who envisioned a future in which, at very best, we become house cats to our computers — just pets, and a little scary! And he has OpenAI and all kinds of things, so I think he's worried that it gets concentrated in very few hands. And I think he was talking about Google very specifically, and if not, slash Facebook, slash Apple — but that very few people control a lot of computing. So, last week on the show we talked about robots, so this is sort of an unofficial Part Two to that podcast. In a few minutes, we're going to welcome our special guest, James Cham from Bloomberg Beta. But first, Mark, let's talk about the basics here so people understand: What do companies like Google mean when they talk about artificial intelligence? Is that HAL from "2001: A Space Odyssey"?

MB: Maybe. Maybe sometime soon.

KS: Yeah, I think so.

MB: And you know, there's some that say, and I think it’s true, artificial intelligence is just math.

It's just software, and in some ways it’s kind of the same ... you remember the term big data was very trendy for a while. The biggest application that tech companies use in AI right now is machine learning, which is essentially kind of looking at big chunks of, heaps of data, and then finding patterns in that. So you can do that with speech recognition, you can do that with computer vision ...

KS: Photos.

MB: Right, photos.

KS: Twitter just bought an image recognition company — Silly Monkey? What was it called? Magic Pony? Whatever.

MB: Twitter bought that ... Magic Pony! [laughing] Silly Monkey!

KS: I don’t know why people think Silicon Valley is juvenile. Go ahead. Move along.

MB: Magic Pony. Well, I mean, there's been this mad rush in the past few years, and it just keeps accelerating, for talent. There are only a few — maybe several thousand people, and it's growing — who are actually experts in this type of AI, and that's where all these companies are competing. Amazon in particular has been driving hard on this, and then, last week at WWDC, Apple's developer conference, Apple actually unveiled — I was a bit surprised — they said, look at all these software features, and we're actually doing machine learning, too!

KS: Right, so everybody is — and Facebook obviously. Can't leave them out.

MB: Right, yeah. The rap against Apple has been that, because they're so committed to privacy and doing everything on the device, they haven't invested as much in AI as Facebook and Google. And they're competitors, and so they might be left behind in this world.

KS: Is it a must-have or a little bit of hype? Are they really making strides?

MB: I think for Apple — I mean, maybe James will come on and disagree with me — I think it is a must-have. If you look at the future, where Amazon Echo has sort of led the way, we're going to be interacting with devices in our homes, maybe, you know, no longer having screens, and that's a world in which Apple should be terrified, because their premise is basically selling hardware.

KS: Well, they have had Siri, though, but you're right, I know ...

MB: There are certainly people that say Siri is not quite up to speed with where sort of Amazon ...

KS: It is not.

MB: You know, Apple has a lot of money, and there are a lot of smart people there. And they have been investing in this.

KS: And then the down side of AI, what Elon was talking about, and others, not just Elon — uh Stephen Hawking, everyone.

MB: Stephen Hawking. Bill Gates has weighed in. Right, I mean, I think there's definitely some concern about the idea that machine learning is basically training machines to learn for themselves. And you don't have to connect many dots to see what that means.

KS: Well, someone tried to compare it to a car. We had someone on last week who compared it to, you know, a car: A car is a robot that does things by itself now, and it's not scary, and cars have helped us more than killed us. But I don't think it's quite the same idea.

It’s incredibly cheap now to build systems that were difficult to build even 10 years ago.

MB: Right, and there are gonna be some interesting ethical debates around self-driving cars — if a self-driving car sees a pedestrian on the road and has to choose between hitting the pedestrian or endangering the person in the car, how do you make ... how does the algorithm decide what's ethical?

KS: Right, yeah, that would be interesting. So to help us explain what’s going on with AI, both the negatives and the positives, we're delighted to welcome James Cham, a partner at Bloomberg Beta. Hey, James, welcome to the show!

James Cham: Thanks for having me.

KS: So we have lots to talk about! All of a sudden this has gotten a lot of attention, because of Elon, I think, and others who have been talking about it a lot lately. So let's talk about why there's a big interest in AI. It's been around for 50 years and in our popular culture much longer than that, so what’s driving that?

JC: I think that a big thing that has changed is that we now live in a world where both computing and storage are really, really cheap. And so, in part thanks to the efforts of the folks at Google and Apple and lots of people in Silicon Valley, it's incredibly cheap now to build systems that were difficult to build even 10 years ago. And so we now live in a world where we are able to process data much, much faster and cheaper, and so research that was impractical 50 years ago is now very easily doable by some high school student.

KS: Like a lot of things in computing. But what is the interest — is there some great leap they've made or what, or just that it’s the latest …?

JC: I think it’s super incremental, right. I mean, when we talk about artificial intelligence, what are we actually talking about? It's a fair question. I think most computer science guys would say that artificial intelligence is what we can’t figure out quite yet, and the moment that we figure it out, it just becomes another feature. So if you were to talk about toasters … at one point, figuring out how to toast bread was really, really difficult, and people talked about ...

KS: I can’t imagine it was so difficult, but all right.

JC: … a robot, a robot that would actually figure out exactly how to get bread just right, and the moment it’s doable, it’s a toaster, and I think that's true about a lot of technology.

MB: Yeah, but I guess the distinction is the term "supervised and unsupervised learning," right? My very limited understanding, but you have supervised learning, which is essentially the basic programming — if this, then that — where you train an algorithm to do certain things. And then with unsupervised, you don't need the "if this, then that," because the computers can actually do it themselves, and isn't that ... that's massive, right? I mean, that's a huge shift.
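The distinction Bergen is gesturing at can be sketched in a few lines of Python. Below is a tiny unsupervised example, a k-means-style clustering: the points carry no labels, and the code discovers the two groups on its own. The numbers and starting guesses are invented for illustration.

```python
# Six unlabeled measurements; no human has tagged which group is which.
points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
centers = [0.0, 5.0]  # rough starting guesses for two cluster centers

for _ in range(10):
    # Assign each point to its nearest center ...
    clusters = [[], []]
    for p in points:
        nearest = 0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
        clusters[nearest].append(p)
    # ... then move each center to the average of its points.
    centers = [sum(c) / len(c) for c in clusters]

print([round(c, 1) for c in centers])  # the two groups it found: [1.0, 9.1]
```

A supervised version of the same problem would instead start from points that a human had already tagged with their group.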

JC: It is important. And I think the critical thing to realize here is that when we talk about artificial intelligence in general, we've been playing with lots of it for a long time, going back to Eliza. We've had these fake psychologists who actually simulated a lot of what a psychologist does quite, quite well, for like 30, 40 years, and if you look at some of the chat bots we're seeing right now, those turn out to be very, very similar in functionality.
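The Eliza-style "fake psychologist" Cham mentions amounts to little more than pattern matching. A minimal sketch in Python (these rules are invented for illustration; Weizenbaum's original script was much larger, but worked the same way):

```python
import re

# Rules map patterns in the user's words to reflective responses.
RULES = [
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
]

def respond(text):
    # Return the response for the first rule that matches, else a stock line.
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I feel anxious about work"))  # Why do you feel anxious about work?
print(respond("Nice weather today"))         # Please go on.
```

No understanding is involved, which is Cham's point: the simulation works mostly because people are eager to read a personality into it.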

KS: They are!

JC: And yet they work really well in part because we as people want to anthropomorphize ...

KS: And we're also used to the computing experience more.

JC: And now we're used to the computing experience. So that's like the core part of it. But you were talking about machine learning, so what is machine learning? Machine learning at its core is a different way of thinking about building software. Most software is really hard to write ...

KS: Define these for people who aren't ... machine learning — let's go through a few terminologies.

JC: Okay, so if you were to just talk about software to begin with, you've got a set of instructions, and then data, right? And typically what happens is you feed the data into instructions and then something comes out. That's the core of most software. What's interesting about machine learning then is you have a core of instructions and then some data, then you take the data, and based on the results you get from the data, you modify the instructions. And that cycle is part of what makes it possible to get really, really impressive results very, very quickly. And I think that's what we're seeing right now for a lot of different fields where, if you were to talk about a number of categorization problems or perception problems that were impractical to figure out 20 years ago or even 10 years ago or even, to be honest, five years ago, they're now doable, so that you can say that you'll have a guy sort of in his back garage building a self-driving car in four months. So those kinds of things are now doable, in part because of the state of machine learning. And also, to be honest, the state of computing is so cheap and fast, and you're able to pull these things off.
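Cham's loop (run the instructions on data, then use the results to modify the instructions) can be sketched in miniature. Here the "instruction" is a single parameter w in the rule y = w * x, nudged toward the pattern in the data; the data points and learning rate are invented for illustration.

```python
# (x, y) pairs; the underlying rule is y = 2x, but the program isn't told that.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0              # initial instruction: predict y = 0 * x
learning_rate = 0.05

for _ in range(200):
    for x, y in data:
        prediction = w * x
        error = prediction - y
        w -= learning_rate * error * x  # modify the instruction from the result

print(round(w, 2))  # converges toward 2.0
```

The same feed-results-back-into-the-instructions cycle, scaled up to millions of parameters, is what makes the "impressive results very, very quickly" Cham describes possible.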

The downside for neural nets is they're really hard to understand.

KS: Right. So, neural network.

JC: So the right way to think about a neural network is ... when you think about categorization in general, you're just gonna say, is this thing bigger or smaller than something else? So let's start there, and that's a pretty straightforward thing to do. And then the moment you add, let's say, additional dimensions — let's say you have an X dimension and then a Y dimension, and then let's say you go to 100 of them — then it's impractical for our heads to think it through. And neural nets, basically patterned on the way that some people in the '60s thought the brain worked, turned out to be a very good abstraction for figuring out how to make decisions and categorize things.
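A single artificial neuron, the unit that brain-inspired abstraction is built from, just weighs each input dimension, sums, and squashes the result to a score between 0 and 1. The weights below are hand-picked for illustration; in a real net they would be learned from data.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum over all dimensions, then a sigmoid squash to (0, 1).
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Two dimensions are easy to picture; the same code handles 100 of them.
weights = [1.5, -2.0]
bias = 0.1
print(neuron([3.0, 0.5], weights, bias) > 0.5)  # True: classified "yes"
print(neuron([0.2, 2.0], weights, bias) > 0.5)  # False: classified "no"
```

Stack layers of these and the decision boundary can get arbitrarily complicated, which is also why, as Cham says next, it gets hard to see which weights are responsible for a given answer.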

MB: What’s the downside for neural nets? I know not everyone's a believer in them.

JC: The downside for neural nets is they're really hard to understand. So this is something that we've seen for a long time in computer science. Neural nets are almost impossible to decipher, in the sense that you really don't know what weights have changed or what parts of the little neural net have changed to come up with results. And so it's really hard for engineers — who, to be honest, are typically control freaks — to figure out how to modify or change them. And so I think that's really scary, and the problem for most computer scientists is that now it actually works. So the results are really, really good, and the CS theorists in general are still trying to catch up and figure out whether this is something that's a localized phenomenon, or whether it's just new theory that they need to come up with.

KS: And computer vision?

JC: So computer vision is a more generalized version of what we're doing with neural nets, where now you're able to categorize and figure out whether something fits, whether something's a cat or a dog, and you're making ...

KS: So we have to tell them that now? A human just knows ...

JC: So the amazing ... well, a human doesn't know, of course; the human's trained.

KS: Well, initially, but then doesn't need to learn again and again, doesn't need to have data attached to it.

JC: And so the amazing thing now is that, in a similar way to how a baby is trained to know that a cat is a cat, we can tag a bunch of data and then show the system things that look similar to cats, and thanks to deep learning, it's able to consistently say that, oh, in fact, this is a cat.

KS: Do you have to continually tag it though with computers still?

JC: So as cats look different or things change, then you do need to, of course, re-tag things, or figure out whether something's a dog or whatever.
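The tag-and-match idea in this exchange can be sketched as a nearest-neighbor classifier: the tagged examples are feature vectors, and a new example takes the label of its closest tagged neighbor. The two-number "features" below are invented, standing in for the rich features a real vision system would extract from images.

```python
# Human-tagged examples: each is (feature vector, label).
tagged = [
    ([0.9, 0.1], "cat"),
    ([0.8, 0.2], "cat"),
    ([0.1, 0.9], "dog"),
    ([0.2, 0.8], "dog"),
]

def classify(features):
    # Label a new example by its nearest tagged neighbor (squared distance).
    def distance(example):
        vec, _ = example
        return sum((a - b) ** 2 for a, b in zip(vec, features))
    _, label = min(tagged, key=distance)
    return label

print(classify([0.85, 0.15]))  # cat
print(classify([0.15, 0.85]))  # dog
```

Cham's point about re-tagging follows directly: if what cats look like in the data drifts, the tagged examples have to be refreshed or the matches degrade.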

MB: This is the example that Google's used for years — my theory is, you know, to soften the blow of AI taking over the world: Look at these cute cats.

KS: Hey, cats, that's right.

JC: It does work. It does work. It’s hard to resist!

MB: You guys at Bloomberg Beta have been investing in a lot of machine-learning startups and companies. I'm kind of curious what kind of companies you're investing in and why.

JC: I think that from our perspective, the dirty secret around machine learning right now is that nobody knows what they're doing.

KS: That's the dirty secret of Silicon Valley. [laughter]

JC: Right — software's hard to write, and then machine learning's even harder to figure out. So it's really hard for a software developer to project-plan around whether or not they're gonna get results; no one has any idea. It's still highly, highly probabilistic. And then the harder part there is that business guys actually have no idea when to use machine learning or when not to use it, in part because we just don't have good theories for how to apply this to businesses. And that's when it's interesting, when nobody knows what's going on. That's the time to go around ...

KS: And invest.

JC: And find smart people looking around at different parts at the market.

A core part of artificial intelligence is simulating personalities ... And we’re very, very good at that.

KS: And so at the Code Conference, Ginni Rometty, the IBM CEO, did say she believes artificial intelligence, which she’s banking her tenure on at this point, will affect every business decision in five years. You're saying they don’t know what they don’t know.

JC: Here's the thing that we know. We know that it’s now cheap enough to store basically every decision that's made by any business, and we also know that in theory, if you did a good job of categorizing things better, you'd probably make fewer dumb mistakes. And exactly when you should do that and when you shouldn't do that, and how hard it is, nobody really knows. And we're trying to figure that out right now, and I think that’s why it’s exciting to be investing in machine intelligence.

MB: Right. I mean, right now we're starting to see, with Google and Facebook and Facebook Messenger and some with Apple, this machine learning actually reaching consumers in their daily lives, but it's still not all that mainstream or widespread. When does that start to happen?

JC: So I disagree with most folks that the problem is a computer science problem. I actually think it’s a product problem. It turns out, if you actually think about it, a core part of artificial intelligence is simulating personalities. It turns out that basically all people do is try to impose personalities onto other things. And we're very, very good at that. Even before Siri, you treated your iPhone almost like a person — you got mad at it when it acted a certain way or it didn’t act a certain way.

KS: I take issue with "almost," but go ahead.

JC: And so to me, we're already at the point where good product people are actually thinking clearly about simulating or making us feel affection toward products, software products. And so I think we're already there for a lot of it, and I think a lot of the problems now with conversational agents or talking to Siri or working through some of these problems are actually product problems. They're not really technology problems. We're actually there for a lot of it.

MB: Yeah, I thought it was interesting about Google ... I mean Apple has Siri and Amazon has Alexa, but Google decided to call it just The Assistant — it didn’t personify it.

JC: Yeah, and I think in some ways that's a mistake on their part.

KS: What a surprise from the robot people of Google.

JC: I think they're very smart and brilliant about lots of things, and I have nothing bad to say about Google at all.

KS: They're so good at social media for example.

JC: I'm very grateful for them.

KS: I think that said, social, they just like ...

JC: But I do think that ... the genius of Siri is not so much that you ask it for things, it’s that you actually feel like you have a relationship now with something that's part of Apple. And I think the genius of Alexa is that my child now thinks that they have a relationship to Amazon in a way that they have no similar relationship to YouTube.

KS: Yeah, I think Alexa's actually the most ... I don't think Siri went far enough. For some reason, Alexa takes it. What do you imagine? What is that? So we're talking about products that people will use ...

JC: Yeah, so I do wonder whether that's because it's on a phone on which you have to press a button, as opposed to somewhere where you're just talking into the air. I think certainly my children, if you just sort of look at younger people, how they think about it, that actual act of having to do something before you speak to it — like that's not normal, right? Normally when we talk to people, we just start talking and they listen, and you don't have to call out their name. And I think the fact that Amazon was brave enough to be able to release a product like that is, you know, it's a very hard thing to do. If you ever talk to people at Google or Apple, I'm sure they all wanted to release something like that, but they were terrified around the privacy implications, and you've got to give credit to Jeff Bezos for saying, I don't really care, I'm gonna release this, and we'll see what happens!

KS: We're also not quite as terrified of Amazon as we are of Google, which gets into the idea of control of information and who controls all this AI. And I think something that Elon was particularly nervous about was certain companies having all this AI technology and others not — sort of a haves and have-nots situation.

JC: I think that over the history of this country, we've relied on bad decisions from big companies to unlock lots of value for lots of people. And I will admit that it’s possible that Google and Facebook will make no mistakes and be perfect in their execution. It’s also probable that there will be many, many great engineers and product people inside Google and Facebook right now thinking to themselves, you know if only we did this, then we could really build something interesting, and who, for a variety of reasons, whether their business model or politics, won't be able to build it.

KS: Well, what do you think about the Open AI movement, the idea that everybody will have their artificial intelligence?

JC: Well, you mean OpenAI as in the organization, or ...

There are huge network effects around owning all of the data.

KS: Just the concept — more the conceptual idea, that it should be more shared because it’s so important.

JC: I think it's really exciting. I think the set of talent that they've pulled together is amazing. I think that the set of their research agenda ... remember, OpenAI isn't so much competing against Google or Facebook ...

KS: No, no.

JC: … as much as it's saying, you know, inside academia, you have a set of restrictions, and so your ability as someone who's quite talented to pull off a multi-year effort, to pull off something really big, is constrained. So OpenAI is the ability to go out and say, hey, if you want to really work on interesting research for the next X years, come join us. I think that's what's actually happening with OpenAI. And then as far as the question, I think there are a number of smart people who go out right now and say that there are huge network effects around owning all of the data. And there's the sense that, in part because both Google and Facebook and Apple are in control of much of our communications, that they're able to see a lot and they're able to store a lot. I think that that's true. I think we will also be surprised, though, in the next five years, when the next interesting consumer startup comes up that everyone ends up wanting to use, and they suddenly will say to themselves, oh my goodness, here we are, we've not just captured every single text message, but we've captured every single blink that someone's had.

MB: That's what Uber's done in some ways, right? I mean they kind of came out of nowhere and then collected all this transportation data.

JC: That's right, that's right, and I think in some ways, consumer data, consumer behavior leads the way, and so I do worry a little. I mean I worry a little bit less about sort of monopoly effects.

MB: You know, we talk with AI people, especially the ones who have been doing it a long time, and they bring up the — and I love this concept — AI winters, and I think this was, back in the ‘80s, the ‘90s ...

KS: Yeah, there's been a lot of excitement.

MB: … a big trend that then just sort of dies — the funding shrivels up. Are you afraid of that, as someone who puts money into AI companies?

JC: You're always paranoid, of course, but what's different now ... the AI winter in some ways is a very specific phenomenon around both venture investing but also ...

KS: Development.

JC: U.S. government investing. And for a variety of reasons, they decided that artificial intelligence was a bad idea, or talking about machine learning was a bad idea. Now, the truth is, people continued to do interesting research; it's just that the companies weren't as successful. And now we're at an age where the sorts of ideas that were good 20 years ago are now good because it's cheap! It's cheap enough to pull off. And I think, sure, we'll have some sort of backlash, and there will be plenty of mistakes that people make, especially in terms of figuring out privacy and figuring out consumers' relationship to big brands, but long term, I bet you it's going to be really, really hard to put the genie back in the bottle.

KS: And not just concerns about AI winter — that AI is scary, that it will create permanent winter for humanity. "Game of Thrones."

MB: "Game of Thrones" winter.

KS: Depicted in most of popular culture, it's all negative; it's never particularly positive, except perhaps "The Jetsons."

JC: Right, right.

I worry less about the machines taking over than I do about people being discriminated against because of some bad statistics.

KS: But otherwise, it’s an "all humans die" kind of thing. How do you look at that, and why do you think that is?

JC: I think that there are of course fairly straightforward theological reasons why people think about the end of the world, which is to say we all die, and so, yeah, of course we think about different ways that bad things can happen.

KS: Not if we upload our brains, we don’t! But go ahead.

JC: Right, we'll see about that. I will admit that I worry less about the machines taking over than I do about more straightforward things, like people being discriminated against because of some bad statistics, or decisions about building bad software that cause the electrical grid to go down. And I think those problems are actually the interesting problems that sit before us between the next sort of zero to five years.

KS: Why are so many prominent people so worried then? People you'd imagine know a little bit more than most people.

JC: Because I think that they're thinking on a 10- to 50-year time frame, and to some extent we've lived in this time of constant compounding for so long, for the last 15 years, that it's a little hard for people to imagine things getting slower. But everything faces S curves. At some point we do end up in a world where resources get constrained, or the thing that was really easy to do becomes less easy. I think that's a big part of it. And I think part of it also is, we are always worried about the end of the world, and it's a good thing to worry about the end of the world!

MB: The response from a lot of the researchers has been, well, we're so far removed, we're at baby steps right now. But to your point, that's sort of what Elon Musk is saying. But maybe 50 years from now, will we still be at baby steps, and what does it mean when we've advanced so far and we're not prepared for that?

KS: 2100.

JC: Yeah, there are a whole set of decisions — I think it is hard to do moral reasoning about things that don’t exist yet. I think it was hard for people to ban chemical weapons until people saw the horrific effects of it. I think there will be some small mistakes that will end up happening along the way, and we will end up making interesting policy decisions based on that. And the privacy question is a super-open question — sort of, do you own your life right now; how do consumers think about privacy; how do they think about the lessons that companies learn?

Right now, a lot of companies are getting a free ride ... People don’t realize that the data they're giving them is substantially improving the company.

MB: Do you think there is a trade-off between privacy and effective AI?

KS: Comfort, right, or making your life easier.

JC: I think we've consistently chosen comfort and ...

KS: Yes we have.

JC: ... and sort of better, more valuable products over our sense of privacy, and I think that ...

KS: Because we are old brain creatures, that's why.

MB: Is that going to change?

JC: Well, here's what I think will actually happen: I think that we as a community at some point will realize the value of our data and probably figure out ways to make that cost something for companies. In general right now, a lot of companies are getting a free ride based on the fact that people don't realize that the data they're giving them is substantially improving the company.

KS: Yes. Years ago, Steve Case gave a presentation talking about how they were using all the data they collected from their users — it was in front of advertisers — and I got up and said, "I would like $40, please, because you use my data and you're selling it back to them."

JC: Right.

KS: He didn’t ever give me the $40.

JC: Right. But I think that's up to activists, and there will be a set of businesses that will make that clear, but it is really hard to pull off right now.

KS: Yeah, no, people like these things. So, every week, our listeners send in their questions, comments and complaints about tech topics. You can do that by tweeting us at #TooEmbarrassed. This week we asked our listeners for their questions about artificial intelligence. Mark, do you want to read the first question?

MB: Sure. The first one comes from Michael Anthony — his handle is @michamoore: "I keep hearing the internet won't exist in 10 years because of AI. What does this mean?" And then I think it’s some sort of an emoji face.

KS: No mouth.

MB: No mouth, emoji face, eyes agape.

JC: I have no idea. [laughter]

KS: That it won't be the internet, we won't go to places to search for things, everything will be intelligent.

JC: My bet is that we will be even more connected 10 years from now than we are now and that the ...

KS: But will it be via internet or something else?

JC: It will be some network of networks. It will be over some network, and I'm hoping that it will continue to be a public good and it won't be owned by a single corporation, but to a certain extent that's a decision that we as consumers make.

KS: But it won't be text based, right? It will be voice based. How do you look at it?

JC: But text is really good! Text is really good at a lot of things, and so until we're able to implant memories or ideas into other people's brains, text is just so efficient and quick and compressed.

KS: Uh huh, so the internets are here to stay?

JC: Well, at least the text. I bet you text will be with us for a long, long time.

MB: And you actually didn’t mention augmented reality at all, which is like ... it’s an AI problem, but I mean are you bullish on the future in which augmented reality will be everywhere?

JC: I think that augmented reality is really, really hard to pull off.

AI makes the internet stronger and tighter.

KS: Requires a device.

JC: Requires a device, and the math around it is really, really hard. It’s gonna take a long time. I think that there are very good questions around whether or not VR is gonna make people sick.

KS: Oh, I think VR is here to stay.

JC: Well ...

KS: Porn ... also, movies.

JC: Right, and so I think certainly those are gonna be key parts of our experience, but I think it’s gonna be a ways out there. But going back to the question, will the internet be replaced by AI? I actually think AI makes the internet stronger and tighter because as people ...

KS: But you think it’s text based or voice based?

JC: Well, it will be a bunch of things. The internet initially was just text, but it’s text and images and video and a bunch of things.

KS: Okay, next one, Mark?

MB: Sure, this one's from Evan C. Sharp, @evancsharp: "What are the fears, if any, of AI that are held by people moving the technology forward?"

JC: That we'll screw it up. I mean there are a whole set of really important decisions both around policy and ethics and how to build the system.

KS: So what do we need to not screw it up in policy?

JC: We need technical people to talk to government people to talk to business people in clearer ways right now. If you were to talk to the average CS professor, I think he or she would say that there's lots of low-hanging fruit, and lots of policy ways in which you have to be thinking about it, and also, particularly, there’s a lot of magical thinking on the part of policy guys as they think about AI, in part because they don’t have a really good understanding of what it can and can’t do right now.

KS: Meaning, what’s their magical thinking?

JC: I mean I think there's lots of fear that, oh, well, one day this will ... you know, you can’t have a world where people are far more concerned about the robots taking over than what to do when, like, part of the electrical utility grid is off by some percentage and does that cause other problems later on. And I think those problems, those are actually the interesting problems going on right now.

MB: Well, I know the people inside Google do have debates about the robots taking over, and they sold a company, Boston Dynamics, the humanoid robots, and that was part of the issue. That's a separate issue, but ...

JC: But I think that's a bigger issue around like weaponizing.

MB: Yeah, they were military robots, right.

JC: Yeah, so I think weaponizing AI is like a really good question that's incredibly unclear right now because on the one hand you'd say gosh ...

KS: What a good choice, it works well, right?

JC: Well, no, you'd say it’s terrifying.

KS: Because there's no human interaction. I think that was the movie "War Games." But Matthew Broderick saved the world! With a simple game of tic tac toe.

JC: Yes, that's all we need.

KS: A game of tic tac toe and then the computers — you know what would really happen? They'd play tic tac toe and say now I'm gonna kill you anyway, thank you for the lesson. [laughter] Okay, next question is from Mishra Naveen, @urstrulymishra: "Do we need a definition of intelligence or can we abstract it to the Turing test? Should we ask the neurobiologists?"

JC: So you guys know what the Turing test is, right?

KS: Yes.

JC: No. There are so many different types of intelligence, and there are so many different ways of thinking about what is smart or not smart that I think that for CS folks to try to go after a human definition of general intelligence sort of misses the point in the same way that the Wright brothers, if they built a bird, they would have missed the point of aircraft.

KS: Right, very good. I'm gonna just leave it at that. So we should not ask the neurobiologists. Go ahead.

MB: Um, sure, the next one is from Julia Clavien, @juliaclavie. She sounds much smarter than I am because her question is: "Do you anticipate AI-fueled analytics progress will lead to causal determinism being generally accepted? If so, how soon?"

JC: Let’s think about that. So, causal determinism meaning that we now will think that the world is not random, I guess, right?

KS: Patterns.

JC: And I think what we'll actually end up seeing is that there are so many different inputs to making a decision that we'll actually have a better appreciation of how random the world actually is. Remember, a core part of machine learning is a bunch of probabilities, and that these probabilities are gonna shift constantly as we try to improve the algorithms.

KS: And as we put in more data. Interesting. Alright the next one, from Thomas, @dcrck: "How will AI change media and broadcasting?" I'd like to know. I'd like to have an AI doing this.

MB: When are we gonna be putting up jobs?

KS: I can’t wait.

JC: No, it’s gonna make you guys all the more important. I think that one way to think about AIs is that they're really, really good at making certain things more efficient, so ...

KS: So give us an example.

JC: So the moment that you can decide exactly the right way to compress or summarize this podcast that we've just done, that there will be a whole set of instructions around doing it, and there will be some way of measuring whether or not this actually communicates the highlights of the talk or whatever. And once that happens, then you let the machines take over and sort of the research that you need to do, parts of that will be summarized and easier to do and annotated.


KS: Yeah, too much of journalism is anecdotal, and it’s usually false — it’s often false.

JC: Yeah, and like the process of going around and talking to people, you'll have more time to have smarter conversations with people to figure out what’s actually going on.

MB: But there's also a fear ... I mean you could say that machines have taken over with the Facebook newsfeed algorithm, and there's certainly a fear in media that Facebook, that the newsfeed is so powerful.

JC: I think we as consumers and people are constantly looking for something real, and so we're always on the search. And so the moment that Facebook no longer feels like it’s giving us real and interesting results, we will move onto something else. And I think that that search for the real really means that search for the human, and I think that's constantly dynamic.

KS: All right, last one.

MB: Optimistic. All right, this one's from Michael, @mindhealer111, not really a question but more of a point.

KS: It’s a good point.

MB: It’s a really good point! "Artificial sounds so fake, they should change it to Android Intelligence." I know a company that would be happy about that, and then pour more into my phone for upgrades. [laughter]

KS: Why is it called artificial intelligence?

JC: You know it really is ...

KS: I know who named it.

JC: You know the history, right, but a lot of it is like computational statistics.

KS: What would you call it if it wasn't artificial intelligence?

JC: I bet one day a lot of it will be statistics, and a lot of it will be called programming.

KS: Okay, but that's not really a cool name.

JC: I just mean it’s gonna be mainstreamed.

KS: How about "big brain" or something like that? [laughter]

MB: Yeah, no, no, that's true — Google had like the Brain Team and they dropped that.

JC: I think there's this idea of machine intelligence that I think is a better term, though, because it’s actually in that case driven by computation.

MB: No, I mean like, AI has a branding problem, right? I mean I think ... I keep going back to Google because I can’t help it, but the Go game in Seoul, Korea, was a pretty good marketing success, because they were able to say here's this really fascinating way of applying AI. They beat the world champion in Go, which had never been done before. It’s sort of benign, like they're not destroying ...

JC: Right, and I think in some ways the right way to think about it is less the fact that they beat a great Go player, as much as that it taught all the people that know how to play Go a whole new set of strategies that no one had ever thought about before. And I think that's the exciting thing, and I think that's what machines are gonna be really good at.

KS: I think we should call it SkyNet … no, [laughs] but in a lot of ways it also replaces jobs. I mean I've been talking about this a lot, the idea, we've had people coming on talking about how, once these systems get into place, you don’t need radiologists — they can look at 500 or 600 photos of someone's brain and figure out what the diagnosis is with much more accuracy than radiologists can. You can think of job after job after job classification that gets replaced, and I know what tech people’s answer is: We'll have new opportunities, just like when the car came and there were no more horses, or something like that. But it seems like in this case, they can do a lot of the jobs, and there are no replacements necessarily for many people, because a lot of it is repetitive, and between robots and artificial intelligence, no one's gonna be working.

JC: First, let me make a couple observations.

KS: Okay, please do.

JC: First, if you were to talk to my grandfather about what I do right now, he'd look at me and say wait, so you're not working in some field, and he'd say, you know you're just playing. It sounds like you spend your whole day playing!

KS: Yeah, yeah.

JC: And so our notions of what work is radically shift generation to generation — first observation. Second observation: Economists don’t know how to apply machine learning and when it replaces people and when it does not replace people. It does replace a whole set of tasks — there’s a whole set of tasks that become easier in your day, or fundamentally different, and the implications of that are, in general, positive. The bigger question around labor displacement, on the other hand, that is a really real concern.

KS: It is indeed.

JC: Right, and I think that, we, both as a country and as a society, like that is the actually interesting policy question sitting before us — what do we do about that and how do we think about it?

KS: How do we train people to do something else?

JC: Sort of how do we train people to do something else, or how do we think about that widespread displacement in sort of ... if you thought about the promise of NAFTA, I think people, economists all over the world, would now say that they underestimated the cost on people and how hard it was going to be for generations. Long-term, two generations from now, we’ll be better off because of NAFTA.

MB: Do you think that people in tech are thinking about this problem enough?

JC: I think that policy people, I think that Google, I think that big companies, tech companies are all seriously thinking about this because I think that ... like that question, which is not will the robots take over, but rather ...

KS: What will happen to society when nobody has jobs?

JC: Yeah, what is the industrial policy for the United States? That is a serious question.

MB: That's the real takeaway.

KS: You're already seeing the effects of NAFTA on the election right now.

MB: What's Trump's policy on AI?

JC: That's right, that's right.

KS: Does he even know what it is? I will be asking the Trump campaign. I will be asking all the candidates. I don’t think they probably do. I can’t imagine they're thinking about it for one second, but maybe they are. No comment — it’s okay, you don’t have to, I'll do it. They aren't thinking about it in any way whatsoever, I'm pretty certain. We'll see about that, but I'll be asking the questions. But anyway, thank you so much for coming. This has been fascinating. It’s a really fascinating area and something that's gonna be … I think the next wave of computing is really around a lot of this stuff.

JC: Yeah, and it’s hard, and no one knows what they're doing.

KS: Well, I like this idea. I'm getting into VCing now. If that's the criteria, I'm perfectly positioned to do it well! Anyway, thank you so much for coming in. Thank you, Mark.

This article originally appeared on Recode.net.
