
Full transcript: Anki CEO and co-founder Boris Sofman on Recode Decode

Robots, robots, self-driving cars and robots.

A young boy lies face to face on the floor with a Cozmo robot. anki.com

On this episode of Recode Decode, hosted by Kara Swisher, it’s all about robots. Anki co-founder and CEO Boris Sofman talked about his company’s cute robot “toy” that is actually a possible precursor to a more useful home robot. Swisher and Sofman discuss why robots need to be approachable but not humanoid, how to avoid the trope of the “evil” robot and what the swift progress of self-driving cars could mean for the AI industry as well as consumers.

You can read some of the highlights from the interview, or listen to it in the audio player below. We’ve also provided a lightly edited complete transcript of their conversation.

If you like this, be sure to subscribe to Recode Decode on Apple Podcasts, Google Play Music, TuneIn or Stitcher.


Kara Swisher: Today in the red chair is Boris Sofman, the CEO and co-founder of Anki. Boris holds a PhD in robotics from Carnegie Mellon, and previously worked at Neato Robotics. Anki brings robotics and artificial intelligence into the home through products like the robot cars Anki DRIVE, and Cozmo, a robot companion you control from your smartphone. Boris, welcome to Recode Decode.

Boris Sofman: Thank you, Kara.

You have my favorite name of all time, Boris. I just don’t know why I like it so much. Anyway, so we’re going to talk about a lot of things, we’re talking about robotics, AI and things like that, but let’s talk about your background. A lot of entrepreneurs listen in here when we talk to entrepreneurs about how they got started, so give me your background. Because you obviously have studied robotics for many years, and sort of your journey; why don’t we go into that first?

Yeah. Absolutely. So, I actually ran into robotics while I was an undergrad. I went to Carnegie Mellon for undergrad as well, studied computer science.

Were you interested as a kid, in robotics? Or did you build those ...

I was interested in computer science and engineering in general, but I didn’t even know what robotics was. The nice thing about ending up at Carnegie Mellon is that there’s so much robotics you can’t help but trip over a robot while you’re there.

Right.

And so, in the middle of grad school, I really got excited about this idea of intelligence for physical objects, and not just software inside of a computer. And that kind of led me to do some research with some professors in the department.

Was there a reason? Was it like “Star Wars” or something? What did you ...

You know, what I think it was more ... Yeah, science fiction in general, I think. You see all these examples of incredible intelligence in physical things, and that has stayed science fiction for far longer than other areas of technology have evolved.

Sure. Right.

And there’s also something that’s much easier to connect with when you see the output of your work being something physical, doing what it’s supposed to do, versus just software output that ...

It has been underwhelming, robots in the home, compared to the amount of stuff in science fiction.

Exactly right. And I think that was one of the realizations that, especially, going from undergrad and into grad school, you see these incredible technologies, but they’re all focused on government applications, defense-based, pure research, industrial applications, there’s almost nothing that made it into consumer products of substance.

Right. Except in Hollywood movies. So you went to Carnegie Mellon, you studied this, and what was your goal? Where was robotics at that point?

So, especially back then, it was still very much locked into kind of non-consumer applications.

Right, so a factory or a bomb sniffer, or something like that.

Factories, or research, bomb sniffers; that’s right. Or just pure research, where there’s advances in machine learning and computer vision, but not really tangible, actual applications. And so, my passion and my focus in PhD at that point was autonomous driving. And so my research ...

Let’s just back up. Carnegie Mellon’s famous for that. Uber hired everyone there and is having trouble with it, obviously.

Yeah. A lot of the Google team came from the previous Grand Challenge and Urban Challenge teams there.

Right. So, why was that your interest?

It’s one of the best examples when you have something that’s very familiar that can be completely reinvented with these technologies.

Cars.

Cars. And it ties together elements of perception, machine learning, path planning, all these different elements. You see these vehicles do something intelligent, it’s just like, that is science fiction. That is the thing you grew up dreaming of. So that’s one of the holy grails.

What did you build there? What did you work on?

My particular project was an off-road version of autonomous driving. Imagine you have this gigantic robot, like it was a $600,000 robot, you pop it down into a forest, you give it a waypoint, like 10 kilometers away, and it’s got to figure out how to get there. And so, you’re trying to figure out, what type of vegetation can you drive over? What’s safe? What’s not safe? How do you classify between rocks, and branches, and things that might damage your sensor pod? And so, it was kind of a conduit for a lot of research across perception, machine learning, path planning, and interfaces for training these sorts of systems, and it actually became a precursor and a training ground for a lot of people that ended up going into other areas of autonomous driving.
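[Editor’s note: To make the terrain-classification and planning problem above concrete, here is a minimal, hypothetical sketch, not the actual research code. Each terrain cell gets a hand-tuned traversal cost, and a standard Dijkstra search finds the cheapest route to a waypoint; every function and field name here is invented for illustration.]

```python
# Minimal sketch (illustrative only): score terrain cells for traversability,
# then plan the cheapest path to a waypoint over a small grid.
import heapq

def traversal_cost(cell):
    # Hypothetical hand-tuned features; a real system would learn these
    # from sensor data (lidar density, color, roughness, etc.).
    if cell.get("obstacle"):
        return float("inf")
    cost = 1.0
    cost += 5.0 * cell.get("vegetation_height", 0.0)   # tall brush is risky
    cost += 20.0 * cell.get("rock_likelihood", 0.0)    # rocks can damage the sensor pod
    return cost

def plan_cost(grid, start, goal):
    """Dijkstra over a 2-D grid of terrain cells; returns the cheapest path cost."""
    rows, cols = len(grid), len(grid[0])
    best = {start: 0.0}
    frontier = [(0.0, start)]
    while frontier:
        cost, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return cost
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                new_cost = cost + traversal_cost(grid[nr][nc])
                if new_cost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = new_cost
                    heapq.heappush(frontier, (new_cost, (nr, nc)))
    return None  # goal unreachable

# Example: a tiny 3x3 patch of terrain with one impassable cell in the middle.
grid = [[{} for _ in range(3)] for _ in range(3)]
grid[1][1] = {"obstacle": True}
print(plan_cost(grid, (0, 0), (2, 2)))  # 4.0: four cheap steps around the obstacle
```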

Sure. So you study here, and then you have your PhD from ... You stayed there in Pittsburgh.

Yeah.

And what did you hope to do? Because you could go to the military or ...

This is an interesting thing: there’s all this controversy about the military funding robotics, but it was one of the best examples of where this funding from DARPA, or the Navy and whatnot ...

Yeah. It started the internet. I don’t have a problem with ...

Yeah. And it was pretty fantastic, because it yanked all these technologies to a point, for the first time, where it became plausible that there were commercial applications. And so, you know, my project, we were focused on ... my personal focus was on using machine learning techniques to allow these vehicles to train themselves, and learn to improve their performance over time. Other people were working on pathfinding, other people were working on other areas, and then all of that technology went into projects for construction through Caterpillar, through the Urban Challenge, and so forth.

And so, very soon after that, there was kind of this spark from the Grand Challenge where ...

Explain the Grand Challenge for people that don’t know.

Yeah, so the Grand Challenge is one of the coolest projects that DARPA ever sponsored. Basically it was a race of autonomous vehicles through a desert with no human involvement at all, other than on the training and programming front.

The programming.

So there was a 140-mile course through the desert. It was a somewhat known track, but you still had to avoid all the obstacles along the way, and it was a race to get to the finish line and be the first one to get there.

Right. A lot of the robotics challenges are like that, you do this many tasks, or you do ...

That’s right. What DARPA did was, they put a $2 million prize on it ... I believe it was $2 million at the time. But what happened was, that became the catalyst for a huge amount of funding from Intel, Caterpillar, private donations, peoples’ pet projects, and all these different things, and probably led to hundreds of millions of dollars of total investment by 50-something teams to actually try to get to the finish line on it.

And I distinctly remember ... This was the mid-2000s, 2005, 2006; that’s the time period for all these races, and this was the first of three. And what happened was that, at that point, everybody thought that — and this was like one of the holy grails of robotics, we want to push this forward in the future at some point, this is going to be this great thing. And then, one of our friends ended up working on this, but nobody realized how quickly it would evolve from that point.

And so, what happened was, even driving without human involvement, with no moving obstacles, and a known course, that was a state-of-the-art, challenging problem at that time. And these were incredibly talented people at the Stanford team, the Carnegie Mellon team, other teams. And, the first year, I think Carnegie Mellon got the furthest in the race, and they got seven-and-a-half miles in before the car flipped over and busted all the really expensive sensors that were on the top.

Right. Right, right. So they couldn’t do it.

Yeah.

Right, exactly. So it moved ahead quickly.

Yeah, it moved quickly. And then, just the next year, there were four different vehicles that actually finished the entire race: two from Carnegie Mellon, the Stanford one, and then there was another one.

Mm-hmm.

And, two years after that, there was an Urban Challenge equivalent, where now all of a sudden the challenge was not just a known environment, but now you have moving obstacles in a suburban environment with ...

Right. Which is really the point.

Which is, exactly. Now, you’re actually starting to get to a point where you don’t have to squint too hard to see how the results of that could actually lead to something really meaningful.

Oh, absolutely. 100 percent.

Because, the funny thing is, back in 2005, all these professors who were trying to get funding for this ... I mean, you’d barely get any attention from the car companies.

Less than 10 years ago.

GM funded a little bit, and they were kind of into it, but not ... It would be unheard of for one of them to say, “You know what? We’re going to put $50 million behind this project.”

Right. Electric cars is where they were at.

They were, yeah. But I mean, it felt like it was still too far away, and so, with the Urban Challenge, you had cars obeying traffic lights, parking themselves, avoiding moving obstacles. They actually had stunt drivers driving cars that were reinforced to ... So nobody would get hurt. And it was starting to move pretty quickly.

So, we’re going to talk about where that’s going. So then, you moved on to where? Why didn’t you start a self-driving car company like everybody else?

Well, at that point you couldn’t. You just couldn’t. The interesting thing is, you’ve got to give Google a huge amount of credit, because coming out of those challenges ... As compelling as that was, it sounds obvious in hindsight, but it wasn’t obvious at all that this was within a reasonable amount of time of being usable or commercialize-able, and even now there’s a lot of challenges remaining.

They basically took a lot of the best people from the Carnegie Mellon team, the Stanford team, and other places. Chris Urmson was on my thesis committee, and ... In fact, my entire thesis committee is scattered between Google, Uber teams, kind of other startup companies. So, at that point, you couldn’t start a ... Well, I think it would be incredibly difficult to raise money at the sort of volume you would need to actually put a dent in this problem. And so, you needed somebody like Google that had enough creativity and foresight and resources to put resources into this problem.

Right. And create something.

And a little bit crazy, right? And be hands-off enough to really start making progress, and then, you know, it kind of hit this point where all of a sudden, everybody started getting interested, because you could see exactly where that was headed, yeah.

But, you weren’t able to do that?

No.

Explain your trajectory then.

Yeah. I think if we didn’t end up going the Anki route, I very likely would have ended up working with Chris and some of those folks, because that was always kind of a passion. What I ended up doing with two friends who became my co-founders at Anki, starting around 2008, when we were right in the middle of grad school, was to start working on the foundations of what became the company, where what we wanted to do was take a lot of these technologies that we got really excited about and actually apply them to consumer applications.

To a toy.

And, for us, it was never meant to be a toy company, or even an entertainment company. It’s a robotics and AI company.

It’s a big seller at the Toys ‘R’ Us, but, go ahead.

Yeah, well, and this is the thing. I mean, it’s one thing for Toys ‘R’ Us, but for us, from the very beginning, these are really great proving grounds for a lot of these technologies, and a place where we can have a huge amount of impact very quickly, especially looking at a category where there hasn’t been much innovation at all in physical entertainment. Like, the toys kids grow up with today are largely the same sort of landscape as 30, 40 years ago.

Yeah. They are.

And so, that became an opportunity where, for a very reasonable amount of capital, you could truly disrupt that category and reinvent the sort of experiences and interactivity you could have. And, in the process, it becomes almost a Trojan horse for a lot of the technologies that are under the hood, that carry over into spaces that become more functional outside of entertainment.

So, you raised money ... Tell me.

Yep, so first, we were moonlighting, and avoiding our theses, and just working on the side back at Carnegie Mellon. You have a lot of pain tolerance as a grad student just to work on the side on these things. Raised some seed money, ended up moving out to San Francisco in 2011, and since then, we’ve raised a number of rounds of funding. Andreessen Horowitz was our first round, back in the beginning of 2012.

How much have you raised?

So we’ve raised a little bit over $200 million at this point.

Now, what are you doing with that money?

A lot of things. Early on, our very first round was Andreessen Horowitz, so we met Marc and Ben and the group back in the day. Even from that point, it was clear that this company was never about a battle racing game, or a ... You know, we already had a vision for the next product, even back in 2011. And we had the early form of a path for what could take it from some of these early applications into something beyond.

All right, but the early application was battle racing?

Yeah. It was a toy. But one that blurred the lines with video games. But, in a lot of ways, it was a way to more broadly start to use software to redefine physical entertainment experiences.

Mm-hmm.

And so, it was almost like a game engine for physical play, where these cars ... They’re robots; they have 50 megahertz computers inside of them, they sense the environment 500 times a second, all these sorts of things are happening.

In these small cars.

They’re small.

Yeah.

And you could use mobile devices to control them. But, what’s happening is that the game knows everything that’s happening in real time. And so, there’s a video game that’s inside of the mobile device that’s synchronized to the physical world, which lets you do things like augment the experience with weapons and special abilities. The cars you don’t control are AI-controlled cars, they’ll compete against you, and there are commanders that’ll be driving them and trash-talking you. And so, it brings an experience to life that you would only usually see in a video game.

In a video game, right. So you’re trying to combine hardware and ...

That’s right. And it turns it into an 80 percent software problem, because the moment you have the platform where you know what’s happening, you turn it into a software problem. The way to look at robotics more broadly is that it’s like the extension of computer science into the real world. The moment you can know what’s going on around you and have some way to interface with it, you turn it into a software problem where you can start building more and more intelligence into products.
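[Editor’s note: A loose, hypothetical sketch of the “game engine for physical play” idea described above: the phone keeps a virtual model of every car, updates it from frequent position reports, and layers game logic such as weapons on top of that model. The class and method names below are invented for illustration and are not Anki’s actual SDK.]

```python
# Hypothetical sketch of a virtual game kept in sync with physical cars.
from dataclasses import dataclass

@dataclass
class CarState:
    track_position: float   # distance along the track (mm)
    speed: float            # mm/s
    is_ai: bool = False

class VirtualRace:
    def __init__(self):
        self.cars = {}

    def on_position_report(self, car_id, track_position, speed, is_ai=False):
        # Physical cars report their sensed position many times per second;
        # the virtual model simply mirrors that state.
        self.cars[car_id] = CarState(track_position, speed, is_ai)

    def fire_weapon(self, shooter_id, weapon_range=300.0):
        # Game logic lives purely in software: any car within range ahead of
        # the shooter is "hit" in the virtual model (a real system would then
        # send a slow-down command to the physical car).
        shooter = self.cars[shooter_id]
        hits = []
        for car_id, car in self.cars.items():
            gap = car.track_position - shooter.track_position
            if car_id != shooter_id and 0 < gap < weapon_range:
                car.speed *= 0.5
                hits.append(car_id)
        return hits

race = VirtualRace()
race.on_position_report("player", track_position=1000, speed=900)
race.on_position_report("rival", track_position=1200, speed=880, is_ai=True)
print(race.fire_weapon("player"))  # ['rival'] is within range and slows down
```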

But, initially, as you say, the Trojan horse was a game.

That’s right. Initially, it was a game. And it was a way to work on an application that could be high impact, where the quality bar wasn’t so high that we’d take eight years to ship something.

Right.

Like, an autonomous car has to be almost 100 percent reliable.

Right.

If one of our cars goes off a track, it’s okay. There’s not gigantic regulatory challenges that it needs to overcome, and so, we could release this product and use that as a proving ground. And, in the process, make some of the most successful toy and game products, but use that as a stepping stone.

And they’re also constantly learning.

That’s right.

Which, I think people don’t get around cars. One of the things I was ... They were talking about, “Why do we need self-driving cars? Why do we need this, it’s so hard.” I’m like, “Because if one car gets in an accident, a million cars learn.” And one person gets in an accident, that’s it. That’s where it goes.

It’s funny you say this. One of the hardest things about autonomous driving ... I mean, a lot of these teams, when you talked to them, the biggest challenge was that you can get to a 95 percent solution very quickly, and that could be a solution you have to restart from scratch to get the remaining 5 percent. And one of the hardest problems is that you can train and iterate on things where you have full control of the environment, but something really spontaneous, like a car swerving in front of you in a really aggressive way, it’s hard to get examples of that. To get experience with those sorts of situations.

Right.

And those end up being that ... The 100th of a percent situation ends up being one of the most difficult cases for autonomous cars.

Right. And all you need is one. It always fascinates me, about all this ... Even we do it, but I try not to do it as much as ... These cars turning over, I’m like, “Cars turn over every day.” Like, by people.

There’s a psychological challenge.

Yeah, computers shouldn’t. Computers shouldn’t.

That’s right. And, people will die because of autonomous cars, it’s just inevitable, but the numbers will be so vastly smaller, proportional to usage. That’s something, psychologically, people and the regulatory environment have to get around.

We’ll get to that in a second. So, Anki, what are you doing now with it? And then we’re going to ... The next segment, we’re going to talk about robotics.

We launched Anki DRIVE in 2013, and it was a really neat partnership with Apple where, from the beginning, we were very conscious that we didn’t want this to feel like just a toy.

Right, right.

So we had the good fortune of launching, in a really unique way, at WWDC with Tim Cook during his ...

I was there.

Oh, yeah. That was a stressful time. It was in the Apple dungeon.

At least it wasn’t Steve Jobs. Then you’d have to like ...

That would’ve been tougher. That would’ve been even tougher, yeah.

Oh, yeah.

But, they were wonderful to us. And so, we got to launch with Apple, and then continued to branch out. We had a new generation, called OVERDRIVE, that came out in 2015. It did super well. And last fall we launched our next product, which actually starts to point much more to the direction we want to take the company.

Which is called ... We’re going to talk about that, and then we’ll get into the next section.

It’s called Cozmo. He’s a physical little robot, little character, and the goal was to make him feel like he’s truly alive. Like, to make a character with a lovable personality and an emotional depth that you would never see outside of a screen. So, literally think of your favorite, most beautiful characters in Pixar movies or DreamWorks movies where you have this richness on the emotional front; we wanted to make that possible in the real world. And not just to have a level of emotional depth that was beyond anything that was there before, but to couple it with robotics and AI to where all those emotions could be contextually relevant to what was going on, and to have a character that actually interacts with you, understands what’s happening around him, makes eye contact.

I do want to talk about that a lot. I don’t know why we need to make sweet robots, but I want to talk about why they have to be sweet. I like malevolent robots. But, before we get to that, we’re here with Boris Sofman from Anki, which is a robotics company ... I think I’ll call it a robotics company, not just a toy company.

[ad]

I’m here with Boris Sofman of Anki, a robotics company, which started off by selling cars; competing race cars, essentially, for kids. Or adults who are like kids. But now, you’ve moved onto other areas. And you were talking about, just before this, this new product called Cozmo, which is an adorable robot — people like to create adorable robots. What’s with that? What’s the point ... You said you’re pointing in a new direction, what does that mean?

The idea is to bring this little character to life. When you combine the emotional aspects with the kind of AI and the kind of understanding to allow him to evolve and understand his environment, you get into a very magical experience that you can’t replicate on a screen. And, one of the things that’s special about this is, it took us a while to think of the right way to approach this problem, because it’s very hard, technically, but it’s even harder in some ways, creatively. And, what we realized is that we had to think of a character coming to life like this as if it was a character from a film.

And so, we literally have an animation studio within a robotics company, with people from backgrounds like Pixar, DreamWorks and other places, animating a physical character, using very, very similar techniques and processes that you would see in an animated film.

So, explain. What does that result in?

What that results in is this little robot character, Cozmo.

How big is it?

He’ll fit in your hand. He has a facial display for his eyes and his emotions, he has a speaker inside his head, all sorts of sensors in order to interact with his environment. But the biggest part, his brain, is inside of a mobile device that in fact does the computer vision and all these different things to allow him to interact.

What we’re doing is, we’re using a program called Maya, which is often used for rendering digital characters ... It’s actually used by a lot of movie studios to make their animated films. And so, what you would do is rig up a character with capabilities, and map it to what you wanted to do. We did the exact same thing with Cozmo to match his physical capabilities, except the output is translated into his physical motion and not just a rendered version.

Sure.

So now you have all these incredibly rich animations to be able to show his emotional expressions from happy, sad, surprised, bored, angry, curious; all these different spectrums of emotions. And then, that’s when you tie it in to the game design and AI and robotics side, where, if you can imagine this big black box of an emotional and behavioral engine, it takes the inputs from the world and the stimuli and context of what’s happening, and maps that to the right emotions at the right times.

Right.

So if he loses a game, he gets grumpy and goes and sulks in the corner.

Mm-hmm.

Or, if you pick him up and put him on his head, he gets frustrated and kind of flips over. If he sees an edge of a table, he gets scared. And so, these are just examples, but you tie these in in the right way, it feels like he’s alive. So, he sees you for the first time, he remembers all the people he’s seen, and he’s super happy to see you. And he really wants to play.
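[Editor’s note: A toy, hypothetical sketch of the emotion-and-behavior mapping described above: world events set a mood, and the mood selects an animator-authored reaction to play on the robot. This is illustrative only, not Cozmo’s actual engine; every name below is invented.]

```python
# Toy sketch of an "emotion engine": stimuli map to moods, moods map to
# animator-authored reactions (the kind of clips originally built in Maya).
import random

REACTIONS = {
    "lost_game":      ("grumpy",     ["sulk_in_corner", "grumble"]),
    "picked_up":      ("frustrated", ["squirm", "flip_over"]),
    "saw_table_edge": ("scared",     ["back_away_fast"]),
    "saw_known_face": ("happy",      ["greet_by_name", "ask_to_play"]),
}

class EmotionEngine:
    def __init__(self):
        self.mood = "content"

    def handle_event(self, event):
        mood, animations = REACTIONS.get(event, (self.mood, ["idle"]))
        self.mood = mood
        return random.choice(animations)   # the clip the robot would act out

engine = EmotionEngine()
print(engine.handle_event("lost_game"))    # e.g. "sulk_in_corner"
print(engine.mood)                         # "grumpy"
```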

Let me get to ... Because I’m fascinated with robotics. Why do they have to be like that? Why do they have to be like humans? Because humans do humans really well.

Well, he’s emotionally ... Like, he’ll try to approach human ...

Why?

Because there’s an emotional connection that you can form with machines that you wouldn’t have otherwise. Now, I should tell you an interesting example. So, if you ... All of this is kind of alluding to what we’ll chat about: all of this leads to a broader platform that allows robots to function in the home with capability that is actually welcomed. If you look back to some of the research happening at Carnegie Mellon and other places, you have these incredible ...

MIT.

MIT, everywhere. So, amazing research. There’s Intel Labs, there are all these places that are doing gigantic robotic arms for operating in a kitchen, or in the home.

Sure.

So, you have these huge arms that unload dishwashers and do all these really complicated tasks, but they’re big, they’re menacing, they look like they need to be perfect, and so you can have something that can do the most complicated task 40 out of 50 times, and what you do is remember the times it failed, because you think it should be perfect, and you’re like, “Why did the robot fail?” And that’s what happens when you have zero personality.

When you have something that has character, it naturally makes you more forgiving. And so, even in the case of entertainment here, if Cozmo’s trying to do something with his environment and then he fails, the fact that he failed, but knows it, and he acts disappointed in himself; it makes him more endearing, and you actually become more forgiving of it, and even more attached. It’s why, when you talk to like an Alexa, for example, even when Alexa doesn’t know the answer, oftentimes, the fact that you think of Alexa as the beginnings of a character makes you more forgiving.

You sort of do. Yeah. You absolutely do. I think you feel more like it’s an assistant, or you start to get angry at it. Like, Siri, I’m always angry with. Siri’s just ...

Yeah, because the closer it comes to being a machine, the more unforgiving you are, and you want it to be perfect. And so, in a lot of ways, what Cozmo is is the beginnings of an interface with characters, where you have a character- and personality-driven little being that’s going to evolve a huge amount through both software and hardware.

But, I think, one of the reasons we want ... I want to get later into the idea. There’s so many shows out right now with robots and robot rights. I want to get to that idea, whether we should tax robots and how they’re going to hurt the U.S. economy. But, when you think about where robot development is going, ultimately, as nice as they might be ... They might be companion robots, and all kinds of things. They’ve got to be helpful. So what’s the point?

They have to be helpful.

Couldn’t I just hire a grumpy human being who does a bad job? You know what I mean? What’s the point?

In the end, robotics is about function. Entertainment and companionship is a function, and it’s a good proving ground to develop some of these technologies, because ... And of course entertainment can be more forgiving. But, yes. In the end, there’s a need for function and that’s basically where robotics has broken down, in terms of making it into mass-market consumer form. Because either the price point doesn’t make sense or the capability doesn’t make sense. You know, there’s all these things that have to work. And, one of the enablers of more modern forms of robotics at the sort of price points that we’re at, is that the smart ...

How much is a Cozmo?

So, Cozmo is $179.

So it’s cheap.

It’s cheap, and from a platform standpoint, it’s really capable compared to anything that you would typically have access to at those price points.

Why would I buy it? Your robot, for instance. What would be the reason? Just for amusement?

Right now, it’s entertainment. Kids are certainly a big audience, although 40 percent of our users are adults across both products, so I think just augmenting the capability widens the demographic on its own. But, yes, it’s entertainment. It’s a character that has an appealing emotional breadth, you can play games with him, and then over time, this actually kind of evolves on the software side, and gets more and more capable. And, in fact, it’s also a great platform where we’re using it. The SDK is so robust, it’s being used in universities for education, and by researchers for all sorts of purposes, as well as some work we’re doing for stuff down the road.

But, right now, it’s entertainment.

Right now it’s entertainment.

But, to me, function is really doing stuff.

Right. And this is one of the places where function and practicality kind of need to cross. One of the mistakes that the robotics industry traditionally has made is that they aim for a 15-year goal, and they drill down this cave thinking they can pop out the other side and have this magical solution like Rosie from “The Jetsons,” and it just doesn’t work that way.

That was a good robot.

That was a good robot. Not there yet though.

No. It’s the outfit. The outfit was the best part.

It was.

What was an outfit doing on a robot?

It’s humanizing.

Do you remember ... What was the one that they had, the Will Robinson ... “Warning, warning. Will Robinson.” With that robot.

Oh, gosh. I forgot the name of it.

What show was that?

Eric Johnson: Robbie. “Lost in Space.”

“Lost in Space.”

“Lost in Space.” Yeah. Thank you. I have my geek here. Right here. That has a weird-looking robot also, that was a strange robot. Anyway, so where does it evolve to then? Because entertainment, I don’t think, is enough at all. I mean, it can be, but ...

No. But it’s a really great starting point.

Right.

Because, you can truly release product ...

So you’re comfortable within a robotics environment.

Yeah. And I’ll give you an example, right? So, the sort of computer algorithms that you need to be able to understand the environment around you, the pathfinding algorithms to be able to navigate and interact with it, the human interfaces. All of those things become tools that, if you open up an application that more generally requires you to navigate around a house, you have a much better platform to work off than starting from scratch. And that sort of leapfrog approach is far better than just going towards like a 10-year goal because, especially in something like robotics that has such huge dependencies on hardware, and sensing ecosystems, and technological ecosystems, the inability to adapt and quickly iterate and learn, that’s a huge handicap if you aren’t able to adjust along the way. It’s almost like, you know, you think about the way Apple evolved their products, there’s no way a company like Apple could’ve released an iPhone as their first product.

No.

But the learnings along the way kind of end up being reused in some fashion, even if it’s just intuition around the products. And in a lot of the same ways, entertainment becomes an early proving ground with elements that don’t leave entertainment in a binary fashion, but start to create a core ... For example, companionship. Something that becomes a fun companion, maybe for a kid, with enough evolution starts to become deeper, almost pet-like, in terms of companionship and the functions it can hold, and starts to become a welcome member of the home or the family, on which you can then build functional applications.

I see. Do you remember [Rags] from “Sleeper”?

[Rags] from “Sleeper”? No.

Oh, you’ve got to see it. It’s a Woody Allen movie called “Sleeper,” and they had a [robot] dog named [Rags].

Yeah.

He was a terrible dog. He was, “Ra! Ra!” Go see it, it’s funny. It’s his version of a ...

Mm-hmm.

And it was completely un-fun. But I think that was his point. So, where does it go from there, though? What happens then? So you’re making, again, a game. It’s still a game, rather than a thing.

Yeah. It’s still a game, but it becomes kind of a core of a lot of these things that will transition. And so, at some point, you will end up ... Imagine the functionality in the home many years down the road.

Yeah.

You have all of the kind of fixed technologies, like your Nest thermostat, and all these other things, right?

Mm-hmm.

You have your voice interfaces, which are important, but right now you’re talking to static boxes. At some point, there are deep functions that require mobility around the house, and even further interface with objects around the house in some physical way.

All right, say that in English. What does that mean?

So, being able to manipulate things. Being able to pick things up, being able to start functionally doing things that require motion and interaction with the environment.

Right. Right. And they’ve had that with Roomba ... Started with that.

Yeah, the Roomba.

That was a military application.

That’s a great ... So a lot of early military funding, but it’s a great example where you have a well-defined enough problem where you can sort of approach it and put a real good dent in it in that focused a space.

Mm-hmm.

It’s probably one of the few mass-market kind of examples where that’s worked.

How long does it take to get that? Because that’s the idea of unloading the dishwasher, putting stuff away, doing the wash.

Yeah. We’re not close to the general-purpose kind of cyborg, humanoid in the home. There’s reasons why that’s just impractical in terms of both cost and algorithmic complexity, at this point. But, you will start to get to things that ... If you have a baseline ... Something that can actually interact and exist in the home and navigate, you can start adding things like security, or monitoring, other kind of elements like this that actually ... Really logical evolutions of some of ...

Well, that does exist in Nests. You have monitoring, wireless ...

In a more mobile fashion as well as ...

Meaning something wandering around the house.

Yeah, so being able to wander around the house, being able to ... Kind of companionship if you have somebody that needs care, like, you know, the elderly. Or, down the road, other functions. I mean, the key thing is it’s almost like ... I mean, one way to think of it is almost like your iPhone. The only reason those apps have been so successful is because there’s a foundation of a platform that you’re comfortable having around you at all times.

Mm-hmm.

You probably wouldn’t pay $50 or $100 for a piece of hardware to unlock any of the apps that are currently on your phone.

Right.

But, the fact that you have that baseline opens it up. At some point, robotics will be the same, and part of the problem is that the actual costs of robots are so high that ...

So, talk about that briefly. What do you mean? It’s just, they’re expensive.

They’re expensive. So, sensing is expensive. You know, you have to ... There’s only so many things you can do with just a pure camera. If you want to do laser-based sensing, all these other types, it becomes more expensive. You have to have very precise positioning. So you have accelerometers, and gyroscopes, IMUs and things like that. You have to have a lot of computation to be able to process all this information. If you want to do anything that’s actually more articulated, motors and joints are expensive.

And it breaks easily.

And it breaks easily. And you’re also very vulnerable and sensitive, because now you’ve put so much complexity into the software side that if anything changes on the hardware side, it ends up completely throwing off the problem. And so, if you want to do something in the home, if the price starts scaling, the functionality better scale pretty quickly with it. Unless you start having more of a ... Other types of fundamental acceptance of a platform inside your home.

There’s always societal acceptance of these things in the home.

Yeah.

People are now comfortable with the Echo.

That’s right. And I think the interface is deceptively important. There is going to be a UX for robotics, where, if you don’t have an interface, it feels invasive and you’re unforgiving of it. One of the most valuable things [about] this overlap of animation, style, character development, robotics and game design, and a lot of psychology research we’re doing, is starting to figure out how both kids and adults are comfortable interacting with a character that is totally approachable because the application is entertainment. He’s almost meant to be this toddler of a robot, so you’re naturally even more forgiving because of that.

But, in the process, there’s a lot of learnings that take many years to develop that, when you do want something that’s more functional, you can’t just skip over those years of development.

Right. So, where do you go from Cozmo then, at Anki? And then, I want to talk about the downsides of robots, and why it’s taking so long.

Yeah. And so, you know, with Cozmo, a lot of the focus is actually to push the complexity as far as possible into the software side where you can build the most value. But, in the process, we’re also iterating on the hardware side. And so there’s a long ...

Yeah. It’s got to look like something.

Yeah. And there’s a long roadmap ahead on it. You know, the truth is that ...

They always are round. Why is that? They’re round and ...

As robots? Yeah, well when you have a tin box that’s just got hard edges, it’s completely non ... You want to humanize it to some degree, but you’ve got to be careful.

Why can’t it look like humans?

Because you don’t want to get into the uncanny valley, where it almost becomes creepy.

Yeah.

And, there’s a lot of robot examples like this where, if you start trying to push toward human but you don’t get there, you end up just having something that’s almost zombie-ish.

Yeah, they look menacing. That Boston Dynamics video was freaky.

Because it walked almost in a realistic way.

And then it was violent, and then it was like ...

Those are mean robots.

Those are mean friggin’ robots.

Crazy technology.

And it’s headed to your home to kill your children, or something like that. Terminator.

You know, it kind of looks like the huge robot from “Star Wars” that kind of walks ...

Well, we’re going to talk about the ideas of danger robots, and what people feel about it. But, so where does it go from there with the Cozmo? You just iterate it with more functionality and then it looks better?

Our goal within a few years is to truly have Cozmo be in contention with a pet. Like, if you’re thinking of getting a pet for a child, we want it to have that level of richness in terms of the AI capabilities and interface.

So, cats are out. Are you trying to replace cats?

We want to replace cats, yeah. He’ll sit there on the corner and watch you suspiciously and ... But it’s something that ...

Replace cats.

That level of companionship, and then a breadth of functionality across the spectrum of ways you might interface with it, and having him always be on and just kind of existing in different types of states. If that happens, and we’re starting to kind of poke into those areas that start to become a foundation for deeper things ... And it’s a smooth evolution because, to take Cozmo from something that you just turn on and play with for 15 minutes and then turn off, like a video game, to something like this; it’s a pretty ...

So it’s a constant presence.

Yeah. And this pushes incredibly hard on both hardware as well as software evolution. So these AI challenges at the overlap of character AI and functional AI, nobody’s ever done these sorts of things before in a mass-market form. And so, these are challenges at the overlap of a lot of different expertise ...

So it’s easy to make a factory robot, right?

It’s easier, in some sense, because you have much looser price constraints.

They’d have to do a few things, yeah.

Well, and in some sense, those factory robots right now, they have little AI in them, because they’re just executing repetitive tasks, they just have to do it precisely. So it’s a hard controls problem, but in terms of AI, most of them don’t have any sensing, they’re just blindly executing motions.

All right, in our next section, we’re going to talk about this, and also what people are worried about. There’s a lot of people who are super worried about robots and job-taking; from job taking to just terrifying, and we’ll talk about that soon.

[ad]

We’re here with Boris Sofman, who is the CEO and founder of Anki, which is a robotics company. They started out with a car game, essentially, and now have a robot. And we’re just talking about where robotics are going. It’s gotten in the news lately a lot. Mark Cuban’s been talking about the idea of how China’s way ahead of us, and it’s not just robots, it’s drones, and combined with AI; and all of it spells trouble for a lot of people. They feel worried about it. Obviously, in the popular culture, there’s lots of friendly robots, but a lot of the stories, robots are malevolent, and they’re ... Terminator. I mean, you can think of one weaponized ...

Yeah. Science fiction doesn’t do us any favors there.

No, not at all. Although, there’s a door ... You know, it’s C-3PO. Between that and the Terminator.

Arnold, yeah.

Arnold. Like, that’s it. That’s the kind of thing. And now, recently, on television, there’s been a ton of robotic cyborg kind of things, and they’re very disturbing shows. “Humans” is one, which I found super disturbing. And there’s a lot of abuse, robot abuse, and all kind of ... Do they have rights? And all that kind of stuff. Let’s talk about where that’s going. Is that just fake? Or is that things we really should be thinking about, going forward?

I wouldn’t call it fake. I think these are the right questions. I’ll give an opinion about kind of the state, where we are today, and how far away we are from something like that. My personal take on it ... And I think this matches most people that are really deep in this space, is that we’re super, super far away from anything that resembles true, end-to-end AI where robots develop their own intentions, and feelings, and kind of rise up against us.

So you say. No, I’m sorry.

Yeah. Little Cozmo’s going to come and ...

Yeah.

You know, and that’s because any sort of AI that is currently programmed today is ... It’s an optimization problem. Literally, it’s programmers and scientists structuring a problem, trying to represent it in software, and trying to give a machine something to optimize to that tends to correlate with intelligence. An autonomous car driving on a road, whether it drives well or makes a mistake or accidentally kills someone or intentionally kills someone — that’s not up to the robot, that’s a programmer’s intention. And so, if something goes wrong, it’s because a sensor malfunctioned, or software wasn’t good enough, or the circumstances were not something that it could handle. And, as difficult and complex as that problem is, an autonomous car can’t play checkers.
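[Editor’s note: A toy illustration of the “AI is an optimization problem” point above: the programmer picks the data and the error to minimize, and the machine does nothing except reduce that error. The numbers below are made up.]

```python
# Toy illustration: fit a line to programmer-chosen data by gradient descent.
# The "intelligence" is entirely the objective the programmer defined.
xs = [0.0, 1.0, 2.0, 3.0]   # inputs (say, sensor readings)
ys = [1.0, 3.0, 5.0, 7.0]   # desired outputs chosen by the programmer

w, b = 0.0, 0.0             # model parameters
lr = 0.05                   # learning rate

for _ in range(2000):
    # Gradient of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))   # converges toward w=2.0, b=1.0
```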

Does anyone want to play checkers anymore? But go ahead.

And same thing with these machines that are optimized for chess or Go, they’re optimized for a particular problem. And so, the bigger challenge and risk in the near and medium term is probably not so much robots rising up, but it’s people misusing robots. It’s people using robotics for military purposes where it’s still human-driven, it just becomes a dangerous tool that opens up ...

Right, drones.

Exactly. It’s not that drones inherently come to life, it’s the people controlling them.

Or, robot warriors.

Exactly. And so, from that perspective, becomes a dangerous tool just like other types of technologies, or a gun. And there needs to be careful thought on how we regulate, and what sort of approaches or safety standards we put on them. But an autonomous car that kills somebody is never because of malicious intent by a robot, it’s truly just a program that didn’t handle ... It’s just like your phone crashing. An app crashing, right? It happens, it’s just the consequences are bigger when you think about robotics.

So, you were saying the opposite is what humans do to robots?

Yes, and how they use robots. In the long, long, long term, obviously, nobody here is ... Like, I’m not qualified to speak about what AI’s going to do in 100 years, but in the near term, all of the concerns are about the use cases of AI and robotics as tools. But right now they’re fully human defined.

Why are people so scared? Lately, everyone’s terrified about AI. The idea that ... I was at a speech in Nantucket at some conference. This guy was talking about super AI, and how it’s getting smarter and smarter, and now it’s like a dolphin, and then in five years, it’s going to be like a ...

And “deep learning” just sounds mischievous too, right?

It does, it does. I think, really, I hate to say, but James Cameron had a big effect on people. It’s the idea that ... Or, you look at a more benign movie, like “Her,” they just get smarter than us. They just move beyond humanity.

A lot of this is culture. I mean, you go to Japan, and there’s actually a far better acceptance of robotics, because they don’t have that Terminator kind of thought ingrained in their minds. And so, a lot of this really is cultural. And part of it is, people are afraid because when you have something that you seemingly don’t have control over, it’s scary. No matter how reliable people will say autonomous cars are, and how many studies will prove it, people will be very nervous, especially early on, driving them, the same way people were still nervous about flying. No matter how many statistics ...

Then you get comfortable very quickly. I remember Chris took me in his car and I got comfortable.

That’s right. But then, there’s people that are still, to this day, scared of flying even though, statistically, it’s far safer to fly than to drive. And so, even when you get to a point where, technologically, we’ve solved to a reasonable level the problem of autonomous driving, there would be huge barriers in just human psychology and regulatory environment in order to get over that finish line.

But what about where AI is going? There’s so much investment being made, especially, by Google.

Yeah.

And Facebook. I think those are the two leaders in that area. What should people be thinking of as we’re moving into this and, you know, Google bought DeepMind, Facebook’s been doing a lot of ... I mean, because a lot of people feel like input ... It’s the same, mostly men, mostly white men, creating these things. What should we be thinking of as we’re moving into deep learning? What are the important, key things? Or, is anybody thinking at all?

I think people are thinking about it. There have been kind of agencies and so forth thinking about the ethics around robotics. I think it’s very case-specific. One of the reasons entertainment is great is that nobody’s intimidated by the products we make, so we’re kind of bypassing a lot of these challenges on our way to something else. In the case of something like AI being used for medical applications, at some point, there will be computer programs that can diagnose X-rays, and ...

Oh. They already do, right?

Exactly.

Better than doctors.

And better than doctors.

So, job losing is another thing.

But, even before we get to job losing, there’s going to ... There’s a danger in somebody proclaiming that this is a better solution than humans, until there’s a true ...

It probably is, isn’t it though?

Not always. I mean, you’ve got to get to that point ...

Don’t you think it probably will be?

It probably will be.

Yeah.

But, for example, there’s a lot of autonomous cars that are currently being tested on the road that are scarily not qualified to be fully autonomous. Right?

Right.

And so, you’ve got to be careful about that. You know, one thought that I heard that kind of resonated well was that, it’ll be almost like ... At some point, government’s going to have to be involved in order to green-light and accept the risks, for the betterment of society, of some of these technologies.

Are they qualified to do that?

Well, there needs to be a partnership to get qualified, because there’s no other way. So, for example, the FDA and these administrations that have to, like, vet drugs and safety; by no means are they perfect, but there’s kind of ... Or, the CDC and whatnot. You have a vaccine that can save lives. There are side effects, but once you pass a generally accepted bar of safety, the good outweighs the bad.

Right.

And it will be the same thing for autonomous cars in that it can’t be like 17 companies fighting for their own arbitrary metrics of what’s accepted, there has to be some standardized ...

But is the government qualified? I mean, Trump doesn’t have any science people. There’s two people, and one of them does the xeroxing at this point.

Yeah. So, different branches of government, right? And, to some degree, there’s a long way to go.

Who is key in that? The Department of Transportation?

It will probably be the Department of Transportation, and there’s a few folks ... Within the kind of deeper branches of government, there are actually some good examples of folks that have cross-pollinated from academia, where you have qualified people coming in and really thinking about these challenges. And there’s a genuine intent to try to solve these problems.

Because you think about, say, this infrastructure stuff. They should be thinking about sensors in the roads, correct? Because you’re not going to have self-driving cars unless you have sensors in the roads.

It’s kind of like the chicken and the egg problem, right? The reason the internet took off is because the phone lines were already there.

No, you have to have sensors in the roads, right? I just don’t ... You have to be thinking about what the ...

The alternative that most of these companies are approaching is collecting, ahead of time, really dense data around the environment to be able to facilitate it.

Yeah. The changes.

But some things don’t change. So like, the side of the road, the traffic lights, and some things like that. And so, you use that as a tool to localize and understand where you’re going. But in the worst case ... You’ll never have an instantaneous transition from non-autonomous to autonomous, and so, you have to handle the worst case, which is that you have a mixture of autonomous cars with non-autonomous cars.
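[Editor’s note: A simplified, hypothetical sketch of localizing against pre-collected map data, as described above: the car compares the distances to static landmarks it currently sees with the stored map and picks the position that agrees best. Real systems use far richer data; every landmark and number here is invented.]

```python
# Toy sketch: one-dimensional localization along a road against a prior map.
prior_map = {"traffic_light": 40.0, "stop_sign": 22.0}  # meters ahead of the map origin

def mismatch(observed, offset):
    # If the car is `offset` meters past the map origin, each mapped landmark
    # should appear (mapped distance - offset) meters ahead of the car.
    return sum(abs((dist - offset) - observed[name])
               for name, dist in prior_map.items() if name in observed)

def localize(observed, max_offset=60.0, step=0.5):
    candidates = [i * step for i in range(int(max_offset / step))]
    return min(candidates, key=lambda off: mismatch(observed, off))

# The car currently sees the traffic light 25 m ahead and the stop sign 7 m ahead:
print(localize({"traffic_light": 25.0, "stop_sign": 7.0}))  # 15.0 meters past the origin
```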

Right.

If some state or city decides to make a deep investment to spend billions of dollars outfitting itself to be autonomous friendly, to be a proof case, for their own benefit and for the benefit of the rest of the country, that’ll be a great thing, because some of these technologies could legitimately help. And there’ll be places that are green lit or not. And there’s a lot of debate on how that would play out as well.

Right. But is there the ... There was always the courage around the moon programs, the courage around the internet programs, is there that courage right now in government? It seems like they can’t agree on lunch, so ...

Yeah. I would say it’s a little bit chaotic right now. And so, there’s certainly people within that are deeply trying to push this, but there’s only so much you can do.

But it’s got to be government, things like phone lines ...

It’s got to be. At some point, the government has to be involved because, otherwise, it would just be chaos. Yeah.

What do you think ... Mark Cuban’s recently written about this, saying China and other countries are pushing this robotics and AI stuff quite hard, and this is something our government should get behind.

I completely agree, frankly. And there are pockets where they’ve gone very, very aggressively, and to good effect. There were just really large investments in research funding for automating manufacturing, manipulation, kind of robotic arms, being able to manipulate things. That research is probably roughly around where autonomous driving was 10 years ago. So there are areas that are gigantically meaningful. And he’s right, if the U.S. were able to invest and try to reinvent manufacturing to where you could turn it into an AI problem and increase the library of things that could be done in an autonomous fashion, that’d be world-changing. The reasons manufacturing’s in China ... I mean, one, they’ve developed an expertise there that actually ... It’s not just labor, it’s expertise as well. They’re better than ...

Mm-hmm. Logistics.

Logistics, but just the techniques and technologies to do it. That’s a skill set. But, yes, they also have the labor advantage. If you can increase the level of things that can be automated, you knock out the labor advantage, suddenly that becomes a competitive playing field.

Competitive playing field, right. Absolutely. So, when you think about that idea of making investments in robotics, who are the key players? I mean, Amazon bought Kiva.

That’s a beautiful company.

It is.

Yeah. One of my favorite applications of robotics ever.

I think people don’t pay enough attention to that purchase.

Oh, it was wonderful.

I think you’re going to see Amazon ... See, I was thinking, instead of creating Amazon Web Services, where everybody’s catching up to them now, they could do Amazon Logistics Service ... Oh, that’s ALS, that sounds like a disease. But, think about it.

I didn’t think about that one, yeah.

Amazon Robotic Services, that’s a little creepy.

But it’s such a ... That purchase is one of the most brilliant purchases.

Explain that.

So, Kiva Systems is a warehouse management robotics company. They were started in Boston. I actually first encountered them at a conference back in like 2006. And so, what they were basically doing at that time is, they had this robot that’s about the size of ... Maybe two feet, three feet wide, kind of a souped-up Roomba is a way to think about it. And it has a platform at the top that’s meant to dock with shelves that contain items in a warehouse. And so, what their system does is, if you equip your warehouse with the Kiva system, which involves robots and shelves, and an inventory management system around it, you can basically have hundreds of these robots running like an ant farm, where you have a check-out counter, and somebody places an order that includes seven of these widgets, two of these widgets, and five of those widgets, and the robots automatically get the shelves and line them up at the check-out line, and you’re ready to just fulfill that order.

And so what they did that was brilliant, is that the overall fulfillment process is incredibly complex. Navigating and finding objects is a database problem with a physical incarnation, basically. And, navigation can be well understood, and especially if you structure the environment; meaning, it doesn’t have any humans, and you use shelves that you recognize, with a system that you created, it makes it easier. The hard part is still picking things and packing things.
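[Editor’s note: A simplified, hypothetical sketch of the “database problem with a physical incarnation” idea: an order is looked up in inventory, and the nearest free robots are dispatched to fetch the shelves that hold those items. This is not Kiva’s actual system; all names and positions are invented.]

```python
# Toy sketch of Kiva-style dispatch: look up which shelves hold the ordered
# items, then send the nearest free robot to fetch each shelf.
inventory = {                    # item -> shelf that holds it
    "widget_a": "shelf_12",
    "widget_b": "shelf_7",
    "widget_c": "shelf_12",
}
shelf_locations = {"shelf_12": (4, 9), "shelf_7": (1, 2)}
robot_locations = {"robot_1": (0, 0), "robot_2": (5, 8)}

def grid_distance(a, b):
    # Manhattan distance on the warehouse floor grid.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def dispatch(order):
    """Assign the nearest free robot to each shelf needed for the order."""
    shelves_needed = {inventory[item] for item in order}
    free = dict(robot_locations)
    plan = {}
    for shelf in shelves_needed:
        robot = min(free, key=lambda r: grid_distance(free[r], shelf_locations[shelf]))
        plan[shelf] = robot
        del free[robot]          # that robot is now busy
    return plan

print(dispatch(["widget_a", "widget_b"]))
# e.g. {'shelf_12': 'robot_2', 'shelf_7': 'robot_1'}
```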

Right.

But it’s not that time consuming. And so they solved the part that takes 90 percent of the time but it’s like 10 percent of the complexity, and left humans to do the rest. And what’s interesting is ... I’m actually super excited because I got to tour one of the Amazon facilities recently, and it’s beautiful to see these things at work. What it actually opens up is, in the process of automating these things, you can actually be far denser in terms of how you can pack the warehouse itself. And you increase the yield.

No, it’s brilliant. I think people aren’t paying nearly ...

It’s wonderful. And so Amazon bought them for, I think it was $800 million, plus, minus, or so. Which is ...

Nothing. And then they pulled all the ... Now they’re doing it just for themselves.

That’s right.

For now.

Because they were working with Staples and all these other companies. And when you think about this as a way to automate fulfillment in the general case, and just how uniquely valuable that is to Amazon where, if that gives them a unique advantage in terms of cost and speed and efficiency of fulfillment ... Because the other thing it does is it re-optimizes the locations and stuff. It just adds to the advantage that they have.

Yeah, you’re not going to buy anything except from Amazon.

That moat is an ocean at this point, right?

Yeah.

It gets harder and harder. And so, it’s brilliant because it’s just a beautiful use case of robotics that solved a problem. And, in the end, it gives them a big advantage.

Yeah, I was talking to someone from Walmart, and I was like, “You’re so dumb about ... Kiva was the smartest,” and they’re like, “Huh?” And I’m like, “Oh, forget it. Just give up.” Just, give up right now, immediately.

And they’re investing a ton in robotics. I mean, it’s fascinating.

Well, Jeff loves that stuff.

Yeah.

He always has.

And it’s smart, because they’re finally at a scale where you can truly rethink the end-to-end system.

Sure. And who else? Google?

Yeah. Google had a lot of ...

Early.

So they had the autonomous car company, but then they had a lot of other robotics companies, but they were kind of scattered and mismatched and dumb. There was Boston Dynamics, which didn’t really ...

What’s going to happen to that?

I mean, some of them ... I can’t remember. I think Boston Dynamics ...

Why do they want to sell it? Why do they want to sell it, in your perspective?

All their contracts were military contracts, which I think is kind of at odds with what Google wanted to focus on. My guess is that it was hard to find a clear path from a lot of those technologies into something that’s commercializable.

Who’s going to buy that company?

Maybe defense. Or, maybe it spins back out and kind of recapitalizes somehow.

And then, who else ... What do you think of Uber’s efforts? And then, the car companies’ efforts? Or Tesla?

This is interesting. I mean, the transportation industry is kind of a ... A good parallel where robotics is not an industry. Robotics is like this tool that reshapes different industries. It’s almost like the internet. Like, you don’t call the internet an industry. It’s just a new technology where every industry has to rethink how they operate, and if they don’t, they can get left behind. And so, in the case of transportation, a lot of car companies are certainly involved, but most would consider the front-runners to be the software companies from Silicon Valley, right?

Right.

And so it’s not a coincidence, it’s just different DNA. Similar to the sort of products we make; it would be hard to imagine traditional toy companies coming up with those sorts of products. Yeah. And so you have Tesla, you have Google, Uber and so forth. And then, some of the new companies that are starting up as well, some of which are ...

Chris has a company ... I forgot the name of it. How do you assess what’s going on ... But they’ve had a lot of trouble, though, with integrating the Carnegie Mellon group with the group from the auto ...

Yeah, it’s tough, and there are certainly some public controversies; I guess I’m not in a good position to comment on those.

Yeah, it’s separate.

But it’s tough.

That’s people being humans, but go ahead. They should act like robots, actually.

The other thing is, these are also sometimes difficult projects to have separated, because it’s hard enough to ... There’s a reason why teams are often in one location: you know, cultural reasons, communication and so forth. Robotics is even tougher because there’s such ...

You really have to work, too.

Yeah. And it’s R&D through the final launch. It never stops being R&D. So that makes it a little bit more challenging.

Right. Who else is someone we haven’t heard of that’s been great? What about the car companies? Do you think they’re really committed to this? They’re spending money on it.

It’s an existential threat for some of them, and so they’re certainly committed. The problem is that ... You know, you almost saw this sort of arms race back when apps first became big and there weren’t enough app developers. The difference is that somebody who’s a really good developer could spend six months becoming a pretty reasonable app developer.

For something like robotics, you truly need at least like a half decade of experience working with physical projects and problems and dealing with the uncertainty of the real world. It really builds an intuition that you can’t just immediately think you can absorb.

Mm-hmm.

So there’s just a gigantic shortage of talent at this point. Every car company would gladly spend billions of dollars investing in this, given the value in the future, but there’s only a handful of full-fledged teams that are available to actually do this.

Anyone we haven’t heard of that you think is ...

I’m a big fan of Chris’s, so are you, obviously.

His new thing. Yeah, we’ve written about it quite a bit.

I think he’s fantastic. I mean, Google ... Having that sort of a big lead ... Obviously there’s challenges with keeping some of this talent, but it’s still a pretty big lead in the kind of technology that they have. I mean, I think those are kind of the major ones. Of course there’s the rumors about Apple.

Mm-hmm.

Always.

Where are they? They seem out to lunch at this point.

Yeah. I mean, they’re ...

And Tesla?

And Tesla. Tesla’s an interesting one because they’ve ... There’s different levels of autonomy. But, you know, they actually have the most commercially available autonomous models out there, which says something.

Yeah.

But, again, it’s really, really hard to go from ... Highway, in some sense, is the best application of autonomous driving, because of its structure.

Highway in the desert is, actually. Right?

Yeah. And the highway in the ... Yeah. But any highway, you know, you have very well-defined lines, traffic flow, everything. The hardest part is urban environments, where you have dogs running in the street ...

Which is where you need it.

... people, people crossing randomly in front of you, and so forth.

So, two more questions, and we’ll finish up. When is there going to be a fully robotic, autonomous city? Is there a city that should do that? Well, San Francisco should.

Meaning like, for driving?

Yeah. For everything.

For everything?

Yeah.

Oh. But it’s not like an instantaneous transformation; it’s just going to be a huge, gradual flow. It’s almost like when computers came out, there was never like ... We’re still in the process of stuff being impacted by computers, and the internet, and stuff. It’ll be very gradual. In some sense, robotics is one of the biggest revolutions that’s in the early stages of hitting humanity, like what you saw with computers, what you saw with the internet, you know?

Mm-hmm.

You’ll see it with robotics, but it will be a little bit slower and gradual, because there’s constraints.

So will it be cars, or robotic helpers, or drones first?

Cars will be first.

Cars will.

Yeah. I mean, it’s just because, at this point, there’s already kind of a foundation for it, and even though the problem’s definitely not solved yet ... There’s work on the sensor side as well, but there’s better understanding of the software that needs to happen. Something like a helper, again, it’s cost, it’s capability, it’s all the kind of mechanical components and so forth; that’s still a long way off. Drones are very multi-dimensional. So, drones started as entertainment. Then ...

Possible delivery. Military. Photos.

Possible delivery, there’s uses of them for surveying of farms, surveying of roofs for insurance purposes. Amazon obviously, with delivery and so forth. Or dropping medicine in Africa. There’s a lot of great applications that are going to be widespread, but it’s not like suddenly drones are in; it’s about which applications of drones.

The same thing is going to happen with different forms of robotics where, I think we’re still far away from the full “I, Robot” humanoid ... The movie “I, Robot” humanoid.

Damn.

But it’s always kind of ... One of the best phrases I’ve heard is, you know, “Stuff’s AI until it starts working.” And now you call it voice recognition, right?

Yeah.

The moment it actually starts to function in a realistic way, it kind of gets a name, and it’s not AI anymore.

Right.

Like, a lot of stuff that we call different things used to be AI. And so, I think it’s going to be kind of like that.

Yeah. AI. What did they want to call it? Someone was telling me, who was going on about it ... A very prominent AI person was like, “You know, the human brain is the smartest,” you know, “the most complex thing that we have. We don’t even understand how it works.”

Oh, we can’t even ... Anybody who’s seen like a 2-year-old, kind of ... It’s crazy.

Yeah. You can’t teach a computer ... It’s interesting. Like, “Don’t be so scared, the human brain is,” and I’m like, “Yeah. But then, it’s attached to a human.”

Yeah. And that’s the thing: even with deep learning, it’s not the human brain that it’s replicating, it’s just better tools for understanding gigantic sets of data.

Just the visual stuff.

Or whatever purpose it’s applied to. When you have tons of data, it’s better tools for finding the patterns within that data.

Right.

And now, it’s almost like we have this hammer: okay, can we help with financial challenges? Health care challenges? Computer vision? Some of these will work well, some of these won’t work well, but it becomes a tool.
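
[As a rough illustration of the “hammer” point, here is a tiny, hypothetical Python sketch: one generic pattern-finder (plain least-squares fitting via NumPy, standing in for fancier deep-learning tools) applied unchanged to two different toy datasets. It only sketches the idea that the tool stays the same while the data and the domain change.]

# Illustrative only: a single generic pattern-finding routine applied to different data.
import numpy as np

def find_pattern(inputs, outputs):
    """Fit a linear pattern (outputs ~ w * inputs + b) and return the fitted weights."""
    X = np.column_stack([inputs, np.ones(len(inputs))])  # add a bias column
    weights, *_ = np.linalg.lstsq(X, outputs, rcond=None)
    return weights

# Same hammer, two different "industries": toy pricing data and toy sensor data.
pricing_fit = find_pattern(np.array([1.0, 2.0, 3.0]), np.array([10.0, 20.0, 30.0]))
sensor_fit = find_pattern(np.array([0.0, 1.0, 2.0]), np.array([5.0, 4.0, 3.0]))
print(pricing_fit, sensor_fit)  # how useful each fit is depends entirely on the domain

[Whether the resulting patterns are worth anything depends on the data and the problem, which is the “some of these will work well, some won’t” caveat.]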

All right. Last question. What’s the craziest example of a robotic thing you would like to have invented?

Oh, wow. The craziest robotic thing.

I want a time machine, but that’s different.

Yeah. I mean, honestly it’s ... Gosh, sports, I think. That’d be pretty awesome. To have kind of like ...

So, robot football teams.

Yeah, just like robot ...

That was a movie.

Yeah. Tennis, football. Can you imagine the sort of competitions you could have with full-fledged ...

And nobody gets hurt.

Yeah, just like everything under the sun. That’d be kind of cool.

All right. All right.

That’s even harder than the other stuff we talked about.

Well, get on it. Get on it.

Yeah.

All right. Thank you so much, Boris. This has been fascinating. Thanks for coming by, I appreciate it.

Thank you Kara, my pleasure.

My kids love your race car thing by the way.

Oh, thank you.

I haven’t gotten them the robot thing yet, we’ll see.

Oh, they want that one. Yeah.

I don’t know. I’m not so sure. They like skateboards still.

