On a recent episode of Recode Decode, hosted by Kara Swisher, we heard from Numenta co-founders Jeff Hawkins and Donna Dubinsky.
You can read some of the highlights from their interview with Kara, or listen to it in the audio player above. Below, we’ve posted a lightly edited complete transcript of their conversation.
Transcript by Maya Goldberg-Safir.
Kara Swisher: Today in the red chair are Donna Dubinsky and Jeff Hawkins, who I've known for a dog’s age. They're the co-founders of Numenta, along with a lot of other companies. Numenta's a machine-intelligence company that tries to help other companies take action based on data. They've been at it for a long time, well before this trend had become a big one in Silicon Valley. They first teamed up in 1992 at Palm, which is one of the pioneering smartphone companies, where Donna was the CEO. And they also co-founded Handspring, a company that made the Treo smartphone. Donna and Jeff, welcome to the show!
Donna Dubinsky: Happy to be here.
And you're still together — after all these years!
Jeff Hawkins: Yes, it's true.
How long have you been working together?
Hawkins: Since '92.
What is your secret to your partnership success?
Dubinsky: I would say tolerance. I think we complement each other very well. Jeff has always been the driver on the technology, the market side. I've been much more the business person, if you will, and looking at how to create businesses around these ideas. And I think it's that complementary nature that has been able to persist through several different companies ...
Yeah, absolutely! Ups and downs.
Dubinsky: ... And even one nonprofit! I helped Jeff when he created the Redwood Neuroscience Institute, as well. So we have a great respect for each other's strengths, and are able to work together. Plus, we have fun!
Yeah, it’s unusual in Silicon Valley, long-time partnerships.
Hawkins: Yeah, maybe — I mean, you think about the partnership that led Intel at the beginning ... That lasted for a long time, but maybe it's not too common now.
Yeah. So let's talk about Numenta. You know, at the Code Conference we got a lot of talk, a lot of statements out of a lot of people about artificial intelligence. One side was the Facebook people and others talking about the happy, shiny future of artificial intelligence. On the other, it was Elon Musk saying, at best we can hope to become house cats of computers, which was somewhat disturbing, and at worst, of course, they'll kill us. Explain Numenta, for the listeners.
Hawkins: Oh, wow. This actually goes back a long, long time ago. When I was right out of college. And I fell in love with brains. I just wanted to dedicate my life —
To people's brains.
Hawkins: Yeah, the human brain, how it works. And just as a scientific endeavor, for humanity to understand how we think and how our brains work, ‘cause that's what humanity is.
So you were not a medical student ... ?
Hawkins: No, actually I ended up being a graduate student at Berkeley in biophysics. But anyway, it goes back a long time, and at that time, you really couldn't do this at all. It wasn't possible, scientifically, to do this. So that's when we started Palm. It was something to do for a few years until I could get back into neuroscience! Believe it or not, that's how I viewed it. And I think the first day I met Donna, I told her this.
Dubinsky: That's what he told me when I interviewed with him for CEO of Palm — he basically explained that the long-term vision was to be able to get back to understanding the brain and how the brain works.
The greatest computer of all, right?
Hawkins: Yeah, of sorts! And so it's a long, long history. Me personally, and Donna as a partnership. The time at Palm and Handspring was really always exciting — we're working on mobile computing and all this stuff. But everyone knew that I wanted to get back into this. So 15 years ago or 13 years ago, I can't remember now exactly, I left the mobile computing space. I started this nonprofit that Donna mentioned a moment ago, Redwood Neuroscience Institute, which was just focused on brain theory, cortical theory. And Donna was chair of the board, and helped organize that. That research entity had a close partnership with Berkeley. And so it's now continuing on the Berkeley campus, it's part of the Helen Wills Neuroscience Institute. But we left and started Numenta, which is sort of a hybrid between a for-profit company and a private research lab. And so we continued doing the neuroscience research but also building technology.
Donna, talk a little bit about the business part of that. What year did you start Numenta in, what was the funding ... ?
Dubinsky: I think 11 years ago, so —
So, early-early. Before a lot of people were thinking about these issues.
Dubinsky: Oh, absolutely! Though if you look at the history of what's called artificial intelligence, it goes back 50 years. So you know, for a long time, people have been struggling with this issue of how we can make machines that are intelligent. But, you know, we've talked a lot about what's the right business model for us along the way. When we created Numenta, we debated, should we make it a nonprofit? But the reason we decided to make it a commercial entity was 'cause we have a huge belief in platforms. We've seen that through our days at Palm, at Handspring, my earlier career at Apple. If people can add value on top of what you're doing, and create specific solutions, you can have a much, much bigger impact.
Dubinsky: And if people are excited about that, it's a way to get more support for what you're doing —
Hawkins: It's kind of a recognition that, ultimately, this is going to be a huge industry, and just to acknowledge that fact and say, how do you play a role in that?
Right, but when you started it, from a business point of view, was it difficult? ‘Cause artificial intelligence and machine learning have been around for a while, but it really was just recently —
Hawkins: Well it was funny because, Donna, when we started, I think you led an effort to decide what terms we were gonna use. And you did focus groups on AI, remember that?
So let me hear what happened.
Hawkins: Well it was the worst thing in the world!
Dubinsky: Well, AI has a negative kind of a meaning from back in the early days. You may even remember, they talk about the AI winter that we went through ...
Well "The Terminator" didn't do so much for it.
Dubinsky: Well, yeah, there's been a lot of sci-fi movies that have incorporated AI. But you know, there was a real disappointment! People in the early years tried to do very interesting things, and realized that they were up against a wall. And they really didn’t know how to move forward.
Well, talk about those early days. What were they trying to do?
Hawkins: This goes back to the '50s when the term AI was coined, and at that time, they were saying —
That was at MIT, right?
Hawkins: Yeah, MIT, and other places as well. But they had the AI lab back at MIT, so there were all these promises, all this stuff is gonna be amazing, we're gonna solve all these things in a few years, and it fell flat. And then in the '80s, there was a huge interest in neural networks. And there were all these conferences, and companies, and funding, and all the stuff was great! And then all that went away. And so we're actually sort of like on the third wave right now. There's a hyping of interest again, it's happening here. This has been going on and on again, where people get excited and then all of a sudden, "eh, it didn’t really turn out the way we thought it would be."
We still think the current wave of AI is built on the wrong technology base. It's not even close to AI. It's not even close to what human intelligence is. And we're going through another wave of interest right now. But it's interesting how, when we started the company, the public association with the term AI was very, very negative. And now, here it is today, everyone can't wait to call themselves an AI company!
Donna, what is the actual business — Numenta — and then let's talk about what the product is.
Dubinsky: We've looked at a few different business models throughout the years. We've tried enterprise sales.
Meaning what, what would that do?
Dubinsky: Like going directly to a big company and trying to solve a problem for them with consulting services as well as with our technology.
Sort of what IBM is kind of trying to do.
Dubinsky: Yeah, a lot of big companies take this kind of approach. Another approach is a product approach. We created a product called Grok for AWS, which monitors servers on a network and flags anomalies. That was actually well-received, but we found that where our heart was, was the core technology, and every time we went down that path, we felt we didn’t want to invest the resources it took to get it from where it was to a real, vibrant, big deployed product.
So at this stage, our business model, which I think is very interesting, is modeled after a tech transfer model of a university or a research lab. And the idea is that we work with partners, and those partners take the products into a specific domain. They add the part on top of the technology that we don't do. They have domain expertise; for example, that Grok product has been taken over by a company — they brought it to a market, they're building out a whole product around it, and we're supporting them in doing that.
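[To give a rough sense of the kind of server monitoring Dubinsky describes: the sketch below is not Numenta's HTM algorithm, just a hypothetical rolling z-score baseline, with made-up function names and data, showing what "monitors servers on a network and flags anomalies" means in its simplest form.]

```python
# Hypothetical sketch of metric anomaly detection -- NOT Numenta's
# actual algorithm. A value is flagged when it deviates from the
# trailing window by more than `threshold` standard deviations.
from collections import deque
import math

def find_anomalies(values, window=10, threshold=3.0):
    history = deque(maxlen=window)  # trailing window of recent values
    anomalies = []
    for i, v in enumerate(values):
        if len(history) == window:
            mean = sum(history) / window
            var = sum((x - mean) ** 2 for x in history) / window
            std = math.sqrt(var)
            if std > 0 and abs(v - mean) / std > threshold:
                anomalies.append(i)
        history.append(v)
    return anomalies

# A steady CPU-utilization stream with one sudden spike:
cpu = [50, 51, 49, 50, 52, 48, 50, 51, 49, 50, 95, 50, 49]
print(find_anomalies(cpu))  # the spike at index 10 is flagged
```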
So, the core technology. Talk about that, Jeff. What is that?
Hawkins: We start with a scientific mission. And that is how to reverse-engineer the neocortex, I mean in detail.
Right. But not the stupid parts, say, the Trump parts. I call it the Trumpian section of the brain.
Hawkins: Well I like to say, the brain can be easily divided into the new part, which is the neocortex, literally meaning "new layer," and the old parts of the brain. All the things we think of as emotions — anger, lust, fighting — these are all in the old parts of the brain. The neocortex is sort of the rational part of the brain that builds a model of the world and understands itself. If you want to think about math and physics and humanity and all these things, it’s the new part of the brain. If you think about the baser parts of humanity, that's the old parts.
So the brain’s at war with itself.
Hawkins: It is! It’s like those little pictures of the cartoon where the devil and the angel are on two sides. It’s really like that.
Dubinsky: It’s like when you see that cupcake, and your brain is at war with itself!
Hawkins: Yeah! Neocortex says, "I know I'm not supposed to eat the cupcake! It's obvious, I've read all the reports, I understand why it's bad for me." But the other part of the brain goes, "Eat, eat, eat, eat!" And you eat the cupcake.
Yeah, guess which one wins.
Hawkins: So anyway, we're focused on the neocortex, which is 75 percent of a human brain. It’s what makes us unique. All mammals have a neocortex, and non-mammals don’t. So it’s unique to mammals. And in humans, it’s particularly large and particularly well-developed. It turns out that it’s an amazing organ, and everything that you and I do as humans — our language, our art, our engineering — these are all done by the neocortex! And the surprising thing about it is, the neocortex uses a common method for all these things. This was discovered 50 years ago. The way you see and the way you hear, and the way you think, and the way you do language, and the way you engineer things is all the same basic learning algorithm. It's hard to believe, but it's true.
So our goal is to understand that, and to reverse-engineer that, to understand exactly how the neocortex works at a biological level, at a mathematical level. That theory is a biological theory, but it also gives you the functional basis of how this thing works, and then you can build machines that work on that principle. And that's a very different approach than what other people use for AI.
So what do you mean — what is the other approach for AI?
Dubinsky: The other approaches are much more mathematical, statistical. And they're often focused on a specific problem. This is much more of a generalized approach. It’s about a learning machine. Your child is born, and the brain doesn't have stuff in it. And then it learns from what it's exposed to. So the brain could learn Japanese, it could learn English, it could learn Swedish, whatever; it learns from what it's exposed to. You're not programming a Japanese brain. And so this idea is a much more generalized approach.
So the way it's approached right now is that you're programming a computer —
Hawkins: Well, no. Current interest in AI is not programmed. That was the old way, but today it's learned. But it's a very simplistic learning method. When someone wants to look at a picture and say, "What's this picture?" ... Facebook does this, Google does this. They train it on millions and millions of pictures. Humans don’t do that at all. We build a model of the world. We understand, we're in a conference room right now, and we understand what all the objects here on the counter are: The microphones, the calculator, the walls and the windows. We have this very intricate model of the world! And we've learned through interacting in it. It’s a very different approach than just to say, "We're gonna recognize a picture, let's train it on 16 million pictures, and we'll have a picture recognizer." It's a little bit more complicated than that, but basically, that's the truth.
So there are very different approaches. And as Donna mentioned a moment ago, if you're focused on solving a particular problem, you'll engineer a solution to that problem. What biology tells us is that everything we think about our humanity, and humans, and our capabilities, is a learning process based on the same algorithm. And that algorithm is us moving through the world and touching things. We believe that those principles by which the brain works, and they're very different, will be the foundation for the future of AI.
So why the fear around computers right now?
Dubinsky: Well Jeff wrote a great article for Recode that you guys published on this question. I think it’s like any new technology. It’s fear of the unknown, where can it go. And let's face it, every new technology can be used for bad as well as good. Cars give us great transportation, and cars give us accidents, as well.
Well this is a little different. I mean, a car is a passive thing that requires a human in it.
Dubinsky: But we've seen it with computers, you know. Early computers, of course, instilled fear in people. So I think there's always some great fear, and I think part of that is justified, and these things do need to be regulated and understood. An example I like to use is drones. I mean, drones are so interesting! I think they're gonna open up all sorts of interesting areas of business and ideas and viewpoints of the world. It’s fascinating! But they need to be regulated.
So let's talk a little bit about the science of this. What is difficult about doing that? I mean, people still, after all these years, don’t know a lot about how the brain works.
Hawkins: Yeah, it is a very difficult problem. We need to separate out things that are hard to figure out from things that are hard to understand. Some things at first seem very complicated and difficult, but then afterward, once you understand them, they're not so bad. That's the way the brain is, too. And one of the reasons why it’s so hard to figure out is that it’s a very complex organ made of cells, and it’s very hard for neuroscientists to actually get the data properly. So we work with lots of partial studies. There are thousands of papers published every year that are bits and pieces, and we have to sort of assimilate all of that. But if you really go through the effort, you can do it! And you can make progress. And we're making excellent progress in this. The problem of how the neocortex works will be solved — it’s partly solved already, and it will be completely solved — hopefully in our lifetimes.
What is the most interesting thing that's going on in that area right now, from a research point of view?
Hawkins: I think one of the things is the recognition that this is a problem that can be solved in the near term. It used to be that people thought this was an impossible thing, that it would take 500 years.
Because there's so many connections, so many ...
Hawkins: We didn’t have enough data, and it's hard to get the data. There have been so many new techniques for extracting data from brains now that we have an embarrassment of data, and a lack of theory. There are very few people in companies or labs that are really doing this sort of theoretical integration. We're one of them. But all kinds of people around the world have decided that understanding how the brain works is one of the next big things. It’s a grand challenge for the National Academy of Engineering. There's a grand challenge in Europe called the Human Brain Project. In the United States, we have the BRAIN Initiative. So we've got the data, we understand this is one of humanity's most interesting problems of all time, how our brains work, and this is the time that this can be done.
Donna, talk a little bit about the other companies that are addressing this, because now everyone seems to have jumped in heavily, and they're all around machine learning.
Dubinsky: Well, there's a lot of companies working on machine learning and solving a lot of important problems.
Break that down for people who don’t understand it quite properly.
Dubinsky: Machine learning is more the technique that Jeff described earlier, of using labeled data sets and trying to extract the statistics from them in order to solve a specific problem. And it’s been successful at doing that. So you see the captchas, the dog-cat kind of things. That's a problem that people couldn't do at all 10 to 20 years ago, and now they can do a relatively good job! You go on Facebook, and how it identifies some of the faces ... I think those are approaches that are very good at solving specific problems.
With heavy data.
Dubinsky: They are highly reliant on labeled data, not just heavy data, but labeled data. Someone has to go through and say, "This is a cat, this is a dog." You can’t have unlabeled data and be able to figure it out. And they are highly reliant on, you know, significant computing power to solve these problems. But it’s very different from our approach. It doesn't mean that it’s not interesting and important, it’s just different.
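[As an illustration of the labeled-data dependence Dubinsky is describing: the toy classifier below is not how Facebook or Google actually recognize images — the features, names, and data are invented — but it shows why nothing works until a human has attached a label like "cat" or "dog" to every training example.]

```python
# Toy nearest-neighbor classifier. The point: the labels ("cat",
# "dog") are human-supplied; without them the approach has nothing
# to learn from. Features here are invented for illustration.

def nearest_neighbor(train, query):
    """Return the label of the training example closest to `query`.
    `train` is a list of (features, label) pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda pair: dist(pair[0], query))
    return label

# Hypothetical features: (weight_kg, ear_pointiness). Every row
# needed a human to say "this is a cat, this is a dog."
labeled_data = [
    ((4.0, 0.9), "cat"),
    ((3.5, 0.8), "cat"),
    ((25.0, 0.3), "dog"),
    ((30.0, 0.2), "dog"),
]
print(nearest_neighbor(labeled_data, (4.2, 0.85)))  # -> cat
print(nearest_neighbor(labeled_data, (28.0, 0.25)))  # -> dog
```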
So why are people scared of it? Right now? Because, again, we had that sort of dichotomy at Code, and then it degenerated into, "This doesn't really matter, this is all a computer simulation, anyway."
Hawkins: That was an amazing uh ... claim by [Elon] Musk.
There's many people that agree with him, just so you know, they come out of the woodwork, and they're prominent people.
Hawkins: There's very smart people, whether it's Elon Musk or Jeff Bezos or Bill Gates, or you know, Stephen Hawking, who have said, "Oh my gosh! This is really dangerous." But these people, as smart as they are, actually have no idea how brains work. They don’t have any sort of base theory about what intelligence is, and what machine intelligence is.
They're also forgetting that humans are very dangerous things, too!
Hawkins: Well, maybe.
No, they are!
Hawkins: Well, but it's based on sort of a lack of knowledge in the field, and so they extrapolate from what people are doing with machine learning today and add some hyperbole to it, and stir it around a bit, and add some science fiction, and all of a sudden you're imagining these things become crazy, alive and taking over the world. It is so far from that, it's crazy! Ridiculous!
Meaning ...? So far?
Hawkins: A, we're not close to doing that. B, the current technology's not even on the road to true intelligence. The current approaches people are using in machine learning are not on a path to building intelligent machines. We think the approach we're taking is. But if you understand what we’re doing, and earlier we were talking about the old brain and the new brain ... It’s just not gonna happen. So the fears are based on lack of knowledge about the field, from people who are very smart! And therefore they think, since they're very smart, they can apply their smarts to this field they don’t really know much about.
Dubinsky: I think it's worth this distinction of old brain and new brain. It’s very important here, because the old-brain part is what's got the motivations, so I think they keep confusing that there's some old-brain-ness somehow in this work, that these machines would have desires and goals, and want power and have needs. And they won't! They won't have any of those things! Nobody really is replicating the old part of the brain.
Just the logical part.
Dubinsky: It’s the logical part, which is just very different! And I think it’s hard for people to separate those two things.
Because they're focusing on their own experience, that this is what I would do!
Dubinsky: Because you define yourself as a human! You think of both those pieces when you're human. And that's true, those are both important pieces as a human, but that's not what the machine is gonna be.
Hawkins: There was an unfortunate thing that happened many years ago. Alan Turing, the famous computer scientist, proposed this thing called the Turing Test. And he actually proposed it because he was tired of people asking, "Could machines be smart?" But we've accepted this Turing Test as the gold standard of, "Can it be like a human?" But that's not what intelligent machines are gonna be like! They're not gonna be human-like at all! They're gonna have the new brain, the neocortex part, but not the old brain, and they're not gonna be human. They're not gonna live human lives, and have human emotions and human experiences! And so there's this idea that, you know, a lot of people feel like the goal of AI is —
— is to create a human, and are they slaves?
Hawkins: Yes, exactly! But that's not gonna be what the future machine intelligence looks like, it’s gonna be so different than that.
So what does it look like?
Hawkins: I'll give you two examples of things that we can look forward to. Imagine that you want to do mathematics. This is a very abstract thing that's done by the neocortex. It’s done on the same principles that we do everything else with. We didn’t evolve to do mathematics. But our brains can do it. Imagine you can build a brain that is a million times faster than a human, never gets tired, and it’s really tuned to be a mathematician. We could advance mathematical theories extremely rapidly — faster than we could otherwise. This machine isn't gonna do anything else!
Doesn't want to eat.
Hawkins: Doesn't want to eat. Doesn't want to have sex. It's not jealous that you got a tuna, I got the whatever. And that's the kind of thing that we can look forward to! Machines are just brilliant and smart.
All they do is work.
Hawkins: But they apply these things in other ways. I'll give you one other example. A lot of people, including Elon Musk, want to send humans to Mars. I think it’s a great idea. But Mars on its best day is gonna be worse than Earth on its worst day, to live on.
Yes. I get that sense.
Hawkins: And if we're gonna do this, we need to have an engineering robot. Robots that are smart at solving problems, they have tools that break ... You take a really smart engineer and put 'em some place and try to solve some problems, and figure out new solutions to these things. We gotta have machines that are like that! And they're not gonna be human-like machines, but they're gonna have to be very flexible and intelligent at solving problems because humans are not gonna be able to go outside all day long in hostile environments trying to fix things.
So Donna, talk about that idea. I had Anne Wojcicki from 23andMe on, and she was talking about this — similar to what you were talking about — that these machines will, say, replace radiologists. Radiology is an area of great controversy, because a lot of radiologists just don’t read the things right, and computers can do it. They don’t get tired, they get it right. There's a whole job-creation thing. They do replace people, 'cause they're better at it.
Dubinsky: Well, they're gonna replace some things, but they're gonna create some other opportunities ...
I know, all you tech people say that, but actually, in this case, I think they're just gonna replace them. I can't think of what is the job that humans have, then, if they can do ... ?
Dubinsky: If you just take Jeff's example of sending a smart thing to Mars and exploring there ... Somebody's gotta create that, somebody's gotta manage that, somebody's gotta take the data from that. I think in the genetics area, I'm very excited about that. There's all that data that Anne and people like her are collecting, and people are trying to figure out what the heck do we do with this, and to the extent that can be used for finding ways to address diseases that we can't address today, to understand them better, and new treatments are developed, and new protocols, and so on. Those are all opportunities for jobs, as well as for improving humankind, so ... I think there's gonna be as many opportunities as there's gonna be, you know, disconnects. But people have to be re-trained for those things. I mean, it’s not to say there isn't dislocation — there could be enormous dislocation! But I think it means they will be different things going forward.
So what is the product, then, Jeff? What are you making right now? This platform?
Hawkins: Well, as Donna talked about earlier, we've been through a series of ideas about what our product is, and currently today, it’s a licensing model.
Of the core technology.
Hawkins: Yeah, we have over 30 issued patents, and some of them we believe are fundamental, and we have many more coming along. We've had a lot of luck in this regard, because we're really doing things that no one has ever done before. So in the end, from a business model point of view, it's all gonna revolve around some method of licensing. We have an interesting strategy right now, combining open source with licensing —
Dubinsky: Yeah, I think it's important to mention the open source. You might scratch your head and say, "How do you have a business model at all if you've got an open source ... ?"
Yeah, I was just thinking that.
Dubinsky: It’s all there, and we even put our research software there, so that people can see what we're working on. It’s the most open of the companies working in this space, I would argue. So how can we do that and reconcile that with a commercial mission? Well, the way we do that is through, kind of the magic of the open source license. It’s open for any kind of noncommercial use: For academics, for researchers and companies — anybody. But at the point at which somebody makes a commercial product, they can choose to do it under the open source license, but many will say, "I don’t want to build a commercial product on this open source, this specific open source license, I'd rather get a commercial license from you." So in the end, we can do both.
So you're licensing the core technology —
Dubinsky: And the intellectual property. Which may come into play very importantly in the future. Because one of the possible directions for this is that it’s gonna take a new kind of a semiconductor architecture, a new kind of a piece of hardware that's gonna accelerate it in a different way than today's hardware does, and it could be that it ends up like Qualcomm and CDMA, where there is specific hardware that needs to be deployed.
That you would make?
Dubinsky: That we would license.
Hawkins: We wouldn't make that.
Dubinsky: We're not gonna be a semiconductor company, but we've had quite a few talking to us and porting our algorithms to their architectures, and I think that's gonna be an important part of the business.
Hawkins: And there's an interesting story there. I've given numerous talks recently at semiconductor conferences, been invited to give keynote talks. And everybody in the semiconductor space is looking for what's next. What's beyond Moore's law? And when you look at that space, they all eventually come around to looking at us, because we have these really interesting, unique architectures, which come from the brain. And they're different from what everyone else is doing, and if you kind of buy the argument that this is the way it's gonna work, then you've got a model for thinking about what new future semiconductor architectures are gonna look like. So it's a fascinating sort of field right now, where they're all trying to figure out what's gonna come next.
We've talked a lot about changing software, changing hardware, what’s going on. You guys have a long history in Silicon Valley, and I'm gonna bug you and make you pundits, essentially, of where you think we are. You're moving into an area that's all fresh and new, you're talking about creating the next big thing, and where chips are going, and where other things are going. You've created several companies, and while Palm and Handspring didn’t survive the change, they were at the pioneering edge of these things. Can you talk a little bit about where you think we are in the cycle?
Dubinsky: Well, it’s such a good point about, you know, looking back at these prior generations, because I find it very amusing now. We walk around, people say oh, "The future is mobile!" No, no, no — seeing that the future was mobile 20 years before it happened is the hard part! Seeing that the future is mobile, once you're in the midst of it, that's not the future anymore! And I think it takes a very particular ... skill set and mindset to say, where's this stuff going? It takes a deep technology understanding. When Jeff figured out the future was mobile, he understood about screens, he understood about processors, he understood about memory! It wasn't just a theory, it was based on real understanding of the technology, intersected with an understanding of people and their uses and their needs and how this stuff could be applied. Not that you could totally predict it. I like to say when we created the Palm Pilot, we didn’t imagine Uber, so you know, you don’t know exactly where it’s gonna go, but you know enough to know that it needs to go in that direction.
Hawkins: Well, we did know, and we've claimed publicly many times, that mobile computing would be the driving dominant force of personal computing. At the time I used to say that, our VP of marketing used to say, "Jeff, don’t say that! It sounds crazy!" But it turned out to be true.
Dubinsky: People would just kick us out of their offices, they thought it was so nuts!
Yeah, I think Walt Mossberg and I were saying, I think 10 or 12 years ago, that Web 3.0 was all mobile, and we got so much pushback, from lots and lots of people! They were like, that's ridiculous, and we were all like, no we think it isn't ...
Hawkins: Yeah, and we're in a similar situation right now! We're saying, look, the future of machine intelligence is gonna be based on brain algorithms. And that is a minority opinion today. I am so certain of it. I'm a hundred percent certain of it. Twenty years from now, we're all gonna look back and say, 'Oh yeah, that was obvious! Why would you think anything else?' But today, we have to battle that.
So, as the big brain here, what do you imagine — where are we in the mobile development right now?
Hawkins: Isn't it over by now?
Is it over? I don’t know!
Hawkins: Everyone has one!
You're not interested in phones anymore —
Hawkins: Not particularly.
Why's that? Just dull, been-there-done-that?
Hawkins: I remember when I met Donna, she told me the story of why she went to work at Apple early on. Because she said, "Oh my God, I had this epiphany that personal computers are gonna be huge!" And we were both attracted to what’s gonna happen, what’s gonna be obvious 20 years from now. It’s like, okay, what do you have to do to be successful in mobile right now? Well, how good is your camera? How good is your low-light this, and how good is the screen, OLED versus ... I mean, that's boring!
Dubinsky: It's incremental work now. It’s incremental! I mean, maybe there's some big advances out there that are gonna dramatically change it but —
Do you think the form factor's gonna change?
Hawkins: I'm happy with my form factor! Why would it be any different?
On your head or ...
Hawkins: No, I don’t think so.
Google Glass, even though it’s a bad execution, it might be a good idea ...
Hawkins: Well, look, something will probably replace it in the future, but at the moment, I don't think so.
Dubinsky: To me, the much more interesting aspects of mobile have been the applications on top of it! Things like Uber, or Airbnb. There have been fascinating applications, software directions, so I think there will continue to be that — things that we never thought of — that people will do with these things that'll be world-changing and important.
Right, so what do you imagine happening, or do you think that's also played out, Jeff? Because some people feel the app economy is sort of gonna collapse.
Hawkins: You know, it's so hard to predict these details. What you can get right are the really big trends. And then the details, you’re never gonna get those right.
So what does a phone look like in ... ?
Hawkins: You know, I don’t think about this space much, and I'm very hesitant to speculate —
Well, it could be full of brain information, right? Smarter phones is what we're looking for, a smarter smartphone!
Hawkins: If you wanna summarize Donna's and my relationship for 20-something years, it starts out with Donna understanding, before we met, that the future of computing was personal computing. Then, together, it was the future of personal computing with mobile computing. And now, it’s the future of computing is intelligent machines that work on the principles of the brain. This is going to be the dominant driver in technology going through the remainder of the century. And I'm so certain of that, so confident in that, that it makes you just put up with all the crap that happens every day, and the things that you have to deal with, and the difficulty of our task. But we are attracted, me from the technology side, her from a sort of development/business side, we are attracted to the really big ideas. So it’s less interesting to be working on the old stuff.
Dubinsky: For us, it’s really, it’s about impact in the end. You know, where can you invest your short time here on Earth to have the most impact for the good and, you know, what are you gonna work on where you can move the needle? And to me, what’s been remarkable about this journey is that — and it’s rare that you can say this — you can actually see more clearly 10 years from now than two years from now. Usually, you're gonna look out two years and say, "Okay, I know what's gonna happen, but I don’t know what’s gonna happen in 10 or 20." I think it’s the opposite in this case. We believe this will be ubiquitous.
So give me a use scenario. Is it a car? Because the human brain is good at driving a car.
Hawkins: Yeah, but I think that’s thinking too narrowly. The human brain is amazing!
I have a small human brain.
Hawkins: No you don’t. It’s beautiful, and it’s just like everyone else's! So the human brain is incredible for the number of things it can do that it didn’t evolve to do: Engineering and designing this building, structures, materials, science, physics, mathematics, space exploration... The things that we do, as a species, that no other species has ever done, or could do, are just incredible! And we didn’t evolve to do these things. So we have this universal learning algorithm, this has been known for a long time, that the brain works on this universal learning algorithm that can be applied to all these different types of problems that no one would ever guess. Think about it this way: Today's computers work on a Von Neumann or Turing architecture, which says you can program a machine to do anything. And the brain shows that there's another algorithm that says, you can learn to do anything. And that anything doesn't overlap much with what computers do. Computers are not good at designing buildings and doing physics and mathematics and poetry and so on, but humans are. And we can build machines that learn to do an incredible number of things that we've never thought of!
Here's another example. Humans are limited by our physical presence. We have to have a body, and someone tells us that machines have to have a body. But we have a certain set of sensors — we have eyes, ears and skin. And we have a physical embodiment of those. Intelligent machines don’t have to be limited by that. We can build intelligent machines with sensors that are incredibly different, that look at nanoparticles, that look at outer space. They will think in "n" dimensions, they will think in protein-folding. We have limits to what we can do because of the impedance between our sensors and our bodies and the things we wanna learn about.
So Donna, we're gonna finish up soon, but give me something — what would be an application? I know it sounds crazy, but regular people need to know, what does that mean?
Dubinsky: Well, I'll give you an idea of what some people are working on today. It’s a wide variety. I don’t think it's necessarily reflective of, "What are the big applications in 10 years?" But we have people looking at crop sensors and trying to understand what's going on in a field, and can we improve what the farmer’s able to do in the field? We have a lot of work being done in language, so reading contracts with the machine, so it can understand whether contracts are in compliance or not in compliance. Searching manuals to find answers on technical-support problems — today, they sit there with 80 manuals from various versions of the product and they simply can’t look things up in an easy way. We have one company that's monitoring beer taps and trying to get the quality right in beer taps.
Using brain algorithms?
Dubinsky: Using brain algorithms! Because they don’t have to be programmed, they're learning from the data. They're looking at the sensors, the sensory information. It's temperature, it's flow ... it’s a bunch of sensors.
What any old bartender can do.
Hawkins: Well, actually, Donna didn’t mention, we haven't mentioned so far, but the brain works on time-based data. Things are always changing in time, whereas most machine-learning algorithms are not time-based, they're static. And so, all these applications Donna just mentioned, actually, like the beer tap, people are sensing the temperature, the flow, over time — every minute, every 10 seconds, whatever. And so you model the data over time.
Dubinsky: And they don’t have to set up thresholds in advance. "Oh if it hits that temperature ..." It learns what’s normal. It learns Saturday night is different than Tuesday night. I mean, it’s just a fun example. But, you know, we're monitoring servers, as I said, to say, "Is there something unusual going on with a server?" There’s maybe some latency in the networks, or some bottlenecks that we can predict in advance. We're learning that temporal data comes in, we learn the pattern, we can make a prediction, we can find anomalies. There's a lot of applications today that are very interesting. Again, we’re not focused on any specific application, but an amazing number of developers are working on this and looking at interesting ways to apply it.
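Numenta's actual HTM algorithms model temporal sequences and are far more sophisticated than anything that fits here, but the core idea Dubinsky describes — learning what's "normal" from a data stream instead of setting thresholds in advance — can be sketched with a toy streaming detector. This is an illustrative stand-in, not Numenta's method; all class and parameter names are hypothetical.

```python
# Toy sketch of threshold-free streaming anomaly detection.
# Real HTM learns temporal patterns (e.g. Saturday vs. Tuesday);
# this version only tracks a running mean/variance per reading.
import math

class StreamingAnomalyDetector:
    """Learns what's 'normal' online; no thresholds set in advance."""

    def __init__(self, decay=0.99, tolerance=4.0):
        self.decay = decay          # how quickly old data is forgotten
        self.tolerance = tolerance  # std-devs of deviation that count as anomalous
        self.mean = None
        self.var = 1.0

    def observe(self, value):
        """Return True if `value` looks anomalous, then learn from it."""
        if self.mean is None:       # first reading: nothing is 'normal' yet
            self.mean = value
            return False
        deviation = abs(value - self.mean) / math.sqrt(self.var + 1e-9)
        anomalous = deviation > self.tolerance
        # Update the model of 'normal' (exponentially weighted moments).
        self.mean = self.decay * self.mean + (1 - self.decay) * value
        self.var = self.decay * self.var + (1 - self.decay) * (value - self.mean) ** 2
        return anomalous

# A beer-tap temperature stream: steady around 4°C, then a sudden fault.
detector = StreamingAnomalyDetector()
readings = [4.0, 4.1, 3.9, 4.0, 4.2, 4.0, 12.0]
flags = [detector.observe(t) for t in readings]  # only the spike is flagged
```

Nothing here was told "4°C is normal"; the detector inferred it from the stream, which is the point Dubinsky is making about not configuring thresholds up front.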
Last question, Jeff. So if you're working on these brain algorithms for machines, what about our brains ourselves? Do they get better? Does the "old brain" go away? It doesn't seem like it's going away this season. It seems like the old brain is having a resurgence!
Hawkins: Yeah, this is interesting, I actually wanna write a book about this topic.
It’s a great idea! Because literally the old brain is, like, doing pretty good this year.
Hawkins: As a species, what makes us unique, and what we wanna be proud of in the longest term, is the things we do that are intelligent, intellectual — our ability to learn and understand the world. That's what makes us unique! Yet we are saddled with our old brain, as well. And I think, if I wanna leave you with a very long-term thought, as a species, it will be very interesting to see if we can transition to the point where we embrace more of our intellect and somehow shed the problems that we've inherited as an animal species. And that will determine whether we survive as a species for a long time or not. And the things we should be most proud of as a species are not that I, you know, killed this person over here, or stole their land, or got better food. It’s going to kinda be like, what did we leave from an intellectual point of view? What is our knowledge that we've achieved about the universe? And that's a problem for humanity to figure out.
Is the brain physically developing like that?
Hawkins: The brain is already developed. It’s not changing much now. We continue to evolve as a species, like every other animal does. But right at the moment, you know, our technology is evolving so much more rapidly than any kind of evolutionary process that biology could handle. So the question is, can we as a species figure out how to embrace our better side, our intelligent side, and deal with the fact that we —
Can we help that along? I mean, one thing Elon was talking about was "neural lace."
Hawkins: I don’t know what that is.
Well, you stick it in the back of your veins and then you suddenly get smarter. You turn your brain into a computer, I guess, and a serious computer. So if you can do math better, you can —
Hawkins: I think it's unlikely that we will be able ... Certainly you're not gonna be able to upload your brain to the computer.
I think they were talking about uploading things to your brain —
Hawkins: I think that's gonna be unlikely too, unfortunately. I wouldn’t count on that.
All right. Damn, I was hoping that would work out! And then I could finally understand quantum physics! Damn!
Hawkins: Maybe it will happen, but we shouldn't count on it! We shouldn't sit around saying, "Oh, yeah, that will solve our problems!"
I wanted just to stick something in — I'd learn karate in two seconds.
Hawkins: Yeah, we'll go right to "The Matrix," isn't that what they did? I don’t think that's gonna happen.
Damn! All right, thank you so much. Donna Dubinsky and Jeff Hawkins of Numenta. They're working on how computers get brains and how brains evolve. Thanks very much.
Hawkins & Dubinsky: Thank you.
This article originally appeared on Recode.net.