
He co-founded Skype. Now he’s spending his fortune on stopping dangerous AI.

Jaan Tallinn explains why AI alignment is such a crucial problem.


If you’ve ever used Skype or shared files on Kazaa back in the early ’00s, you’ve encountered the work of Jaan Tallinn.

And if humans wind up creating machines that surpass our own intelligence, and we live to tell about it — we might have Tallinn’s philanthropy, in small part, to thank.

Tallinn, whose innovations earned him tens of millions of dollars, was one of the first donors to take seriously arguments that advanced artificial intelligence poses a threat to human existence. He has come to believe we might be entering the first era in human history where we are not the dominant force on the planet, and that as we hand off our future to advanced AI, we should be damned sure its morality is aligned with our own.

He has donated more than $600,000 to the Machine Intelligence Research Institute, a prominent organization working on “AI alignment” (that is, aligning the interests of an AI with the interests of human society) and more than $310,000 to the Future of Humanity Institute at Oxford, which works on similar subjects. He’s also co-founded two new organizations studying AI and other extinction threats: the Centre for the Study of Existential Risk at Cambridge and the Future of Life Institute.

These are, suffice it to say, not typical research interests for a philanthropist, even a tech philanthropist. It’s much more common for the newly wealthy to spend their fortunes as patrons of the arts, on donations to their alma maters, or maybe, in the best-case scenario, on fighting global poverty.

So we at the Future Perfect podcast wanted to talk to Tallinn about how he arrived at this approach, and why he thinks donating to protect us against AI is worthwhile. Our full episode on Tallinn and AI safety philanthropy will come out next week, but for now, here’s a taste of our conversation with Tallinn.


Dylan Matthews

When did you first start thinking about doing philanthropy, whether on AI alignment or other issues?

Jaan Tallinn

Ten years ago, there was this lawsuit between eBay and other parties involved in Skype, including the co-founders of Skype.

So around that time, I started looking for ... new things to do, and then I stumbled upon Eliezer Yudkowsky’s writings on the internet, which I found really compelling.

[Editor’s note: Yudkowsky is an autodidact who writes widely on AI and related topics and authored the popular Harry Potter fanfic Harry Potter and the Methods of Rationality, which reflects many of Yudkowsky’s somewhat eccentric views on rationality and science.]

Dylan Matthews

For readers who might not have heard of Eliezer Yudkowsky — what did you start reading by him, and what caught your interest?

Jaan Tallinn

When I discovered him, he had written about 1,000 essays.

And before I discovered that he could write faster than I could read, I had already planned and written software to scrape everything that he had written and format it for easier reading.

So I spent about a year just reading his stuff ... reading his essays about rationality, about human psychology, about science, and about AI, and about why we need to put effort into making sure that the long-term future is going to be good.

Byrd Pinkerton

Was there a particular passage or idea that kind of shook you or reformatted how you were thinking about the world?

Jaan Tallinn

The overall idea that caught my attention, one that I had never thought about, was that we are seeing the end of an era during which the human brain has been the main shaper of the future.

And as we hand over that future to non-human minds, the future might be really, really weird. We are so accustomed to humans being the king of the planet that it’s kind of unimaginable for us how different it might be.

Dylan Matthews

I know there were some computer scientists — especially 10 years ago, in the period you’re describing — who were very dismissive.

I’m thinking of people like Andrew Ng, who had that famous quote about how worrying about human-level AI is like worrying about overpopulation on Mars.

What was it that made you think, “Eliezer’s onto something. This isn’t just a crank-ish thing to worry about. We should be really, really concerned about this”?

Jaan Tallinn

I’m a programmer, and I read arguments as if they were programs — at least when I want to be careful about them. I was just reading, and I just couldn’t find any bugs in [Yudkowsky’s writing on AI].

Dylan Matthews

So you read these arguments. You became convinced. What was your next step?

Jaan Tallinn

So the first thing I did was, I just started supporting Eliezer’s organization, which is now known as MIRI — the Machine Intelligence Research Institute — and I’m still a significant supporter of that organization. I’m pretty enthusiastic about what they’re doing.

But then I also started asking for introductions. I went over to Silicon Valley and spent a couple of weeks there just having meeting after meeting after meeting.

I had this policy where I gave just small sums to people and saw what they could do, and then eventually increased support. Kind of like taking an investment approach to philanthropy.

Byrd Pinkerton

As a layperson, I wonder whether promoting economic growth or fighting inequality might be even more important ways to protect the future.

Jaan Tallinn

I think there’s a fairly solid argument for why addressing existential risks is what is really important right now.

Open Philanthropy, which is the largest philanthropic organization that supports existential risk reduction, has this system where they say that a cause has to be important, it has to be tractable, and it has to be neglected.

And existential risk checks all three of those boxes, at least to a significant degree.

The reason why it’s important is ... [sighs] The other causes sort of assume that we will not go extinct. Like economic growth. What’s the use of economic growth if we don’t have people?

It seems to be neglected because a friend of mine counted the number of PhDs working full time on existential risk reduction as of last year. I think the answer he got was ... a dozen or so.

It’s unlikely that this is the optimal number for humanity.

The third criterion was that it has to be tractable. We don’t really have any evidence that we are doomed by default. It definitely seems like something that we can move the needle on.

Sometimes I compare AI risk specifically with alien risk. Think about if we got the news that a superior race of aliens is on the way, and that in 20 years, they’re going to be here.

The AI situation has some similarities, but it has one really important difference: it’s us who are going to build that AI. We have this degree of freedom: what kind of AI are we going to get on this planet?

And so in that sense, this is way more tractable than preparing for alien risk.

Dylan Matthews

To give an alternative comparison: When I tell people I’m a little worried about AI, the number one answer is, “Well, why aren’t you worried about climate change? We know that’s coming. We know that’s going to be really bad.”

I wonder if you hear that a lot.

Jaan Tallinn

So, you make a table: tractable, neglected, and important.

Important? Yep, check.

Neglected? Not really, no.

And is it tractable? Kind of. But I would argue it’s much less tractable than AI risk is at this stage. So it’s just a price-performance situation.

Dylan Matthews

Another argument that I sometimes hear people skeptical of AI alignment make is that it seems like a group of programmers and computer scientists have become convinced that programming and computer science are really, really essential to preserving the future of humanity.

And there’s an obvious question about bias there. People might be predisposed to think that what they’re doing is important.

Jaan Tallinn

A lot of people don’t evaluate arguments. They try to psychoanalyze the people who make those arguments.

Sure, there have been doomsayers before, but is that a guarantee that nothing bad is going to happen? No.

Sometimes, I say that there tend to be two categories of people: “people people” and “atoms people.”

People people think that everything’s going to be fine as long as you know the right people and make good deals, etc. I mean, I’m exaggerating, obviously.

Atoms people are people who think that, no, the world is fundamentally made of atoms. People are made up of atoms. Bad things happen to atoms. We’re not going to be safe from that.

Byrd Pinkerton

Is there stuff you see being reported about AI that makes you want to hold your head and scream? Like people just being way too gung-ho about a new technology or not considering the risks?

Jaan Tallinn

Yeah. So I think the problem with media is that your incentives are not really aligned with humanity’s. That’s why you have editorialized headlines. That’s why you have this Darwinian situation going on there. You have to survive.

That’s why I kind of try to talk to people directly rather than over media. I don’t have a Twitter account, for example. That would be just silly.

Byrd Pinkerton

I’m curious if there are movies about AI ending the world that you think are well done, versus movies where you’re like, “this is probably misinforming people”?

Jaan Tallinn

Personally, I really like the movie from the ’70s called Colossus: The Forbin Project, about an arms race between the Soviet Union and the US in military AI.

And basically how it ended was that the AIs on both sides negotiated a deal and kind of brushed humanity aside.

The second one was Ex Machina, which just portrayed what an indifferent AI looks like, and in a very kind of bizarre way, which was very interesting.

Dylan Matthews

So with the proviso that any given situation is unlikely, you don’t think people should be worried about the opening scene in Terminator, where machines and humans are shooting at each other on a battlefield that looks like World War II?

Jaan Tallinn

Yeah, that just doesn’t look realistic.

Like, think about when we want to terminate cockroaches. We don’t build small cockroach-like robots and then send them out there with lasers and stuff.

No, we just do something that looks way less interesting from the cockroaches’ perspective.

Byrd Pinkerton

In this scenario, are the human beings the cockroaches?

Jaan Tallinn

Yeah.

AI has no reason to care about the atmosphere. And so it’s like when we terminate cockroaches, that looks like an environmental catastrophe … to them.

