Most charity is focused on the near term — it goes to universities educating people now, or arts organizations putting on shows and exhibits now, or food pantries helping the hungry now.
So what happens when you try to only give to charities that will help humans a long time from now — not just in 100 years, but in a million years?
That’s exactly what Jaan Tallinn, a founding engineer of Skype, has done with his fortune. He was one of the first donors to take seriously the argument that advanced artificial intelligence poses a threat to human existence: not now, maybe not in 50 years, but at some point in the future. He has come to believe we may be entering the first era in human history when we are not the dominant force on the planet, and that before we hand off our future to advanced AI, we should be damned sure its morality is aligned with our own.
He has donated more than $600,000 to the Machine Intelligence Research Institute, a prominent organization working on “AI alignment” (that is, aligning the interests of artificial intelligence with the interests of human society) and more than $310,000 to the Future of Humanity Institute at Oxford, which works on similar subjects. He’s also co-founded two new organizations studying AI and other extinction threats: the Centre for the Study of Existential Risk at Cambridge and the Future of Life Institute.
Tallinn came on the latest episode of Vox’s Future Perfect podcast to talk about the rationale behind his philanthropy and how he was persuaded to care so much about AI.
To people unfamiliar with the argument that AI poses an existential risk, Tallinn’s actions might seem bizarre. So we had Kelsey Piper, a Vox reporter who has written extensively on AI risk, walk through the argument, and Rob Reich, a Stanford philosopher who has been highly critical of big philanthropy, explain why this kind of big experiment in giving might be the best the charitable sector has to offer us.
Read more
- Tallinn explains his concern with AI at an effective altruism conference
- Kelsey Piper explains the risks of unconstrained AI
- AI experts on when they expect AI to outpace human intelligence
- Ted Chiang’s critique of AI safety concerns