Modern AI powered by deep learning only caught the world’s attention in the last few years, but some academic researchers have been working to develop and understand its potential for much, much longer than that. Among the uncontested leaders of the field is Yoshua Bengio, scientific director of Mila, the Quebec AI Institute, a research hub that houses what is believed to be the world’s largest concentration of academic deep learning researchers.
In 2019, Bengio, along with Geoffrey Hinton and Yann LeCun, won the 2018 Turing Award, the rough equivalent of a Nobel Prize in the field of computing, for their contributions to modern deep learning.
Over the last year, Bengio has expanded his focus to include thinking and writing about societal risks, including existential risks, from AI. In doing so, he has once again distinguished himself as a brilliant thinker, attacking a complicated and fraught issue with unusual clarity. Indebted to no large commercial lab, deeply respected in his field, and in conversation with leading academics who agree with him as well as those who don’t, Bengio has been uniquely positioned to move forward the conversation about the societal and existential risks posed by AI progress.
In testimony before the US Senate this summer, Bengio laid out the case for concern over the rapid progress of AI toward human-level intelligence. Such capability was “[p]reviously thought to be decades or even centuries away,” he told Congress, but he and his fellow Turing Award winners “now believe it could be within a few years or a couple of decades.” He warned that “[n]one of the current advanced AI systems are demonstrably safe against the risk of loss of control to a misaligned AI.”
Bengio’s writing and speaking about existential risks from AI, why to take them seriously, and what can be done about them are among the clearest out there. But he has done more than publicize the problem; he has also started working on proposals for solutions.
In a recent paper in the Journal of Democracy, Bengio laid out a proposal for coordination against AI-driven concentrations of power and potential existential risks. In dialogues with other academics, he has sought to clarify the case that these risks are serious and to identify what work can be done to mitigate them, and he has worked to mend the counterproductive divide between those who fear AI harms related to discrimination and bias and those who fear human disempowerment.
In a recent interview with the Bulletin of the Atomic Scientists, Bengio said that he thinks the challenge ahead of us is solvable, but that solving it will require international coordination to do the right thing, and that is not easily achieved. If we do achieve it, it seems very likely to me that Bengio’s patient, clarifying, outspoken work will be part of how we get there.