
Yoshua Bengio

Full Professor at Université de Montréal; Founder and Scientific Director of Mila – Quebec AI Institute

John Urschel is a Junior Fellow at Harvard. Previously, he was a member of the Institute for Advanced Study and, before that, a doctoral student in mathematics at MIT. In 2017, Urschel was named to Forbes' "30 under 30" list of outstanding young scientists. His research interests include numerical analysis, graph theory, and data science/machine learning.

Welcome and Opening Remarks at 4:45 pm
Public Lecture: 5:00 pm - 6:00 pm

Recognized worldwide as one of the leading experts in artificial intelligence, Yoshua Bengio is best known for his pioneering work in deep learning, which earned him the 2018 A.M. Turing Award, "the Nobel Prize of Computing," together with Geoffrey Hinton and Yann LeCun. He is a Full Professor at Université de Montréal and the Founder and Scientific Director of Mila – Quebec AI Institute. He co-directs the CIFAR Learning in Machines & Brains program as a Senior Fellow and serves as Scientific Director of IVADO. In 2019, he was awarded the prestigious Killam Prize, and in 2022 he became the most cited computer scientist in the world. He is a Fellow of both the Royal Society of London and the Royal Society of Canada, a Knight of the Legion of Honor of France, an Officer of the Order of Canada, and a Canada CIFAR AI Chair. Concerned about the social impact of AI, he actively contributed to the Montreal Declaration for the Responsible Development of Artificial Intelligence.



Mathematical challenges towards safe AI

Advances in the algorithms and computational capabilities of AI systems based on deep learning have been impressive and herald possibly disruptive transformations in the coming years and decades, with great potential for both benefits and risks to humanity. The three winners of the Turing Award for deep learning (2018) expect that broad human-level capabilities are likely to be achieved within just a few years or decades, and industry is making investments of billions of dollars per month that are likely to accelerate this process. However, we do not yet know how to design provably safe and controllable AI systems, i.e., systems that behave as we intend. This misalignment could threaten democracy, national security, and possibly our collective future, whether through malicious actors or a loss of control to runaway AIs. Worse, arguments have been made suggesting that the state-of-the-art AI methodology, based on reinforcement learning, would yield less and less safety as computational power increases. This presentation will argue that there may be a way to design AI systems with probabilistic safety guarantees that improve as we increase the computational capabilities of the underlying neural networks. This would rely on efficient and amortized Bayesian inference in learned causal models, designing AI systems inspired by how scientists and mathematicians come up with theories that are compatible with reason and observed evidence.
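As a rough caricature of the idea of a probabilistic safety guarantee (a toy sketch, not the method presented in the talk), one can imagine a guard that averages harm predictions over a set of sampled "theories" — stand-ins for posterior samples from a learned causal model — and permits an action only when the estimated probability of harm falls below a threshold. All names and the structure below are illustrative assumptions:

```python
import random

def harm_probability_estimate(action, theories, n_samples=1000, rng=None):
    """Monte Carlo estimate of P(harm | action), averaging over 'theories'.

    Each theory is a hypothetical posterior sample of a world model:
    a callable returning 1 if it predicts the action causes harm, else 0.
    (Illustrative stand-in for amortized Bayesian inference.)
    """
    rng = rng or random.Random(0)
    harms = 0
    for _ in range(n_samples):
        theory = rng.choice(theories)  # draw one theory from the "posterior"
        harms += theory(action)        # tally its harm prediction
    return harms / n_samples

def safe_to_act(action, theories, threshold=0.01):
    """Conservative guard: allow the action only if the estimated
    harm probability across theories stays below the threshold."""
    return harm_probability_estimate(action, theories) < threshold
```

The point of the sketch is the direction of the scaling argument: with more compute, one can draw more (and better) posterior samples, which tightens the estimate and hence the guarantee, rather than making the system less safe.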
