Interview with Hugo Larochelle, from Google DeepMind | More intelligence for more solutions

An authority on artificial intelligence (AI) in Quebec and head of Google’s Montreal laboratory since 2016, Hugo Larochelle is a researcher who loves “understudied subjects” and who champions AI in the service of the common good while calling for vigilance. La Presse met him ahead of his talk this Wednesday at the Palais des congrès during the ALL IN event.



Your talk will focus on what you call “artificial intelligence for the common good.” Do you have any concrete examples?

Certainly. I’ll mention AlphaFold, developed by Google DeepMind. It’s a tool that predicts the structure of proteins, the building blocks of life. It provides key information for figuring out what cures for different diseases might look like and how to develop drugs. A lot of people use it.

You have been working on AI since 2004, and two years ago a revolution came along: generative AI. How do you feel about the arrival of this new superstar, when you have long been working in the background on something perhaps less spectacular?

In fact, many of the early ingredients we developed go back to my PhD. Along with other students, we contributed to the fundamental groundwork that ultimately led to this generative AI that almost anyone can now use. It’s always appealing to say that just a small number of people revolutionized the world. The reality is that technological development is the work of a great many people. I’m not vain enough to claim that what I did was the most important element leading to these developments. It’s a continuity. I see signatures, in a way, of what I did with Yoshua Bengio during my PhD in what we now see in generative AI, for example.

PHOTO ALAIN ROBERGE, LA PRESSE

Hugo Larochelle in the offices of the DeepMind laboratory in Montreal

But you don’t work on generative AI as such?

In the research I choose now, I tend to stay away from that. I’ve always liked seeking out topics that are understudied at a given moment, partly for my own sanity. When you’re in a field with a lot of competition, it’s demanding. And I’ve always found it a bit of a waste of the community’s time when too many people are interested in the same problem.

What are these understudied topics?

When I started the group formerly known as Google Brain, one of the people I hired was Danny Tarlow. He was really interested in artificial intelligence for programming and software development. At the time, it was pretty niche. I made sure we made room on my team for that research.

So, an AI that helps programmers detect their mistakes.

Yes, exactly. I supported the initial research on that, and now it’s very successful.

One of the topics that interests me a lot, and which I’ll touch on briefly in the talk, relates to conservation ecology. We have a bioacoustics project called Perch. We take an AI model and train it on sound recordings from natural environments, such as forests, to identify which bird species can be heard.

And the reason we’re interested in this particular problem is that birds are considered “indicator species”: they provide information about the overall health of the ecosystem.

What’s exciting is that we’ve already observed that training a model to detect bird species also gives us a basis for detecting animal species that are not birds. We’ve shown that this transfers, making it very easy to train models for other species, such as bats or even underwater species.
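To make the transfer idea concrete, here is a minimal sketch, assuming a frozen embedding model pretrained on bird audio. The function pretrained_bird_embedder and the toy data below are hypothetical stand-ins, not Perch’s actual API: the pretrained embeddings are reused unchanged, and only a lightweight classifier is trained on a small labeled set from the new group (bats, say).

    # Hypothetical sketch of the transfer-learning idea described above:
    # reuse embeddings from a model pretrained on bird sounds to train a
    # small classifier for a new group (e.g., bats) with little data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def pretrained_bird_embedder(audio_clips: np.ndarray) -> np.ndarray:
        """Stand-in for a frozen embedding model trained on bird audio.

        A real system would run a neural network here; this just projects
        raw audio features to a fixed-size embedding for illustration.
        """
        rng = np.random.default_rng(0)
        projection = rng.standard_normal((audio_clips.shape[1], 64))
        return audio_clips @ projection

    # A handful of labeled clips from the *new* domain (e.g., bat
    # recordings): transfer learning means this set can stay small.
    rng = np.random.default_rng(1)
    clips = rng.standard_normal((40, 1024))  # 40 clips, 1024 features each
    labels = rng.integers(0, 2, size=40)     # bat species A vs. species B

    embeddings = pretrained_bird_embedder(clips)  # frozen, no retraining

    # Only this lightweight classifier is trained on the new species.
    clf = LogisticRegression(max_iter=1000).fit(embeddings, labels)
    print("training accuracy:", clf.score(embeddings, labels))

The design point matches what Larochelle describes: the expensive step, learning good representations of natural audio, is done once on bird data, so each new species group only needs a small amount of labeled recordings.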

It seems that after the enthusiasm of the early years, AI is now prompting more distrust and more questions. Do you find these concerns exaggerated or normal?

To me, the divergence of perspectives in our community reflects the fact that there is a lot of uncertainty about where we are headed. There is a lot we don’t know; predicting the future is impossible. In a situation like that, my stance is to sit somewhere between the two extremes. The reason we develop AI is its potential. What interests me is working to make sure it has a positive impact.

Now, I also think it’s important to be vigilant, to pay constant attention to how development unfolds. Keeping a critical eye on it seems entirely healthy to me.

This interview has been edited for brevity and readability.

Who is Hugo Larochelle?

  • Bachelor’s degree in computer science and mathematics from the University of Montreal in 2004; doctorate in 2009
  • Assistant professor, then associate professor, at the University of Sherbrooke from 2011 to 2020
  • Research Scientist at Twitter in 2015 and 2016
  • Associate director of the Learning in Machines & Brains program at the Canadian Institute for Advanced Research from 2017 to 2019
  • Research scientist at the Montreal laboratory of Google DeepMind (formerly Google Brain) since 2016
  • Adjunct professor at the University of Montreal, in the Department of Computer Science and Operations Research, since 2017

