A docu-reality: creating your digital double
It is not easy to get a handle on artificial intelligence (AI). Its progress is so rapid that you can quickly feel overwhelmed. “The tools for the general public do not yet exist,” notes Guillaume Lajoie, assistant professor in the Department of Mathematics and Statistics at the University of Montreal. And in French, sources are even rarer. In the three-part docu-reality AI, to be or not to be, journalist and host Mathieu Dugal creates his digital double, an approach that allows him to explore the impacts of artificial intelligence on our lives with the help of luminaries in the field. A useful exercise for grasping the mechanics of AI. “You might have the impression that this series was made a year too early, before the arrival of ChatGPT, but it is excellent and very much up to date,” says the specialist. Broadcast on the Radio-Canada website, AI, to be or not to be shows, for example, “to what extent we can produce systems that resemble human intelligence from the data we leave on the web”. A bit like a human brain, which feeds on the events of a life in order to learn… and become intelligent. A revealing parallel, in Guillaume Lajoie’s judgment.
Articles: diving into the heart of the beast
To see what is hidden under the hood of artificial intelligence and to understand the recent technical advances that led to the birth of a conversational agent like ChatGPT, as well as those that will succeed it, Guillaume Lajoie points to a few articles, in English. In a post published on Medium, Blaise Agüera y Arcas, a vice president at Google, asks whether a robot can really understand humans. “It’s an almost philosophical questioning,” says the researcher. It is a bit technical, but it is a fundamental aspect, and Agüera y Arcas’s piece makes a difference. Guillaume Lajoie also suggests reading an article from The New York Times that reports on work done with a preliminary version of GPT-4. Microsoft computer scientists were so surprised by its ingenuity in solving puzzles that they saw in it a first, admittedly incomplete, model of a general artificial intelligence system, that is, a tool capable of learning in a completely autonomous way. They therefore documented the robot’s behavior in a long, 155-page article, which can also be consulted. The work has been criticized by some for a lack of objectivity, but it lets us “see what’s in the sausage,” says Guillaume Lajoie.
An essay: tools to be regulated
An immense capacity for processing information is one of the strengths of artificial intelligence. Thanks to the analysis of vast amounts of data and knowledge, AI will facilitate the detection of pathologies and the development of new health treatments, or the design of materials with unsuspected properties, for example. It could also contribute to the discovery of solutions to the major crises affecting humanity, particularly in terms of access to knowledge and the environment, Guillaume Lajoie recalls. “I am convinced that AI will have huge benefits,” he says. “We’re all going to be X-Men!” Still, malicious use of AI raises the specter of manipulation of public opinion through social media and runaway hacking, not to mention reckless, dishonest, or biased use of all the data left lying around on the web. These are some of the reasons that have convinced many experts, such as Australian professor Kate Crawford, to call for better oversight of AI. In her Counter-atlas of artificial intelligence, this recognized expert even fears a new colonialism for the benefit of the powerful, including the web giants. For her, “the development of AI tools by companies that are not accountable is really not a good idea,” summarizes Mr. Lajoie.
Counter-atlas of artificial intelligence
Zulma essays
384 pages
A study: taming AI
“The genie is out of the bottle,” says Guillaume Lajoie. While calls for a moratorium on the development of AI are useful for raising awareness of this technology’s impacts on our world, it would be futile, says the researcher, to try to stop the train now that it is moving. Nor is there any point in spinning out catastrophic scenarios, he adds. “It can be maddening and, yes, there are future considerations that are important, but they are linked to much more immediate issues,” says the researcher. What needs to be done now is to find ways to ensure that AI does not threaten humanity. “You have to be pragmatic, lower the temperature and ask yourself what you need to develop right away to interact well with this technology,” he says. Where to start? Mila and UNESCO invited researchers from around the world to reflect on the governance of AI. The 18 articles resulting from the exercise, brought together in a book of almost 400 pages entitled AI governance blind spots, address various issues such as the reliability of AI, its effects on the rights of minorities and on democracy, and the development of autonomous weapons. Many of these questions call for nuanced answers, but for Guillaume Lajoie, one thing is already clear: “We should not automate decisions in a military context, period.”
Who is Guillaume Lajoie?
- Assistant professor in the Department of Mathematics and Statistics at the University of Montreal, Guillaume Lajoie is a senior academic member of Mila – Institut québécois d’intelligence artificielle.
- The researcher holds a Ph.D. from the Department of Applied Mathematics at the University of Washington, Seattle. He holds a Canada-CIFAR Research Chair in Artificial Intelligence and the Canada Research Chair in Neural Computations and Interfaces.
- His research focuses on artificial intelligence and neuroscience. Recently, he has worked in particular on developing algorithms to optimize brain-machine interfaces.