A recent study published in Nature Communications sheds light on the brain’s learning process. These advances could help improve the performance of artificial intelligence.
“Our starting point is that all cognitive processes that occur in our brain, including learning, are physical processes that can therefore be simulated and analyzed using computers,” says Eilif B. Muller, assistant professor of neuroscience at the University of Montreal. The researcher has been working on modeling brain processes for twenty years using mathematics and physics to better understand how the brain learns. “Learning is one of the most interesting processes: it’s where our adaptability comes from,” he adds.
It was in Switzerland, on the international team of the Blue Brain Project, that Professor Muller began to explore the neocortex. The initiative was launched in 2005 by Professor Henry Markram of the Swiss Federal Institute of Technology in Lausanne, with the aim of simulating the neocortex, the area of the brain that resembles the shell of a walnut. It is in this 2-to-3-millimetre-thick surface that adaptive learning takes place: language development, mathematical understanding, learning from mistakes, processing of images, sounds, smells, and so on.
Virtual simulation
The brain’s circuits are made up of neurons connected by synapses. For many years, scientists in the field have thought that what we learn is written, or stored, in the brain as long-term changes in the connections between neurons, at the synapses. This synaptic plasticity is believed to be the substrate of learning.
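To picture what a synaptic plasticity rule can look like, here is a minimal, purely illustrative sketch of the classic Hebbian idea, in which a synapse strengthens when the neurons on either side of it are active at the same time. It is a hypothetical Python toy (the population sizes, learning rate and decay term are arbitrary), not the rule described in the study.

```python
# A minimal, hypothetical sketch of Hebbian-style synaptic plasticity
# (illustrative only; not the rule identified in the Nature Communications study).
import numpy as np

rng = np.random.default_rng(0)

n_pre, n_post = 20, 5                       # sizes of pre- and post-synaptic populations
W = rng.normal(0, 0.1, (n_post, n_pre))     # synaptic weights (connection strengths)
eta = 0.01                                  # learning rate

for step in range(1000):
    pre = (rng.random(n_pre) < 0.2).astype(float)   # random presynaptic activity
    post = np.tanh(W @ pre)                          # resulting postsynaptic activity
    # Hebbian update with a small decay term to keep weights bounded:
    # a synapse strengthens when pre- and post-synaptic activity coincide.
    W += eta * (np.outer(post, pre) - 0.1 * W)

print("mean |weight| after learning:", np.abs(W).mean())
```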
Eilif B. Muller and his collaborators at the Blue Brain Project performed a physical simulation of learning processes in the neocortex. “We have been trying for years to find rules and patterns for these changes. It’s like having several puzzle pieces, with some missing, and no reference image,” says Muller. To do this, they built a virtual copy of a young rat’s neocortex, trained it using data, and then compared the model’s predictions with reality.
The results, published in Nature Communications, are surprising. “We found a single rule that unifies the experimental observations we have right now, and our model can make predictions,” says Muller. The team now hopes to repeat the experiment.
At the intersection of neuroscience and AI
Although fundamental, this research on the brain paves the way for several practical applications, helping, among other things, to understand the mechanisms of neurodevelopmental disorders such as autism, schizophrenia and Down syndrome. But a better understanding of this important region of the brain could also help build more advanced artificial intelligence. “Artificial intelligence researchers are actively trying to build systems that share the powerful learning capabilities of the neocortex,” notes Muller.
Montreal is also an important hub for this research at the intersection of neuroscience and artificial intelligence, with the presence of Yoshua Bengio (scientific director of IVADO) and his colleagues, several organizations, and significant government support. This is also why Professor Muller decided to move here.
“The objective of our research is twofold,” says Muller. As we better understand the mechanisms of learning, we can implement them in new approaches to artificial intelligence. But the mathematical language of AI can also help neuroscience: “AI has developed mathematical concepts and languages that can describe the rules of learning in artificial systems,” says Muller, who aims to bring these two fields of research closer together.
Artificial intelligence… more human
But what would a “smarter” AI do? At the moment, AI systems are trained all at once, on enormous amounts of data, so that they learn to make classifications (to tell a dog from a cat, for example). But once all this learning has been done, the machine cannot adjust along the way (if, for example, we notice that a Chihuahua has been labeled as a cat rather than a dog). The algorithm is unable to learn continuously in this way and will forget everything it already knows when presented with new data.
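This failure mode is often called catastrophic forgetting. The toy sketch below, assuming a simple logistic-regression classifier written in Python (a hypothetical example, not the systems discussed in the article), shows how naively fine-tuning on the new “Chihuahua” examples alone erodes what the model had already learned about cats and dogs.

```python
# A toy, hypothetical illustration of catastrophic forgetting with a
# logistic-regression classifier and plain gradient descent (NumPy only).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(w, b, x, y, epochs, lr=0.1):
    """Plain batch gradient descent on the logistic loss."""
    for _ in range(epochs):
        p = sigmoid(x @ w + b)
        w -= lr * x.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, x, y):
    return np.mean((sigmoid(x @ w + b) > 0.5) == y)

# Original training set: "cats" (label 0) and "dogs" (label 1) as 2D blobs.
cats = rng.normal((-2.0, 0.0), 0.5, (200, 2))
dogs = rng.normal((2.0, 0.0), 0.5, (200, 2))
x_old = np.vstack([cats, dogs])
y_old = np.concatenate([np.zeros(200), np.ones(200)])

w, b = np.zeros(2), 0.0
w, b = train(w, b, x_old, y_old, epochs=300)
print("accuracy on original data after initial training:",
      accuracy(w, b, x_old, y_old))

# New, corrected data: "Chihuahuas" look like cats but are really dogs (label 1).
x_new = rng.normal((-1.5, 0.5), 0.3, (50, 2))
y_new = np.ones(50)

# Naively fine-tune on the new examples only ...
w, b = train(w, b, x_new, y_new, epochs=1000, lr=0.5)

# ... and the model forgets how to separate the original cats and dogs.
print("accuracy on new Chihuahua examples:", accuracy(w, b, x_new, y_new))
print("accuracy on original data after fine-tuning:",
      accuracy(w, b, x_old, y_old))
```

Because the fine-tuning data contains only one class, gradient descent simply overwrites the earlier solution, which is one simple way the forgetting described above can arise.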
The way our synapses react could inspire new ways to address this and other issues. “We see big differences between how algorithms learn and what we know about synapse changes in the brain. It’s a contradiction that fascinates scientists and it’s very fertile ground for the cross-fertilization of ideas,” Muller concludes.