Posted at 5:00 a.m.
AI, environment and behavior
“It is impossible to predict, and therefore to program in advance, all the situations that an artificial intelligence (AI) is likely to encounter: it must therefore necessarily learn from its environment,” says Blake Richards, a professor at the School of Computer Science and the Montreal Neurological Institute at McGill University.
This learning is based on a “reward” system, like a treat given to a dog to reward it. “The machine’s goal is to learn which actions to perform, based on its observations, so as to maximize the sum of the rewards it receives over the long term,” explains Doina Precup, also a researcher at the School of Computer Science at McGill University. However, like an overly greedy dog, the machine can sometimes do anything to obtain these rewards, even if it means adopting disproportionate, unexpected or even dangerous behavior.
Let’s take the example of a machine that has “poorly” assimilated a behavior. “If it has learned that laughter is good, it may start laughing more and more often, and especially at inappropriate times,” says Marc Lanovaz. What can be done in that case? “You have to retrain it by playing on its environment, to teach it that it should stop laughing in this or that situation.”
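The reward mechanism described above, and the idea of retraining by changing the environment, can be illustrated with a minimal toy sketch. This is not the researchers’ actual system: the actions (“laugh”, “stay_quiet”), the reward values and the learning setup are all hypothetical, chosen only to show how changing the rewards an environment hands out changes the learned behavior.

```python
import random

# Two hypothetical actions available to the agent.
ACTIONS = ["laugh", "stay_quiet"]

def train(rewards, episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    """Learn a value for each action in a single-situation toy problem.

    rewards: dict mapping each action to the reward the environment
    gives for it. The agent mostly picks the action it currently
    values most (exploitation), but sometimes tries another one at
    random (exploration), and nudges its value estimate toward the
    reward it actually received.
    """
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}  # initial value estimates
    for _ in range(episodes):
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)      # explore
        else:
            a = max(q, key=q.get)        # exploit current best
        # Move the estimate a small step toward the observed reward.
        q[a] += alpha * (rewards[a] - q[a])
    return q

# Environment 1: laughter is rewarded, so the agent learns to laugh.
q1 = train({"laugh": 1.0, "stay_quiet": 0.0})

# "Playing on the environment": stop rewarding laughter and reward
# quietness instead, and the learned behavior flips.
q2 = train({"laugh": -1.0, "stay_quiet": 1.0})
```

After the first training run the agent prefers `laugh`; after retraining in the modified environment it prefers `stay_quiet`. The point is exactly the one Marc Lanovaz makes: the behavior is not reprogrammed directly, only the rewards the environment delivers are changed.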
Playing on the environment to modify behavior is something that Marc Lanovaz knows well: he is a researcher in behavior analysis and psychoeducation at the University of Montreal. What he concretely proposes here — and in an article published in Perspectives on Behavior Science in March 2022 — is to apply the study of human behavior to machines.
AI and humanities and social sciences
Beyond behavior analysis, there are many fields of research in the humanities and social sciences where AI has caused a lot of ink to flow in recent years: philosophy, ethics, psychology and so on. These fields could help engineers test and train AI in the different situations it might encounter, or even detect and correct potential biases or misbehaviors. However, although such interdisciplinary collaborations are desirable, each field generally works in a “silo”, without really communicating with the others.
“The majority of philosophers, for example, love to study topics like machine consciousness, which doesn’t require deep knowledge of the technical details of machine learning, or even knowing how to code,” says Thilo Hagendorff, an ethicist in artificial intelligence at the University of Tübingen, Germany.
“But I think researchers in the humanities would need to acquire this knowledge and this common vocabulary in order to understand the problems that AI engineers face.”
On the engineering side, most engineers are pressed for time: they have to quickly release a product on the market or publish a conference paper, and communicating with the humanities is an extra effort for them. There is no incentive to do so, so in practice there are not many bridges between these fields.
Thilo Hagendorff, artificial intelligence ethicist at the University of Tübingen
These obstacles to collaboration are also well identified on the AI researchers’ side. “The frontiers of knowledge in our field are moving very quickly, so it’s difficult for other disciplines to know where we stand on a particular subject,” says Blake Richards. Moreover, the different disciplines do not publish in the same way, which further accentuates this compartmentalization between fields. “In AI, a lot of publications appear in conferences, but they do not count for researchers in philosophy or the social sciences, who mainly publish books or journal articles with only one or two authors,” says Doina Precup.
AI and the well-being of humanity
Yet all of these researchers agree that more interdisciplinary collaboration could be beneficial for AI and for society.
Ultimately, we are not trying to find out if the machine has a personality, emotions or intentions. Rather, the question is whether the behavior of the machine benefits the well-being or the advancement of humanity.
Marc Lanovaz, researcher in behavior analysis and psychoeducation at the University of Montreal
Studying the machine’s interactions with its environment means not only preventing it from behaving a bit clumsily — such as when it might laugh inappropriately — but also and above all ensuring that it does not negatively interfere with human life. “An algorithm can develop all kinds of behaviors: it will depend on how it is optimized,” says Matthieu Dugal, scientific journalist and popularizer, who recently had the opportunity to explore these questions in the documentary AI, to be or not to be.
“For example, TikTok is an algorithm that very quickly figures out where our weak point is: it plays on the part of our brain that pulls the small lever of gratification. So it is certain that this type of algorithm will produce a very time-consuming system, one that makes us develop reflexes of overconsumption.”
“It’s important to ask these questions now and put in place a framework to steer AI in the right direction,” says Blake Richards. “Because in ten years, when AI is at the heart of our everyday lives, it may be too late.”
Some areas where the study of machine behavior would be relevant
- Social networks (censorship, “filter bubble”)
- Dating apps
- Chatbots
- Justice (discrimination)
- Autonomous vehicles
- Autonomous lethal weapons
- Financial markets (speculation)
Source: Nature