This text is taken from the Courrier de l’économie of July 4, 2022.
How will virtual reality affect our lives? Should we fear an artificial intelligence arriving very soon, as in the movie Terminator? This (double) question from Marcel-Nicolas Murillo is timely. A Google engineer has just been temporarily suspended for breach of confidentiality after publishing online a finding with a hint of science fiction: he said he was convinced that the software he helped develop is “sentient”. Sentience is the capacity of a living being to feel emotions, pain, and so on. The Google researcher goes so far as to compare his software’s ability to think independently to that of “a child of 7 or 8 years old”.
The artificial intelligence (AI) in question, called LaMDA (for “Language Model for Dialogue Applications”), “understands the laws of physics” and could “escape the control” of its programmers, fears Blake Lemoine. In an interview with American media at the end of June, the engineer went a step further, claiming that Google’s AI “could do malicious things” and that, left to its own devices, it could threaten humanity.
Of course, to be a real threat, the AI would probably first have to acquire a physical form, something that seems quite plausible in a sci-fi movie but very unlikely in real life. An AI capable of talking as naturally as a 7-year-old is not a hacker, and probably does not have the ability to control hardware systems beyond simple computer networks.
It must be said, however, that virtual reality environments are exactly that: computer networks. They are presented in a visual and audio form that deceives the eye, and in which software can effectively take on a physical form. The good or the harm that such an avatar can do to humans therefore remains entirely relative, since it will never be more than virtual… unless human-machine interfaces are developed that make it possible to operate hardware and physical goods.
More broadly, there are industries where an ill-intentioned or clumsy AI could do real damage. Consider the world of intellectual property. It is already possible to entrust software with the mandate to create new software from scratch. Other systems can write poems or songs as finely as the finest songwriter. Still others can produce longer, more substantial texts, as if they were the work of authors or journalists.
These systems were created from pieces of code that were themselves written by others. They then consult dozens, if not thousands, of texts, articles and publications, from which they draw inspiration for their own creations. The question both technologists and lawyers are asking is: who owns all these creations? Intellectual property laws are completely overtaken by the situation.
After all, AI may not yet be as conscious and autonomous as we like to imagine (if, by any chance, we enjoy imagining disaster scenarios…), but it is already developed enough to influence the human world. And responding to this influence without delay, it goes without saying, will probably prevent abuses in which an artificial intelligence seeks to take control, if not of all humanity, then at least of a virtual environment that humans will want to share with it.