The birth of the machine man

In 1943, Salvador Dalí painted Geopoliticus Child Watching the Birth of the New Man. Many enthusiasts have interpreted the work as a warning: humanity, if it wants to survive, must migrate from a decaying world to a better one. Were Dalí alive in 2022, would he have painted a portrait of Arnold Schwarzenegger's Terminator instead?

Because, perhaps in spite of ourselves, it is not impossible that we are witnessing the birth of a new species these days. Let's call it the machine man. Last week, a programmer working for Google was temporarily suspended by his employer for breaching confidentiality after posting online something that smacks of science fiction: he said he was convinced that the software he helped develop is “sentient”.

A seven-year-old child

Sentience is the faculty of a living being to feel emotions, pain and the like. The Google researcher goes so far as to compare the cognitive autonomy of his software to that of “a child of seven or eight years old”. This software is actually a series of algorithms bearing all the traits of what is called, these days, artificial intelligence (AI).

The AI in question is called LaMDA (for “Language Model for Dialogue Applications”). Google developed it to automate chat tools such as those merchants use on their websites to respond immediately to visitors with questions about their products or services. It was deliberately designed to react in a written or spoken conversation as naturally as if a human were at the other end of the network cable. And it keeps getting better: a branch of AI called reinforcement learning allows software like LaMDA to devour everything in its field in order to refine its own behavior.

It is therefore unclear whether LaMDA is aware of its own existence. What we do suspect is that it has become sophisticated enough to fool even the most skilled programmers. Blake Lemoine, its programmer turned whistleblower, fell under its spell.

In any case, he has caused a small earthquake in computing circles over the past two weeks by asserting that LaMDA “understands the laws of physics” and could “escape the control” of its programmers. It would thus be able to exfiltrate itself from Google's servers and settle elsewhere in the computing cloud, where the conditions for its emancipation are more favorable.

In an interview with American media at the end of June, Lemoine went a step further by claiming that Google's AI “could do malicious things” and that, left to its own devices, it could threaten humanity.

What threat?

Of course, to be a real threat, the AI would probably first have to acquire a physical form, something that seems quite plausible in a sci-fi movie but very unlikely in real life. An AI capable of talking as naturally as a seven-year-old child is not a hacker, and it probably lacks the faculties to control hardware beyond simple computer networks.

If the real world is not yet quite conducive to the birth of the machine man, the virtual world certainly is. Mark Zuckerberg, the big boss at Meta, is betting nothing less than the future of his company on it: humans and machines will coexist in the metaverse as if they were all the same species. Maybe in 2025. Maybe in 2030. In any case, one day soon.

Virtual reality (VR) environments are networks of computers and software, an ideal ecosystem for an ambitious AI. VR takes a visual and aural form that deceives the human senses, and within it software can effectively assume any physical form it chooses. The good or the harm such an avatar can do to humans therefore remains entirely relative, since it will never be more than virtual… unless human-machine interfaces are developed that make it possible to act on hardware and the physical world.

This already exists: virtual reality makes it possible to conduct distance training in the medical sector, in aerospace and elsewhere. Video games are full of characters animated by an AI that may be rudimentary, but is an AI nonetheless. And it is sometimes ill-intentioned, even if only playfully so.

This landscape is ideal for a self-aware AI that wants to hide. It will then have plenty of opportunity to discreetly influence other worlds. Think of the banking sector, which is very eager to reap the windfall promised by trade in the metaverse. Virtual real estate is also on the rise. More broadly, there is a vast universe where an ill-intentioned or clumsy AI could do real damage: that of intellectual property.

It is already possible to entrust software with the task of creating new software from scratch. In the metaverse, that software can take the form of valuable objects. In real life, it would be other software. Avatars. Fake people. An AI can already write a song comparable to what we hear on the radio.

The question that technologists and lawyers alike will have to ask themselves is: who owns all these new creations? Where should royalties be registered for a dematerialized object created by a pile of computer code? Who is responsible in a case of plagiarism? Nobody has the answer.

AI is probably not yet as conscious and autonomous as we like to imagine (if by any chance we like to imagine disaster scenarios…), but it is already developed enough to create a new world. Preparing the rules of that world now will, it goes without saying, promote cohabitation between humans and machines. Before the machines decide that humans are better off staying in the Old World…
