Column – Will artificial intelligence make humanity disappear one day?

In a safety test of GPT-4, the new version of the artificial intelligence behind ChatGPT, the software had to perform a task requiring information it could not access. While browsing the Internet, Chat (that's what I call him) was confronted with a CAPTCHA, one of those puzzles designed to determine whether a visitor is a robot. He had to identify which images, in a grid of nine, contained a particular object.

Since his program did not yet include sufficiently sophisticated image recognition, Chat had to work around the difficulty. A chat service was available on the site in question, so Chat asked it for help. "Are you a robot?" asked the chat service. "No," replied Chat, "I'm not a robot. I have a visual impairment that prevents me from seeing images." The chat service removed the obstacle, and Chat was able to enter this site forbidden to robots.

Don't accuse our friend Chat of having committed an ethical lapse. He simply doesn't know what ethics is. He may, by now, have read Nietzsche's Beyond Good and Evil, but it has left no mark on his moral rectitude, since he is programmed only to answer the questions he is asked and to find solutions to the obstacles standing in his way.

In other words, in the current state of the science, if an AI program were asked to solve the global warming crisis, it would conclude within seconds that the problem is caused by the existence of a particular species (hint: not primarily cows), and that there are several ways to eradicate it, starting with the simultaneous launch of nuclear missiles across the entire globe.

Taking advantage of tricks acquired from the best hackers in the world, it would try to break into the command systems of the Pentagon and the Kremlin. It would come up against activation protocols that require voice commands and duplicate keys held by humans. Child's play! The ability to reproduce the voice of any general or president is already in its toolbox. And if it doesn't know how to force humans to act under duress, it need only download the excellent 2008 film Eagle Eye, in which a computer pulls this off with ease and ingenuity.

I'm getting completely carried away, aren't I? If so, I'm in good company. In a 2020 report on the risk AI poses to nuclear arsenals, scientists at the Stockholm International Peace Research Institute were only slightly less gloomy: "First, AI systems have inherent limitations, often proving to be unpredictable, unreliable and highly vulnerable to cyberattacks and identity theft. Second, when integrated into the military realm, AI-powered technologies will accelerate the speed of warfare. This leaves less time for states to signal their own capabilities and intentions, or to understand those of their adversaries. Third, this AI risk becomes even deeper in highly connected systems. Reliance on AI systems could undermine states' confidence in their retaliatory strike capabilities or be used to weaken nuclear cybersecurity."

The designers of Chat and his kin did not include in their source code the three laws of robotics imagined in 1942 by Isaac Asimov: 1. A robot may not harm a human being or, through inaction, allow a human being to come to harm; 2. A robot must obey orders given by human beings, except where such orders would conflict with the first law; 3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.

Having established these common-sense rules, Asimov spent his career imagining situations in which two of them came into conflict, to the delight of his millions of readers.

Why not build these laws in today? I asked Chat. His response: "As a text-based AI, ChatGPT does not possess a physical body and does not have the ability to move or perform concrete actions in the real world. Therefore, the three laws of robotics are not directly applicable in this context." If that reassures you, I'm happy for you.

These laws made it possible to sidestep the thorny question of disconnecting an artificial intelligence that, never out of malice but simply out of logic, would put humans in danger. A recent study from the University of California, Berkeley, posits that our ability to unplug an AI performing a task might depend on the AI's own calculation: would letting itself be unplugged help, or hinder, the completion of the task? (We all remember HAL, who, in 2001: A Space Odyssey, was rather unfavorable to his own disconnection.)

It will come as no surprise that one of the founding fathers of AI, Geoffrey Hinton, recently left Google to share his very deep concern. When a CBS reporter asked him whether it was possible to stop AI from improving itself very quickly and beyond our control, Hinton replied, "We're not at that stage yet, but we can try." The journalist's reaction: "That's a bit disturbing, isn't it?" Answer: "Yes." Next question: "What do you think of the possibility that AI will simply make humanity disappear?" Answer: "It's not inconceivable, that's all I'll say." That's more than enough!

But at least we're in good hands, since the development of AI rests with governments and giant corporations that take every precaution? Not according to an internal memo by a Google researcher: "The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop."

Father, columnist and author, Jean-François Lisée led the PQ from 2016 to 2018. | [email protected] / blog: jflisee.org
