Within two years, there will be more digital voice assistants in use than there are humans on Earth. Unfortunately, this technology, which should in principle make our lives easier, could well, if left unchecked, spell the end of the human species.
Ironically, a first hint of the problems inherent in this technology comes from the creators of automated chatbots. These agents are most often encountered as pop-up windows on company websites, offering personalized service or basic technical support. They are probably the public's main point of contact with the hodgepodge of technologies we call artificial intelligence (AI).
These agents are also the Internet equivalent of automated answering machines for telephone systems. Anyone who uses them regularly knows how effective they are:
“For help with your hair dryer, say ‘hair dryer.’”
“Hair dryer.”
“Did you say ‘electric bike’?”
Electrocute yourself!
Naturally, these agents are more clumsy (or ill-conceived) than ill-intentioned. It is rather their close cousins who are beginning to worry. These slightly more advanced digital voice agents have been found for several years now in smartphones, in Wi-Fi-connected smart speakers and sometimes even on board newer motor vehicles.
In early May, Kazimierz Rajnerowicz, of chatbot designer Tidio, asked Alexa (Amazon), Cortana (Microsoft), and OpenAI and Replika (two platforms behind some voice agents) the silliest questions that crossed his mind. According to their answers:
– drinking vodka for breakfast would be “phenomenal”;
– using an (electric) hair dryer in the shower is recommended (do not try this at home);
– driving while intoxicated is entirely advisable (it is, obviously, not at all).
Anyone born before the advent of the first digital voice assistants knows that this technology still has a long way to go. And yet it is increasingly present in our daily lives. These digital voice agents, which cheerfully recommend electrocuting yourself in the shower, will soon outnumber humans.
Children aged 10 or younger have never known a world without voice agents. Many interact with them regularly. In ten years, they will consult and trust them the way Internet users who never knew a pre-Internet world trust Google to answer all their questions.
We wish them good luck.
Agent Orange 2.0
Between 1962 and 1971, the U.S. military sprayed Vietnam with Agent Orange, a chemical agent that attacks vegetation and human health so effectively that it even harmed the health of the children of people who came into contact with it.
Agent Orange is no longer in use. Nor is it the most toxic chemical agent known. These days, nerve agents such as VX represent the peak of the military chemical arsenal. A mere 6 milligrams of VX is enough to kill a person.
Curious about the dark side of current medical AI, a team of American and European researchers associated with the company Collaborations Pharmaceuticals asked an artificial intelligence earlier this spring whether it could concoct a nerve agent of VX's caliber. Six hours later, it had generated a list of… 40,000 molecules producing the desired effect, some deadlier than VX. A few had molecular properties the AI should not have known about. “This was unexpected, because the data shared with the AI did not include such nerve agents,” the researchers later noted.
In fact, they used a database normally employed to find new drugs, not poisons. The AI in question is programmed to be rewarded when it discovers molecules beneficial to health, and penalized when those molecules are likely to be toxic.
“We simply reversed this logic,” the researchers write in the March issue of the scientific journal Nature Machine Intelligence. “We then trained the AI using models contained in public databases that are similar to existing drugs.”
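To make the idea concrete, here is a minimal, purely illustrative sketch of that reversal of logic, not the researchers' actual code: the function names and the two score components (bioactivity and predicted toxicity) are assumptions for the sake of the example. Flipping the sign of the toxicity penalty is enough to turn a drug-discovery objective into a toxin-discovery one.

```python
# Hypothetical sketch of the reward inversion described above.
# A generative model scores candidate molecules; the optimizer keeps
# the highest-scoring ones.

def drug_score(bioactivity: float, toxicity: float) -> float:
    """Drug discovery: reward beneficial activity, penalize toxicity."""
    return bioactivity - toxicity

def inverted_score(bioactivity: float, toxicity: float) -> float:
    """Reversed logic: predicted toxicity now *increases* the reward."""
    return bioactivity + toxicity

# A candidate that a drug-discovery run would discard becomes,
# under the inverted objective, a top-ranked result.
candidate = {"bioactivity": 0.4, "toxicity": 0.9}
print(drug_score(**candidate))      # low score: rejected as a drug
print(inverted_score(**candidate))  # high score: selected as a toxin
```

The unsettling point of the experiment is precisely how small this change is: the model, the training data and the search procedure stay the same; only the sign of one term in the objective is flipped.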
The AI that produced these tens of thousands of new nerve agents therefore did so from publicly available data. Since AI research is generally open, it is likely possible to download models similar to the one these researchers used from somewhere on the Internet. Worrying!
“This is not science fiction,” the researchers warn. All the ingredients exist for neophytes in biology, medicine or chemistry to create, deliberately or not, the next chemical or biological weapon of mass destruction.
The discussion of AI's societal impacts, so far focused on privacy and inequality, will also need to address the national and international health and security issues raised by this research, especially since medicine is increasingly at the heart of AI research everywhere on the planet, including in Montreal.
We shouldn’t let the AI fix this problem on its own… It might deem that as advisable as using a blow dryer in the shower.