Experts warn against AI, but who benefits from the worst-case scenario?

Follow the money, goes the saying. In recent high-profile public statements, a number of tech experts have likened the risk posed by artificial intelligence (AI) to that of a pandemic or a nuclear war. Should we take them at their word?

The fears surrounding AI seem to centre on a loss of control over its evolution. The rapid emergence of chatbots like ChatGPT and image generators like MidJourney surprised everyone, including their creators. And this is only the beginning of the technological acceleration, they say.

Eventually, the thinking goes, an AI will emerge that has a consciousness of its own and can improve itself without outside help. What it might then decide to do to humanity is what apparently keeps several AI experts awake at night these days.

It’s the paperclip maximizer thought experiment, illustrates Philippe Beaudoin, founder of the Montreal AI company Waverly. “A machine makes paperclips out of iron. Unconstrained and running short on materials, it ends up extracting iron from human blood to keep making paperclips.”

In a short letter published Tuesday, AI experts say we need to better control the technology; otherwise, we will end up creating a paperclip maximizer for real, explains Mr. Beaudoin.

We may have created the International Atomic Energy Agency, but if nuclear power is accessible to people beyond anyone’s control, it is the very existence of the technology that is the problem, continues the AI entrepreneur, who takes a more moderate stance on AI than its current critics.

“Their position may be motivated by genuine fear, but it comes with no proposal for action, and it is put forward by people who benefit fully from the technology.”

OpenAI’s double game

A young Silicon Valley multimillionaire fears the worst. In his view, the future of the human species is threatened by a super virus created in a laboratory, a nuclear war, or a super artificial intelligence that will attack humans by any means necessary. Or all three at once.

This young multimillionaire has already stockpiled weapons, antibiotics, water, batteries and gas masks, and even owns a huge piece of private land where he can take refuge in the event of an imminent apocalypse. This man, profiled by The New Yorker, is now 38 years old. His name is Sam Altman, and he is the head of OpenAI, which last fall launched ChatGPT, the spearhead of the technological revolution unfolding these days.

The emergence of ChatGPT and other AIs of its kind has worried leading figures in AI development for several weeks. Big names in scientific and academic research warn that the technology should already be better regulated, because it could fall into the wrong hands and have significant consequences.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” several of these researchers, including Montrealer Yoshua Bengio and Torontonian Geoffrey Hinton, warned earlier this week.

Sam Altman is a co-signer of this very short open letter.

Yann LeCun, another highly regarded AI researcher, who leads AI research at Meta, recently noted that wealthy independent entrepreneurs like Mr. Altman show a certain hypocrisy in signing the letter.

“Sam Altman is not saying that he is going to stop his work,” sums up Philippe Beaudoin. Perhaps he hopes that the legislative frameworks expected in Europe and Canada in particular – the United States could join them very soon – will limit the development of AI in a way that benefits OpenAI. His business, after all, is far from profitable and might not adapt well to increased competition.

California longtermism

A school of thought very popular among the monks of Silicon Valley is longtermism, according to which everything must be done now to ensure the survival of future generations. This includes the conquest of Mars, the creation of an international financial system sheltered from central banks, and the establishment of a superintelligence capable of making the best short-term decisions to ensure the well-being of humans in a more distant future.

Followers of longtermism fear the worst-case scenario: that, left to its own devices, humanity will find a way to destroy itself. Hence this school of thought’s proximity to survivalism.

Before running OpenAI, Sam Altman led Y Combinator, an influential California technology incubator, and began funding very risky, very long-term projects consistent with these two philosophies.

“These philosophers of the apocalypse don’t have such a brilliant track record, and yet we still listen to them,” remarks Philippe Beaudoin. “Longtermism is only one school of thought. What is new is seeing reputable researchers join this current. Perhaps we should also be listening more to the opponents of this vision.”
