Is there a pilot on the plane?

In less than two minutes, the outcome was clear.


The aircraft flown by “Badger”, an American F-16 pilot with extensive air-combat experience, was hit four times by his adversary without managing to land a single hit in return.

Most of the hits came during daring maneuvers in which the two aircraft were flying head-on toward each other, a testament to the “superhuman precision” of the winner… an artificial intelligence system developed by a firm from Michigan.

The exercise, carried out in 2020 in a flight simulator under the aegis of a division of the American Department of Defense, convinced a number of high-ranking officers of the value of artificial intelligence (AI) in the military domain.

Four years later, the same division is experimenting with the idea of integrating an “AI co-pilot” on board a real fighter plane, one capable of taking charge of the flight while the flesh-and-blood pilot devotes himself to other tasks.

Paul Scharre notes that the goal is also to test whether a pilot can fly in formation with drones capable, on his order, of launching an attack on specific targets.

IMAGE PROVIDED BY THE PUBLISHING HOUSE

Four Battlegrounds – Power in the Age of Artificial Intelligence, by Paul Scharre

“It’s moving slowly because they want to be sure not to lose any planes,” notes this analyst, who wrote an in-depth book in 2023 detailing how AI is likely to change the way war is waged.

Even though new technology is on everyone’s lips, American military leaders too often continue, he says, to think in terms of numbers of planes, ships or soldiers.

This attachment reflects the fact that a country’s military power is easier to quantify by the number of aircraft and soldiers at its disposal. It fails, however, to capture the scale of the “paradigm shift” underway, one as significant as the shift that occurred in the industrial era, notes Mr. Scharre.

China and Russia, which like the United States devote significant resources to the production of traditional weapons, are also investing to take advantage of the revolutionary possibilities of AI.

PHOTO MIKHAIL METZEL, ASSOCIATED PRESS ARCHIVES

Russian President Vladimir Putin

Russian President Vladimir Putin said a few years ago that the leader in this area would become “the master of the world”.

The transformations are already underway and are likely, in the long term, to profoundly alter the balance of international power, with the “most technologically advanced” countries enjoying a head start.

In an analysis published a few years ago, Kenneth Payne, a military strategist at King’s College London, warned that the effectiveness of AI systems would supplant traditional military capabilities in importance.

“Even a marginal technological advantage in artificial intelligence could have a disproportionate effect on the battlefield,” he noted, since a slightly superior decision-making capacity, particularly in terms of speed and precision, can lead to a form of domination.

What role for machines?

Speculative scenarios about the future of warfare must also take into consideration the role humans will play in the process.

With the appearance of autonomous military systems, such as drones capable of firing on a target without direct supervision, the debate over the need to regulate such practices is intensifying.

The debate will intensify, notes Mr. Scharre, since the integration of AI in the military world will accelerate combat and increase its complexity to the point where humans risk quickly being overwhelmed.

PHOTO TAKEN FROM THE CNAS WEBSITE

Paul Scharre, analyst

The machines would not only select individual targets, but also plan entire campaigns. The role of humans would then be to activate the machines and remain on the sidelines with little ability to stop the resulting wars.

Paul Scharre, analyst

In this regard, he draws a parallel with the stock market crashes attributed to algorithmic trading, which unfolded before the authorities could apply the brakes.

Jacquelyn Schneider, an analyst at the Hoover Institution, specializing in military simulations, does not believe in this scenario.

Wars, she says, are iterative processes in which both sides respond “blow for blow” and adjust to the introduction of new technologies.

“We won’t see a war where you just push a button,” she said.

These scenarios will not be relevant for decades, if they ever are, notes Mr. Scharre, who emphasizes the current difficulties of integrating AI into the military world.

Systems based on deep learning present vulnerabilities that can have dramatic and unexpected consequences, making them ill-suited, a priori, to military decisions that carry serious consequences in terms of loss of life, he says.

A system “trained” on data corresponding to a specific environment can quickly find itself destabilized in a slightly different setting, going from “super smart to super stupid in an instant”.

The Pentagon, Mr. Scharre recounts, developed an AI system a few years ago to help detect individuals in a complex urban environment. During a test, several soldiers managed to approach it without being detected, one by putting a cardboard box over his head, another by rolling along the ground.

In the military domain, the problem is amplified by the lack of data available to train the systems, with opposing forces rarely being cooperative in this regard, notes Mr. Scharre.

AI, more belligerent than humans?

The fact that the behavior of AI systems cannot always be explained by their designers is another important issue.

Ms. Schneider, along with Stanford University colleague Max Lamparth, carried out a series of revealing simulations on this subject using artificial intelligence systems developed by Meta and OpenAI, among others.

In an article recently published in the journal Foreign Affairs, they note that the systems in question, placed in a confrontational scenario, opted for military escalation and conflict, one of them even going so far as to recommend the use of atomic weapons.

In an interview, Mr. Lamparth notes that training an AI system to avoid security lapses cannot guarantee that it will behave as desired when put to the test.

It seems inconceivable to entrust such a system with high-risk decisions, particularly in combat situations, as long as training methods have not been profoundly transformed, note the two researchers.

This cautious approach, they add, must particularly apply in the nuclear field.

The United States has already indicated that it does not intend to place AI in the decision-making chain leading to the launch of nuclear missiles, but Russia remains circumspect on the subject and has already floated the idea of an autonomous submarine carrying a nuclear missile.

No officer is going to make a decision like this based on what an algorithm tells him.

Zachary Davis, nuclear deterrence specialist at Lawrence Livermore National Laboratory

However, we cannot exclude, says Zachary Davis, that the emergence of sophisticated AI systems will have a destabilizing effect in the nuclear field by undermining the idea of mutually assured destruction on which the international order is based.

A country possessing nuclear weapons could be tempted to strike first if it believes that, thanks to AI, it has the precision and speed needed to strike and then neutralize a potential counterattack, wherever it comes from.

The soldiers in question must also be trained to fully understand what is “under the hood” of these unusual assistants, just as a soldier knows every last part of the weapons he uses in the field.

PHOTO FROM THE HOOVER INSTITUTION WEBSITE

Jacquelyn Schneider, analyst at the Hoover Institution, specializing in military simulations

If there is a nuclear Armageddon one day, it will be because humans made mistakes.

Jacquelyn Schneider, analyst at the Hoover Institution, specializing in military simulations

Mr. Scharre believes it is essential that governments ensure the reliability of systems before considering using them in any way in the military domain.

However, we cannot exclude, he says, that a country with its back to the wall for lack of equipment or manpower might one day be tempted to use AI while taking reckless risks. A shock event, such as the attacks of September 11, 2001, could also favor such a scenario.

“In peacetime, there are a lot of safeguards in place” that no longer necessarily hold when conflict breaks out, he warns.
