AI-piloted F-16 test paves the way for a fleet of autonomous planes

After a successful test at the end of April, the Pentagon plans to put an AI-controlled combat aircraft into service by 2028, and hopes to eventually have a fleet of around 1,000 aircraft of this type.

Reading time: 3 min

US Air Force F-16C Fighting Falcon model. Illustrative photo (GIANLUCA VANNICELLI / IPA-AGENCY / MAXPPP)

In October 1947, test pilot Chuck Yeager broke the sound barrier for the first time, over Edwards Air Force Base in California. Almost 77 years later, at that same base, 2024 may prove to be another pivotal moment in the history of aviation: an F-16 fighter jet crossed the sky without a pilot at the controls.

Last week, Frank Kendall, the Secretary of the Air Force, himself boarded the X-62 Vista, the name given to the F-16 fighter configured with artificial intelligence. He was able to observe it in flight, at 885 kilometers per hour, under the control of this AI, in an air combat simulation against another aircraft piloted by a human. What he saw from the air clearly satisfied him.

A fleet of autonomous aircraft

For the US Air Force, this technology is not just a one-off technical feat. There is a clear intention to move beyond the prototype stage: the US Air Force plans to eventually acquire a fleet of a thousand of these planes, alongside drones piloted by ground operators.

The first aircraft of this kind should fly as early as 2028. According to the Associated Press, the only media outlet other than NBC authorized to attend last week's demonstration, this would be the most significant advance since the arrival of stealth aircraft in the 1990s. No other country has such technology, officially at least. China, for example, has not yet succeeded in training its artificial intelligence anywhere other than in a simulator. The American Vista aircraft has already flown around twenty times since September and has even beaten human pilots in aerial combat. Those pilots could one day have their lives spared, with these planes sent on dangerous missions in their place, even if it means sacrificing the aircraft.

What role in bombing missions?

But these planes are also used for bombing. Would artificial intelligence decide who or what to bomb? This worries the International Committee of the Red Cross, which does not want to see "life and death decisions left in the hands of sensors and software". The organization is calling for an international political response. The real security risk would be "not having" this artificial intelligence, retorts Frank Kendall, the Secretary of the Air Force.
