To identify its targets in the Palestinian enclave, the Israeli army relies on powerful algorithms designed to flag potential members of Hamas.
It is one of the biggest armaments events in France and in the world: the Eurosatory 2024 exhibition opens its doors near Paris on Monday, June 17. But due to the ongoing offensive in Gaza, France has not authorized any Israeli companies to run stands. The star of this show is artificial intelligence and its military applications. Last April, based on the testimony of Israeli army officers, the Israeli investigative outlet +972 revealed the extent of AI use in the Gaza offensive.
These soldiers described from the inside how, during the response to the Hamas terrorist attack, Israel used powerful algorithms, including "Lavender", responsible for identifying and locating military targets on an "industrial" scale for its pilots as well as its ground troops.
Meron Rapoport is editor-in-chief of +972: "The great innovation is that the artificial intelligence chose the targets to neutralize. It was not the AI that killed these targets; it was the agents behind a screen who validated the machine's choices. Normally, a target has to be cross-checked against a great deal of human intelligence. Here, the targets designated by the AI were approved very, very quickly."
In France, the revelations of these journalists did not surprise Amélie Férey. A researcher at the French Institute of International Relations, she has been examining Israel's growing military use of AI for several years. "Lavender", like other algorithms incorporating mass-surveillance data, is used to designate Israel's targets: "The 'Lavender' system uses communication systems. If you change your phone number, if you change your place of residence, if you are part of a WhatsApp group where there is another Hamas person, 'Lavender' aggregates all this data and comes out with a score on the likelihood that this person is part of Hamas."
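To make the kind of aggregation Férey describes concrete, here is a minimal, purely illustrative sketch: a handful of behavioral signals, each with an invented weight, summed into a single likelihood score. The signal names, weights, and logic are all assumptions for illustration and do not reflect how the actual "Lavender" system works.

```python
# Purely hypothetical sketch of signal aggregation into a likelihood
# score, as described in the article. Signal names and weights are
# invented; nothing here reflects the real system.

ASSUMED_WEIGHTS = {
    "changed_phone_number": 0.2,
    "changed_residence": 0.2,
    "shared_group_with_flagged_person": 0.4,
}

def aggregate_score(signals: dict) -> float:
    """Sum the weights of the signals present in the profile, capped at 1.0."""
    raw = sum(w for name, w in ASSUMED_WEIGHTS.items() if signals.get(name))
    return min(raw, 1.0)

# Example profile exhibiting two of the invented signals.
profile = {
    "changed_phone_number": True,
    "shared_group_with_flagged_person": True,
}
print(round(aggregate_score(profile), 2))  # 0.6 with these invented weights
```

The point of the sketch is only to show why such scores are cheap to produce at scale: each new data point simply nudges a number, and a threshold on that number turns surveillance data into a target list.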
"A target factory that operates 24 hours a day", as the Israeli army itself described it during the first month of the conflict? "No," responds the Israeli army today, contacted by franceinfo. "The IDF does not use a system that identifies or attempts to predict whether a person is a terrorist. These information systems are only tools for analysts in the process of identifying targets."
This is not the opinion of Laure de Roucy-Rochegonde, another specialist in artificial intelligence in the military field: "The more targets we generate, the more strikes we carry out. But the operator, when he has to make this decision, has a very short time to react. Around twenty seconds to either issue a veto or initiate a strike. It is not because we let the human say yes or no to a strike that the decision really remains in human hands." The investigation by the journalists of +972 was not censored by the Israeli Ministry of Defense. The use of artificial intelligence is part of the Jewish state's communication, and it is also a useful form of military deterrence.