A Europol report examines the uses and risks of artificial intelligence in the work of internal security forces.
Reading time: 4 min
While the technological choices of security forces are usually made rather discreetly, late September saw the publication of a Europol report on the advantages and challenges of the use of artificial intelligence by security services: police, gendarmerie, customs, etc.
As a reminder, Europol is the European Union's agency for law enforcement cooperation, established in 1992 by the Maastricht Treaty. It is not a European FBI, but rather a body for sharing information between member states and a point of cooperation with foreign counterparts such as the United States, Canada, and Australia.
The advantages identified by the report are mainly of two kinds. On the one hand, there is the capacity to analyze large volumes of data, particularly to identify links between people or organizations across a wide variety of documents: texts, tables of figures, images, etc.
This makes it possible to detect unusual behavior, for example among hundreds of thousands of transactions recorded in banking documentation, a method used to identify fraud or suspected fraud. It also makes it easier to reconstruct financial flows or to identify criminal patterns linking different entities.
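As a rough illustration (not drawn from the report itself), the idea of flagging unusual transactions can be sketched as a simple statistical outlier check. The function name, threshold, and sample figures below are all hypothetical; real fraud-detection systems use far more sophisticated models.

```python
from statistics import median

def flag_outliers(amounts, threshold=5.0):
    """Flag amounts far from the median, measured in units of the
    median absolute deviation (MAD) - a naive, robust outlier heuristic."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    return [a for a in amounts if abs(a - med) > threshold * mad]

# Mostly routine transfers, plus one conspicuously large one.
transactions = [120.0, 95.5, 130.0, 110.0, 105.0, 98.0, 102.0, 50_000.0]
print(flag_outliers(transactions))  # only the 50,000.0 transfer is flagged
```

The median-based deviation is used rather than the mean and standard deviation because a single extreme value would inflate the standard deviation enough to hide itself.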
On the other hand, AI can be used to extract useful information from a multitude of sources. These may be freely accessible, like social networks, in which case we speak of OSINT (Open Source Intelligence), or more precisely SOCMINT (Social Media Intelligence). Or they may come from digital captures such as video surveillance footage or sound recordings. Algorithms can then produce transcriptions or translations, or identify suspicious people or behavior: for example, someone pushing through a crowd that is converging in one direction or collapsing to the ground.
AI can also be used to develop training scenarios for staff.
On the risks side, first, as with any artificial intelligence, we must guard against bias: training data that is skewed, for example by encoding prejudices about people based on their age or gender, and that therefore distorts the results obtained. There is also the risk of hallucinations, when the algorithm produces answers that are entirely false but have the appearance of seriousness and truth.
Then, the danger can come from certain uses of AI, such as automated mass surveillance through biometric video surveillance systems that would recognize every face on the fly. Algorithms must also always remain explainable: a transparency that should make it possible to understand how and why decisions were made by AI systems.
The European Union has adopted a text, the AI Act, intended to regulate the use of AI in Europe, including its use by security forces. The AI Act was published in the Official Journal on July 12, 2024, and entered into force on August 1, 2024. Its provisions become applicable in stages between February 2025 and August 2027.
It starts by detailing the prohibitions on AI systems presenting "unacceptable risks", meaning those that would lead to practices contrary to the values of the European Union and fundamental rights. These include:
- For example, social scoring as practiced in China, where each of your daily actions is rated (jaywalking or making inappropriate comments loses points; donating blood, on the contrary, earns them).
- Security services’ use of real-time remote biometric identification in publicly accessible spaces.
- Predictive policing measures targeting individuals.
- Emotion recognition in the workplace and educational institutions.
In France, the National Commission for Information Technology and Liberties (CNIL) has published an analysis of this AI Act from its standpoint as the independent administrative authority in charge of personal data protection: material to fuel collective and political reflection ahead of these technologies becoming available.