Artificial intelligence | Systems can be used as weapons, authorities warn

(Ottawa) Top cybersecurity officials are urging tech companies to build safeguards into the emerging artificial intelligence (AI) systems they are developing, to prevent them from being sabotaged or used for malicious purposes.

Without proper safeguards, it will be easier for malicious states, terrorists and others to exploit rapidly emerging AI systems to carry out cyberattacks and even develop biological or chemical weapons, said Jen Easterly, director of the US Cybersecurity and Infrastructure Security Agency, known as CISA.

Companies that design and develop AI software must work to significantly reduce the number of vulnerabilities that users can exploit, argued Ms. Easterly in an interview. “These capabilities are incredibly powerful and can be used as weapons if they are not created securely.”

The Canadian Centre for Cyber Security recently joined CISA and the UK's National Cyber Security Centre, along with 20 international partner organizations, to announce guidelines for the development of secure AI systems.

AI innovations have the potential to bring many benefits to society, the reference document states. “However, for the opportunities of AI to be fully realized, it must be developed, deployed and operated in a secure and responsible manner.”

When it debuted late last year, OpenAI’s ChatGPT fascinated users with its ability to respond to queries with detailed, if sometimes inaccurate, answers. It also raised concerns about possible abuses of the emerging technology.

AI security has special dimensions because these systems let computers recognize and contextualize patterns in data without rules explicitly programmed by a human, according to the guidelines.

AI systems are therefore vulnerable to the phenomenon of adversarial machine learning, which can allow attackers to cause unauthorized actions or extract sensitive information.
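
To make the risk concrete, here is a minimal, purely illustrative sketch (not taken from the guidelines) of an evasion-style adversarial attack in Python: a toy logistic-regression classifier is trained honestly, then a small, targeted perturbation flips its prediction on a single input. The model, data and perturbation budget are all hypothetical toys; real attacks target far larger systems, but the principle is the same.

    import numpy as np

    rng = np.random.default_rng(0)

    # Train a toy logistic-regression classifier on two well-separated clusters.
    X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
    y = np.concatenate([np.zeros(100), np.ones(100)])

    w, b = np.zeros(2), 0.0
    for _ in range(500):                    # plain gradient descent on log-loss
        p = 1 / (1 + np.exp(-(X @ w + b)))  # sigmoid predictions
        w -= 0.1 * (X.T @ (p - y)) / len(y)
        b -= 0.1 * np.mean(p - y)

    predict = lambda v: int(1 / (1 + np.exp(-(v @ w + b))) > 0.5)

    # Fast-gradient-sign-style evasion: for a label-0 input, the loss gradient
    # with respect to the input is proportional to w, so stepping along
    # sign(w) pushes the sample across the decision boundary.
    x = X[0]                       # a correctly classified class-0 sample
    eps = 2.5                      # deliberately large budget for this toy demo
    x_adv = x + eps * np.sign(w)

    print("clean prediction:    ", predict(x))      # 0 (correct)
    print("perturbed prediction:", predict(x_adv))  # typically flips to 1

The same gradient-guided logic underlies attacks that coax sensitive information out of deployed models, though those require considerably more machinery than fits in a sketch.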

“There is a general consensus among governments and industry that we must come together to ensure that these capabilities are developed with safety and security in mind,” said Ms. Easterly.

“Even as we seek to innovate, we must do so responsibly.”

A lot can go wrong if security is not considered when designing, developing or deploying an AI system, said Sami Khoury, head of the Canadian Centre for Cyber Security.

In the same interview, Mr. Khoury called the initial international engagement with the new guidelines “extremely positive.”

“I think we have to lead by example, and maybe others will follow later.”

In July, the Canadian Centre for Cyber Security issued an advisory flagging vulnerabilities in AI systems. For example, an attacker could inject malicious data into the dataset used to train an AI system, skewing the accuracy and quality of its results.
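
As a purely illustrative sketch of that poisoning scenario (a hypothetical toy model in Python, not code from the advisory), the snippet below trains the same kind of toy classifier twice, once on clean data and once on a training set into which an attacker has injected mislabeled points, and compares the results:

    import numpy as np

    rng = np.random.default_rng(1)

    def make_data(n=200):
        """Two labeled Gaussian clusters: class 0 near (-2,-2), class 1 near (2,2)."""
        X = np.vstack([rng.normal(-2, 1, (n // 2, 2)), rng.normal(2, 1, (n // 2, 2))])
        y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])
        return X, y

    def train(X, y, steps=500, lr=0.1):
        """Fit logistic regression by plain gradient descent."""
        w, b = np.zeros(2), 0.0
        for _ in range(steps):
            p = 1 / (1 + np.exp(-(X @ w + b)))
            w -= lr * (X.T @ (p - y)) / len(y)
            b -= lr * np.mean(p - y)
        return w, b

    def accuracy(w, b, X, y):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        return np.mean((p > 0.5) == y)

    X_train, y_train = make_data()
    X_test, y_test = make_data()

    # Clean baseline: the classes are well separated, so accuracy is near 1.0.
    w, b = train(X_train, y_train)
    print("clean test accuracy:   ", accuracy(w, b, X_test, y_test))

    # Poisoning: the attacker injects points that sit in class-1 territory
    # but carry the label 0, dragging the learned boundary off course.
    X_bad = np.vstack([X_train, rng.normal(2, 1, (200, 2))])
    y_bad = np.concatenate([y_train, np.zeros(200)])
    w_p, b_p = train(X_bad, y_bad)
    print("poisoned test accuracy:", accuracy(w_p, b_p, X_test, y_test))  # ~0.5

On this toy setup, test accuracy typically falls from near-perfect to roughly a coin flip: exactly the kind of skewed accuracy and quality the advisory describes.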

The “worst-case scenario” would be a malicious actor poisoning a crucial AI system “that we rely on,” causing it to malfunction, Mr. Khoury said.

The center also warned that cybercriminals could use these systems to launch spear-phishing attacks more frequently, automatically and with a higher level of sophistication. “Very realistic spear-phishing emails or messages could lead to identity theft, financial fraud or other forms of cybercrime.”

Skilled criminals could also circumvent restrictions imposed by AI tools to create malware for use in a targeted cyberattack, the center warned. Even people with “little or no coding experience can use generative AI to easily write working malware that can harm a business or organization.”

Earlier this year, as ChatGPT made headlines, a briefing note from the Canadian Security Intelligence Service (CSIS) warned of similar dangers. The tool could be used “to generate malicious code, which could be injected into websites and used to steal information or spread malware.”

The CSIS note, dated February 15 and recently released under the Access to Information Act, also indicated that ChatGPT could help generate “fake news and criticism, manipulate public opinion and create disinformation.”

OpenAI asserts that it does not authorize the use of its tools for illegal activities, disinformation, generating hateful or violent content, creating malware, or attempting to generate code designed to disrupt, damage or gain unauthorized access to a computer system.

The company also prohibits the use of its tools for activities with a high risk of physical harm, such as weapons development, military operations, or management of critical infrastructure for energy, transportation, or water.

