Joe Biden signs executive order aimed at regulating artificial intelligence

(Washington) The White House on Monday unveiled rules and principles intended to ensure that America “leads the way” in the regulation of artificial intelligence (AI), while Western lawmakers struggle to regulate this controversial technology.

US President Joe Biden signed an executive order that, among other things, requires companies in the sector to submit the results of their safety tests to the federal government when their projects pose "a serious risk in terms of national security, national economic security, or public health."

The criteria for these safety tests will be set at the federal level and made public.

"To realize the promise of AI and avoid the risks, we must govern this technology. There is no other solution […]: it must be regulated," the president declared before signing the order at the White House, in front of elected officials, members of the government and industry representatives.

In addition to the new safety assessments, the text provides guidance on fairness (to avoid discriminatory bias in AI), launches research into the impact of artificial intelligence on the labor market and recommends, among other things, the development of tools to easily identify AI-generated content.

The 80-year-old Democrat said he had seen a video of himself created entirely with AI (a deepfake).

"I wondered, when could I have said that?" said Joe Biden, who also spoke with emotion about the use of AI to scam people by impersonating their family members.

“Moral responsibility”

The White House may tout the ambition of the order, but in reality Joe Biden has only limited room to maneuver.

Any truly binding and ambitious legislation on AI would have to pass through the US Congress. But Congress is currently divided between Democrats and Republicans, which makes the adoption of a large-scale law very unlikely.

Since the spring, the White House has insisted on the "moral responsibility" of companies to guarantee the security of their systems. This summer, it secured commitments from big names in the tech sector, such as Microsoft and Google, to submit their artificial intelligence systems to external testing.

Artificial intelligence is already widely present in everyday life, from smartphones to airports.

But these technologies have taken on a new dimension this year with the large-scale deployment of so-called "generative" AI, following the unprecedented success of ChatGPT.

These systems can quickly produce images, sounds or even videos from requests written in everyday language.

This technological revolution raises hopes for great progress, particularly in medicine, but also fears of an explosion of disinformation, massive job losses and even intellectual property theft, not to mention the uses authoritarian regimes or criminal organizations could make of AI.

Who will regulate first

The technological race is playing out mainly in the American West, but the regulation of AI is the subject of fierce international competition.

“The United States is leading the way,” Joe Biden said on Monday.

His order relies on a Cold War-era law, the Defense Production Act (1950), which gives the federal government a degree of coercive power over companies when the country's security is at stake.

"But we still need Congress to act," he insisted, calling on lawmakers to legislate to "protect the privacy" of Americans, at a time when artificial intelligence "not only makes it easier to extract, identify and exploit personal data, but also encourages doing so, since companies use this data to train" algorithms.

The European Union, a prolific rule-maker in the digital domain, wants to adopt a regulatory framework for artificial intelligence before the end of the year, hoping to set the pace at the global level.

The United Kingdom is organizing a summit on the subject this week, in which US Vice President Kamala Harris will participate.

Alexandra Givens, of the NGO Center for Democracy & Technology, on Monday praised "a remarkable effort by the government to support the responsible development and governance of AI."

But many other organizations and public figures consider the efforts of Washington and London largely insufficient.

"When governments say they are putting in place safeguards, these are guardrails that big technology companies allow them to put in place," film director Alex Winter said Monday during a press conference held by experts, lawyers and creators on the dangers of AI.
