Chronicle – Laws to innovate with AI

Artificial intelligence (AI) is associated with innovation. But the use of such powerful technologies must be properly framed by law. Whenever the need to regulate technologies is raised, some argue that innovation must not be stifled. Many, including the creator of ChatGPT, call for a balance between regulation and innovation. To achieve such a balance, states must be proactive and treat the regulation of technology as a component of the innovation process.

In many industries, AI and other technologies hold disruptive potential: from the lawyer who, with the help of AI, “invents” court decisions to support his case, to the drug dealer who abandons the schoolyard for social networks. Virtually any device can be misused to harm people and institutions. The deployment of tools carrying such great risks must be accompanied by precautions, like those governing nuclear devices.

Stéphanie Marin reported, in Le Devoir of May 19, on a decision handed down by Judge Benoit Gagnon on April 14, who wrote that the use of deepfake technology by criminal hands sends shivers down the spine. This type of software makes it possible to produce images that could implicate virtually any child. A simple video clip of a child available on social media, or a surreptitious video capture of children in a public place, could turn them into victims of child pornography.

A cybercriminal can edit a video and swap the face of a child with that of a victim of sexual assault found on the Internet. New files are thus created, and the image and the sexual and psychological integrity of children are irreparably damaged, with the potential for these files to spread across the Internet without any control.

The malicious uses of the technologies on which several online platforms run illustrate what economists call the “negative externalities” of Internet platforms’ business models. The ease with which ill-intentioned individuals can exploit platform loopholes to carve out their own “small business” has consequences. It is to prevent and counter these negative consequences that laws exist.

But in some circles, people are quick to set innovation against regulation. Some “entrepreneurs” consider anything that seems to obstruct their “business model” to be an obstacle to innovation. Governments have listened to them attentively. They have put laws in place granting privileges to social media and other platforms. To date, these laws exempt them from any obligation to proactively identify the abuses and deviant behaviour that may take place on their sites.

Lax regulation, justified by the desire to encourage innovation, does not come at zero cost. What it saves the favoured actors is borne by those who suffer the harm. The people who are harassed, or who are cheated by the scams allowed to run rampant on an online platform, are the ones who bear the costs of this poorly supervised “innovation”.

However, true innovation is innovation that unfolds in harmony with the values of respect and human dignity. Seen from this angle, well-designed regulation optimizes innovation; it is poorly calibrated regulation that hampers it.

The European regulation on the protection of personal data (the GDPR) can be cited as an example of the kind of mechanism that must be promoted in order to regulate the activities of large online platforms. In 2020, the Court of Justice of the European Union (CJEU) ruled that the access reserved for American security services to the data of Europeans was incompatible with this regulation.

Under this regulation, Meta was recently fined €1.2 billion by Ireland’s privacy regulator. Its social network Facebook was accused of transferring the personal data of its European customers to the United States. The Irish national authority had been nonchalant in applying the regulation to Meta, which chose to set up its European headquarters in Ireland, but a multinational body at the European level decided to impose the fine on the offending company.

European regulation illustrates the way forward for putting effective rules of the game in place. We need regulations that work in synergy, giving state authorities the ability to impose requirements on companies larger than some states. This is the type of regulation urgently needed to limit the risk of malicious use of connected technologies such as AI. Online platforms operate as a network; state regulations must work the same way, so that technologies like AI can be truly innovative.

Pierre Trudel is a professor of media and information technology law at the University of Montreal.
