[Opinion] Debates on artificial intelligence must be grounded in reality

In a recent open letter, computer scientists and tech industry leaders called for a six-month moratorium on the development of artificial intelligence (AI). The letter drew considerable backlash online; even Canada’s Minister of Innovation, Science and Industry, François-Philippe Champagne, responded to it on Twitter.

Published by the non-profit Future of Life Institute, the letter calls on all AI labs to pause the training of AI systems more powerful than GPT-4, a model used by the conversational agent ChatGPT. Its authors argue that AI labs are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”

The authors of the letter believe that AI systems are on the way to becoming, or risk becoming, “powerful digital minds”. This reading of AI’s development, however, belongs to a “long-termist” perspective, an ideological current that sets aside urgent present-day debates in order to focus on speculative future concerns about AI.

Long-termism and AI

Long-termism is a belief that artificial intelligence is on its way to becoming an uncontrollable superintelligence, and as such poses a long-term existential threat to the future of humanity.

Concerns about artificial superintelligence are usually the stuff of science fiction. But such fanciful AI scenarios are among the obsessions haunting Silicon Valley and feeding its dark prophecies. And, as the Torment Nexus meme illustrates, those fears translate into major investment rather than precautionary action. Indeed, most major tech companies have cut their responsible AI teams.

ChatGPT obviously does not pave the way for any form of superintelligence. The authors of the open letter present language technologies such as ChatGPT as a breakthrough in cognition, an entity capable of competing with humans at general tasks. But that is an opinion, not an established fact.

The blind spots of superintelligence

Long-termism has direct policy implications: it prioritizes superintelligence over other, more pressing issues, such as the power imbalances engendered by AI. Some proponents of long-termism go so far as to argue that regulation to curb the emergence of superintelligence is more urgent than responding to the climate crisis.

The political implications of AI, however, are a present-day issue, not a distant threat. Because GPT-4 draws its training data from across the web and is expressly operated for commercial purposes, questions arise with respect to fair use.

In particular, it must be determined whether AI-generated images and texts are copyrightable, since machines, like animals, cannot hold copyright.

When it comes to privacy, ChatGPT’s approach is nearly identical to that of another AI application, Clearview AI: both models were trained on an astronomical amount of personal information scraped from the open web. Indeed, Italy’s data protection authority has just banned ChatGPT over privacy concerns.

None of these immediate risks is mentioned in the open letter, which oscillates between ideological absurdities and technical fixes while ignoring the most glaring issues.

Noise and pragmatism

The open letter exemplifies a deep-rooted dynamic that my co-author and I describe in a forthcoming peer-reviewed book chapter on AI governance: the tendency to treat AI either as an existential risk or as a banal technological tool.

The open letter embodies the tension between these two extremes. Its authors first claim that “advanced AI could represent a profound change in the history of life on Earth”, before calling for “robust public funding for technical AI safety research”. The request suggests that preventing the social harms of AI is merely a technical problem to be solved.

The noise generated by these two extremes drowns out the voices calling for a pragmatic discussion of the immediate risks of AI described above, as well as of labour practices and other concerns.

The attention given to this open letter is particularly problematic in Canada, where two other open letters, published by artists and civil liberties organizations, have garnered far less notice. Those letters call for more rigorous reform and a stronger framework for AI governance that protects everyone affected.

An unnecessary distraction in the making of laws governing AI

In response to the open letter, the government pointed out that Canada already has legislation governing AI: the Artificial Intelligence and Data Act (AIDA). Yet concerns about the long-term risks of AI are being used to accelerate the adoption of legislative frameworks such as AIDA.

The introduction of this law is an important step toward an effective AI governance regime, but the authorities must first consult more thoroughly with those affected by AI. Above all, the process should not be rushed in response to supposed long-term threats.

The call to fast-track AI legislation may in fact benefit the handful of companies leading AI research. If we do not take the time to consult, educate the public and listen to affected parties, AIDA risks handing responsibility for overseeing AI to companies already well positioned to profit from the technology, creating a market for a new AI verification industry.

The fate of humanity may not depend on it, but the good governance of AI remains a vital issue.
