[Opinion] Regulate AI, and get out of algorithmic capitalism

The meteoric rise of generative artificial intelligence (AI), which includes applications like ChatGPT and Midjourney, has generated considerable hype and debate around the deployment of these technologies. An open letter from the Future of Life Institute, warning of the major risks of a headlong race for AI and calling for a moratorium to allow appropriate regulation, was echoed in two op-eds published in the newspapers in recent days. One, co-signed by Yoshua Bengio, advocates tighter regulation of the industry, while the other emphasizes the benefits of AI and calls for better public education to soften the impact of these tools on our societies.

These opinion pieces embody the two dominant positions that structure the public debate on AI. The “regulationist” approach calls for more oversight to ensure the “responsible development” of algorithms and to reduce their harmful effects on security, the environment, democracy, individual rights, and so on. The “accelerationist,” techno-optimistic approach welcomes these developments with enthusiasm. It broadly subscribes to the promotional rhetoric of the companies creating and deploying these innovations, and holds that we should simply adapt to the rapid advances in AI.

Both approaches situate their questions downstream of the fundamental social problems raised by these technologies. We must therefore take a step back, examine the issues upstream, and better understand the context in which AI is being deployed.

A truncated regulation

Public policies are generally confined to the same narrow horizon of risk management, corporate self-regulation and the mitigation of algorithmic biases. For example, the Artificial Intelligence and Data Act (AIDA), introduced as part of the federal government’s Bill C-27, aims to create a flexible regulatory system to guide AI innovation in Canada. This legislative framework seeks to better regulate high-impact AI systems, reduce instances of individual harm and biased output, and prohibit certain fraudulent uses (such as deepfakes), all guided by six main principles: human oversight and monitoring, transparency, fairness and equity, safety, accountability, and validity and robustness.

It’s hard to argue against virtue, but how will these principles be applied in practice? The government intends to “emphasize education, establish guidelines and help companies comply with them through voluntary means,” and to “find the right balance between protecting Canadians and their interests and promoting the Canadian AI industry.” Thus, the law seeks neither to establish a regulatory framework prior to the deployment of algorithmic technologies, nor to redistribute the profits generated by this industry, but rather to “strengthen the confidence of Canadians in the digital technologies they use on a daily basis.”

This truncated regulatory approach seeks to reconcile the imperatives of economic growth and social acceptability by placing on an equal footing the protection of citizens and the interests of a flourishing industry that is largely financed by public funds but whose billions in profits are mostly captured by private interests. Several experts have also pointed out the bill’s many blind spots and called for a much broader legislative framework.

By seeking to better integrate AI into our lives through this “balancing act,” we risk losing sight of what should, in our view, be the primary issue of the debate: do we need robots and algorithms in the first place and, if so, in which sectors and for what purposes? Why are these technologies being introduced into society? By whom? Which groups are most affected by these machines, and which actors appropriate most of the benefits? Asking these questions upstream quickly reveals the taboo subject of the AI debates: capitalism. We believe that the social impacts of AI cannot be adequately analyzed while ignoring capitalism, and vice versa.

Beyond Algorithmic Capitalism

Recent innovations in generative AI are part of an economic dynamic linked to the emergence of a new stage of capitalism: algorithmic capitalism. It rests on the massive extraction of personal data, the accelerated development of algorithms, the hegemony of platforms, a reshaping of the world of work, and the introduction of automated decision-making systems into various social spheres.

These transformations are accompanied by “algorithmic governmentality”: algorithms play a growing role in predicting, monetizing, monitoring, controlling and manipulating behavior. AI is better understood as a vast institutional and technological field at the heart of the economic imperatives of this new form of capitalism, whose social and environmental costs are largely unsustainable.

Regulating AI downstream of its development thus amounts to a futile gesture. We should instead aim to reduce the use and production of algorithmic machines, in order to promote a “digital sobriety” compatible with social justice, respect for individual rights, and the limits of the planet.

It is not possible here to detail all the reforms, laws and mechanisms that should be put in place to protect society against the trend towards generalized automation, but a growing literature suggests various avenues to explore: the creation of new digital rights (the right to disconnect, the right to be forgotten, the right to free essential services), a public high-speed Internet network, antitrust laws to dismantle oligopolies in the technology sector, the creation of digital commons (such as Wikipedia, free software or the En commun platform), and so on.

In short, an adequate framework for AI cannot simply support capitalist companies by making them more socially acceptable. The public debate must broaden to challenge the private ownership of the means of algorithmic production, and to question the place, role and limits of algorithmic technologies in our lives and our society.
