This text is part of the special booklet on unionism.
Digital technology and platforms like Uber have already turned the world of work upside down, and now here comes ChatGPT, bringing several fears with it. What effect will ChatGPT have on labour relations?
“Are we all going to lose our jobs? These questions have been raised in academia for a long time. At first, we thought that artificial intelligence (AI) was going to eliminate 50% of jobs,” says Julie Garneau, professor of industrial relations at the Université du Québec en Outaouais. “Today, we are talking more about a transformation of jobs.”
With its ability to write summaries, answer questions and produce formatted texts, ChatGPT could speed up certain tasks. “It’s surprising to see the capacity of the algorithm, and this is only the beginning,” believes Ms. Garneau. Content producers, journalists, professors, customer service agents, programmers, human resources professionals: all these workers could use ChatGPT to synthesize information, create posts for social media, check lines of code, and so on. “People who adopt this kind of technology should not be blamed. They are under enormous pressure to produce, and these tools help them save time and offload repetitive tasks,” says Ms. Garneau.
Some companies could thus be tempted to hire fewer people and entrust certain tasks to these algorithms instead. However, employers (like employees) will have to stay alert to the issues and effects that tools like ChatGPT generate, the experts warn.
Transparency
Created by a private company (see box), ChatGPT, like other conversational robots, remains a black box: what data is it trained on? How does it work? Impossible to know. “With AI, you don’t see everything that’s going on behind the scenes. And there we come to the heart of labour relations,” notes Ms. Garneau. In the long run, this type of tool can lead to a loss and decontextualization of professional knowledge. “Does the algorithm take into account the context of the individual? Professional judgment should take precedence,” she stresses.
“Where it becomes a problem is if the tool is used to replace human decision-making without questioning it, rather than to assist or to brainstorm,” says Guillaume Pelletier, ethics adviser at the Commission de l’éthique en science et en technologie, an agency of the Government of Quebec. Indeed, the law provides a framework when decisions are automated, but if a human takes a machine’s recommendation at face value, a grey area sets in. Then there is the question of transparency: how can you challenge a decision if you don’t know how the algorithm reached it? “In labour relations, under pressure, we sometimes take the shortest route. But if we use ChatGPT to make a decision and there is an error, who will be responsible?” asks Ms. Garneau.
Reliability
While ChatGPT can charm with its flawless grammar and syntax, we must not forget that it has no obligation regarding the accuracy of the information it returns. “ChatGPT is not concerned with truth,” Mr. Pelletier reminds us. So beware of the information it provides, despite its excellent command of the language. This version of ChatGPT has also been trained on data only up to 2021. Much like Wikipedia and Google, which we have learned to use and which have improved over the years, the more ChatGPT is fed new queries, the more effective it will become.
As with all algorithms, ChatGPT is likely to reproduce existing biases: sexism, racism, lack of representation of minority communities, etc. Not to mention ethical issues and plagiarism.
Those using ChatGPT must also create an account, and their queries are used to power and train the algorithm. This raises important questions in the workplace, where a worker may feed the tool personal or sensitive data belonging to their employer. “By providing more and more data, we feed the AI. All of this gives power to the companies behind these algorithms,” notes Mr. Pelletier.
Better understand, better manage
Regulating this kind of tool is also a complex exercise. “Collective agreements can include technological clauses, to ensure that employees will be properly trained or that positions will not be replaced by technologies, but unions do not have the power to say yes or no to a technological tool,” explains Mr. Pelletier. “IT tools come under management rights. It is the prerogative of the employer to bring in technology to improve the production of goods and services,” adds Ms. Garneau. The professor also offers training to unions to inform them about possible uses and issues.
Governments also need to provide more oversight of AI. “There is Bill C-27 coming at the federal level [which applies to the commercial context]. The European Union has already gone a little further, but all of this is moving extremely quickly, and regulating takes time,” observes Mr. Pelletier. The Minister of Innovation, Pierre Fitzgibbon, also expressed concern in early April and met with six experts on the question.
To grasp all these issues, better digital literacy will be necessary, for employees and managers as well as for unions. “If people are aware of the risks and biases, they will be able to use ChatGPT better,” says Mr. Pelletier.
This special content was produced by the Special Publications team of Le Devoir, reporting to marketing. Le Devoir’s editorial staff did not take part.