Conversational robots spark enthusiasm and distrust

This text is part of the special section on unionism.

Conversational robots powered by artificial intelligence (AI) are slowly but surely making their way into the world of work, raising their share of concerns and questions. Several experts agree that while we should not expect massive job losses among employees, rules will nevertheless gradually need to be put in place to mitigate the effects AI could have on their working conditions.

“We are not looking at a complete substitution [of employees], but it will become possible to replace people for part of their work. This will therefore cause significant changes in the world of work and in how certain tasks are carried out,” estimates Frédérick Bruneault, professor of philosophy at André-Laurendeau College. His colleague Andréane Sabourin Laflamme is a professor of philosophy and of the ethics of artificial intelligence, and a researcher at the International Observatory on the Societal Impacts of AI and Digital Technology. She explains that the effects of conversational robots will be uneven, varying from one context to another.

The first occupations affected will be clerical and customer service jobs, which “are held mainly by women and visible minorities,” she emphasizes. She adds, however, that these job losses could “also lead to the creation of new kinds of occupations.”

According to Dominic Martin, professor of ethics in the Department of Organization and Human Resources at the School of Management Sciences of the Université du Québec à Montréal (ESG-UQAM), the coming transformations in the world of work are more likely to focus on “human-machine collaboration.”

What worries experts more is “how constant, daily interaction with these devices will transform the way we work and think,” says Ms. Sabourin Laflamme. While the rhetoric put forward by the designers of AI software touts the productivity gains and support such technology can provide to employees, we also need to think about professional responsibility. “If I have delegated part of my work to the machine, what part am I responsible for if there is a problem or an error? All kinds of questions arise about labour law, but also about professional integrity,” she points out.

What about professional responsibility?

While certain tasks can be automated, others are more sensitive, says Mr. Bruneault. “We could find ourselves in situations where the tasks delegated to a conversational robot have significant consequences. If there is harm or an error, who will be held responsible?” he asks. A balance must therefore be struck sooner rather than later, because the pace of development is “blistering” and not always under control, he believes. Without giving in to needless fears or science fiction, the professor thinks we must take the time to analyze the implications AI will have for the professional lives of many people.

These conversational robots have extremely effective syntactic skills. However, they do not really understand what they are producing, these specialists point out. “There is no semantic capacity,” underlines Ms. Sabourin Laflamme. According to her, there needs to be a watchdog with expertise in the subject matter to verify that what the AI produces is valid. “This AI needs to be supervised, because it can churn out anything. In some cases the machine is very effective, and in other contexts it can invent things and produce erroneous results,” she warns.

Organizations should therefore not trust it blindly, adds Mr. Martin. “We have to be careful, because we are riding a wave of enthusiasm about technological promises, but these have limits. These systems are very powerful, but they can also be very misleading,” he recalls.

Questions about working conditions

“There are union questions too, relating to working conditions,” notes Ms. Sabourin Laflamme. The use of AI could lead managers to raise their performance expectations. “We must, however, be careful with this way of thinking that the machine will necessarily relieve us of certain tasks. That is not necessarily the case,” she says.

She believes we must also pay attention to the possibilities for surveillance and control that AI creates in the world of work. “A lot of organizations don’t have guidelines on the use of AI at the moment,” she recalls.

The idea that a robotic manager would, for example, be more impartial than a human being with prejudices is false, according to Ms. Sabourin Laflamme. “It’s a persistent myth, because the algorithm is not value-neutral,” she says. Technological devices will in fact reproduce stereotypes and judgments that are, at the outset, human constructions. “Algorithms even risk amplifying them, because they automate them,” she explains.

Some companies are starting to create chatbots to help their employees with their tasks. They see it as an advantage: robots connected to a database allow them to quickly share a body of knowledge and information. Others, on the contrary, are already choosing to ban AI in order to protect their information and security.

Organizations will gradually have to put in place rules and guidelines to govern the practice, experts believe. “What worries me is that we entrust responsibilities to these machines and end up with a sort of standardization of content, completely sanitized,” concludes the professor.

This content was produced by Le Devoir’s special publications team, in its marketing division. Le Devoir’s editorial staff was not involved.
