Those workers who use ChatGPT as a personal assistant without their employer's knowledge

Workers are using generative artificial intelligence to optimize their work, without necessarily telling their supervisors. An under-the-radar practice that is not without risks.

A personal assistant capable of rephrasing emails, translating them into impeccable English, finding precise information… For some workers, the ChatGPT conversational application is an opportunity to optimize their working days.

The approach is tempting: all it takes is a well-formulated request to the artificial intelligence, for example “Can you summarize this report for me, as if you were addressing an eight-year-old child?”, to obtain, in a few seconds, a response that seems coherent at first glance. Easy to use and free, this so-called “generative” artificial intelligence program was released in a paid version, more fluid and responsive, in February. To better understand the issues raised by the emergence of this new tool, franceinfo interviewed employees who have begun to integrate ChatGPT into their daily work without their boss's knowledge.

“An hour or two” less work per day

Pierre imports Japanese food products for the whole of Europe. For him, ChatGPT is like a great search engine that saves him from wasting time on Google. “I read a lot of articles about rice, crop conditions, seaweed in Japan… Some Organization for Economic Co-operation and Development (OECD) articles on these topics run to over 200 pages, so I ask ChatGPT to give me a summary and point out the important things to remember.” Dimitri, a buyer in the construction industry, uses the chatbot to unearth technical information about the machines he is going to import. Volume, weight, dimensions… details he struggles to find quickly on the web.

Initially, Geneviève* felt guilty about delegating part of her work to the app. But this 50-year-old content manager at an NGO believes she saves “an hour or two” of work per day thanks to ChatGPT. “I manage an insurmountable quantity of content, provided by a hundred or so contributors for whom writing is not their trade. I am very meticulous. If there is the slightest error, it falls on me.” She therefore asks the AI to rewrite these texts, sometimes in more formal English, before tweaking them herself. She is so satisfied with the program that she is trying to convert her colleagues to it.

Like Pierre, Dimitri and Geneviève, more than 206 million people visited the ChatGPT site in April, according to British traffic analysis company Similarweb. The interface of the American company OpenAI, unveiled to the general public in November 2022, has been a dazzling success, to the point of attracting one million users just five days after its launch.

Do-it-yourself workers

Patrick*, a 32-year-old engineer, works as a database administrator in a call center. He says he used the application to manage a crisis situation, during which he had to send a message that was both firm and empathetic to an employee accused of fraud.

“ChatGPT gave me a text that fit the situation perfectly. I was stunned!”

Patrick*, computer scientist, to franceinfo

A way with words is not the AI's only strong point. “We no longer need to learn other computer languages,” rejoices Louis*. A researcher in anthropology, he managed to create a website without knowing the JavaScript programming language. Damien*, an IT support technician, even manages to resolve rare and complex failures thanks to ChatGPT. The 25-year-old computer scientist appreciates the pedagogy of this virtual colleague: “It will guide you, give an answer with the steps, detail the procedure to solve the problem…” Trained on nearly 200 billion words, according to a study published by OpenAI, the conversational application relies on an artificial neural network that gives the impression of conversing with a human being.

This type of AI use can be described as “DIY”, analyzes Yann Ferguson, a sociologist at the Catholic Institute of Arts and Crafts in Toulouse and scientific director of LaborIA, a research-action laboratory run by the Ministry of Labor and the National Institute for Research in Digital Science and Technology. By appropriating the tool, “it is the worker who unlocks new uses through his own approach. He researches, tinkers and makes his own discoveries.” This spontaneous use, under the company's radar, contrasts with use prescribed by the employer in order to gain productivity, competitiveness and performance, according to the researcher.

“Not sure my bosses agree”

According to an Ifop-Talan survey published in May, 68% of French people who use generative AI at work hide it from their supervisor. For Geneviève, the opportunity to discuss it with her boss has not arisen. “If she came to see me, I would tell her that I use it,” she argues. Pierre finds himself in a gray area that suits him. “I'm not sure [my bosses] would agree, for reasons of confidentiality,” confides the buyer, who says he is ready to respect a possible ban. Patrick, for his part, defied a ban from his employer, who deemed the application too resource-intensive. The computer scientist, however, says he does not feed any confidential data to the chatbot.

Using AI “on the sly” exposes the employer to risks of cybersecurity breaches and data leaks. While trying to correct errors in computer code, Samsung employees, for example, disclosed confidential information to ChatGPT. The South Korean giant therefore decided to ban its employees from using it, the American news agency Bloomberg reported in May. To avoid such leaks, one of the solutions recommended by Yann Ferguson would be to deploy these generative AIs on companies' internal servers, a way to “guarantee the accuracy and security of data,” he points out.

An “absence of legal framework”

In France, as a precaution, the city hall of Montpellier decided in March to prohibit its staff from using the artificial intelligence. “We obviously want to use it, but first we want a user manual,” explained Manu Reynaud, deputy mayor of the city, deploring an “absence of legal framework.” That framework is being built under the impetus of the European Union. Members of the European Parliament approved in May a draft regulation on artificial intelligence, which will apply to EU member states. Under this “Artificial Intelligence Act,” the riskier an AI system is considered, the stricter the rules that will apply to it.

Generative AIs are no exception. To comply with the European regulation, their publishers will have to ensure the security of their programs and respect the copyright of the content used to train their systems. They will also need to assess the risks to fundamental rights, security, the environment, democracy and the rule of law. “If we look at ChatGPT, it does not meet any of these European requirements,” notes Yann Ferguson. Pending this regulation, the EU and the United States have announced the forthcoming publication of a common “code of conduct” on artificial intelligence, which will apply on a voluntary basis.

Employees “augmented” more than “dominated”

The tool has its limits. Jonathan Narboni, a researcher in theoretical computer science at the Jagiellonian University in Krakow, is well aware of this: “ChatGPT is unable to produce mathematical proofs. It asserts things with a lot of confidence, but with inconsistencies… It churns out business-school ‘bullshit’,” he laughs. ChatGPT does not always tell the truth. According to a study conducted by researchers at the University of Oxford (PDF document), unveiled at the end of May, the OpenAI program gives correct answers in only 58% of cases.

“This system is just a predictor of words, it puts them together. It doesn’t give an answer, but what the answer might look like.”

Yann Ferguson, sociologist, to franceinfo

Neither Jonathan Narboni nor Geneviève feels their job is threatened by this still-limited AI. “I tried to do creative stuff, because you have to create communication campaigns that grab attention. But I don't feel like ChatGPT is going to save my life,” tempers the content manager. The workers interviewed by franceinfo match sociological profiles defined in Yann Ferguson's work: the “augmented employee”, whose productive capacities are improved, or the “rehumanized employee”, who frees himself from repetitive tasks to concentrate on typically human activities.

But there is also the figure of the “dominated employee”, driven by algorithms and often in precarious work. Yann Ferguson warns against the risk of a polarization of the world of work “between a very well-paid elite and a mass of uninteresting jobs, less well paid or even ‘uberized’”. In a 2018 report, the OECD estimated that 14% of jobs were at risk of automation, while 32% were at risk of profound change.

“A robot” that just has to be tamed?

How can generative AI be tamed without compromising your company? “ChatGPT allows us to learn to drive. But for now, we are trying to get into the car without having learned the rules of the road,” illustrates Yann Ferguson. According to the Ifop-Talan survey, 72% of French people believe they do not yet have sufficient knowledge to use generative AI. The sociologist advocates enlightened learning, combining practice and theory, based on online training and on asking the right questions: “Do I have the skills to check what ChatGPT is telling me? Am I ready to take responsibility if something goes wrong? If the answer is no, don't go there,” he concludes.

For the sake of transparency with his company, Dimitri, the buyer in the construction industry, has set up a think tank on artificial intelligence with members of various professions in his group. He hopes that, in the long term, his employer will agree to carry out larger-scale tests, or even to fund their use. “We are already behind on these technologies. But I think it will happen,” he hopes. “Whether we like it or not, our future will revolve around AI, and we will all be forced to use it,” anticipates Pierre, the food buyer. “AI should not be seen as an enemy of work. It remains a robot; you just have to learn to use it, just like the internet,” he concludes.

* First names followed by an asterisk have been changed at the request of those concerned.

