This text is part of the special section Artificial Intelligence and Cybersecurity
Increasingly used by workers and students, artificial intelligence (AI) capable of generating content is poised to revolutionize the way we do things. However, it carries risks that users do not always suspect.
Whether we’ve already tried it or not, we all know ChatGPT. The chatbot belongs to what is called generative artificial intelligence: systems that can accumulate and process data, then reuse it to create new content, imitating the human creative process at machine speed. It is a highly effective tool already adopted by one in five Canadians in their work or studies, according to a survey by the consulting firm KPMG published at the end of June.
The same survey reveals that for just over half of respondents, generative AI saves up to five hours of work per week, and 67% of those respondents say it allows them to accomplish more tasks. The productivity gain is such that, according to a 2017 study by the consulting firm PwC, generative AI’s contribution to the global economy could reach US$15.7 trillion by 2030.
This financial windfall, like everything else, does not come without risks. The first is data leaks, as explained by Cédric Thibault, cybersecurity expert at KPMG. “That means that during those interactions where you’re potentially sharing private or sensitive data, the owner of that AI, or even other users, can then obtain that data.” The problem is particularly acute with public AIs such as ChatGPT, which are accessible to any user and over which users lose control of their data.
The solution Cédric Thibault recommends to companies is to opt for a private generative AI, which operates only internally. The company will “personalize it and evolve it with only its own corporate data. The data thus remains within the company and can be consulted only by its members.”
However, when deploying such a private AI, one must ensure that it respects ethical rules, in particular by not storing more data than necessary, such as an employee’s or customer’s political views or sexual orientation, since this private data could be accessible to third parties within the company. Finally, it remains to ensure that the private AI cannot be hacked by an external actor, as has happened with databases. “IT security is a perpetual race in which malicious actors and cybersecurity companies run neck and neck,” the specialist recalls.
When the machine “hallucinates”
Another major risk posed by generative AI concerns reliability. Just like humans, these systems can fall victim to “hallucinations,” that is, misleading or erroneous content that the AI generates. “It’s a technology that is not yet perfect,” the expert recalls. “It is not always well fed with data and can therefore produce inaccuracies, which is why you must always view this content with a critical eye.” In other words, AI-generated content must always be reviewed by a human, an essential step that only 49% of KPMG respondents say they take every time.
Sixty percent of Canadian students responding to the KPMG survey agreed that using generative AI amounts to cheating. A view that is both true and false, because we must render unto Caesar 2.0 what belongs to Caesar 2.0. No fewer than 23% of survey respondents, students and workers alike, admit to claiming authorship of AI-generated content every time they use it, a propensity that climbs to 29% among 27-to-42-year-olds. This raises the question of intellectual property.
“It’s a very hot topic,” points out Mr. Thibault. “[…] The European Union is in the process of defining a new regulatory framework, and many standards are being drawn up.” There is still a long way to go before copyright, and rights in general, are attributed to AI, but the question genuinely arises. “The boundaries of computing are constantly being pushed. With artificial intelligence, we are taking an important step forward. This brings us closer to a society with original creative content that comes directly from AI,” analyzes Cédric Thibault. “We think mostly of text, but there are also AIs that can generate images from scratch. […] We are dealing with original content. Currently it is not possible to assign copyright to an AI, but I don’t see anything that would prevent it.”
This content was produced by Le Devoir’s Special Publications team, which reports to the marketing department. Le Devoir’s newsroom was not involved.