Should we be extra careful when conversing with ChatGPT, the popular OpenAI chatbot? Since its launch in November 2022, users have turned to it for many creative tasks, such as writing poems or solving math problems. Requests of this type pose, in theory, few confidentiality or security problems.
However, the program is also capable of offering answers and promising solutions to much more personal problems, such as questions about health issues. When Le Devoir submitted three fictional medical cases to it, the chatbot was able to provide advice based on very specific medical conditions.
The question now is: does ChatGPT store this data, and how does it use it?
OpenAI’s most recent privacy policy states: “When you use our Services, we may collect Personal Information that is included in any entries, file uploads, or comments you provide to our Services (“Content”).” This personal data may be used for analysis, research and service improvement purposes.
“When you register on ChatGPT, [we give] it all our personal information already, and then the history and content of our conversations also become personal information,” summarizes Anne-Sophie Hulin, assistant professor at the Faculty of Law of the University of Sherbrooke.
The company also indicates that “under certain circumstances, [it may] provide your personal information to third parties without notice, except as required by law.”
“They are quite vague about sharing our data with third parties. So, to what extent does ChatGPT collect data, resell it to third parties, create consumer profiles, or become an instrument of consumer influence? It hasn’t quite happened yet, but there are certainly significant risks,” says the professor, who specializes in data and AI law.
Quebec legislation
Is there a difference between entrusting personal information to ChatGPT and to Google? “Our search engines have had to align themselves with ethical and legal practices and standards that limit the use of our data,” she adds. Such a legal framework has yet to be established for the conversational bot.
ChatGPT’s privacy policy was constructed in accordance with the California Privacy Rights Act, the California law governing the protection of personal information.
“Compared to the General Data Protection Regulation (GDPR) [of the European Union] and our [Quebec] legislation, California law is a little less protective,” notes Ms. Hulin.
In Quebec, new measures are expected to come into force in September 2023 with the application of Law 25, the Act to modernize legislative provisions regarding the protection of personal information in the private sector. “It is the most protective law that we currently have. Even the bill that is on the table at the federal level does not go as far as Law 25 in certain respects,” explains Ms. Hulin.
“However, we cannot guarantee that ChatGPT will be required to comply with it, since the criterion that would tie a foreign company to Law 25 is far from certain,” she warns.
In other words, there is no guarantee that ChatGPT will be considered an enterprise within the meaning of the law, that is, an entity established in Quebec, the lawyer notes. ChatGPT could therefore continue its activities without submitting to the personal information protection regime imposed by Quebec law.
The professor of private law points out that if ChatGPT complied with Law 25, or even the GDPR, it would be possible to sanction practices such as the disclosure of information to insurance companies.
“Since ChatGPT’s privacy policy is quite evasive about how they share this personal information with companies, one could well imagine it being sold to insurance companies, who might re-evaluate their insurance policies in light of the information collected,” argues Ms. Hulin.
A critical look
For Ms. Hulin, the particularity of ChatGPT lies above all in its extremely rapid democratization. The program hit 100 million monthly active users in January, just two months after its launch, making it the fastest growing consumer app in history.
“Suddenly, artificial intelligence (AI) is no longer a contained phenomenon, like having your picture taken when you go through the airport. Now it means being able to connect from anywhere and have access to a super-powerful AI that can meet any of my needs,” she illustrates.
Concretely, there is nothing wrong with people wanting to use ChatGPT, even with their sensitive information, to get answers, the professor believes. They just need to have a good understanding of the consequences.
The professor therefore insists that it is everyone’s responsibility to educate themselves adequately about the confidentiality issues raised by AI.
“You need some form of education at all levels, whether for professionals, the general public, or students. It’s up to us, the users, to develop careful habits of use, and to understand that not everything belongs on ChatGPT, just as not everything belongs on social media,” she concludes.