Claude, one of the most talked-about artificial intelligence (AI) chatbots in the tech sector, has arrived in Canada, bringing with it its maker’s safety-focused philosophy.
Claude, which can answer questions, summarize documents, write text and even write code, became available in Canada on Wednesday. The technology, launched in 2023 by the San Francisco-based startup Anthropic, was already available in more than a hundred countries.
It’s now crossing the border because the company has seen signs that Canadians are eager to get into AI, according to Jack Clark, one of Anthropic’s co-founders and the company’s head of policy.
“We have seen enormous interest from Canadians in this technology, and we have expanded our products as well as our organization in a compliant way so that we are able to operate in other regions,” he said.
The company made its privacy policy clearer and easier to understand ahead of Claude’s launch in Canada.
Although Canada has access to many of the biggest artificial intelligence products, some chatbots have taken longer to arrive in the country.
Google, for example, only launched its Gemini chatbot in Canada in February because it had been negotiating with the federal government over the Online News Act, a law requiring it to compensate Canadian media companies for content published or reused on its platforms.
Despite the delays, Canadians have tried numerous artificial intelligence systems, including Microsoft’s Copilot and OpenAI’s ChatGPT, which sparked the current AI frenzy when it was released in November 2022.
Safety, more than a “side quest”
Anthropic’s founders met at OpenAI but started their own company before ChatGPT’s debut, and they quickly decided their mission was to make Claude as safe as possible.
“Safety has, for many years, been seen as a complement or a sort of side quest to AI,” Jack Clark said.
“But our bet at Anthropic is that if we make it the heart of the product, we create something that is not only more useful and more valuable for people, but also safer.”
As part of this mission, Anthropic does not train its models on user data by default. Instead, it uses publicly available information from the internet, datasets licensed from third-party companies, and data that users choose to contribute.
An AI with values
It also relies on so-called “constitutional” AI: the company’s AI systems are given a set of values, a “constitution,” against which they train themselves to become more helpful and less harmful.
At Anthropic, these values include the United Nations Universal Declaration of Human Rights, which emphasizes the equitable treatment of people regardless of age, gender, religion and color.
Anthropic’s rivals are taking note, according to Mr. Clark.
“Every time we win customers, and it’s partly because of safety, other companies pay close attention and end up developing similar products, which I think is a good incentive for all players in the sector,” he said.
He expects this trend to continue.
“Our general view is that AI safety will be a bit like seat belts for cars. If you develop technologies that are simple enough and good enough, eventually everyone will adopt them because they’re just good ideas.”
Anthropic’s commitment to safety comes as many countries are still in the early stages of developing policies to regulate how AI can be used and to minimize the technology’s potential harms.
Canada introduced a bill focused on AI in 2022, but it is not expected to take effect before 2025. In the meantime, the country has created a voluntary code of conduct.
The code requires signatories, including Cohere, OpenText and BlackBerry, to monitor the risks of AI systems and test for bias before launching them.
Jack Clark did not commit to Anthropic signing this code of conduct. He said the company is focused on global, or at least multi-country, efforts such as the Hiroshima AI Process, through which G7 countries produced a framework to promote safe, secure and trustworthy AI.