Making your dead child speak thanks to AI

New developments in artificial intelligence (AI) fascinate and amaze me several times a month. Often, too, they unsettle me.

The most recent innovation to stir all of these feelings in me comes from the United States. Families who want to pressure elected officials into adopting better gun control have used AI to recreate… the voices of their dead children.

Yes, that is where we are.

These children (there are six of them) were killed by guns over the past ten years. Their families want to raise awareness among the American public and its politicians.

The Wall Street Journal, which published a report on this initiative, stressed that these deepfakes mark “a new era for artificial intelligence”1.

I should point out that the parents of these children are being transparent. The use of artificial intelligence is clearly disclosed both on the website, called The Shotline2, and in each of the recordings. Unlike many other cases, no one here is trying to fool anyone.

Even though this is not misinformation, it is still very disturbing.

Excerpt from the appeal of Joaquin, a victim of the Parkland massacre (screenshot: La Presse)

On The Shotline website, you can hear the recreated voice of Joaquin, who was killed in Parkland in 2018.

I got chills when I heard the voices of these young people “resurrected” thanks to artificial intelligence.

I said as much to Valérie Pisano, CEO of Mila, the Quebec artificial intelligence institute, who shared with me her views on this phenomenon.

I think the word “chills” is the right one. We are venturing, very quickly, into territory that is completely unknown in the history of humanity. And that is no small statement: we have thousands of years of history behind us, and we have never had the ability to revive the voices of our dead.

Valérie Pisano, CEO of Mila

And this is just one example among many of the fact that artificial intelligence has the power to make us, artificially, immortal. It is a trend that raises many ethical questions.

Valérie Pisano told me that there are already many companies whose business model rests on the idea of “seeking to monetize the possibility of existing, in words or images, after our death”.

“What I cannot understand is that we have placed this decision in the hands of people who are either technology experts, and therefore capable of creating the tools, or business people who have capital and want to develop products and services,” she says.

Faced with a fait accompli

Jonathan Roberge, full professor at the Urbanisation Culture Société research centre of the Institut national de la recherche scientifique, had the same reaction. “We are in an unprecedented situation, or nearly so. And there was no foresight at work. We should have known something like this would happen!”

They are right. In an ideal world, these initiatives would have been given a great deal of thought beforehand. Public debate. A framework.

But we are now faced with a fait accompli.

What is more, parents could probably do the same thing legally in Quebec, according to Mariève Lacroix, professor of law at the University of Ottawa.

Could we argue that the child had not consented, or had never said these words? The child is no longer here. If he did not express his wishes while he was alive, what can we invoke to restrain this impulse on the part of his parents? In my opinion, it would not give rise to legal action.

Mariève Lacroix, professor of law at the University of Ottawa

Mariève Lacroix has just published, with Professor Nicolas Vermeys of the Université de Montréal, the book Responsibility.AI, on Quebec civil liability law with regard to artificial intelligence.

“For there to be civil liability, there must be damage,” she adds. But in this case, it is quite the opposite: “The parents want to send a social message of denunciation, and they are using it for this purpose.”

That does not mean there is no need for serious reflection on this subject. Quite the contrary.

“We are moving away somewhat from individual rights, which expire at death, and migrating towards a more societal perspective, towards collective values. For example, we can ask ourselves: would Quebec society want to encourage this kind of behaviour?”

In my opinion, this also shows just how hard a task awaits elected officials who want to regulate the development and use of artificial intelligence.

There has been a lot of talk in the media over the past year about the dangers of deepfakes, from fake pornographic images of the singer Taylor Swift to calls to American voters in which AI was used to reproduce Joe Biden’s voice.

We must prohibit this type of abuse and penalize those who allow it.

But not all deepfakes are necessarily harmful.

“We need to distinguish between different scenarios,” confirms Jocelyn Maclure, holder of the Jarislowsky Chair in Human Nature and Technology at McGill University.

In some cases, such as training an artificial intelligence on a deceased person’s data, “it’s not clear that there is harm.”

It is rather a question “that touches on our deepest ethical conceptions,” the expert explains.

But in other cases, “AI-generated content in the form of deepfakes, passed off as content created by humans, can cause significant harm either to individuals or to society,” he emphasizes.

The toothpaste is out of the tube.

It is not going back in.

And right now, it is flowing very fast.

So even though legislating on these developments is not easy, we cannot keep beating around the bush.

In both Ottawa and Quebec City, new rules of the game are needed as quickly as possible.

1. Read the Wall Street Journal article (in English, subscription required)

2. Visit The Shotline website
