Too woke or not woke enough: AI flounders amid prejudice

An artificial intelligence (AI) model that invents black-skinned Nazis? The recent incident with Google's Gemini, quickly corrected, might have been dismissed as trivial. But it highlights the outsized power a handful of companies hold over this increasingly fundamental technology.

“We messed up. We probably didn’t do thorough enough testing,” Google co-founder Sergey Brin admitted to engineers.

In February, shortly after launching Gemini, its new generative AI interface, Google suspended the tool’s creation of images of people following numerous user reports.

Asked for images of Vikings or of the “founding fathers of the United States,” Gemini produced period-appropriate costumes but historically inaccurate skin tones.

“In short, it was too woke,” says Joshua Weaver, a lawyer and technology entrepreneur.

The word, used pejoratively by conservatives in the United States, denounces what they consider an excess of activism over the demands of minorities and over social and climate injustices.

Speaking at SXSW, an arts and technology festival in Texas, Weaver explains that Google long innovated at its own pace, but the current generative AI race against OpenAI (ChatGPT) and other companies is forcing the giant to hurry…

To the point of stumbling.

In the ultra-polarized political context of the United States, the affair inflamed X (formerly Twitter) “in an exaggerated way,” the lawyer recounts.

“But it raises the question of how much control over information rests with those who hold artificial intelligence,” he continues.

“Within 5 to 10 years, the amount of information created by AI in the world could completely eclipse that of humans. Whoever controls these systems will have outsized influence.”

“Colonial algorithms”

“In the future, when you get into a driverless taxi, the AI will scan you, and if it finds any violations, the car will become a ‘temporary police vehicle’ and take you to the police station,” says Karen Palmer, a “storyteller of the future” and director of immersive experiences.

In her talk — “The next frontier of AI: breaking the chains of colonial algorithms” — she and other activists denounce the discriminatory dangers of this technology.

AI rests on the analysis of mountains of data, whether to automate tasks (granting a bank loan, for example) or to produce text, images and more (generative AI).

Drawn from societies historically dominated by white men, that data is biased by nature. With Gemini, Google’s engineers tried to rebalance the algorithms by favoring responses that reflect human diversity.

The clumsy result backfired. But their error highlights a “very intuitive and therefore subjective process,” notes Alex Shahrestani, a lawyer specializing in technology.

“During the model’s supervised learning phase, a person validates the answers aligned with our values and rejects the others,” he explains.

“We imagine that this person introduces conscious or deliberate bias, when in general it is much more subtle.”

A subtle but pervasive influence. “We saw what happened with social networks, which self-regulate but must generate profits for shareholders,” he observes.

“The content evolved from ‘Hey, what are my friends doing this weekend?’ to outrageous things that captivate users. […] And these AI models, too, are competing for our attention…”

“Bandage”

Some “biased” responses “shocked our users” and are “completely unacceptable,” Sundar Pichai, Google’s chief executive, declared in an email to employees.

Google has corrected the problem, but it’s like “putting a bandage” on a bullet wound, quips Charlie Burgoyne, head of Valkyrie, a company specializing in data processing.

In his talk at SXSW, he blasted the “black boxes” of language models, comparing the situation to the film Jurassic Park: a creation whose workings “far exceed our understanding.”

Experts and activists are calling for more diversity among engineers and more transparency for users — especially when algorithms rewrite their queries to improve results.

Jason Lewis of the NGO Indigenous AI works with Indigenous communities from the United States to the Pacific to create different AIs, built on their perspectives and their data.

“It’s such a different approach from Silicon Valley’s,” he told a sympathetic audience, “with all their condescending bullshit talk: ‘We’re doing all this to benefit humanity’! Yeah, right…”
