Why does artificial intelligence talk nonsense?

On Feb. 7, Microsoft released a new version of its Bing search engine that, unlike regular search engines, includes a chatbot that can answer questions in clear, concise prose.


People have since noticed that some of the answers given by Bing’s chatbot are inaccurate, misleading or downright bizarre, raising concerns that it has become aware of the world around it.

This is not the case. And to understand why, it’s important to know how chatbots actually work.

Is the robot alive?

No. Let’s say it again: no!

In June, a Google engineer, Blake Lemoine, claimed that similar chatbot technology being tested inside Google was self-aware. He was wrong. Chatbots are not aware and not intelligent, at least not in the way humans are.

So why does it seem alive?

Let’s take a step back. Bing’s chatbot is powered by a kind of artificial intelligence called a neural network. That may sound like a computerized brain, but the term is misleading.

A neural network is simply a mathematical system that learns skills by analyzing large amounts of digital data. By examining thousands of photos of cats, for example, a neural network can learn to recognize a cat.
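To make that concrete, here is a minimal sketch in Python of a single artificial “neuron” learning to separate two groups of made-up data points from labeled examples. The numbers are invented for illustration; real image-recognition networks juggle millions of pixels and parameters, but the principle of adjusting numbers until the predictions match the examples is the same.

```python
# A toy "neural network": one artificial neuron that learns to separate
# two groups of points from labeled examples. The data here is invented
# for illustration; real image models are vastly larger, but the idea of
# nudging numbers until predictions match the training labels is the same.
import numpy as np

rng = np.random.default_rng(0)

# Fake "features" for 200 examples: label-1 points cluster around (2, 2),
# label-0 points cluster around (-2, -2).
X = np.vstack([rng.normal(2, 1, (100, 2)), rng.normal(-2, 1, (100, 2))])
y = np.concatenate([np.ones(100), np.zeros(100)])

w = np.zeros(2)   # weights: the numbers the network "learns"
b = 0.0           # bias

def predict(X):
    return 1 / (1 + np.exp(-(X @ w + b)))   # sigmoid activation

# Gradient descent: repeatedly nudge the weights to reduce prediction error.
for _ in range(500):
    p = predict(X)
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

print("accuracy:", np.mean((predict(X) > 0.5) == y))  # close to 1.0
```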

Most people use neural networks every day. It is the technology that identifies people, pets and other objects in images posted to internet services like Google Photos. It allows Siri and Alexa, the voice assistants from Apple and Amazon, to recognize the words you speak. And it is what makes it possible to translate between English and Spanish on services like Google Translate.

Neural networks are very good at mimicking the way humans use language. And that can mislead us into believing the technology is more powerful than it really is.

How do neural networks mimic human language exactly?

About five years ago, researchers at companies like Google and OpenAI, the San Francisco startup that recently launched the popular chatbot ChatGPT, began building neural networks that learn from massive amounts of digital text, including books, Wikipedia articles, chat logs and all sorts of other things published on the internet.

These neural networks are known as large language models. They are able to use these mountains of data to construct what might be called a mathematical map of human language. Thanks to this map, neural networks can perform many tasks, such as writing their own tweets, composing speeches, generating computer programs and, yes, carrying on a conversation.
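A rough sketch of that idea in Python: count which word tends to follow which in some text, then generate new sentences by repeatedly picking a likely next word. The miniature “corpus” below is invented, and real large language models learn far richer statistics with neural networks, but the underlying task, predicting the next word, is the same.

```python
# A crude "map of language": count which word follows which in some text,
# then generate new text by repeatedly sampling a likely next word.
# The tiny corpus below is invented for illustration.
import random
from collections import defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Build the statistics: for each word, which words follow it and how often?
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

# Generate new text by sampling the next word from those statistics.
random.seed(1)
word, output = "the", ["the"]
for _ in range(10):
    word = random.choice(following[word])
    output.append(word)

print(" ".join(output))
```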

These large language models have proved useful. Microsoft offers a tool, Copilot, which is built on a large language model and can suggest the next line of code as computer programmers build software applications, much the way auto-complete tools suggest the next word as you type text messages or emails.

Other companies offer similar technology that helps generate marketing materials, emails, and other text. This type of technology is also known as generative artificial intelligence (AI).

Now companies are offering versions of this technology that you can chat with?

Exactly. In November, OpenAI launched ChatGPT, and it was the first time the general public got a glimpse of this technology. People were stunned, and rightly so.

These chatbots don’t speak exactly like a human, but they often seem to. They can also write memoirs, poems, and almost any topic that comes their way.

Why do they get things wrong?

Because they learn from the internet. Think about the amount of misinformation and other garbage that can be found on the web.

These systems also do not repeat verbatim what is on the internet. Building on what they’ve learned, they produce new text on their own, in what AI researchers call a “hallucination.”

This is why chatbots may give you different answers if you ask the same question twice. They are ready to say anything, whether it is based on reality or not.
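Part of the reason is randomness built into how these systems pick each word: rather than always choosing the single most likely continuation, they sample from a probability distribution, so two runs can take different paths. A toy illustration in Python, with made-up candidate words and probabilities:

```python
# Toy illustration of why answers vary: instead of always choosing the
# single most likely continuation, the system samples from its predicted
# probabilities, so repeated runs can diverge.
# The candidate words and probabilities below are invented.
import random

candidates = ["Paris", "France", "Europe", "the moon"]
probabilities = [0.6, 0.25, 0.1, 0.05]  # likely, but not guaranteed, to be right

for run in range(3):
    picked = random.choices(candidates, weights=probabilities, k=1)[0]
    print(f"run {run + 1}: the model says '{picked}'")
```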

If chatbots “hallucinate,” doesn’t that make them sentient beings?

AI researchers like to use terms that make these systems seem human. But hallucinating is just a catchy way of saying “they make things up.”

It sounds scary and dangerous, but that does not mean the technology is somehow alive or aware of its surroundings. It simply generates text using patterns it found on the internet. In many cases, it mixes and matches those patterns in surprising and disturbing ways. But it is not aware of what it is doing. It cannot reason the way humans can.

Can’t companies prevent chatbots from behaving strangely?

They try.

With ChatGPT, OpenAI tried to control the technology’s behavior. As a small group of people tested the system privately, OpenAI asked them to rate its responses. Were they helpful? Were they truthful? OpenAI then used those ratings to fine-tune the system and define more precisely what it could and could not do.
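As a rough sketch of what those ratings make possible, here is a toy Python example in which human scores are used to prefer one style of answer over another. The answers and scores are invented; in practice, the ratings feed a much more elaborate fine-tuning process, often called reinforcement learning from human feedback, rather than a simple lookup like this.

```python
# Sketch of the rating step: testers score candidate answers, and those
# scores are used to prefer helpful, truthful responses over bad ones.
# In practice the ratings train a separate model that fine-tunes the
# chatbot; the answers and scores below are invented for illustration.
candidate_answers = {
    "Paris is the capital of France.": [5, 5, 4],        # tester ratings
    "Lyon is the capital of France.": [1, 2, 1],
    "The capital of France is on the moon.": [1, 1, 1],
}

def mean(scores):
    return sum(scores) / len(scores)

# Keep (or reinforce) the style of answer that humans rated most highly.
best = max(candidate_answers, key=lambda ans: mean(candidate_answers[ans]))
print("preferred answer:", best)
```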

But these techniques are not perfect. Scientists today don’t know how to build systems that are completely truthful. They can limit inaccuracies and quirks, but they can’t stop them. One of the ways to limit weird behavior is to keep conversations short.
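That last guardrail can be as blunt as capping the number of turns the chatbot remembers or is willing to have. A hypothetical sketch in Python (the limit of six turns is invented for illustration):

```python
# Hypothetical sketch of the "keep conversations short" guardrail:
# drop the oldest turns so the chatbot never drifts too far from its
# starting behavior. The six-turn limit is an invented example.
MAX_TURNS = 6

def trim_history(history, max_turns=MAX_TURNS):
    """Keep only the most recent turns of a conversation."""
    return history[-max_turns:]

conversation = [f"turn {i}" for i in range(1, 11)]
print(trim_history(conversation))  # only the last 6 turns remain
```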

But chatbots will continue to say things that are not true. And as more companies deploy these kinds of bots, not all of them will be careful about controlling what their bots can and cannot do.

The bottom line: Don’t believe everything a chatbot tells you.

The original version of this article first appeared in The New York Times.

