By Cloey Callahan
Since ChatGPT became mainstream at the end of 2022, it’s become clear that its outputs are not always 100% factually accurate.
In fact, people have been having fun with the generative artificial intelligence bot, created by OpenAI, testing how far it will go in making things up. For example, there was a recent trend where people would prompt the bot with a simple “who is” question to see what kind of biography it would invent for an individual. People shared their embellished bios on Twitter, complete with schools they never attended, awards they never received, incorrect hometowns and so on.
When the bot makes things up like this, it’s called a hallucination. And while it’s true that GPT-4, OpenAI’s newest language model, is 40% more likely than its predecessor to produce factual responses, it’s not all the way there.
We spoke to experts to learn more about what AI hallucinations are, the potential dangers and safeguards that can be put into place.
What exactly are AI hallucinations?
AI models can generate unexpected, often bizarre outputs that aren’t grounded in their training data, particularly when they don’t know the answer to a question. When a generative AI bot like ChatGPT hasn’t ingested the data it needs to answer correctly, it spits out the closest approximation of the truth it can muster, and that output is called a hallucination. They’re currently quite common.
“The hallucinations start when you have a hole in the knowledge and the system goes with the most statistically close information,” said Pierre Haren, CEO and co-founder at software development company Causality Link. “It’s the closest it has, but not close enough.”
Hallucinations can be caused by various factors, but the most common is when a model has limited training data and hasn’t been taught to say “I don’t know that answer.” Instead, it makes something up that seems like it could in fact be the answer, and it works hard to make that answer seem realistic.
“It’s saying something very different from reality, but it doesn’t know it’s different from reality as opposed to a lie,” said Haren. “The question becomes, what is the truth for a ChatGPT brain? Is there a way the system knows if it’s deviating from the truth or not?”
Arvind Jain, CEO of AI-powered workplace search platform Glean, said it’s important to remember that “the most likely answer isn’t always the correct answer.”
That tendency is dangerous because it can lead to the spread of misinformation if a human doesn’t fact-check the output.
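To see why “most likely” and “correct” can diverge, consider a deliberately simplified sketch of the idea the experts describe: a model that always picks the highest-probability continuation will confidently fill a knowledge gap with a plausible guess. The snippet below is purely illustrative; the candidate phrases, their probabilities and the greedy_complete function are invented for this example and are not how ChatGPT actually works under the hood.

```python
# Toy illustration of "statistically closest" answering.
# The candidate continuations and probabilities are made up for this example.
candidate_continuations = {
    "graduated from Harvard University": 0.34,     # plausible-sounding, but false
    "graduated from a small state college": 0.29,  # the actual fact
    "never attended college": 0.22,
    "I don't know": 0.15,                          # rarely the most likely phrasing
}

def greedy_complete(candidates):
    """Return the highest-probability continuation, with no check for truth."""
    return max(candidates, key=candidates.get)

print(greedy_complete(candidate_continuations))
# Prints "graduated from Harvard University": statistically closest, factually a hallucination.
```

Nothing in this toy setup rewards the system for admitting uncertainty, which is why, as Haren and Jain note, a fluent but fabricated answer can win out over “I don’t know.”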