Over the past few months, AI chatbots like ChatGPT have captured the world's attention with their ability to hold human-like conversations on almost any topic. But they come with a serious drawback: the ease with which they can present persuasive false information makes them unreliable sources of factual information and potential sources of defamation.
Why would an AI chatbot make things up, and will we ever be able to fully trust its output? I dug into how these AI models work to find an answer that worked for me.
“Hallucination”—A Common Term in AI
AI chatbots, such as OpenAI’s ChatGPT, rely on a type of AI called a “Large Language Model” (LLM) to generate responses. LLMs are computer programs that have been trained on millions of text sources and can read and generate “natural language” text (the language humans naturally write and speak). Unfortunately, they also make mistakes.
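To make that more concrete, here is a minimal sketch of how an LLM produces text, using the small open source GPT-2 model through Hugging Face's transformers library. This is my choice for illustration, not the model behind ChatGPT, but the token-by-token principle is the same:

```python
# A sketch of the core idea behind an LLM: predict the next token,
# append it, repeat. GPT-2 stands in here for illustration; ChatGPT's
# underlying model is far larger but generates text the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The first person to walk on the Moon was"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Generate 15 more tokens. Each token is drawn from a probability
# distribution over the vocabulary; the model optimizes for plausible
# text, with no built-in check that the result is factually true.
output_ids = model.generate(
    input_ids,
    max_new_tokens=15,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Nothing in this loop consults reality; the model only predicts statistically plausible continuations of the prompt, which is exactly how confident-sounding fabrications can emerge.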
In the academic literature, AI researchers often refer to these mistakes as “hallucinations.” But as the topic has gone mainstream, the label has grown more controversial. Some people feel it anthropomorphizes AI models (suggesting they have human-like qualities) or grants them agency (suggesting they can make their own choices) in situations where neither should be implied. The creators of commercial LLMs may also use hallucinations as an excuse to blame the AI model for faulty outputs instead of taking responsibility for those outputs themselves.
Still, generative AI is so new that we need metaphors borrowed from existing ideas to explain these highly technical concepts to the wider public. In this vein, I feel the term “confabulation,” though similarly imperfect, is a better metaphor than “hallucination.” In human psychology, a “confabulation” occurs when someone’s memory has a gap and the brain convincingly fills in the rest without intending to deceive anyone. ChatGPT does not work like the human brain, but the term “confabulation” arguably serves as a better metaphor because of the creative gap-filling principle at work, as we’ll see below.
The Problem With Confabulation
Confabulation becomes a big problem when AI bots generate false information that can mislead, misinform, or defame. Recently, The Washington Post reported on a law professor who discovered that ChatGPT had placed him on a list of legal scholars who had sexually harassed someone. But the harassment never happened; ChatGPT made it up. The same day, Ars reported on an Australian mayor who said ChatGPT claimed he had been convicted of bribery and sentenced to prison, a complete fabrication.
Shortly after ChatGPT launched, people began declaring the end of the search engine. At the same time, though, many examples of ChatGPT’s confabulations began circulating on social media: the AI bot inventing books and studies that don’t exist, publications that professors didn’t write, fake academic papers, false legal citations, nonexistent Linux system features, unreal retail mascots, and technical details that make no sense.
I’m curious how GPT will replace Google if it gives false answers with high confidence.
For example, I asked ChatGPT to provide a list of the top social cognitive theory books. Of the 10 books in its answer, 4 don’t exist and 3 were written by other people. pic.twitter.com/b2jN9VNCFv
— Herman Saksono (he/him) (@hermansaksono) January 16, 2023
And yet, despite ChatGPT’s propensity to casually fib, counterintuitively, its resistance to confabulation is why we’re talking about it today. Some experts note that ChatGPT is technically an improvement over vanilla GPT-3 (its predecessor model) because it can refuse to answer some questions or let you know when its answers might not be accurate.
Riley Goodside, an expert in large language models and staff prompt engineer at Scale AI, said, “Compared to its predecessor, ChatGPT is notably less prone to making things up.”
When used as a brainstorming tool, ChatGPT’s logical leaps and confabulations can lead to creative breakthroughs. But when used as a factual reference, ChatGPT can actually do harm, and OpenAI is aware of it.
Shortly after the model’s launch, OpenAI CEO Sam Altman tweeted, “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It’s a mistake to be relying on it for anything important right now. It’s a preview of progress; we have lots of work to do on robustness and truthfulness.” In a later tweet, he wrote, “It does know a lot, but the danger is that it is confident and wrong a significant fraction of the time.”
What’s Going On?