The surprising reason ChatGPT and other AI tools make things up – and why it’s not just a glitch

Large language models (LLMs) like ChatGPT have wowed the world with their capabilities. But they’ve also made headlines for confidently spewing absolute nonsense.

This phenomenon, known as hallucination, ranges from fairly harmless mistakes – like getting the number of ‘r’s in strawberry wrong – to completely fabricated legal cases that have landed lawyers in serious trouble.

Sure, you could argue that everyone should rigorously fact-check anything AI suggests (and I’d agree). But as these tools become more ingrained in our work, research, and decision-making, we need to understand why hallucinations happen – and whether we can prevent them.

The ghost in the machine

To understand why AI hallucinates, we need a quick refresher on how LLMs work.

LLMs don’t retrieve facts like a search engine or a human looking something up in a database. Instead, they generate text by making predictions.

“LLMs are next-word predictors and daydreamers at their core,” says software engineer Maitreyi Chatterjee. “They generate text by predicting the statistically most likely word that occurs next.”

We often assume these models are thinking or reasoning, but they’re not. They’re sophisticated pattern predictors – and that process inevitably leads to errors.
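As a toy illustration (nothing like a real LLM, and with made-up data), here's that "predict the most likely next word, append it, repeat" loop in a few lines of Python:

```python
# Toy sketch of next-word prediction: count which word tends to follow which,
# then greedily extend a sentence. Real LLMs do this with a neural network over
# billions of examples, but the loop is the same shape. The corpus is invented.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length=5):
    words = [start]
    for _ in range(length):
        candidates = bigrams.get(words[-1])
        if not candidates:
            break
        # Greedily pick the most frequent continuation: fluent, never fact-checked
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # "the cat sat on the cat": plausible, but nonsense
```

The output is fluent-looking but wrong in exactly the way hallucinations are: the model only knows which words tend to follow which, not whether the result is true.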

This explains why LLMs struggle with seemingly simple things, like counting the ‘r’s in strawberry or solving basic math problems. The model sees text as tokens (chunks of characters) rather than individual letters, so it isn’t sitting there working it out like we would – not really.

Another key reason is they don’t check what they’re pumping out. “LLMs lack an internal fact-checking mechanism, and because their goal is to predict the next token [unit of text], they sometimes prefer lucid-sounding token sequences over correct ones,” Chatterjee explains.

And when they don’t know the answer? They often make something up. “If the model’s training data has incomplete, conflicting, or insufficient information for a given query, it could generate plausible but incorrect information to ‘fill in’ the gaps,” Chatterjee tells me.

Rather than admitting uncertainty, many AI tools default to producing an answer – whether it’s right or not. Other times, they have the correct information but fail to retrieve or apply it properly. This can happen when a question is complex, or the model misinterprets context.

This is why prompts matter.

The hallucination-smashing power of prompts

Certain types of prompts can make hallucinations more likely. We’ve already covered our top tips for leveling up your AI prompts, not just for getting more useful results but also for reducing the chances of AI going off the rails.

For example, ambiguous prompts can cause confusion, leading the model to mix up knowledge sources. Chatterjee says this is where you need to be careful: ask “Tell me about Paris” without context, and you might get a strange blend of facts about Paris, France, Paris Hilton, and Paris from Greek mythology.

But more detail isn’t always better. Overly long prompts can overwhelm the model, making it lose track of key details and start filling in gaps with fabrications. Similarly, when a model is pushed to answer in a single step, without working through the problem, it’s more likely to make errors. That’s why techniques like chain-of-thought prompting – where the model is encouraged to reason through a problem step by step – can lead to more accurate responses.
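As a rough sketch of the difference, here's the same question asked plainly and with a chain-of-thought nudge, assuming the OpenAI Python SDK; the model name and question are just placeholders:

```python
# Minimal chain-of-thought prompting sketch, assuming the OpenAI Python SDK
# (`pip install openai`, OPENAI_API_KEY set). Model and question are illustrative.
from openai import OpenAI

client = OpenAI()

question = "A train leaves at 14:40 and the trip takes 1h 35m. When does it arrive?"

# Plain prompt: the model jumps straight to an answer
plain = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": question}],
)

# Chain-of-thought prompt: asking for step-by-step reasoning before the answer
cot = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": question + " Work through the steps one at a time, then give the final answer.",
    }],
)

print(plain.choices[0].message.content)
print(cot.choices[0].message.content)
```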

Providing a reference is another effective way to keep AI on track. “You can sometimes solve this problem by giving the model a ‘pre-read’ or a knowledge source to refer to so it can cross-check its answer,” Chatterjee explains. Few-shot prompting, where the model is given a series of examples before answering, can also improve accuracy.
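Here's a sketch of those two ideas combined, a reference ‘pre-read’ plus a couple of worked examples placed in front of the real question; the handbook text and examples are invented for illustration:

```python
# Grounding + few-shot prompting sketch. The reference document and exemplars
# below are made up; in practice you'd paste in your own source material.
reference = """Company handbook, section 4.2:
Employees accrue 1.5 vacation days per month, capped at 25 days per year."""

few_shot_examples = [
    ("How many vacation days accrue in 2 months?", "3 days (2 x 1.5)."),
    ("What is the annual cap?", "25 days."),
]

question = "How many vacation days accrue in 6 months?"

# Build the prompt: instructions, reference, examples, then the real question
prompt_parts = [
    "Answer using only the reference below. If it isn't covered, say so.",
    "",
    reference,
    "",
]
for q, a in few_shot_examples:
    prompt_parts += [f"Q: {q}", f"A: {a}", ""]
prompt_parts += [f"Q: {question}", "A:"]

prompt = "\n".join(prompt_parts)
print(prompt)  # send this as the user message to whichever model you use
```

The assembled prompt can go to any chat model; the point is that the reference and the worked examples sit in front of the question, giving the model something to cross-check its answer against.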

Even with these techniques, hallucinations remain an inherent challenge for LLMs. As AI evolves, researchers are working on ways to make models more reliable. But for now, it’s essential to understand why AI hallucinates, how to reduce it, and, most importantly, why you should fact-check everything it tells you.
