AI hallucinations are getting worse – and they're here to stay

by News Room
May 9, 2025

Errors tend to crop up in AI-generated content (Image: Paul Taylor/Getty Images)

AI chatbots from tech companies such as OpenAI and Google have been getting so-called reasoning upgrades in recent months – ideally to make them better at giving us answers we can trust – but recent testing suggests they are sometimes doing worse than previous models. The errors made by chatbots, known as “hallucinations”, have been a problem from the start, and it is becoming clear we may never get rid of them.

Hallucination is a blanket term for certain kinds of mistakes made by the large language models (LLMs) that power systems like OpenAI’s ChatGPT or Google’s Gemini. It is best known as a description of the way they sometimes present false information as true. But it can also refer to an AI-generated answer that is factually accurate, but not actually relevant to the question it was asked, or fails to follow instructions in some other way.

An OpenAI technical report evaluating its latest LLMs showed that its o3 and o4-mini models, which were released in April, had significantly higher hallucination rates than the company’s previous o1 model that came out in late 2024. For example, when summarising publicly available facts about people, o3 hallucinated 33 per cent of the time while o4-mini did so 48 per cent of the time. In comparison, o1 had a hallucination rate of 16 per cent.
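
These percentages are simple proportions: the share of graded responses judged to contain a fabrication. A minimal sketch of the arithmetic, assuming a hypothetical 100 graded responses per model purely for illustration (only the resulting rates come from OpenAI's report; the grading itself was done by OpenAI and is not reproduced here):

    # Hallucination rate = hallucinated responses / total graded responses.
    # The counts below are hypothetical; only the resulting rates (16, 33 and
    # 48 per cent) come from the OpenAI report described above.

    def hallucination_rate(hallucinated: int, total: int) -> float:
        """Share of graded responses judged to contain fabricated content."""
        return hallucinated / total

    graded = {            # model: (hallucinated responses, total graded)
        "o1":      (16, 100),
        "o3":      (33, 100),
        "o4-mini": (48, 100),
    }

    for model, (bad, total) in graded.items():
        print(f"{model}: {hallucination_rate(bad, total):.0%}")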

The problem isn’t limited to OpenAI. One popular leaderboard from the company Vectara that assesses hallucination rates indicates some “reasoning” models – including the DeepSeek-R1 model from developer DeepSeek – saw double-digit rises in hallucination rates compared with previous models from their developers. This type of model goes through multiple steps to demonstrate a line of reasoning before responding.

OpenAI says the reasoning process isn’t to blame. “Hallucinations are not inherently more prevalent in reasoning models, though we are actively working to reduce the higher rates of hallucination we saw in o3 and o4-mini,” says an OpenAI spokesperson. “We’ll continue our research on hallucinations across all models to improve accuracy and reliability.”

Some potential applications for LLMs could be derailed by hallucination. A model that consistently states falsehoods and requires fact-checking won’t be a helpful research assistant; a paralegal-bot that cites imaginary cases will get lawyers into trouble; a customer service agent that claims outdated policies are still active will create headaches for the company.

However, AI companies initially claimed that this problem would clear up over time. Indeed, after they were first launched, models tended to hallucinate less with each update. But the high hallucination rates of recent versions are complicating that narrative – whether or not reasoning is at fault.

Vectara’s leaderboard ranks models based on their factual consistency in summarising documents they are given. This showed that “hallucination rates are almost the same for reasoning versus non-reasoning models”, at least for systems from OpenAI and Google, says Forrest Sheng Bao at Vectara. Google didn’t provide additional comment. For the leaderboard’s purposes, the specific hallucination rate numbers are less important than the overall ranking of each model, says Bao.

But this ranking may not be the best way to compare AI models.

For one thing, it conflates different types of hallucinations. The Vectara team pointed out that, although the DeepSeek-R1 model hallucinated 14.3 per cent of the time, most of these were “benign”: answers that are factually supported by logical reasoning or world knowledge, but not actually present in the original text the bot was asked to summarise. DeepSeek didn’t provide additional comment.
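
The distinction can be made concrete. In a summarisation test, a “benign” hallucination is a statement that is true in the world but absent from the source text. The toy Python heuristic below flags summary sentences with little word overlap with the source; it is only an illustration of the idea, not Vectara's actual evaluation method, which relies on trained models rather than word matching:

    import string

    def tokens(text: str) -> set[str]:
        """Lower-cased words with surrounding punctuation stripped."""
        return {w.strip(string.punctuation).lower() for w in text.split()} - {""}

    def unsupported_sentences(source: str, summary: str,
                              threshold: float = 0.5) -> list[str]:
        """Flag summary sentences sharing few words with the source text."""
        source_words = tokens(source)
        flagged = []
        for sentence in summary.split(". "):
            sent = tokens(sentence)
            if sent and len(sent & source_words) / len(sent) < threshold:
                flagged.append(sentence)
        return flagged

    source = "The company released its new model in April."
    summary = ("The model was released in April. "
               "April is the fourth month of the year.")
    print(unsupported_sentences(source, summary))
    # Flags only the second sentence: true as world knowledge, yet not
    # grounded in the source text – a "benign" hallucination in this sense.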

Another problem with this kind of ranking is that testing based on text summarisation “says nothing about the rate of incorrect outputs when [LLMs] are used for other tasks”, says Emily Bender at the University of Washington. She says the leaderboard results may not be the best way to judge this technology because LLMs aren’t designed specifically to summarise texts.

These models work by repeatedly answering the question of “what is a likely next word” to formulate answers to prompts, and so they aren’t processing information in the usual sense of trying to understand what information is available in a body of text, says Bender. But many tech companies still frequently use the term “hallucinations” when describing output errors.
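
That loop can be sketched in a few lines. The toy model below uses a hand-written table of which word tends to follow which; a real LLM replaces the table with a neural network scoring a vocabulary of many thousands of tokens, but the generation loop has the same shape – and, crucially, contains no step that checks facts:

    import random

    # Toy stand-in for an LLM: a table of which words may follow which.
    # A real model scores every token in its vocabulary with a neural
    # network, but the loop below has the same shape.
    FOLLOWERS = {
        "the":  ["cat", "dog", "moon"],
        "cat":  ["sat", "slept"],
        "dog":  ["barked", "sat"],
        "sat":  ["quietly"],
        "moon": ["rose"],
    }

    def generate(prompt: str, max_words: int = 10) -> str:
        words = prompt.split()
        for _ in range(max_words):
            candidates = FOLLOWERS.get(words[-1])
            if not candidates:            # no plausible continuation: stop
                break
            words.append(random.choice(candidates))   # "a likely next word"
        return " ".join(words)

    print(generate("the"))   # e.g. "the cat sat quietly"
    # Nothing here consults a source of truth: the model only ever asks what
    # plausibly comes next, which is why fluent falsehoods can emerge.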

“‘Hallucination’ as a term is doubly problematic,” says Bender. “On the one hand, it suggests that incorrect outputs are an aberration, perhaps one that can be mitigated, whereas the rest of the time the systems are grounded, reliable and trustworthy. On the other hand, it functions to anthropomorphise the machines – hallucination refers to perceiving something that is not there [and] large language models do not perceive anything.”

Arvind Narayanan at Princeton University says that the issue goes beyond hallucination. Models also sometimes make other mistakes, such as drawing upon unreliable sources or using outdated information. And simply throwing more training data and computing power at AI hasn’t necessarily helped.

The upshot is, we may have to live with error-prone AI. Narayanan said in a social media post that it may be best in some cases to reserve such models for tasks where fact-checking the AI’s answer would still be faster than doing the research yourself. But the best move may be to avoid relying on AI chatbots for factual information altogether, says Bender.

Source: New Scientist

Tags: artificial intelligence
