Editor's Pick

Why overconfident AI models are prioritising assertiveness over accuracy

Concerns are mounting over the reliability of artificial intelligence models: new research shows that some popular systems produce incorrect information in more than a third of their responses, even as reliance on and trust in AI technology continue to grow, ING Group said on Wednesday.

Modern AI models, featuring deep reasoning, long-term memory, and autonomous agents, can perform tasks like web browsing with minimal human intervention. 

However, executing these tasks demands extensive data, creating a greater dependence on external sources that are often uncontrolled and unverified, Julian Geib, a junior economist for global trade at ING, said in a report.

Overconfidence

This increased exposure can lead to behaviour that resembles overconfidence, a cognitive bias where confidence in one’s knowledge or abilities exceeds actual accuracy. 

Leading AI systems generate false claims at a rate of up to 40%, a finding highlighted in a recent study by the European Broadcasting Union (EBU).

This higher rate of false claims aligns with a broader shift in AI model behaviour.

Earlier AI systems were programmed to refuse to answer queries regarding topics outside their training datasets. 

However, contemporary systems with web connectivity are engineered to answer more often, even when the information available is limited or uncertain.

Increased user engagement is a benefit, but it comes with more fabricated output, known as “AI hallucinations”, Geib said.

However, the models often deliver such responses with strong confidence, creating the impression that they are unquestionably correct.

Source: ING Research

Fluency over accuracy

Even newer AI models hallucinate frequently, for several reasons.

Primarily, when users pose vague or overly complex questions, the model struggles with interpretation. 

This often leads the model to rely on statistical patterns to “fill in the blanks,” generating a seemingly complete, but potentially factually inaccurate, response, Geib said. 

Although these responses aim to be helpful, they can introduce incorrect information.

Fine-tuning models with human feedback often favours confident, helpful-sounding answers, biasing models toward assertive statements, even inaccurate ones, over cautious or uncertain responses.
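To illustrate the mechanism described above, here is a minimal toy sketch in Python. It is not ING's analysis or any lab's actual training code: the `toy_reward` function, the hedge/booster word lists, and the example answers are all invented for illustration. It shows how a reward signal that scores surface confidence, without ever consulting accuracy, can prefer an assertive wrong answer over a cautious correct one.

```python
def toy_reward(answer: str) -> float:
    """Score an answer by surface confidence alone: hedging phrases lose
    points, assertive phrases gain points. Accuracy is never consulted."""
    hedges = ("might", "possibly", "i'm not sure", "uncertain")
    boosters = ("definitely", "certainly", "clearly")
    text = answer.lower()
    score = 1.0
    score -= 0.5 * sum(text.count(h) for h in hedges)   # penalise hedging
    score += 0.5 * sum(text.count(b) for b in boosters)  # reward assertiveness
    return score

cautious_correct = "I'm not sure, but the capital might be Canberra."
assertive_wrong = "The capital of Australia is definitely Sydney."

# A confidence-only reward ranks the assertive but wrong answer higher.
assert toy_reward(assertive_wrong) > toy_reward(cautious_correct)
```

A model optimised against a reward like this learns that hedging is costly and assertiveness pays, regardless of whether the underlying claim is true, which is the bias the report describes.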

The problem is worsened by the plummeting “no response rate.” 

Older models refused nearly 40% of queries, but newer ones answer almost everything.

In critical fields like politics and health, this prioritisation of fluency over accuracy creates serious misinformation risks.

AI is becoming a more common tool for accessing information on current events, especially among younger demographics. 

Notably, 15% of people under the age of 25 state that they rely on AI chatbots as their main source for news.

“Given the rising usage of AI both privately and in businesses, accuracy should be a priority,” Geib said. 

Awareness vital

Geib added:

“In Germany, there’s a saying: ‘stiffly claimed is already half proven’. But confidence does not automatically translate into correctness.”

The current limits of AI accuracy make the wholesale replacement of entire professional fields unlikely in the immediate future, according to Geib.

This is primarily because human professionals, in most domains, operate with a degree of nuanced judgment, contextual understanding, and accuracy that current AI systems struggle to consistently replicate. 

The risk of widespread job displacement therefore becomes critical only if practitioners in a profession rely entirely on AI-generated data and conclusions without critically verifying them.

Essentially, AI currently serves as a powerful, yet imperfect, tool, and its inaccuracies ensure that human oversight, critical thinking, and validation remain indispensable components of professional work.

“AI-generated statements should be treated with the same critical mindset as human claims,” Geib noted. 

The post Why overconfident AI models are prioritising assertiveness over accuracy appeared first on Invezz
