What are AI 'hallucinations' and can they be stopped? | Context

Author: TRF

What’s the context?

AI hallucinations can cause serious problems in fields like healthcare and law, but experts say they cannot be eliminated

  • AI produces false information, known as ‘hallucinations’
  • 68% of large companies use AI despite the risks
  • Hallucinations can be reduced but not eliminated

LONDON – There’s an elephant in the room when it comes to talking about artificial intelligence (AI): sometimes it simply makes things up and serves up these so-called hallucinations as facts.

This happens with both commercial products like OpenAI’s ChatGPT and specialised systems for doctors and lawyers, and it can pose a real-world threat in courtrooms, classrooms, hospitals and beyond, spreading mis- and disinformation.

Despite these risks, companies are keen to integrate AI into their work, with 68% of large companies incorporating at least one AI technology, according to British government research.

But why does AI hallucinate, and is it possible to stop it?

What is an AI hallucination?

Generative AI products like ChatGPT are built on large language models (LLMs), which work through ‘pattern matching’ – a process in which an algorithm looks for specific shapes, words or other sequences in the input data, which might be a particular question or task.

But the algorithm does not know the meaning of the words. While it might have the facade of intelligence, what it does is perhaps closer to pulling Scrabble letters from a large bag, and learning what gets a positive response from the user.

These AI systems or products are trained on huge amounts of data but incomplete data or biases – like a missing letter or a bag full of Es – can result in hallucinations.
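The pattern-matching idea above can be illustrated with a toy example – a hypothetical bigram model, far simpler than a real LLM, with a made-up corpus. The program only tracks which word tends to follow which in its training text; it has no notion of meaning, and a gap in that text leaves it with nothing to say.

```python
from collections import Counter, defaultdict

# A tiny, made-up "training corpus" for illustration.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which -- pure pattern matching, no meaning.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # Pick the statistically most common follower of this word.
    if not follows[word]:
        return "<unknown>"  # the pattern was never seen in training
    return follows[word].most_common(1)[0][0]

print(next_word("sat"))   # "on" -- the most frequent observed pattern
print(next_word("bird"))  # "<unknown>" -- missing data leaves a gap
```

A real LLM, rather than returning "unknown", will still produce the most statistically plausible continuation – which is one intuition for why incomplete training data can yield confident-sounding hallucinations.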

All AI models hallucinate; even the most accurate register factual inconsistencies 2.5% of the time, according to AI company Vectara’s hallucination detection model.

Can AI hallucinations be dangerous?

Depending on where AI is used, the effect of hallucinations can range from farcical to severe. 

After Google struck a deal with social media platform Reddit to use its content to train its AI models, its Gemini tool started pulling incorrect advice or jokes from the site – including a recommendation to add glue to cheese to make it stick to pizza.

In courts, lawyers have cited non-existent cases generated by AI chatbots numerous times, and the World Health Organisation has warned against using AI LLMs for public healthcare – saying data used to reach decisions could be biased or inaccurate.

“(It is) even more important for institutions to have safeguards and continuous monitoring in place, including human intervention – in this case, radiologists or medical experts to validate findings – and explainable systems,” Ritika Gunnar, a general manager of product management on data and AI at IBM, told Context.

How can hallucinations be reduced?

The risk of hallucinations can be reduced by improving the quality of the training data, using humans to verify and correct the output of AI, and ensuring a level of transparency about how the models work.

But these processes can be difficult to implement effectively as private companies are loath to relinquish their proprietary tools for inspection. 

Some large AI companies rely on poorly paid workers in the Global South, who label text, images, video and audio for use in everything from voice recognition assistants to face recognition to 3D image recognition for autonomous vehicles.

The hours are long and the work exhausting, exacerbated by lax labour regulations.

LLMs could also be fine-tuned to reduce the risk of hallucinations. One way of doing this is by using Retrieval-Augmented Generation, which bulks up AI’s answers using external sources.
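The retrieval step at the heart of RAG can be sketched in a few lines. This is a minimal, illustrative version using keyword overlap (production systems use embedding search over a vector database, and the documents here are invented): a source passage is retrieved and prepended to the prompt, so the model answers from external text rather than from its memory alone.

```python
# Illustrative-only RAG retrieval step: real systems use vector search.
documents = [
    "The Eiffel Tower is in Paris and was completed in 1889.",
    "Photosynthesis converts sunlight into chemical energy in plants.",
]

def retrieve(question, docs):
    # Score each document by how many words it shares with the question.
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question):
    # Prepend the retrieved passage so the generator is grounded in
    # real source text instead of inventing an answer.
    context = retrieve(question, documents)
    return f"Answer using only this source: {context}\nQuestion: {question}"

print(build_prompt("When was the Eiffel Tower completed?"))
```

The grounding instruction in the prompt is what "bulks up" the answer: the model is steered toward the retrieved source, reducing (though not eliminating) the room for fabrication.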

While this could be effective, according to AI company ServiceNow, it could carry a high financial cost due to the infrastructure required, such as cloud computing space, data acquisition, human managers and more.

Instead of LLMs, AI products could also use smaller language models, which reduce the risk of hallucinations because they can be trained on complete, specified data – akin to choosing an answer from three responses rather than 3,000.

Using these smaller models would also reduce AI’s large environmental footprint.

However, experts from the National University of Singapore believe that hallucinations will never be completely eliminated.

“It’s challenging to eliminate AI hallucinations entirely, due to the nature of how models generate content,” the researchers wrote in a paper published in January.

“An important, but not the only, reason for hallucination is that the problem is beyond LLMs’ computation capabilities,” they wrote. 

“For those problems, any answer except ‘I don’t know’ is unreliable and suggests that LLMs have added premises implicitly during the generation process. It could potentially reinforce stereotypical opinions and prejudices towards under-represented groups and ideas.”

(Reporting by Adam Smith; Editing by Clar Ni Chonghaile.)
