AI hallucinations can pose a risk to your cybersecurity


In early 2023, Google’s Bard made headlines for a pretty big mistake, which we now call an AI hallucination. During a demo, the chatbot was asked, “What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?” Bard answered that JWST, which launched in December 2021, took the “very first pictures” of a planet outside our solar system. In fact, the European Southern Observatory’s Very Large Telescope captured the first image of an exoplanet back in 2004.


What is an AI hallucination?


Simply put, an AI hallucination occurs when a large language model (LLM), the kind of model behind generative AI tools, produces an incorrect answer. Sometimes the answer is totally fabricated, such as citing a research paper that doesn’t exist. Other times it is simply wrong, as in the Bard example above.


Reasons for hallucination vary, but the biggest is flawed training data: AI is only as accurate as the information it ingests. Input bias is another leading cause. If the training data contains biases, the LLM will perceive patterns that aren’t actually there, which leads to incorrect results.
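The “only as accurate as the information it ingests” point is easy to demonstrate. Below is a minimal toy sketch in Python, not a real LLM, that answers a question by returning the most frequent completion found in a handful of hypothetical training snippets. Because the planted error outnumbers the correct statement, the model confidently repeats the error; the snippets, the prompt, and the confidence figure are all illustrative assumptions, not real training data.

```python
# Toy illustration (not a real LLM): a "model" that answers by returning the
# most frequent completion of a prompt seen in its training snippets.
# If the training text skews wrong, the model repeats the error confidently.
from collections import Counter

# Hypothetical training snippets with a factual error deliberately planted
# more often than the correct statement.
training_snippets = [
    "The first picture of an exoplanet was taken by JWST.",     # incorrect
    "The first picture of an exoplanet was taken by JWST.",     # incorrect (repeated)
    "The first picture of an exoplanet was taken by the VLT.",  # correct
]

def answer(prompt_prefix: str) -> str:
    """Return the most common completion of prompt_prefix in the training data."""
    completions = [
        s[len(prompt_prefix):].strip(" .")
        for s in training_snippets
        if s.startswith(prompt_prefix)
    ]
    most_common, count = Counter(completions).most_common(1)[0]
    confidence = count / len(completions)
    return f"{most_common} (confidence {confidence:.0%})"

print(answer("The first picture of an exoplanet was taken by"))
# -> "JWST (confidence 67%)" -- the wrong answer, because the data skews wrong
```

A real LLM is vastly more sophisticated, but the failure mode is analogous: it reproduces the statistical regularities of its training data, including the errors and biases baked into that data.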


With businesses and consumers increasingly turning to AI for automation and decision-making, especially in key areas like healthcare and finance, the potential for errors poses a big risk. According to Gartner, AI hallucination comp ..
