<< It’s well known that all kinds of generative AI, including the large language models (LLMs) behind AI chatbots, make things up. This is both a strength and a weakness. It’s the reason for their celebrated inventive capacity, but it also means they sometimes blur truth and fiction, inserting incorrect details into apparently factual sentences. >>
<< They sound like politicians, they tend to make up stuff and be totally confident no matter what. >> Santosh Vempala.
<< Chatbots err for many reasons, but computer scientists tend to refer to all such blips as hallucinations. It’s a term not universally accepted, with some suggesting ‘confabulations’ or, more simply, ‘bullshit’. The phenomenon has captured so much attention that the website Dictionary.com picked ‘hallucinate’ as its word of the year for 2023. >>
<< Because AI hallucinations are fundamental to how LLMs work, researchers say that eliminating them completely is impossible. >>
Nicola Jones. AI hallucinations can’t be stopped — but these techniques can limit their damage. Nature 637, 778–780. 21 January 2025.
Also: ai (artificial intell) (bot), nfulaw, in https://www.inkgmr.net/kwrds.html
Keywords: life, ai, artificial intell, LLMs, bot, nfulaw