Showing posts with label LLMs.

Thursday, February 6, 2025

# life: chameleon machines

<< Large language model-based (LLM-based) agents have become common in settings that include non-cooperative parties. In such settings, agents' decision-making needs to conceal information from their adversaries, reveal information to their cooperators, and infer information to identify the other agents' characteristics. To investigate whether LLMs have these information control and decision-making capabilities, (AA) make LLM agents play the language-based hidden-identity game, The Chameleon. >>

<< Based on the empirical results and theoretical analysis of different strategies, (AA) deduce that LLM-based non-chameleon agents reveal excessive information to agents of unknown identities. (Their) results point to a weakness of contemporary LLMs, including GPT-4, GPT-4o, Gemini 1.5, and Claude 3.5 Sonnet, in strategic interactions. >>
Mustafa O. Karabag, Ufuk Topcu. Do LLMs Strategically Reveal, Conceal, and Infer Information? A Theoretical and Empirical Analysis in The Chameleon Game. arXiv: 2501.19398v1 [cs.AI]. Jan 31, 2025.

Also: games, ai (artificial intell), nfulaw, in https://www.inkgmr.net/kwrds.html 

Keywords: life, games, chameleon game, ai, artificial intelligence, LLMs, privacy, nfulaw


Saturday, January 25, 2025

# life: the Age of hallucinatory artificial intelligence (AI); the beginning.

<< It’s well known that all kinds of generative AI, including the large language models (LLMs) behind AI chatbots, make things up. This is both a strength and a weakness. It’s the reason for their celebrated inventive capacity, but it also means they sometimes blur truth and fiction, inserting incorrect details into apparently factual sentences. >>

<< They sound like politicians, they tend to make up stuff and be totally confident no matter what. >> Santosh Vempala.

<< Chatbots err for many reasons, but computer scientists tend to refer to all such blips as hallucinations. It’s a term not universally accepted, with some suggesting ‘confabulations’ or, more simply, ‘bullshit’. The phenomenon has captured so much attention that the website Dictionary.com picked ‘hallucinate’ as its word of the year for 2023. >>

<< Because AI hallucinations are fundamental to how LLMs work, researchers say that eliminating them completely is impossible. >>

Nicola Jones. AI hallucinations can’t be stopped — but these techniques can limit their damage. Nature. 637, 778-780. Jan 21, 2025.

Also: ai (artificial intell) (bot), nfulaw, in https://www.inkgmr.net/kwrds.html 

Keywords: life, ai, artificial intell, LLMs, bot, nfulaw