
Showing posts with the label deep-learning algorithm.

Saturday, May 29, 2021

# ai.bot: from stochastic parrot to quasi-stochastic speaking (mimetic) entity, the next steps for LLM AI phrasing algorithms ... Are you ready?

<< Soon enough, all of our digital interactions—when we email, search, or post on social media—will be filtered through LLMs. >> (i.e., a large language model (LLM): a deep-learning algorithm trained on enormous amounts of text data)

<< it’s the gap between what LLMs are and what they aspire to be that has concerned a growing number of researchers. LLMs are effectively the world’s most powerful autocomplete technologies. By ingesting millions of sentences, paragraphs, and even samples of dialogue, they learn the statistical patterns that govern how each of these elements should be assembled in a sensible order. This means LLMs can enhance certain activities: for example, they are good for creating more interactive and conversationally fluid chatbots that follow a well-established script. But they do not actually understand what they’re reading or saying. >>
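The "most powerful autocomplete" idea in the quote above can be sketched with a toy bigram model. This is a deliberate oversimplification (real LLMs use deep neural networks trained on vast corpora, and the corpus here is an invented example), but the principle is the one Hao describes: the program only counts which word tends to follow which, then assembles text from those statistics, with no understanding of what it says.

```python
# Toy "autocomplete": learn word-to-word co-occurrence statistics
# from a tiny made-up corpus, then generate text greedily.
# Illustrative only -- not how a real LLM is implemented.
from collections import Counter, defaultdict

corpus = (
    "the model predicts the next word "
    "the model learns patterns "
    "the model repeats words"
).split()

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(word, steps=5):
    """At each step, append the most frequent successor of the last word."""
    out = [word]
    for _ in range(steps):
        successors = follows.get(out[-1])
        if not successors:
            break  # dead end: this word never appeared with a successor
        out.append(successors.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))
```

The generated sentence looks locally fluent ("the model predicts ...") precisely because it mirrors the training statistics, and it loops or derails as soon as those statistics run out: a miniature version of mimicry without comprehension.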

<< We can’t really stop this craziness around large language models, where everybody wants to train them, (..) But what we can do is try to nudge this in a direction that is in the end more beneficial. >> Thomas Wolf.

<< "Language technology can be very, very useful when it is appropriately scoped and situated and framed," (Emily Bender) (..) But the general-purpose nature of LLMs—and the persuasiveness of their mimicry—entices companies to use them in areas they aren’t necessarily equipped for. >>

Karen Hao. The race to understand the exhilarating, dangerous world of language AI. MIT Technology Review, May 20, 2021.


"Stochastic parrots" (by Timnit Gebru) in: 


Also

Notes (quasi-stochastic poetry)