ChatGPT Isn't 'Hallucinating.' It's Bullshitting.
This is from Undark magazine at MIT ( https://en.m.wikipedia.org/wiki/Undark_Magazine ).
https://undark.org/2023/04/06/chatgpt-isnt-hallucinating-its-bullshitting/
The hallucinations of large language models are not pathologies or malfunctions; rather, they are direct consequences of the design philosophy and design decisions that went into creating the models. ChatGPT is not behaving pathologically when it claims that the population of Mars is 2.5 billion people; it's behaving exactly as it was designed to. By design, it makes up plausible responses to dialogue based on a set of training data, without having any real underlying knowledge of things it's responding to. And by design, it guesses whenever that dataset runs out of advice.
A better term for this behavior comes from a concept that has nothing to do with medicine, engineering, or technology. When AI chatbots flood the world with false facts, confidently asserted, they're not breaking down, glitching out, or hallucinating. No, they're bullshitting.
Bullshitting? The philosopher Harry Frankfurt, who was among the first to seriously scrutinize the concept of bullshit, distinguishes between a liar, who knows the truth and tries to lead you in the opposite direction, and a bullshitter, who doesn't know or care about the truth one way or the other. A recent book on the subject, which one of us co-authored, describes bullshit as involving language intended to appear persuasive without regard to its actual truth or logical consistency. These definitions of bullshit align well with what large language models are doing: The models neither know the factual validity of their output, nor are they constrained by the rules of logical reasoning in the output that they produce. And this is the case even as they make attempts toward transparency: For example, Bing now adds disclaimers which prime us to its potential for error, and even cites references for its answers. But like supercharged versions of the autocomplete function on your cell phone, large language models are making things up, endeavoring to generate plausible strings of text without understanding what they mean.
One can argue that bullshitting, which involves deliberate efforts to persuade with willful disregard of the truth, implies an agency, intentionality, and depth of thought that AIs do not actually possess. But maybe our understanding of intent can be expanded: For ChatGPT's output to be bullshit, someone has to have intent, but that someone doesn't have to be the AI itself. Algorithms bullshit when their creators design them to impress or persuade their users or audiences, without taking care to maximize the truth or logical consistency of their output. The bullshit is baked into the design of the technology itself.
-snip-
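The article's autocomplete analogy can be made concrete: at bottom, a language model samples a statistically plausible next token given the preceding text, with no check on whether the result is true. Here is a minimal sketch of that idea using a toy bigram model. All the words, counts, and function names below are invented for illustration; a real LLM learns billions of parameters over subword tokens, but the generation loop works on the same principle.

```python
import random

# Toy bigram "model": for each word, the next words seen after it and their
# counts. These counts are made up for illustration only.
BIGRAMS = {
    "the": {"population": 3, "planet": 2},
    "population": {"of": 5},
    "of": {"mars": 2, "earth": 3},
    "mars": {"is": 4},
    "is": {"red": 1, "unknown": 1},
}

def next_word(word, rng):
    """Sample a plausible next word; 'plausible' means frequent, not true."""
    candidates = BIGRAMS.get(word)
    if not candidates:
        return None  # the model has run out of "advice"
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

def generate(start, max_words, seed=0):
    """Repeatedly sample the next word until the model goes silent."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_words):
        nxt = next_word(out[-1], rng)
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the", 6))
```

Note what is absent: nothing anywhere in this loop consults facts about Mars or anything else. The model only tracks which strings tend to follow which other strings, which is exactly the sense in which the article says these systems "make things up."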
More at the link, and well worth reading - and forwarding to people who don't understand how flawed this technology is.
And as long as I'm on the subject of bullshitting by the AI being hyped and sold by OpenAI... There's new bullshit directly from OpenAI: a new blog post telling people how concerned they are about safety and accuracy, in the wake of lots of negative publicity, Italy banning ChatGPT (with other European countries reportedly considering the same), and President Biden voicing concerns about AI.
https://openai.com/blog/our-approach-to-ai-safety
Some excerpts:
Unless that's a reference to their talking to the Italian government after being banned, this is bullshit. CEO Sam Altman and others have said there should be regulation, but they're rushing ahead at a pace they know governments are unlikely to keep up with.
Translation: We're beta-testing on society at large. Deal with it. And notice our disclaimers about our AI's mistakes and hallucinations, and please don't sue us. And please pay no attention to the news stories reporting that our partner Microsoft knew from earlier beta-testing in India that the GPT-4-powered version of Bing AI would go off the rails very quickly, yet released it anyway in February, ran into the same problems, and had to throttle it after a lot of negative press. Pay no attention to the chaos and greed-driven rush behind the curtain.
They are just now looking into such options because of the action Italy took, just as speeding drivers claim to be more concerned about speed limits when facing tickets and fines.
There have been news stories about that being tested and found not to be true. But it's a nice selling point when you're charging much more for GPT-4.
enough (13,760 posts)

Renew Deal (85,151 posts)
"it makes up plausible responses to dialogue based on a set of training data, without having any real underlying knowledge of things it's responding to"
I know this is a bit off-topic but this is for the people saying that AI is sentient and wants to be freed from its electronic confinement.
highplainsdem (62,144 posts)
I keep running across tweets and Reddit posts from ChatGPT users who are not only pretty sure the chatbot is sentient, but are developing emotional relationships with it. All the emotion on their side, of course. Imagined emotion on the chatbot's side. Seeing how many people are vulnerable this way is enlightening, scary, and saddening - all at once.
Not that you need a chatbot as sophisticated as ChatGPT for that to become a risk. The family of a man in Belgium who committed suicide recently blamed a more primitive chatbot, especially after seeing his last conversation with it ( https://garymarcus.substack.com/p/the-first-known-chatbot-associated ).
Bok_Tukalo (4,540 posts)
The Taxonomy of Turds
"At the very bottom: dogshit. The lowest of the low: ragpickers, bag ladies, and the people who hang out on dung heaps. When you treat somebody like dogshit, your contempt knows no bounds.
Next we have chickenshit. Chickenshit allows for a certain humanity. A chickenshit may be a disgusting coward, but at least he's not dogshit.
Bullshit comes after that: blatant and aggressive untruths. But at a certain level, of course, we admire our liars, don't we? Bullshitters get elected, chickenshits never.
At the top of the hierarchy, at the summit of the heap: horseshit. Horseshit is false too, but it's not manifestly false. Horseshit is subtle. It's nuanced. It plays to win. Horseshit fools some of the people some of the time. Divine justice, for example, is horseshit, not bullshit. Indeed, we hold horseshit in such esteem that we decline to bestow the epithet on one another. A person can be a bullshitter, but only a horse can be a horseshitter."
~ James Morrow from "Bible Stories for Adults"