
highplainsdem

(62,144 posts)
Thu Apr 6, 2023, 11:05 AM

ChatGPT Isn't 'Hallucinating.' It's Bullshitting.

This is from Undark magazine at MIT ( https://en.m.wikipedia.org/wiki/Undark_Magazine ).

https://undark.org/2023/04/06/chatgpt-isnt-hallucinating-its-bullshitting/

-snip-

The “hallucinations” of large language models are not pathologies or malfunctions; rather they are direct consequences of the design philosophy and design decisions that went into creating the models. ChatGPT is not behaving pathologically when it claims that the population of Mars is 2.5 billion people — it’s behaving exactly as it was designed to. By design, it makes up plausible responses to dialogue based on a set of training data, without having any real underlying knowledge of things it’s responding to. And by design, it guesses whenever that dataset runs out of advice.

A better term for this behavior comes from a concept that has nothing to do with medicine, engineering, or technology. When AI chatbots flood the world with false facts, confidently asserted, they’re not breaking down, glitching out, or hallucinating. No, they’re bullshitting.

Bullshitting? The philosopher Harry Frankfurt, who was among the first to seriously scrutinize the concept of bullshit, distinguishes between a liar, who knows the truth and tries to lead you in the opposite direction, and a bullshitter, who doesn’t know or care about the truth one way or the other. A recent book on the subject, which one of us co-authored, describes bullshit as involving language intended to appear persuasive without regard to its actual truth or logical consistency. These definitions of bullshit align well with what large language models are doing: The models neither know the factual validity of their output, nor are they constrained by the rules of logical reasoning in the output that they produce. And this is the case, even as they make attempts towards transparency: For example, Bing now adds disclaimers which prime us to its potential for wrong, and even cites references for its answers. But like supercharged versions of the autocomplete function on your cell phone, large language models are making things up, endeavoring to generate plausible strings of text without understanding what they mean.

One can argue that “bullshitting” — which involves deliberate efforts to persuade with willful disregard of the truth — implies an agency, intentionality, and depth of thought that AIs do not actually possess. But maybe our understanding of intent can be expanded: For ChatGPT’s output to be bullshit, someone has to have intent, but that someone doesn’t have to be the AI itself. Algorithms bullshit when their creators design them to impress or persuade their users or audiences, without taking care to maximize the truth or logical consistency of their output. The bullshit is baked into the design of the technology itself.

-snip-


More at the link, and well worth reading - and forwarding to people who don't understand how flawed this technology is.
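A quick aside from me on the article's "supercharged autocomplete" comparison, since it's the key point. The toy Python sketch below is purely illustrative - it is not OpenAI's code, and the word table and weights are made up - but it captures the basic loop a language model runs: score possible continuations of the text so far, sample one, repeat. Nothing in the loop ever checks whether the resulting sentence is true.

import random

# Toy illustration of next-token generation (NOT how GPT actually works
# internally - just the general loop: score continuations, sample one,
# repeat). The "model" here is a made-up table of continuation weights.
plausibility = {
    "the population of": {"Mars": 0.4, "France": 0.6},
    "population of Mars": {"is": 1.0},
    "of Mars is": {"2.5": 0.5, "zero,": 0.3, "unknown.": 0.2},
    "Mars is 2.5": {"billion": 0.9, "million": 0.1},
    "is 2.5 billion": {"people.": 1.0},
}

def generate(prompt, max_tokens=10):
    tokens = prompt.split()
    for _ in range(max_tokens):
        context = " ".join(tokens[-3:])           # last few words = context
        options = plausibility.get(context)
        if not options:                           # out of "training data":
            break                                 # a real model guesses anyway
        words, weights = zip(*options.items())
        tokens.append(random.choices(words, weights)[0])  # sample next token
    return " ".join(tokens)

print(generate("the population of"))
# e.g. "the population of Mars is 2.5 billion people."
# Fluent, confident, and false - picked for plausibility, not truth.

A real model replaces the lookup table with a neural network trained on billions of documents, and it keeps guessing even when its training data runs thin - which is exactly when it starts confidently asserting things like a Martian population of 2.5 billion.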

And as long as I'm on the subject of bullshitting by the AI being hyped and sold by OpenAI... There's new bullshit directly from OpenAI: a new blog post telling people how concerned they are about safety and accuracy. It comes in the wake of lots of negative publicity, Italy banning ChatGPT (with other European countries reportedly considering the same), and President Biden voicing concerns about AI.

https://openai.com/blog/our-approach-to-ai-safety

Some excerpts:

We believe that powerful AI systems should be subject to rigorous safety evaluations. Regulation is needed to ensure that such practices are adopted, and we actively engage with governments on the best form such regulation could take.


Unless that's a reference to their talking to the Italian government after being banned, this is bullshit. CEO Sam Altman and others have said there should be regulation, but they're rushing ahead at a pace they know governments are unlikely to keep up with.

We work hard to prevent foreseeable risks before deployment, however, there is a limit to what we can learn in a lab. Despite extensive research and testing, we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it. That’s why we believe that learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time.


Translation: We're beta-testing on society at large. Deal with it. And notice our disclaimers about our AI's mistakes and hallucinations, and please don't sue us. And please don't pay any attention to the news stories reporting that our partner Microsoft knew from earlier beta-testing in India that the GPT-4-powered version of Bing AI released in February would go off the rails very quickly, released it anyway, ran into the same problems, and had to throttle it after a lot of negative press. Pay no attention to the chaos and greed-driven rush behind the curtain.

One critical focus of our safety efforts is protecting children. We require that people must be 18 or older—or 13 or older with parental approval—to use our AI tools and are looking into verification options.


They are only now looking into such options because of the action Italy took - much as speeding drivers claim to care more about speed limits when they're facing tickets and fines.

Improving factual accuracy is a significant focus for OpenAI and many other AI developers, and we’re making progress. By leveraging user feedback on ChatGPT outputs that were flagged as incorrect as a main source of data—we have improved the factual accuracy of GPT-4. GPT-4 is 40% more likely to produce factual content than GPT-3.5.


There have been news stories about that claim being tested and found not to hold up. But it's a nice selling point when you're charging much more for GPT-4.
ChatGPT Isn't 'Hallucinating.' It's Bullshitting. (Original Post) highplainsdem Apr 2023 OP
Bookmarking. enough Apr 2023 #1
This is the sentence that proves current AI is not sentient and in love with you Renew Deal Apr 2023 #2
Exactly. And thank you. highplainsdem Apr 2023 #3
The Taxonomy of Turds Bok_Tukalo Apr 2023 #4

Renew Deal

(85,151 posts)
2. This is the sentence that proves current AI is not sentient and in love with you
Thu Apr 6, 2023, 11:13 AM

"it makes up plausible responses to dialogue based on a set of training data, without having any real underlying knowledge of things it’s responding to"

I know this is a bit off-topic, but this is for the people saying that AI is sentient and wants to be freed from its electronic confinement.

highplainsdem

(62,144 posts)
3. Exactly. And thank you.
Thu Apr 6, 2023, 11:58 AM

I keep running across tweets and Reddit posts from ChatGPT users who are not only pretty sure the chatbot is sentient but are also developing emotional relationships with it. All the emotion is on their side, of course - the chatbot's is imagined. Seeing how many people are vulnerable this way is enlightening, scary, and saddening all at once.

Not that you need a chatbot as sophisticated as ChatGPT for that to become a risk. The family of a man in Belgium who recently died by suicide blamed a more primitive chatbot, especially after seeing his last conversation with it ( https://garymarcus.substack.com/p/the-first-known-chatbot-associated ).

Bok_Tukalo

(4,540 posts)
4. The Taxonomy of Turds
Thu Apr 6, 2023, 01:26 PM

The Taxonomy of Turds

"At the very bottom: dogshit. The lowest of the low-ragpickers, bag ladies, and the people who hang out on dung heaps. When you treat somebody like dogshit, your contempt knows no bounds.

Next we have chickenshit. Chickenshit allows for a certain humanity. A chickenshit may be a disgusting coward, but at least he's not dogshit.

Bullshit comes after that: blatant and aggressive untruths. But at a certain level, of course, we admire our liars, don't we? Bullshitters get elected, chickenshits never.

At the top of the hierarchy, at the summit of the heap: horseshit. Horseshit is false too, but it's not manifestly false. Horseshit is subtle. It's nuanced. It plays to win. Horseshit fools some of the people some of the time. Divine justice, for example, is horseshit, not bullshit. Indeed, we hold horseshit in such esteem that we decline to bestow the epithet on one another. A person can be a bullshitter, but only a horse can be a horseshitter."


~ James Morrow from "Bible Stories for Adults"
