General Discussion
Analysis Finds That Google's AI Overviews Are Providing Misinformation at a Scale Possibly Unprecedented in History
https://futurism.com/artificial-intelligence/google-ai-overviews-misinformation

A recent analysis conducted by the AI startup Oumi at the behest of The New York Times found that the AI-generated summaries, which appear above Google search results, are accurate around 91 percent of the time.
In a sense, that may sound like an impressive figure. But here's an even more impressive one: five trillion. That's roughly the number of search queries that Google processes every year, translating to tens of millions of wrong answers that the AI Overviews are providing every hour, and hundreds of thousands every minute, the analysis calculated.
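The scale described above is easy to verify with back-of-the-envelope arithmetic. A minimal sketch, assuming the article's two figures (roughly 5 trillion queries per year, 91 percent accuracy) and assuming, as a simplification, that every query produces an AI Overview:

```python
# Rough scale of wrong AI Overview answers, using the figures quoted
# in the article: ~5 trillion queries/year, 91% accuracy. Treating
# every query as producing an Overview is a simplifying assumption.
queries_per_year = 5_000_000_000_000
error_rate = 1 - 0.91  # 9% of answers inaccurate

wrong_per_year = queries_per_year * error_rate      # 450 billion/year
wrong_per_hour = wrong_per_year / (365 * 24)        # ~51 million/hour
wrong_per_minute = wrong_per_hour / 60              # ~856,000/minute

print(f"{wrong_per_hour:,.0f} wrong answers per hour")
print(f"{wrong_per_minute:,.0f} wrong answers per minute")
```

Those rough totals line up with the article's "tens of millions every hour and hundreds of thousands every minute."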
In other words, Google has created a misinformation crisis. Studies have shown that people tend to trust what an AI tells them without question, with one report finding that only 8 percent of users actually double-checked an AI's answer. Another experiment found that users still listened to the AI when it gave them the wrong answer nearly 80 percent of the time, a grim trend the researchers dubbed "cognitive surrender."
-snip-
2naSalit
(103,032 posts)
Propaganda generator, great.
benfranklin1776
(7,020 posts)
Further proof humans shouldn't outsource thinking critically to a machine and unquestioningly accept what it tells them.
2naSalit
(103,032 posts)

eShirl
(20,292 posts)

OhioBack2Blue
(126 posts)
The great corrupter. A paradox quietly emerges. As we chase the central idol, we, as humans, our societies, and our environment become increasingly reduced, hollowed out, nothing.
sanatanadharma
(4,090 posts)
Posts and questions that I publish on an obscure forum on the internet turn up as Google AI sources when one googles the subject.
Google does not believe in second source confirmation. "If it is on the internet, it is true" is not a false-fact within the alternative-truth AI alternet (sic) reality world.
MineralMan
(151,343 posts)
I post here and there on some pretty obscure subjects. I'm very careful about accuracy in those posts. I've noticed, as you have, that information I have posted is coming up in AI summaries about those subjects. Fortunately, that information has been correct, so far, but there is no attribution in the AI summaries. That's troubling, because it makes fact-checking more difficult.
The other thing I'm seeing in subjects with which I'm very familiar is that Google AI summaries are superficial for the most part and often look like a school kid trying to pad out information in an essay or paper. For that reason, I do not rely on AI summaries for anything.
TBF
(36,792 posts)
When you do a search you get an answer - sometimes it seems decent, but other times it's incomplete or even stretching to find anything even quasi-related.
I think we're going to find that AI is like so many other tools - useful sometimes. We really need to regulate usage and also consider the resources we are using to generate such a tool. Is it really worth it?
Ms. Toad
(38,678 posts)
I have yet to find a single completely accurate summary.
I'd be willing to believe that 91% of individual facts are accurate - but not that 91% of summaries are.