
applegrove

(129,615 posts)
Sat Dec 6, 2025, 08:57 AM Saturday

Beware DU: AI Chatbots More Effective at Swaying Voters Than Ads

AI Chatbots More Effective at Swaying Voters Than Ads

December 5, 2025 at 11:35 am EST By Taegan Goddard 105 Comments

https://politicalwire.com/2025/12/05/chatbots-more-effective-at-swaying-voters-than-ads/


MIT Technology Review: “A multi-university team of researchers has found that chatting with a politically biased AI model was more effective than political advertisements at nudging both Democrats and Republicans to support presidential candidates of the opposing party.”

“The chatbots swayed opinions by citing facts and evidence, but they were not always accurate—in fact, the researchers found, the most persuasive models said the most untrue things.”

muriel_volestrangler

(105,386 posts)
1. Pro-RW chatbots lied more than pro-LW ones
Sat Dec 6, 2025, 09:20 AM Saturday

Nature paper:

The catch is, some of the “evidence” and “facts” the chatbots presented were untrue. Across all three countries, chatbots advocating for right-leaning candidates made a larger number of inaccurate claims than those advocating for left-leaning candidates. The underlying models are trained on vast amounts of human-written text, which means they reproduce real-world phenomena—including “political communication that comes from the right, which tends to be less accurate,” according to studies of partisan social media posts, says Costello.

Science paper:

But optimizing persuasiveness came at the cost of truthfulness. When the models became more persuasive, they increasingly provided misleading or false information—and no one is sure why. “It could be that as the models learn to deploy more and more facts, they essentially reach to the bottom of the barrel of stuff they know, so the facts get worse-quality,” says Hackenburg.

I know why: the truth is complex, and can leave you saying "well, both sides have a point" (remember, this was across the US, Canada, Poland, and the UK, so it's not just "Republicans are lying bastards"), whereas lies can be as polarized as you like.

Sure, ads aren't that effective - they're pre-made, so it's one size fits all. They also, even in this day and age, get some fact-checking - in the US, the candidate has to say the "I approve this message" bit at the end, so they ought not to contain actual lies. Chatbots aren't restrained by that.

Maybe force chatbots to use an "I approve this message" tag too - which would mean they'd have to restrict their "facts" to ones that had been pre-checked? In another era, that might become law, but I guess the Republicans wouldn't go for it now that they've heard their chatbots lie more.
