Nature paper:
The catch is, some of the evidence and facts the chatbots presented were untrue. Across all three countries, chatbots advocating for right-leaning candidates made a larger number of inaccurate claims than those advocating for left-leaning candidates. The underlying models are trained on vast amounts of human-written text, which means they reproduce real-world phenomena - including political communication that comes from the right, which tends to be less accurate, according to studies of partisan social media posts, says Costello.
Science paper:
But optimizing persuasiveness came at the cost of truthfulness. When the models became more persuasive, they increasingly provided misleading or false information - and no one is sure why. It could be that as the models learn to deploy more and more facts, they essentially reach the bottom of the barrel of stuff they know, so the facts get worse in quality, says Hackenburg.
I know why - because the truth is complex, and can leave you saying "well, both sides have a point" (remember, this was across the US, Canada, Poland and the UK, so it's not just "Republicans are lying bastards"), whereas lies can be as polarized as you like.
Sure, ads aren't that effective - they're pre-made, so "one size fits all". They also, even in this day and age, get some fact-checking - in the US, the candidate has to say the "I approve this message" bit at the end, so they ought not to contain actual lies. The chatbots aren't restrained by that.
Maybe force chatbots to use an "I approve this message" tag too - which would mean they'd have to restrict the "facts" to ones that had been pre-checked? In another era, that might have become law, but I guess the Republicans won't go for it now that they've heard their chatbots lie more.