unbiased, or use of AI as a pretense of knowledge and skills they don't have and won't bother to develop as long as they're content with that AI-assisted pretense (a pretense that will be exposed as the joke it is the moment they're deprived of those AI tools).
I skimmed the Substack, didn't look at the video.
You could imagine a future where, say, you ask for the latest news on foreign policy, and the answer you receive from your chatbot is powered by The Atlantic or The Economist (or take your publication/journalist of choice). A journalist on the other end loads their reporting (in whatever format suits them: writing, video, a stack of relevant files), and AI chatbots pull from that data but convey it in a format that suits you, the recipient. Your stories could arrive as two-minute audio summaries, 30-second videos, or traditional articles, whatever you prefer. That kind of versatility becomes possible when the delivery is AI-powered.
We don't lose the facts or accuracy; we just receive the story in the context of the new general-purpose technology. I, for one, would feel significantly better about my AI-generated briefing if I had those guarantees, knowing the underlying sources are vetted institutions I trust, not an opaque mix of scraped content and synthetic nonsense.
That reflects a misunderstanding of how GenAI - LLMs - works. Even with high-quality training data, these models can hallucinate. GenAI that's supposed to summarize a meeting, for instance, can invent people who weren't there and conversations that didn't take place. GenAI quoting actual sources can scramble citations and invent quotes.
And the AI companies that claim to care about both accuracy and copyright when they make deals with established mainstream media are just going through a charade - pretending to care about truth and intellectual property rights. They aren't training new AI models from scratch only on the outlets they've made deals with. All the old training data is still there, and they're continually scraping the internet and stealing from everyone they haven't made deals with. That all goes into the mix. OpenAI has publicized deals with both right-wing and mainstream media. You can't coherently summarize contradictory stories about the same event; at best you'll get both-sidesism with little or no background to help you judge which account is more accurate.
The "personalization" can be very misleading in terms of objectivity, too, but is designed to make the AI's output appeal to the user. It's a sycophantic parody of real journalism.