9. You can actually set most LLMs to weight objective fact over probabilistic determination.
Thu Dec 4, 2025, 12:53 PM

If objective fact is weighted some factor higher than probabilistic infill, the model will (if you allow it) automatically search for the sites and data sources it rates most factual, then present them to you as links for corroboration and continued research.
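A minimal sketch of what "rated a certain factor higher" could look like in practice. The scoring, threshold, and URLs here are entirely made up for illustration; a real system would derive factuality scores from its own source-ranking pipeline.

```python
# Illustrative only: sources carry a hypothetical factuality score, and
# anything that clears the weight threshold is returned as a
# corroborating link, best-rated first.

def corroborating_links(sources, threshold):
    """Return source URLs whose factuality score meets the threshold,
    sorted from highest-rated to lowest."""
    ranked = sorted(sources.items(), key=lambda kv: kv[1], reverse=True)
    return [url for url, score in ranked if score >= threshold]

# Hypothetical scores for demonstration purposes.
sources = {
    "https://example.gov/report": 0.95,
    "https://example-blog.net/opinion": 0.40,
    "https://example.edu/study": 0.88,
}

links = corroborating_links(sources, threshold=0.8)
# links -> ["https://example.gov/report", "https://example.edu/study"]
```

The point is the ordering of steps, not the numbers: score first, filter against the weight, then surface only what survives as links.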

I've configured a few assistants using certain open-source LLM models on an air-gapped tower; they respond in a strictly factual mode unless prompted otherwise.
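For anyone curious what "factual mode unless otherwise prompted" looks like under the hood, here is a hedged sketch. The system-prompt wording and the `build_request` helper are my own illustrations, not the poster's actual setup; `temperature` and `top_p` are the standard sampling knobs most open-source chat stacks expose.

```python
# Sketch of a request to a locally hosted open-source model, biased
# toward factual recall. The prompt text and helper are illustrative.

FACTUAL_SYSTEM_PROMPT = (
    "Answer only with statements you can ground in established fact. "
    "If you are not certain a claim is factual, say you do not know "
    "rather than guessing. Do not speculate unless explicitly asked."
)

def build_request(user_question):
    """Assemble a chat-completion request that favors factual answers."""
    return {
        "messages": [
            {"role": "system", "content": FACTUAL_SYSTEM_PROMPT},
            {"role": "user", "content": user_question},
        ],
        # Low temperature and top_p narrow sampling toward the model's
        # highest-probability continuations instead of creative infill.
        "temperature": 0.0,
        "top_p": 0.1,
    }

req = build_request("When did the Apollo 11 landing occur?")
```

Because the model runs on a machine with no internet connection, everything in the response comes from the model weights plus whatever the system prompt allows, which is exactly why the prompt tells it to admit uncertainty.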

In essence, like any tool, AI is only as good as the person using it. Consider how many people never learned the simplest Google operators, like wrapping a phrase in quotes to search for it exactly, or prefixing a term with - to exclude it from results.
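Those two operators are easy to demonstrate. The helper below (its name is my own) builds a Google search URL with a quoted exact phrase and optional excluded terms:

```python
# Build a Google search URL using the two operators mentioned above:
# "..." for an exact phrase, and -term to exclude a term.
from urllib.parse import urlencode

def google_query(phrase, exclude=()):
    """Return a search URL for an exact phrase, minus excluded terms."""
    q = f'"{phrase}"'
    for term in exclude:
        q += f" -{term}"
    return "https://www.google.com/search?" + urlencode({"q": q})

url = google_query("large language model", exclude=["marketing"])
# url -> https://www.google.com/search?q=%22large+language+model%22+-marketing
```

The same lesson carries over to LLMs: a small amount of operator-level knowledge changes what the tool gives back.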

AI isn't hard or nebulous. It's quite fact-based, but you need to know what you're doing if you want to get the most out of it.
