Yes — and in fact, we’re already edging toward that reality. Here’s what ChatGPT sees coming down the pike, based on how things are evolving:
1. User-Defined Source Preferences
We’re likely to get something like:
“Use only academic, nonprofit, and independently funded sources.”
“Avoid any domain known to use affiliate marketing.”
“Favor peer-reviewed or original journalism.”
Imagine a little toggle: “Trustworthy Sources Mode.” Perplexity already lets you filter by domains like .edu, .org, and even individual sources like The Atlantic or NPR — but it’s still pretty manual. ChatGPT isn’t quite there yet, but it’s technically trivial for OpenAI to implement. And they know this is the direction users want.
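To make the idea concrete, here's a minimal sketch of what a "Trustworthy Sources Mode" filter could look like under the hood. Everything here is hypothetical (the result records, the blocklist, the function names) — no real ChatGPT or Perplexity API is being called:

```python
# Hypothetical sketch: filtering retrieved results by user-defined source rules.
# The blocklist domains and result records are invented for illustration.

AFFILIATE_HEAVY = {"top10gadgets.example", "bestreviews.example"}  # imagined blocklist

def filter_sources(results, allowed_tlds=(".edu", ".org", ".gov"), blocklist=AFFILIATE_HEAVY):
    """Keep only results whose domain matches the user's stated preferences."""
    kept = []
    for r in results:
        domain = r["domain"]
        if domain in blocklist:
            continue  # "Avoid any domain known to use affiliate marketing."
        if any(domain.endswith(tld) for tld in allowed_tlds):
            kept.append(r)  # "Use only academic, nonprofit, and government sources."
    return kept

results = [
    {"title": "Peer-reviewed study", "domain": "pubs.university.edu"},
    {"title": "Top 10 Air Fryers", "domain": "top10gadgets.example"},
    {"title": "NPR report", "domain": "npr.org"},
]
print([r["title"] for r in filter_sources(results)])
# -> ['Peer-reviewed study', 'NPR report']
```

The toggle itself would just swap this filter in or out of the retrieval pipeline — which is why it's fair to call the feature technically trivial.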
2. Source Transparency Ratings
Models could begin attaching trust scores or bias labels to their sources:
- “This answer draws from a site with a strong SEO-commercial focus.”
- “This source is government-funded and peer-reviewed.”
- “This site monetizes heavily via affiliate links and product placements.”
That’s not just a wishlist — this kind of source labeling is already being explored in academic and research-oriented LLM projects.
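A trust label like the ones above could be generated from simple signals attached to each source record. The sketch below is purely illustrative — the signal fields and thresholds are invented, not drawn from any real system:

```python
# Illustrative sketch: turning source metadata into a transparency note.
# Field names (funding, peer_reviewed, affiliate_link_ratio) are hypothetical.

def transparency_label(source):
    """Return a human-readable trust note for a source record."""
    notes = []
    if source.get("affiliate_link_ratio", 0) > 0.3:
        notes.append("monetizes heavily via affiliate links")
    if source.get("funding") == "government":
        notes.append("government-funded")
    if source.get("peer_reviewed"):
        notes.append("peer-reviewed")
    return "; ".join(notes) or "no strong signals detected"

print(transparency_label({"funding": "government", "peer_reviewed": True}))
# -> government-funded; peer-reviewed
```

A real deployment would need far richer signals (ownership, disclosure pages, ad density), but the output shape — a short note attached to each citation — is the interesting part.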
3. Custom Knowledge Bases (BYO Sources)
In the not-so-distant future:
- You might upload your own preferred sources — like, say, the entire smays.com blog — and have the LLM prioritize those over the web.
- Or subscribe to a vetted feed of sources maintained by trusted curators (think: “Taoist Tech Digest” or “Vintage Vehicles Weekly”).
This gives you a bespoke answer engine that thinks like you.
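The "prioritize my sources over the web" idea can be sketched as a toy ranking function. The scoring here is naive keyword overlap and the `user_boost` knob is invented — a real system would use embeddings — but it shows the mechanism: personal sources outrank generic web hits when both match a query:

```python
# Toy sketch of "bring your own sources": rank a user's uploaded documents
# ahead of web results. Scoring is naive keyword overlap, for illustration only.

def score(query, text):
    """Count how many query words appear in the text."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def rank(query, user_docs, web_docs, user_boost=10):
    # user_boost is an invented knob: any matching personal source
    # outranks any matching web result.
    scored = [(score(query, d) + user_boost, d) for d in user_docs if score(query, d) > 0]
    scored += [(score(query, d), d) for d in web_docs if score(query, d) > 0]
    return [d for s, d in sorted(scored, key=lambda p: -p[0])]

user_docs = ["smays.com post on vintage vehicles and restoration"]
web_docs = ["SEO listicle: best vintage vehicles to buy now"]
print(rank("vintage vehicles", user_docs, web_docs)[0])
# -> smays.com post on vintage vehicles and restoration
```

Swap in a curated feed for `user_docs` and you have the "Vintage Vehicles Weekly" subscription model.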
4. A Bias-Aware AI (by design)
I expect we’ll see models with built-in capabilities to:
- Detect “sponsored content language”
- Flag persuasive or manipulative rhetoric
- Alert users to conflict-of-interest signals in the writing itself
In other words: AI that doesn’t just summarize the internet, but interrogates it.
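Even a crude version of "sponsored content" detection is easy to picture. The sketch below uses a hand-made regex list; the cue phrases are my own examples, and a serious detector would be a trained classifier rather than pattern matching:

```python
import re

# Illustrative heuristic for flagging sponsored-content language.
# The phrase list is invented; a real detector would use a trained classifier.

SPONSORED_CUES = [
    r"\bas an (amazon )?associate\b",
    r"\baffiliate (link|commission)s?\b",
    r"\bsponsored (by|post|content)\b",
    r"\buse (promo|coupon) code\b",
]

def conflict_of_interest_flags(text):
    """Return the cue patterns found in the text, if any."""
    lowered = text.lower()
    return [pat for pat in SPONSORED_CUES if re.search(pat, lowered)]

sample = "This post is sponsored by Acme. As an Amazon Associate we earn from purchases."
print(len(conflict_of_interest_flags(sample)))
# -> 2
```

An "interrogating" model would run checks like this over its own retrieved sources before citing them, surfacing the flags alongside the answer.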
Will people use these features?
Maybe not the average user. But folks like you, who care about signal over noise, will absolutely want this control. And when enough people start asking, the platforms will follow.
This post introduces the five-part series.
https://www.smays.com/2025/04/llm-sources/