Is there such a thing as a “reliable source” online anymore?
In school, we all learned about reliable sources online via a sort of pyramid metaphor: .gov and .edu websites at the top, followed by mainstream newspapers and magazines, with random blogs and bullshit at the bottom. If your teacher was bad, Wikipedia belonged in the bottom category; if your teacher was good, they pointed out that it can be a good place to get an overview, but that you should then click on the citations and follow them to the underlying sources.
Everything was so neat and tidy then, right? It was very easy to follow up the warning of “You can’t trust everything you read online” with the reassurance of “But there is great information if you know where to look.” I don’t think this is true anymore.
- Google sucks now, because it’s full of LLM-generated spam.
- Academic sources also suck now, because of old problems (p-hacking) and new ones (LLM-generated spam). And they don’t do breaking news or product reviews.
- Wikipedia is OK, but it has its own political biases, and people aren’t contributing the way they used to, because they don’t visit the site as much anymore; they just read the LLM summary at the top of the Google results.
- For a while, message boards like Reddit flew under the radar as a source of authentic information on things like “Is this laptop gonna break in 6 months?” and “Is this auto repair store gonna fuck me over?”, but I have long suspected that Reddit was astroturfed from wall to wall, and (you guessed it) LLM bots are doing little to help the situation.
Oddly, the quickest and most reliable source for information of the form “just give me a basic overview of what this thing is” is often an LLM itself these days. It gives you concise bullets, no frills, and you control the prompt: you can ask, say, for a best-effort summary of both sides of a debate, unlike the LLM summaries you encounter on websites, which are skewed toward a particular point of view.
Unfortunately, that won’t last long, because all the LLM chatbots are now connected back to Google search. Instead of hallucinating averaged-out information (which is at least sometimes useful), they will begin reporting real but misleading information as found on Google, which has been overrun with LLM bullshit. You know what I’m saying? LLMs stepped in to fill a gap that Google stopped serving around 2015 or so, but they are now undermining their main value proposition precisely by being so cheap. Soon, asking GPT for laptop recommendations will be equivalent to asking an LLM to summarize the top clickbait article on Google with some dumb title like “top 10 laptops in 2025,” and it won’t take long after that for the AI companies to just make explicit marketing deals to put product placement in LLM responses.
The only websites I can sort of trust these days for reliable information about what’s happening in the world are basically legacy news websites. But even those have particular topics they like to showcase more than others. And it creates an epistemological problem when you have no reference point by which to judge whether the newspapers themselves are credible. Even the phrase “mainstream media” carries positive or negative associations depending on your political beliefs.