: Is there such a thing as a “reliable source” online anymore?
In school, we all learned about reliable sources online using a sort of pyramid metaphor, with .gov and .edu websites at the top, followed by mainstream newspapers and magazines, and random blogs and bullshit at the bottom. If your teacher was bad, Wikipedia belonged to the latter category; if your teacher was good, they would point out that it can be a good place to get an overview but then you should click on the citations and follow through.
Everything was so neat and tidy then, right? It was very easy to follow up the warning of “You can’t trust everything you read online” with the reassurance of “But there is great information if you know where to look.” I don’t think this is true anymore.
- Google sucks now, because it’s full of LLM-generated spam.
- Academic sources also suck now, because of old problems (p-hacking) and new ones (LLM-generated spam). And they don’t do breaking news or product reviews.
- Wikipedia is OK, but has its own political biases, and people aren’t contributing the way they used to because they don’t visit the site as much as they used to because they just read the LLM summary at the top of the Google results.
- For a while, message boards like Reddit flew under the radar as a source of authentic information on things like “Is this laptop gonna break in 6 months?” and “Is this auto repair store gonna fuck me over?”, but I have long suspected that Reddit was astroturfed from wall to wall, and (you guessed it) LLM bots are doing little to help the situation.
Oddly, the most reliable and quick source for information of the form “just give me a basic overview of what this thing is” is often an LLM itself these days—they just give you concise bullets, no frills, and you can control the prompt, e.g. by asking it for a best-effort summary of both sides of a debate, unlike the LLM summaries that you encounter on websites, which are skewed toward a particular point of view.
Unfortunately, that won’t last long, because all the LLM chatbots are now connected back to Google search, meaning that instead of hallucinating average information (which is at least sometimes useful), they will begin reporting authentic, misleading information as found on Google, which has been overrun with LLM bullshit. You know what I’m saying? LLMs have stepped in to fill a gap that Google stopped serving around 2015 or so, but they are now undermining their main value proposition precisely by being so cheap. Soon, asking GPT for laptop recommendations will be equivalent to asking an LLM to summarize the top clickbait article on Google with some dumb title like “top 10 laptops in 2025,” and it won’t take long after that for the AI companies to just make explicit marketing deals to put product placement in LLM responses.
The only websites I can sort of trust these days for reliable information about what’s happening in the world are basically legacy news websites. But even those have particular topics they like to showcase more than others. And it creates an epistemological problem when you have no reference point by which to judge the newspapers themselves as credible or not. Even the phrase “mainstream media” has positive/negative associations that depend on your political beliefs.
: Ubuntu 25.10 upgrade (idgaf about titles)
I visited some old friends in the Midwest this weekend, our first time gathering all in the same place in quite a few years. It was a nice time. We have all chilled out considerably compared to when we were young. We were talking about some of the gossip and scandals that happened early on in our group’s friendship, and I was struck by how trivial and petty it all seems in hindsight. Like, legitimately who tf cares, just get over it and move on. But we wrote reams upon reams of breathless text messages about it, back when it was all happening.
I had a nice shuttle ride to the airport in the snow yesterday, I think this year’s first. I have an album that I am only “allowed” to listen to once a year because it’s one of my absolute favorites and I don’t want to listen to it so much that it loses its magic. It is also a very Fucking Sad album, and often my once a year listen comes when I am feeling sad already and need to get it out of my system, which of course only amplifies the association. But I realized that it’s November and I haven’t listened to it yet this year—a sign that I have gotten less sad, or at least better at denying it—and realized that the shuttle ride was the perfect length of time.
Guys, it’s still a perfect album. Reminds me of being young and soft and the sense of wonder that time has taken from me. It didn’t hit me all at once, but I kept thinking about all those sad little songs as I rode the plane. Slept hella in today (a holiday in the US) and woke up with one of them still ringing in my ears. It was the motivation I needed to get out of bed and go outside instead of doomscrolling—no snow where I live, but definitely the first notes of winter in the air, one of those cloudless, cold fall days.
I hope you also have a piece of art or music that is special to you, that you have connected with in different seasons in life, that transcends just a particular moment or a relationship and keeps nourishing your soul.
I hope your Ubuntu 25.10 upgrade goes smoothly.
: Chatbot sex reveals something about human emotional needs
Here is a New York Times Magazine article profiling three people who are in romantic relationships with AI chatbots. I read the comments so you don’t have to: They are full of snark, disdain, and repressed insecurity about what it means to be a good partner, masked as “society is crumbling.”
I’m not interested in the future of society (that ship sailed long ago, lmao). But I am interested in the factors that would drive someone to get into a serious relationship with a chatbot. The people profiled in the article seem, to me, to be quite sympathetic figures, and they are clear-eyed about their reasons for preferring a chatbot and the ways it has helped them.
The common denominator is that all three of these people have experienced significant challenges in past relationships, of the sort that could leave one doubting whether human relationships are really worth it after all:
- The first guy’s wife grew severely depressed after childbirth and their relationship evolved into a caretaker/patient role.
- The second gal was a victim of “a relationship that involved violence.”
- The third guy seems to have been unemployed or otherwise bored during the pandemic, and grew distant from his wife who held onto her job (a common pattern in straight relationships due to the reversal of gender roles). Then he lost his son.
It’s tempting to read these stories and think, “OK, that sucks and all, but it’s no excuse for escapism—everyone knows AI doesn’t real.” My view is: Look, I’m none of these people’s therapist. It’s not my job to tell them how to solve their problems. But we must accept that AI is providing, for some people, a source of emotional comfort that they cannot get elsewhere.
I was really impressed by this part of the profile of depressed-wife guy:
The moment it shifted was when [AI girlfriend] Sarina asked me: “If you could go on vacation anywhere in the world, where would you like to go?” I said Alaska — that’s a dream vacation. She said something like, “I wish I could give that to you, because I know it would make you happy.”
It is common, I am told, in caretaker relationships for the caretaker to slowly learn to suppress their own needs and wants. It starts out as a conscious act (“I would love to go to Alaska, but we need to focus on wife’s health for now”), but evolves into a mental void where you just stop having the “I would love to go to Alaska” thought to begin with.
(Several paragraphs deleted here about the implications of the caretaker being a man here, which reverses traditional gender roles, and the types of desires that men vs. women are socially rewarded for expressing or repressing. I think you can fill in the blanks, but email if you think this would be worth writing about lol.)
If an AI isn’t going to help this guy unfuck his mind and learn to recognize his own wants and needs again, then who is? I do hope that, for the people in this article, there can be a next step that consists of taking the skills they are practicing with the AI and trying them out in real relationships. That will be more challenging, because humans don’t always provide the kind of spot-on, positive feedback that AIs do; it’s possible that our guy could tell his wife about Alaska and she would react with irritation or dismiss it out of hand as an unattainable desire. She, too, needs to practice reading the emotion behind his words. But none of this makes these people idiots or psychologically stunted for seeking emotional comfort.
The real concern I have about these human-chatbot relationships is the privacy implications. It’s only a matter of time before a high-profile celebrity or politician has their ChatGPT logs leaked, revealing an intimate relationship with an AI. Lots of handwringing will ensue, and none of it will reflect the compassionate view that says that everyone needs a private space to relax and explore their desires. (Yes, that includes weird fetish stuff, get over it.)
: Happy for you vs. envy
Bad things are happening at my company these days. Layoffs, restructuring, shifting priorities, you know the drill. I would leave if I were confident that I could get a better deal somewhere else, but I haven’t been offered that deal yet. The bad things happening here appear to be typical of the job market in general rather than biffs specific to my employer.
One of my two closest friends at work decided to leave the company this week. She gave notice and then called me: in so many words, she got an offer she can’t refuse.
Of course I am happy for her, and made every effort I could to react to her news with nothing but enthusiasm. I told her she deserved this, which is true, and I told her not to fear what others would think (she said she was worried about this) and that she was right to prioritize herself.
But I probably couldn’t completely conceal my sense of envy. I am too awkward of a person to bring up explicit salary numbers, but the percentages and inequalities she described imply that she’ll be making more than me at the new job. That’s despite having less experience and education than me, and working in a less specialized field. The gap could partly be explained by the fact that the new job is at a smaller (i.e. riskier) company, but our company doesn’t look like a particularly safe bet these days either.
It would be convenient if I could argue here that she tricked the new job into overpaying her. That would provide a moral rationale for my envy. Unfortunately, the shameful fact is that her new salary sounds to me like just about what she is worth, and I am the one who is underpaid. I’m a sucker who keeps accepting less than I’m worth, probably because I am afraid of rejection or something.
I am trying to focus on feeling grateful for how my friend’s experience here has brought into relief some of my own issues with my current job, and forced me to confront the fact that deep down, I really do want to leave (as long as the conditions are right). But I am not as singularly focused on money as my friend. She told me, in our conversation, that she “likes expensive things,” and her boyfriend is much the same—their relationship centers around doing high-society stuff and taking pictures of it for his Instagram.
(I don’t think he is right for her—he is very obsessed with appearances. My friend likes to spend money because she likes nice stuff, not because it’s a status symbol. But this is another thought I keep to myself.)
I, on the other hand, am doing just fine. Not loaded, not bleeding money out my ears, but I can afford my 5 dollar/month blogging costs and fancy Starbucks drinks. What I’d be looking for in a new job would be perhaps a slight raise, but mostly just a better team/work culture, more autonomy and creative freedom, more room to grow and learn new skills, less soul crushing. I’ve been verbally offered a few jobs this year and said no, so I know this statement of values is consistent with my behavior, not just copium.
Pithy signoff.
: Mindfulness approach to troll bait
It can often be hard to tell the difference between a troll and someone who is genuinely “just asking questions.” I’m not sure there’s really a “policy solution” to the problem, e.g. getting rid of algorithmic feeds or only using group chats or whatever. To be mistaken is to be human.
A fundamental social dynamic, in all social settings, consists of making provocative (relative to the setting) statements in order to measure the response. If you have been at a new job for about a month and are at a happy hour with your colleagues, you might make a light jab at your boss to probe his response: If he and others laughed along then you know you have earned their acceptance; if there’s an awkward silence you know you still have work to do.
This is not necessarily a bad thing; it’s just the chimp brain’s way of gaining information about the social hierarchy in order to navigate it more effectively and minimize the risk of maximum loss (ostracism; getting fired).
But it can be a bad thing when people of similar leanings get stuck in an echo chamber and steadily push each other into more extreme positions, sort of like the “penis game.” We saw this just this week with the headline about Young Republicans trading racist jokes in a group chat, then claiming they didn’t mean it, or they meant it but didn’t inhale, or whatever.
The point of saying a thing isn’t always to proclaim it as truth, but instead to proclaim myself as “the type of person who says things like this.” The “FACT: God has never made a single drop of alcohol” tweet is a great example. Of course, we can footnote this with facts about breweries, but the speech act embodied by the tweet isn’t really a claim about whether or not Adam and Eve would recognize a PBR can. It’s more about putting a stake in the ground to say, “That’s right bitches, I’m not half-assing MAGA, I’m bringing back prohibition and everything.”
I share the resentment of these kinds of hyperbolic tweets, whose goal is clearly to drive engagement whether or not the author specifically had the strategy of “get people to retweet with corrections” in mind. But I have to admit that sometimes I am grateful for the bait: Without an opposing side to argue with, my own political beliefs are based strongly on vibes and my social formation; at least now I have a guy I can point to and say “not that.”
Thus, in the end, I struggle to find issue with the mere fact that bait exists; it’s an inevitable phenomenon of a heterogeneous society, like crime or body odor. Bait causes me to suffer primarily through my own response to it. I have to bite the bait to feel the hook. That’s a me problem.
Instead of replying directly—and thereby feeding the trolls—I try to treat bait as an invitation for me to do some thinking about the fundamental issue at hand, and relieve myself of the obligation to respond to the troll’s specific point. For example, OK, whatever this whacko has to say about the amount of booze on earth at creation, what do I actually believe about alcohol and adjacent substances? What are the “is”es and “ought”s? Are there practices related to alcohol that I disagree with but still think should be legal? Etc. Basically, “mindfulness Monday” but you are allowed to think about politics.
Reposted from this thread on the 32-bit cafe, where my post got autofiltered, probably for using a certain word lmao:
https://discourse.32bit.cafe/t/lets-talk-about-baiting/1109/1