: I'm healing but the world is crumbling
For the past three years or so I have been on a long journey of healing—finally learning to recognize my problems for what they are, getting therapy, understanding how my childhood environment made me Like This, etc. It’s really been an amazing process—I had no idea that I was holding myself back so much, with all those unhelpful mental patterns and self-defeating narratives.
Finally, in 2026, I have a semblance of the feeling that I am in control of my life and I can start thinking about what I want to do with it instead of just letting it happen to me … but then the world goes to shit like this. It feels like such a joke, you know? What was the point of all that hard work to achieve mental clarity if all I can see with it is the collapse of civilization as I know it?
OK, I mean, maybe that’s dramatic. We still have grocery stores, gas stations, and taxes. Everything still works. But sometimes I feel like the circle of things that “at least we still have” is getting smaller and smaller, and maybe I am just moving goalposts in order to preserve my sanity instead of seeing things as they are.
Let’s run with that hypothesis—suppose everything really is going to get way, way worse from here, and suppose I have the option of either acknowledging what’s happening or sticking my head in the sand. What do I actually gain by picking the former? I am already doing everything in my (admittedly limited) power to fight for the things I believe in, so what do I have to gain by confirming that my struggles are in vain?
A buddy of mine is also predisposed to the kind of catastrophic thinking that plagues me, and he told me once that he loses sleep because he spends so much time scrolling Reddit before bed and seeing all the tales of despair. I told him to stop scrolling Reddit. It doesn’t make a lick of difference that he saw an upsetting post and felt upset about it. It doesn’t solve the problem to hook his brain up to the 24/7 news cycle and feel as awful as possible. Awareness is not results.
: If you think LLMs are better than people it's because you don't understand relationships
In response to “LLM problems observed in humans” by Jacob Kastelic, which makes the familiar point about how the bar for LLMs and the Turing test keeps getting higher and higher and it’s about time we admit that LLMs are already outperforming real people at many things.
I don’t really disagree with what Jacob’s arguing here—I’ve been saying the same thing since, like, GPT-2: people’s critiques of LLMs and AI often reflect an unreasonably generous view of human capabilities. The fact that LLMs can construct grammatical, legible sentences at all puts them ahead of a large chunk of adult humans. And yeah, common criticisms about hallucinations and slop have always rung hollow to me—have you seen the kind of shit your uncle posts on Facebook?
The thing is, though, Jacob frames his argument in a weird way. He doesn’t argue that LLMs are getting pretty good and we should use them for the things they are good at. Instead, his argument is that people suck, and that we should feel bad for him for being so smart that he can tell. For example:
Despite exhausting their knowledge of the topic, people will keep on talking about stuff you have no interest in. I find myself searching for the “stop generating” button, only to remember that all I can do is drop hints, or rudely walk away.
I don’t find this framing compelling at all. Like dude, get better friends? Find people who share your interests? Learn to firmly, but politely end the conversation? I certainly have this problem too, e.g. when people tell me way more than I care to know about sports gambling or credit card rewards. But just as nobody likes being bored, nobody likes being boring either. So here’s what you say: “That’s cool, man, I definitely want to read about it more sometime next time I need a new credit card. Say, have you seen anything good on Netflix?” This is a basic social skill: gracefully changing the subject.
I won’t go one by one through Jacob’s other examples, but the post is a list of LLM features that he misses in human conversations, e.g.
- The inability to turn on a “thinking mode” that is more grounded in Google research
- The fact that some people have a “small context window” i.e. eventually the conversation topic drifts
OK, and? Where’s the love, man?
Jacob thinks he has an insight in positioning humans and LLMs as competitors and pointing out all the new ways in which humans don’t stack up, but the problem with this framing is that that’s not what humans are for. I don’t know where Jacob got this idea—that humans are a sort of transactional machine where you put a prompt in and get a peppy research summary out. Like, what a sad way to live? Humans are good for so much more than that, e.g. making hilarious, off-color jokes; crying with you at the club; sex; and dining in.
Call me a romantic, but I simply don’t mind if my friends are boring sometimes or believe in weird conspiracy theories (but please, if we could cool it with the racism…) or don’t understand science. That was never my bar for friendship, and the fact that LLMs exist doesn’t change anything about the situation.
: I don't care about your credit card
I don’t know if this is just a me problem or not, but people love telling me about their premium credit cards. They always bring up the same points:
- It costs $400/year
- But it pays for itself
- Travel rewards and cash back
- Airport lounges
I get it, it’s just making conversation, right? But is there anyone alive who hasn’t heard this sales pitch at this point? Anyway, if you are one of these Premium Kappa Whatever reserve hustlers, here are the reasons you should reconsider evangelizing your credit card:
- I already know.
- I don’t like playing fiddly numbers games with my money—yes, even though I am good at math.
- If I wanted the card I would have signed up by now.
- I took microeconomics and understand that “it pays for itself” is a losing proposition for the creditor—the benefits will be enshittified before long.
: In service of what?
I have talked myself out of buying a new (refurbished) laptop like seven times in as many days. $600 seems like a lot of money that I could be saving for a rainy day or cozier retirement or house (hah!).
Similarly, when I have free time after work, I consistently use it to try to develop my brand (not the convexer brand—that’s an anti-brand—but networking and toiling in service of my realname brand which pays the bills) or learn some fancy new skill. Make no mistake, I love the work I do and I love learning new things. But there are two wolves within me, and the one with all the ambition drives me to overexertion sometimes. When I finally make it to 7pm, my brain is so depleted that I “need”(?) to do some stupid phone scrolling to decompress.
An endless loop of scrunching and unscrunching.
In the hypothetical retirement where I have $600 extra from not buying a laptop at the end of 2025, and perhaps a few more from being promoted for pressing good computer buttons, what would I be doing instead of the phone scrolling and self-flagellation?
I guess at the bottom of all the pressures that have scrunched me up so tight is a vision of cool vacations with my SO, a dream crafts room, and a big, heavy library of the sort one regrets when moving homes. In the abstract, I do have interests. But I don’t really have much practice at enjoying them.
Planning vacations stresses me out. Picking restaurants stresses me out. Adding items to cart stresses me out. All I can think about is the money and time draining away.
New year’s resolution: recreation.
: Turning off the guestbook
Turning off the guestbook for this site to save a few bucks a month. It was mainly an educational project for me anyway, and I have other ideas I want to play with that could benefit from an always-on VM.
There were only a few posts on there (people usually just email me) but I have a backup of the database; might try to extract it as a static page or something.