If you think LLMs are better than people it's because you don't understand relationships
In response to “LLM problems observed in humans” by Jacob Kastelic, which makes the familiar point that the bar for LLMs and the Turing test keeps getting higher, and that it’s about time we admitted LLMs are already outperforming real people at many things.
I don’t really disagree with what Jacob’s arguing here—I’ve been saying the same thing since, like, GPT-2: people’s critiques of LLMs and AI often reflect an unreasonably generous view of human capabilities. The fact that LLMs can construct grammatical, legible sentences at all puts them ahead of a large chunk of adult humans. And yeah, the common criticisms about hallucinations and slop have always rung hollow to me—have you seen the kind of shit your uncle posts on Facebook?
The thing is, though, Jacob frames his argument in a weird way. He doesn’t argue that LLMs are getting pretty good and we should use them for the things they are good at. Instead, his argument is that people suck, and that we should feel bad for him for being so smart that he can tell. For example:
> Despite exhausting their knowledge of the topic, people will keep on talking about stuff you have no interest in. I find myself searching for the “stop generating” button, only to remember that all I can do is drop hints, or rudely walk away.
I don’t find this framing compelling at all. Like, dude, get better friends? Find people who share your interests? Learn to firmly but politely end the conversation? I certainly have this problem too, e.g. when people tell me way more than I care to know about sports gambling or credit card rewards. But just as nobody likes being bored, nobody likes being boring either. So here’s what you say: “That’s cool, man, I definitely want to read more about it next time I need a new credit card. Say, have you seen anything good on Netflix?” This is a basic social skill: gracefully changing the subject.
I won’t go one by one through Jacob’s other examples, but the post is a list of LLM features that he misses in human conversations, e.g.:
- The inability to turn on a “thinking mode” that is more grounded in Google research
- The fact that some people have a “small context window,” i.e. the conversation eventually drifts off topic
OK, and? Where’s the love, man?
Jacob thinks he’s onto an insight by positioning humans and LLMs as competitors and pointing out all the new ways in which humans don’t stack up, but the problem with this framing is that that’s not what humans are for. I don’t know where Jacob got this idea—that humans are a sort of transactional machine where you put a prompt in and get a peppy research summary out. Like, what a sad way to live? Humans are good for so much more than that, e.g. making hilarious, off-color jokes; crying with you at the club; sex; and dining in.
Call me a romantic, but I simply don’t mind if my friends are boring sometimes or believe in weird conspiracy theories (but please, if we could cool it with the racism…) or don’t understand science. That was never my bar for friendship, and the fact that LLMs exist doesn’t change anything about the situation.