I like LLM-generated code more than most people do, but I'm pretty sure I dislike LLM-generated English more than most people do. There's plenty of anti-LLM sentiment out there, but surprisingly (to me) little theorizing about what is wrong with it, and why. So, here's my view.

A lot of what we get from writing, we get over time or from scrutiny. In both fiction and nonfiction, we often underrate how much of a text's meaning and import comes to us only after investments of time and effort. LLM-generated English does not repay those investments the way human-generated English can. So, presenting LLM-generated English as human-generated English violates a trust: you are implicitly asking someone to consider something with care and to think beyond its surface, but the writing will not reward that care and time.1

Relatedly, LLM-generated writing is disproportionately manipulative. Precisely because it can't do much else, it trades in formulaic contrasts, cheap sensationalism, and…