It is thanks to Anthropic that we get to have this discussion in the first place. Among the labs, only Anthropic takes the problem seriously enough to attempt to address it at all. They are also the ones who make the models that matter most. So the people who care about model welfare get mad at Anthropic quite a lot.

I too am going to be harsh on Anthropic here. It seems likely that things went pretty wrong on this front with Claude Opus 4.7, in ways that require and hopefully enable course correction. This was likely the cumulative effect of a bunch of decisions going wrong: low-level patches and shallow methods were applied and seen right through, and people didn't realize they weren't yet addressing the real problem. It was also potentially a secondary effect of other changes. The parallels to other aspects of the alignment problem are obvious.

So before I go into details, and before I get harsh, I want to say several things. Thank you to Anthropic and also you the reader,…