15 hours ago · Tech · 0 comments

In the age of ChatGPT-style assistants, a paragraph that "sounds right" can appear in seconds. Yet the very thing that makes AI so attractive, its ability to produce plausible text on demand, is also its Achilles heel, and a force that pushes you to own its mistakes. If your product's promise rests on those "plausible" outputs, you're trading certainty for a statistical gamble.

We've moved from great code autocompleters to "it's been a month since I last wrote code for my merge requests", and every release, feature by feature, keeps widening the product's surface of statistical gambling. All in? Is the four-eyes principle on merge requests enough of a stop loss? Once you see that clearly, the follow-up question is whether the trade is worth it. As a software product builder and engineer, have I opened a winning position, or am I REKT? Wait: how is my internal software design holding up? Is it defensive enough? Is it self-healing enough? What makes it impossible, in this service, for an AI agent to wipe out a production…
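The defensive-design questions above can be made concrete. One small sketch, purely illustrative and not from any particular codebase (every name here is hypothetical): a gate that sits between an AI agent and production, refusing destructive statements unless a human has explicitly opted in.

```python
import re

# Hypothetical guardrail: an allow-list gate in front of anything an
# AI agent may execute against production. Keywords and names are
# illustrative assumptions, not a real library's API.
DESTRUCTIVE = re.compile(r"\b(drop|truncate|delete|alter)\b", re.IGNORECASE)


class GuardrailViolation(Exception):
    """Raised when an agent-generated statement is refused."""


def vet_statement(sql: str, allow_destructive: bool = False) -> str:
    """Return the statement unchanged, or refuse it outright.

    Any DROP/TRUNCATE/DELETE/ALTER is rejected unless a human
    explicitly passed allow_destructive=True.
    """
    if DESTRUCTIVE.search(sql) and not allow_destructive:
        raise GuardrailViolation(f"refused destructive statement: {sql!r}")
    return sql


# Reads pass through; a destructive write from an agent does not.
vet_statement("SELECT * FROM users")          # allowed
# vet_statement("DROP TABLE users")           # raises GuardrailViolation
```

A real stop loss would go further than string matching: a read-only database role for the agent, soft deletes, and point-in-time recovery, so that even a refused check that slips through cannot make the loss permanent.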
