Early-stage technologies tend to be inefficient. The first internal combustion engines were grossly wasteful compared to modern ones. Their makers were focused on making the damned things run, not on making them run well.

That’s where we’re at with AI. Companies are throwing stuff at LLMs to see what works. Right now, traction and speed matter more than efficiency. But that won’t always be the case. Eventually, we’ll shift to AI-powered systems that are both more efficient and more controllable.

What will they look like? My bet: a combination of more carefully structured inputs (i.e., context engineering) and good, old-fashioned deterministic programming.

I like Simon Willison’s definition of agents: “An LLM agent runs tools in a loop to achieve a goal.” The most common way to do this is to give an agentic system (e.g., Claude Code) the ability to call deterministic tools: Unix CLI utilities, APIs, and so on. It’s a powerful and flexible approach, but one that uses a lot of tokens and requires…
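To make that “tools in a loop” pattern concrete, here’s a minimal sketch in Python. It’s not how Claude Code or any real product is implemented; `call_llm` is a scripted stand-in for a real model API, and `run_shell` is the only deterministic tool, so the whole thing runs as-is.

```python
import subprocess

# Deterministic tools the agent can call: plain functions keyed by name.
TOOLS = {
    "run_shell": lambda cmd: subprocess.run(
        cmd, shell=True, capture_output=True, text=True
    ).stdout.strip(),
}

# Scripted stand-in for a real model, just to make the loop runnable:
# on the first turn it requests the `date` tool, then it answers.
def call_llm(messages: list[dict]) -> dict:
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "run_shell", "args": "date", "content": None}
    return {"tool": None, "content": f"Today is: {messages[-1]['content']}"}

def run_agent(goal: str, max_steps: int = 10) -> str:
    """Run tools in a loop until the model stops requesting them."""
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if reply["tool"] is None:  # no tool requested: goal reached
            return reply["content"]
        # Deterministic step: dispatch to an ordinary function, feed the
        # result back into the conversation for the next model turn.
        result = TOOLS[reply["tool"]](reply["args"])
        messages.append({"role": "tool", "content": result})
    return "step budget exhausted"

print(run_agent("What is today’s date?"))
```

Swap the stand-in for a real API client and the shape stays the same: every model turn either requests a deterministic tool or declares the goal achieved, and every round trip through the loop spends tokens on the whole accumulated conversation. That per-turn cost is exactly the inefficiency the passage above is pointing at.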