Imagine you’re trying to deliver groceries in a busy city using a map that was published in 1971. You’ll find yourself looking for houses, apartment blocks, entire neighbourhoods that didn’t exist when the map was drawn. This is what it’s like when an AI coding assistant or agent works on a code base using an out-of-date picture of its structure. With every change to the code, the map gets a little more out of date. With every change, the context gets a little more misleading.

The thing about context to an LLM is that it can’t distinguish fact from fiction – the real code as it is at this moment from, say, a summary of the code that was generated a bunch of changes ago. To a Large Language Model, it’s all just context. This is why it’s important to keep the information in the context as current as we can.

The implication is that we need to refresh the context after every significant structural change to the code. This won’t be the only principle that encourages…
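To make the “refresh the map” idea concrete, here is a minimal sketch in Python. The helper name `write_context_map` and the output file `CONTEXT_MAP.md` are my own inventions, not anything from a particular tool; the idea is that something like this would run after each structural change (for instance, from a post-commit hook) so the summary the assistant sees always matches the code as it is now.

```python
from pathlib import Path

def write_context_map(repo_root: str, out_file: str = "CONTEXT_MAP.md") -> None:
    """Regenerate a fresh file-tree summary so the assistant's 'map'
    reflects the code base as it currently is, not as it once was.
    (Hypothetical helper; names are illustrative.)"""
    root = Path(repo_root)
    lines = ["# Project structure (auto-generated; refresh after structural changes)"]
    for path in sorted(root.rglob("*")):
        rel = path.relative_to(root)
        # Skip hidden directories/files (e.g. .git) and the map itself
        if any(part.startswith(".") for part in rel.parts) or path.name == out_file:
            continue
        depth = len(rel.parts) - 1
        suffix = "/" if path.is_dir() else ""
        lines.append("  " * depth + "- " + path.name + suffix)
    (root / out_file).write_text("\n".join(lines) + "\n")
```

Run from a post-commit hook (or a file-watcher), this keeps the structural summary one commit old at most, rather than “a bunch of changes” old.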