I lead a medium-sized team at a cybersecurity startup - fast-moving, high standards, low tolerance for bugs in prod. I'm in charge of technical direction, architecture, and defining and raising engineering standards - and AI has thrown a wrench into my perfect world. As revolutionary as AI is, it has also been the source of many pains, chiefly: superficial planning, reduced code quality, increased test fragility, and more bugs.

If you're facing something similar - whether you're leading AI adoption at your company or are a member of a team that's evidently flailing - I hope you find something useful here. If you're already sold on AI and just want the process, skip to Enforcing Good Usage.

Note that this guide focuses on the most obvious symptom of AI abuse - code quality - rather than other important aspects of the job: planning, maintenance, and triage. I may do a follow-up on each of these in the near future.

TL;DR: I wrote this for dev leads but…