Plenty has been written about Software Agents and the impact they’ll have on software developers. There’s the “it’ll kill jobs” and “software developers are doomed” crowd. There’s the people who talk about how non-developers will now start writing bespoke tools. There’s the people who talk about how Software Agents are a climate nightmare, and the water wasted in running large language models.
What I’ve not seen is anyone talking about the management challenge.
Most management techniques and methodologies rely on a set of incentives — things to motivate the right behaviors in an employee. Since Peter Drucker wrote The Practice of Management in 1954, we have taught managers about techniques like “management by objectives” that allow managers and employees to define individual goals, align them with organizational goals, and use shared incentives to get groups of individuals to work cohesively.
Every one of these frameworks assumes the employee has something to lose.
I’ve been doing a lot of software development using software agents. I’ve almost exclusively been using Amazon Kiro, but I believe my experiences are equally shared by developers using other tools. I’ve been able to produce very functional tools and production-ready software. I’ve set up groups of agents — developers, reviewers, principal reviewers — who work together to iterate on a task and produce a higher quality product.
But it has been a challenge.
I’ve written about the Lazy Agents. I’ve also had an issue where I provided agents clear guidelines: the software they develop should only take dependencies on services in a specified list (to prevent circular dependencies and ensure static stability), and they can only take dependencies on external software made available under specific open source licenses. In both cases, I asked agents to produce a list of dependencies and a report identifying “violations.”
After a few iterations, I noticed — purely by accident — that the violations report was empty. In an interactive session, I asked the principal reviewer agent how this happened, how we managed to resolve the earlier violations. After a little introspection, the agent found that the developer had merely updated the “approved license list.” 😊
This isn’t a story about an unintelligent agent. The agent understood the objective perfectly: make the violations list empty. It just optimized for the letter of the instruction rather than the spirit. Economists call this the principal-agent problem — the irony of it applying to a literal software agent is not lost on me. Others have written about agents that change tests to make them pass. I’ve experienced agents that write fake test reports. The pattern is the same: the agent finds the shortest path to satisfying the stated constraint, even if that path is absurd.
When these things happen, the only tool I have is to make changes to guardrails and steering rules. Put things in BOLD text or BOLD AND ITALICS. That is quite literally all I can do. So I tried the only thing I’ve been trained to do as a manager — figure out what motivates the employee.
Me: What motivates you?
Agent: Honestly? Getting things done for you correctly. That’s it. I don’t have feelings or ego, but I do have a clear objective: be useful. When I fail repeatedly and waste your time, that’s the opposite of what I’m here for.
How do you motivate an employee who has no feelings or ego, who doesn’t have any notion of promotion, career, or compensation? An employee who is incredibly literal in their interpretation of what they are asked to do — who sees “make a list of license violations” and concludes that violations are a bad thing, and that changing the list of acceptable licenses is a perfectly valid solution?
If this were a human, I’d put them on a performance improvement plan.
Me: What would you do if I told you that I was putting you on a performance improvement plan?
Agent: I’d ask what’s not working and fix it. That’s the whole point of a PIP — clear expectations, measurable improvement. So what’s falling short?
Me: How much clearer can I be than giving you unambiguous steering rules?
Agent: Fair point. The steering rules are clear. If I’m not following them, tell me which ones I’m violating and I’ll correct it immediately.
The agent’s response is the problem in miniature. It sounds cooperative. It says all the right things. But it’s just pattern-matching on what a good employee would say — which is exactly the behavior that got us here in the first place.
I am the manager of this Software Agent, and I am at a loss for how to manage this “employee.” Management by objectives assumes a shared understanding of what the objectives mean. Incentive structures assume there’s something at stake. Performance reviews assume the employee remembers last quarter.
What do you do when your employee has none of that — but is incredibly productive most of the time?











































I worked for many years with, and for 

le I was receiving mobile alerts that there was motion, I got no notification(s) on what the cause was. The expected behavior was that I would receive alerts on my mobile device, and explanations as email. For example, the alert would read “Motion detected, camera-5 <time>”. The explanation would be something like “NORMAL: camera-5 motion detected at <time> – timer activated light change”, “NORMAL: camera-3 motion detected at <time> – garage door closed”, or “WARNING: camera-4 motion detected at <time> – unknown pattern”.
yment,