Rise of the Agent Director

In my last post, Where have all the Engineers gone?, I wrestled with some tough questions we’re facing in the age of AI-powered engineering. I can’t just leave it there…

Just as the rise of automation and CI/CD pipelines reshaped operations teams by forcing traditional sysadmins to upskill into DevOps engineers or risk obsolescence, we’re seeing a similar inflection point with AI in software engineering. The expectations of engineers are changing fast. Those who adapt and learn to manage agents will thrive. Those who don’t may find themselves deprecated, irrelevant, and obsolete. This is not because they weren’t smart, but because they didn’t evolve.

“The quicker you let go of old cheese, the sooner you find new cheese.”

Spencer Johnson, “Who moved my cheese?”

We’re already seeing the shift. Engineers are building agents, experimenting with them in IDEs, integrating them into workflows, and discovering just how powerful and helpful they can be. Over the last month, I’ve created a video agent, a security vulnerability remediation agent (really rolls off the tongue), a ground truth analyzer, a legacy code remover for our 220ish repos, and more. In my free time, I busted out a new Star Wars site, a Star Wars meme app, and have 2 more (not all Star Wars) in progress. In just hours, I’ve been able to accomplish monumental tasks; it’s incredible (and I have the lessons learned and battle wounds to prove it). Engineers are falling in love with agents. I’m not falling in love, I’m already in love.

This is a natural evolution. Just as version control, open source, and DevOps changed the shape of how we work, agent orchestration is the next evolutionary step, and it will move faster than anything we’ve seen before. But unlike version control, open source, DevOps, etc., which improved the way we make software, AI changes the way we interact with how we make software. We don’t need to be super technical to make an app. Business is going to move tremendously faster because businesses don’t care about code; they just want their app. We, the engineers, need to evolve too.


Subscribe to my blog and get posts like this in your inbox. Share your email below, or follow me on Threads, LinkedIn, or BlueSky.


Evolve into an Agent Director

Soon, we won’t just be coding with agents, we’ll be running teams of agents. And we’ll need engineers who can direct, supervise, and course-correct them when things go sideways, because they will. This is where experienced senior engineers come in, not as the best coders, but as the best judges of code, architecture, and trade-offs; decision makers for the agents.

AI can write it. You decide whether it’s right.

This is the future; get ahead of the curve now. Software engineers should begin transitioning into a new role: Agent Director. I’m sure someone else will come up with a catchier buzzword. I truly believe this role is the future of engineering, and frankly, it is here today. We’re still early in the hype cycle, but it will catch on. It’s here, and it’s glorious.

What is an Agent Director?

Why, someone who directs agents, of course!

An Agent Director will be focused on:

  • Creating, maintaining, and evolving agent configurations to optimize outcomes
  • Monitoring agent performance and debugging the full pipeline, from prompt to production
  • Training and fine-tuning agents to meet engineering and product standards
  • Managing and observing workflows driven by AI
  • Ensuring agents adhere to architectural constraints, compliance requirements, and security policies through well-defined documentation
  • Validating high-level system design decisions
  • And whatever else an agent can’t do, yet
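To make that first bullet concrete, here’s a rough sketch of what a director-maintained agent configuration with guardrail validation might look like. Everything in it — the field names, the tool names, the iteration ceiling — is hypothetical for illustration, not any real framework’s API:

```python
from dataclasses import dataclass, field

@dataclass
class AgentConfig:
    """A hypothetical agent configuration an Agent Director might own."""
    name: str
    system_prompt: str
    allowed_tools: list = field(default_factory=list)
    max_iterations: int = 10

    def validate(self) -> list:
        """Director-level guardrails checked before an agent is deployed.

        Returns a list of human-readable policy violations (empty if clean).
        """
        errors = []
        if not self.system_prompt.strip():
            errors.append("system_prompt must not be empty")
        if self.max_iterations > 25:
            errors.append("max_iterations exceeds safety ceiling of 25")
        if "shell" in self.allowed_tools:
            errors.append("shell tool requires security review")
        return errors

# Example: a config that a director's review would flag before rollout.
config = AgentConfig(
    name="legacy-code-remover",
    system_prompt="Identify dead code paths; open a PR per repo.",
    allowed_tools=["read_file", "write_file", "shell"],
    max_iterations=50,
)
for problem in config.validate():
    print(problem)
```

The point isn’t the specific checks; it’s that the policy lives in reviewable, versioned configuration the director evolves over time, rather than in each engineer’s head.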

Huh, each one of those could be a blog post.

Some of this will be new; there are clearly new skills to learn. We should be finding opportunities to learn them today. If you can’t do this at work, start a side project, or 3, and let the agents do the work. Some of this isn’t new and probably looks similar to what team leads are already doing today.

It’s not all flowers and agents

The Agent Director role comes with real risks. Plenty of folks have raised these, and I’ve mentioned them as well (see last post). We’re still a little early to truly appreciate the impacts, but it’s only a matter of time. We have to be thinking about this now.

  • Erosion of foundational skills. Relying too heavily on AI might make us dumb. It may weaken our judgment and code quality instincts. There are some early studies out there in agreement. I would argue: if this is done right, this frees us to do more creative things, stretching new parts of our brains. But I’m an optimist.
  • Loss of system and architectural consistency. Today, agents run great locally and can optimize code within their immediate view, but they frequently miss systemic design principles and the “bigger picture”. As context windows get larger and models get smarter, will this become a non-issue?
  • Overconfidence in automation, aka: we get lazy. If we implicitly trust AI and assume agents are “smart enough” without proper validation, we begin to create fragile systems. However, if the agent is just going to fix itself, do we care? (<– see, overconfidence is too easy)
  • We lose context. Context switching for the human brain is challenging. Even though less context is needed when managing agents (I don’t need to load the entire code base into my working memory), managing a dozen agents across a dozen apps, with varying technologies, will require next-level context management for us, the humans.

Your experience is needed…?

With the rise of Agent Directors, experience becomes all the more important. These roles prioritize oversight, coordination, and judgment over line-by-line coding.

Unless it isn’t.

The next iteration of AI tools could blow our minds; engineering skills and expertise could go the way of ops teams manually pushing code into production. Maybe we, the agent directors, will be needed less as AI improves and can carry its own weight.

I’m not sold that mind-blowing AI is coming anytime soon. AI, as we know it, is trained, spitting out best-chance tokens in order. Thinking models (which don’t really think) can review, critique, and improve the chances, but it’s still a trained model. There is no creativity. No new ideas. No ingenuity. Human engineers have to remain on top of it to keep our code and the products we create optimal, impactful, and purposeful.

In my next post, we’ll discuss what an early-career engineer might do today to gain the necessary experience for the future. Stay tuned!

What are your thoughts? Where do you see this all heading?



