I’m very excited to have another guest writer on my blog: OpenAI Deep Research!
I didn’t write this post. I heavily relied on OpenAI’s Deep Research. I did review, clean up, and modify it a little, but this is a guest post from ChatGPT ;) My prompt was:
Write a comprehensive blog article, following my style, about the current state of agentic engineering. Perform research on the web to identify current trends. Focus on the reality behind the buzz. Ensure to use credible news sources, ideally using sources from engineering houses from top brands and companies.
Why, might you ask, am I doing this? Two big reasons:
- Is AI good enough yet? I don’t think so. Not for coding, not for writing the way I like to write (me, I’m the problem, it’s me).
- I am genuinely interested in this topic, I engage with AI daily, through code and other tasks. So I thought I’d kill two birds (learn something and write an article) with one AI.
No, this won’t be a regular thing. Writing these silly blog posts is something I enjoy; I won’t let the machine take that from me.
Hope you enjoy, em-dashes and all!
AI is taking over coding – human programmers are becoming obsolete! At least, that’s the impression one might get from the buzz around “AI agents” in software development. Visionaries boast about autonomous coding systems that can plan, code, test, and deploy all on their own. On the other hand, skeptics counter that AI-generated code is buggy garbage. The truth lies somewhere in between. In reality, “agentic engineering” – integrating AI agents into software development – is neither magic nor myth, but an emerging practice with real benefits and real challenges. This comprehensive look at the current state of agentic engineering cuts through the hype to see how engineering teams are actually using (and not using) AI coding agents today.
What Is Agentic Engineering, Really?
The term agentic engineering refers to using AI “agents” as active participants in the software development process, beyond just single-shot suggestions. Instead of merely autocompleting a line of code, an AI agent can be given higher-level tasks and operate with a degree of autonomy: cloning repos, generating code across files, running tests, and iterating based on results. In other words, “building for AI agents rather than just with AI” (medium.com, dev.to). An AI coding agent perceives the environment (codebase, logs, etc.), makes decisions, and takes actions towards a goal with minimal human input (dev.to).
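To make that perceive/decide/act loop concrete, here is a minimal sketch of an agent’s control flow. Everything in it is illustrative: the stub model, the fake observation, and the action type are stand-ins I made up, not any particular framework’s API.

```python
from dataclasses import dataclass

@dataclass
class Action:
    command: str
    is_done: bool = False

class StubModel:
    """Hypothetical stand-in for an LLM call; real agents plan here."""
    def decide(self, goal: str, observation: str) -> Action:
        if "tests passed" in observation:
            return Action(command="", is_done=True)
        return Action(command="run tests")

def observe_workspace() -> str:
    # Perceive: in practice, the codebase, diffs, logs, and test output.
    return "tests passed"

def apply_action(action: Action) -> None:
    # Act: in practice, edit a file or execute a shell command.
    print(f"executing: {action.command}")

def run_agent(model: StubModel, goal: str, max_steps: int = 10) -> None:
    for _ in range(max_steps):
        observation = observe_workspace()         # perceive
        action = model.decide(goal, observation)  # decide
        if action.is_done:
            break                                 # agent believes the goal is met
        apply_action(action)                      # act

run_agent(StubModel(), goal="make the test suite pass")
```

Every real agent framework is some elaboration of this loop, with tool calling, memory, and guardrails layered on top.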
Crucially, agentic engineering is about collaboration between human and AI, not a total handover of the reins. One engineering team describes it as “combining human craftsmanship with AI tools to build better software” (zed.dev). There’s no substitute for a skilled engineer in this equation: the AI provides speed and exploratory power, while the human ensures quality and direction (zed.dev). In practice, integrating AI into development workflows means treating these agents like extremely fast but inexperienced team members. They can execute rote tasks or generate boilerplate in seconds, but humans still must guide them, review their outputs, and handle the complex judgment calls (davidlozzi.com).
From Hype to Reality: Adoption Is High, but Caution Remains
There’s no question that AI coding assistants have spread rapidly through the software industry. One recent survey of tech companies found 94% have teams actively using AI coding tools; nearly every software organization is experimenting with AI assistance now (opslevel.com). GitHub’s own developer relations team observed that “92% of developers are using AI coding tools today” in some form (itbrew.com). From big enterprises to lean startups, AI pair programmers have moved from novelty to normal. GitHub Copilot leads the pack (deployed at 88% of companies using AI coding tools), but teams are trying a mix of solutions, from OpenAI’s ChatGPT and Anthropic’s Claude to niche coding agents, often simultaneously (opslevel.com). This experimentation reflects an industry consensus that AI tools are here to stay and everyone wants to find where they fit (opslevel.com).
However, high availability doesn’t equal universal adoption at the individual developer level. Many developers remain cautious. That same industry survey revealed that while AI assistants were introduced at almost every company, only about one-third of organizations have reached the point where over half their developers regularly use the tools (opslevel.com). In fact, a similarly large chunk of companies reported that fewer than 25% of their devs were actively using the AI assistants available to them (opslevel.com). In other words, the tools are widely present, but deep usage is still surprisingly shallow. Cultural inertia, trust issues, and workflow habits mean many engineers haven’t fully embraced these agents day-to-day yet. (As one Gartner analyst put it, “Developers are creatures of habit… you can’t just give someone a new tool and say, ‘Oh, you’re going to be 50% more productive now’” (itbrew.com).)
Developer sentiment data underscores this cautious reality. According to Stack Overflow’s 2025 survey of nearly 50,000 developers, only 33% now say they trust the accuracy of AI coding tool outputs, down from 43% the year before (leaddev.com). And the portion who view AI assistants favorably in their workflow fell from 72% to about 60% in a year (leaddev.com). In other words, after the initial hype peak, many developers have become more skeptical. “Developer trust in AI is becoming more realistic as the industry moves beyond the initial hype phase,” notes one market analyst, reflecting on the survey results (leaddev.com). Engineers have gotten hands-on with these tools and discovered their limitations, leading to a healthy tempering of expectations.
Why the dip in enthusiasm? Early experiences showed that AI agents, left unguided, can and do screw things up. “One of the most common stories I hear in 2025 goes like this: someone gives an AI coding agent a try, expecting magic,” recalls a longtime CTO. “But after a few actions, it messes up the architecture, changes something it shouldn’t, or just spits out bad code” (leaddev.com). Many developers have lived some version of that story: watching an overeager agent refactor code into chaos or introduce subtle bugs while trying to “help.” When an AI’s suggestion is wrong or weird, devs have to double-check and fix it, which eats into the supposed efficiency gains. Hence a natural pullback: three-quarters of developers say that whenever they don’t fully trust an AI-generated answer, they simply “revert to human oversight” and verify the work themselves (leaddev.com). In effect, teams are learning not to treat the AI as an infallible genius, but as a junior helper that needs supervision.
How Engineering Teams Are Actually Using AI Agents
With experience, a clearer picture is forming of where AI agents truly add value in the software lifecycle – and where they struggle. The reality is that AI coding assistants excel at certain tasks, but they are not close to replacing a human engineer’s holistic skills. Instead, teams are finding ways to slot AI into specific roles that can augment human productivity.
1. “Pair Programmer” Autocomplete & Boilerplate: The most common use of AI in coding remains inline code suggestions and chat-assisted coding (e.g. GitHub Copilot’s IDE completions). These tools are essentially fancy autocompletion on steroids. Developers are leveraging them to write routine code faster, from generating syntax for a new function to boilerplate configuration. This is the least controversial use case, and the gains here are real but incremental. Many devs report that tools like Copilot or Cursor reliably save a few seconds on many small tasks, which over a day or a project can add up to hours saved. It’s like having a tireless pair programmer who’s really good at rote recall. The code quality at this level is generally good and requires only minor touch-ups, according to anecdotal reports (and the steady 60%+ favorable rating suggests many are happy with these assistants in this capacity, per leaddev.com).
2. Autonomous Code Generation (With Review): The more “agentic” scenario is when you ask an AI to write an entire function or module given a high-level description – essentially assigning coding tasks to the AI. This is where results become hit-or-miss. Current generative models can produce surprisingly complete solutions for well-bounded problems (e.g. “implement a binary search in Python” or “build a simple web form with validation”). But as the scope gets bigger, the AI’s output quality becomes inconsistent and often not aligned with the team’s expectations or style. Engineers report that large code outputs from agents usually require substantial cleanup: reorganizing weird logic, renaming things to match conventions, tightening up inefficient code, etc. As one engineer quipped, AI can write the code, but you decide whether it’s right. It’s common to say these coding agents behave like a junior dev – they’ll do something passable, but senior engineers must review every line and give thorough feedback to reach production quality. This review-and-iterate loop can involve telling the agent what to fix, re-running it, and so on, much like mentoring a human junior. It works, but the time savings are less dramatic unless the human driving the process has the experience to guide it efficiently. In short: AI can draft code, but it takes human judgment to architect and polish that code into a reliable solution.
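To make “well-bounded” concrete, here’s the sort of solution agents tend to nail on the first try (this snippet is my own illustration, not a captured model output). The human’s job doesn’t disappear: you still verify the edge cases yourself.

```python
def binary_search(items: list[int], target: int) -> int:
    """Return the index of target in sorted items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

assert binary_search([1, 3, 5, 7], 5) == 2
assert binary_search([], 5) == -1  # the edge case a reviewer checks by hand
```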
3. AI in Code Review and Testing: An emerging “win” for AI in engineering is assisting with code reviews and quality checks. In fact, this has quickly become one of the most practical uses of AI agents in large organizations. A great example comes from Microsoft’s internal engineering: they built an AI-powered pull request reviewer that now automatically comments on over 90% of all PRs company-wide (devblogs.microsoft.com). This AI agent acts like a tireless reviewer, scanning code changes for issues and best-practice violations. It will drop comments in the PR thread if it spots, say, a potential null pointer, a security-sensitive call, or even just a style inconsistency, complete with an explanation or a suggested improvement (devblogs.microsoft.com). Human reviewers and authors can then decide whether to accept the AI’s suggestions. Notably, Microsoft’s AI reviewer is not allowed to directly commit changes on its own (devblogs.microsoft.com): it suggests, and humans still approve and merge. The result? Engineers report that the AI catches many low-level issues automatically, freeing human reviewers to focus on higher-level feedback (design, larger logic issues, etc.) (devblogs.microsoft.com). And since the AI never tires, it ensures even huge PRs get an initial pass quickly. This kind of AI-assisted code review is a current reality in top tech companies, and it demonstrably improves code quality and review turnaround times. We’re also seeing AI tools generate unit tests or find bugs: for example, agents that take a piece of code and suggest test cases, or run static analysis with AI explanation. These uses augment the safety net without replacing human judgment.
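Microsoft hasn’t published the internals of that reviewer, but the basic shape of such a tool is easy to sketch. Below is a minimal, hypothetical illustration using the OpenAI Python client and GitHub’s REST API; the model name, prompt, and repo details are all placeholders, and a production version would need diff chunking, rate limiting, and careful prompt design.

```python
# A minimal, hypothetical AI pull-request reviewer: send the diff to a
# model, post its feedback as an advisory PR comment. This is NOT
# Microsoft's system; it's an illustration of the pattern. Assumes
# OPENAI_API_KEY and GITHUB_TOKEN are set in the environment.
import os
import requests
from openai import OpenAI

def review_pull_request(owner: str, repo: str, pr_number: int, diff: str) -> None:
    client = OpenAI()  # reads OPENAI_API_KEY
    review = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model your org trusts
        messages=[{
            "role": "user",
            "content": "Review this diff for bugs, security issues, and "
                       "style problems. Suggest fixes; do not rewrite the "
                       f"change wholesale.\n\n{diff}",
        }],
    ).choices[0].message.content

    # Post as a regular PR comment: the agent only suggests, humans merge.
    requests.post(
        f"https://api.github.com/repos/{owner}/{repo}/issues/{pr_number}/comments",
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
        json={"body": f"AI review (advisory only):\n\n{review}"},
        timeout=30,
    ).raise_for_status()
```

The design choice worth copying from Microsoft’s setup is baked into this sketch: the bot can only comment. Commit and merge rights stay with humans.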
4. Multi-Step Agents for DevOps & Maintenance: Pushing further into “autonomy,” some teams are experimenting with agents that handle routine DevOps tasks or refactoring chores end-to-end. For instance, setting up a CI/CD pipeline, or performing a refactor across an entire codebase (say, converting all usage of one library to another). Early agent frameworks, like the open-source AutoGPT, Microsoft’s AutoGen, or various LangChain-based scripts, have showcased that an AI agent can loop through plan -> code -> execute steps to carry out such tasks. In theory, you could say: “AI, please upgrade this project to the latest React version,” and an agent will attempt to modify files, run tests, and continue until done. In practice, these ambitious multi-step agents are still very experimental. They tend to either overshoot (breaking things you didn’t want touched) or get stuck needing human guidance. There have been some promising demos (e.g. agents that automatically opened pull requests to update dependencies), but reliable fully-autonomous DevOps agents are not common in production use yet. What’s more common is using AI tools to assist a human DevOps engineer, for example generating config snippets, writing Terraform scripts, or checking Kubernetes settings, which the human then reviews. The vision of an AI “junior DevOps engineer” on the team is out there (dev.to), but in 2025 it’s largely confined to prototypes and a few forward-thinking teams. Most organizations aren’t ready to let a script-wielding AI loose on their production infrastructure without heavy supervision.
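To see why these loops overshoot or stall, it helps to look at their skeleton. Here’s an illustrative plan -> edit -> test loop for a bounded refactor task; `ask_model` is a hypothetical stand-in for whatever LLM client you use, and the whole thing is a sketch of the pattern, not a production agent.

```python
# An illustrative plan -> edit -> test loop for a bounded refactor task.
# ask_model is a hypothetical stand-in for your LLM client; the point is
# the shape of the loop, not a production agent. Note the iteration cap:
# these agents do get stuck, and a human reviews whatever state remains.
import subprocess
from pathlib import Path

def ask_model(prompt: str) -> str:
    raise NotImplementedError("stand-in for an LLM client of your choice")

def agentic_refactor(file: Path, instruction: str, max_attempts: int = 5) -> bool:
    for _ in range(max_attempts):
        source = file.read_text()
        file.write_text(ask_model(
            f"{instruction}\n\nCurrent code:\n{source}\n"
            "Return the complete updated file contents only."
        ))
        # Execute: run the tests and feed failures back into the next round.
        result = subprocess.run(["pytest", "-x"], capture_output=True, text=True)
        if result.returncode == 0:
            return True  # tests green; still needs human review before merge
        instruction = f"The tests failed:\n{result.stdout[-2000:]}\n\nFix the code."
    return False  # agent is stuck; hand the task back to a human
```

Both failure modes from the paragraph above are visible here: nothing stops the model from “fixing” the tests by gutting the code (overshoot), and five failed attempts later you’re back where you started (stuck).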
In summary, engineering teams today trust AI agents in supporting roles more than in autonomous ownership roles. Writing parts of the code, reviewing code, answering questions, generating docs or tests – yes. Designing the system architecture or building a whole feature front-to-back with no human in the loop – not so much. The more critical the task, the more human oversight remains the norm.
The Human Factor: Why Trust and Oversight Are Key
The cautious approach prevalent in the industry highlights a key point: agentic engineering is not about removing humans from the loop – it’s about amplifying human developers, with humans firmly in charge. Organizations that have rushed in thinking they can replace devs or “set and forget” an autonomous coder have often been swiftly corrected by reality. Real-world experience shows that AI agents can accelerate work, but only under human direction.
The drop in trust we discussed earlier is actually a healthy recalibration. Developers are learning exactly where an AI’s “sweet spot” ends and where its pitfalls begin. For example, AI models are known to “hallucinate” – they may produce code that looks legit but is completely wrong or uses non-existent functions. They also lack true understanding of context beyond what they’re given, so an agent might make changes that technically satisfy a prompt but break subtle assumptions elsewhere in the system. Seasoned engineers know this, so they treat AI outputs as drafts – useful starting points, not final solutions. As one developer wryly noted, “Without human oversight, your product turns into an incoherent mess of half-baked features.” In other words, letting an agent run wild can quickly lead to chaos, especially in a large codebase.
To make the most of AI, many teams are formalizing an approach of AI as “intern” or “junior dev”. The human engineer (effectively acting as a lead/manager) must:
- Clearly specify the requirements for the task (prompt engineering is the new planning).
- Give the AI context and constraints – e.g. feeding it relevant documentation, coding style guidelines, and examples so it doesn’t operate in a vacuum.
- Review everything the agent produces, just like you’d review a junior developer’s code.
- Test the outputs thoroughly, since AI-written code can fail in unexpected ways. If something is off, you might even ask the AI to review its own work or run multiple agents (one writing code, another checking it) as a safeguard; see the sketch after this list.
- Iterate: it often takes a few cycles of refining prompts or fixing bugs for the AI to get a chunk of code right. Patience and guidance are essential – rushing the process leads to frustration.
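As referenced in the testing bullet above, here’s a minimal sketch of that writer/checker pairing. Same caveat as always: `ask_model` is a hypothetical LLM helper and the prompts are illustrative; the pattern (one call drafts, a second critiques, a human reads both) is the point.

```python
def ask_model(prompt: str) -> str:
    raise NotImplementedError("stand-in for an LLM client of your choice")

def write_with_check(task: str, context: str) -> tuple[str, str]:
    # Writer: drafts code with the context it would otherwise lack.
    draft = ask_model(
        f"Task: {task}\n\nProject conventions and relevant docs:\n{context}\n"
        "Write the code."
    )
    # Checker: a second call with an adversarial brief catches issues the
    # writer glossed over, before a human spends any review time.
    critique = ask_model(
        "You are a strict reviewer. List bugs, style violations, and "
        f"missing edge cases in this code:\n\n{draft}"
    )
    return draft, critique  # a human still reads both before anything merges
```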
When done right, this dynamic can indeed speed things up. One engineering leader likened it to having a super-fast but naive new team member: you invest time upfront to train and correct them, and over time they start producing acceptable work more independently. But skip the training/oversight, and you’ll be cleaning up mistakes.
It’s worth noting that younger or less-experienced developers tend to trust and use the AI tools more readily, and correspondingly get bigger productivity boosts from them. A field study across Microsoft, Accenture, and another large company found that less-experienced developers not only adopted the AI assistant at higher rates, but saw their output (completed tasks) increase 27–39% with the AI’s help, compared to an 8–13% boost for senior developers (mitsloan.mit.edu). The junior folks leaned on the AI for help and it effectively leveled them up closer to mid-level productivity. Senior engineers, with or without AI, were already efficient (and perhaps more skeptical of the AI’s suggestions), so the gain was smaller. This highlights two things: (1) AI tools can be a great mentorship aid and force-multiplier for those still learning, but (2) they don’t replace the need for experience; in fact, the less experienced the dev, the more critical it is that they get guidance to use the AI correctly and not develop blind spots. Many tech leads are now keenly aware of the “junior dev dilemma”: we want new engineers to use AI to be productive, but we also must ensure they actually learn the fundamentals and don’t become just “prompt typists” who can’t code without an AI crutch. The training and mentorship aspect of engineering needs even more attention in the age of AI.
Productivity vs. Pitfalls: Is It Worth It?
Given the need for oversight and the quirks of AI, one might wonder: are these coding agents actually yielding a net positive? The answer from the field so far appears to be yes – when used judiciously, they do make developers faster and even happier – but it’s not a simple multiplier, and bad implementation can negate the gains.
On the plus side, multiple studies and surveys confirm significant productivity improvements with AI assistance:
- Speed: In controlled experiments, developers using AI code assistants have completed tasks anywhere from 25% to 55% faster than those without (mitsloan.mit.edu, github.blog). GitHub’s research with one enterprise partner measured a substantial time-to-code reduction and a higher completion rate for devs given Copilot (github.blog). Similarly, internal tests at IBM projected hefty time savings on coding tasks when using their AI assistant (e.g. ~38% less time on code generation, ~59% less on writing documentation) (ibm.com).
- Output: Some engineers simply get more done. The Microsoft/Accenture study showed a 26% average increase in completed tasks across all devs with Copilot access (mitsloan.mit.edu). Importantly, this wasn’t just trivial code; it included engineering, design, and testing tasks as well, suggesting the AI helped broadly with the workload.
- Developer Satisfaction: Interestingly, AI can make coding more fun. Surveys have found that a majority of developers feel more confident in their code quality and enjoy their work more with AI support (github.blog). It’s like having an ever-present assistant for tedious parts, letting devs focus on creative or complex aspects. Eliminating drudgery (like writing the tenth variant of an API call or slogging through boilerplate) boosts morale. One survey reported 90% of developers felt more fulfilled in their job when using an AI assistant, citing less frustration on mundane tasks (github.blog).
Those are real wins. However, the pitfalls are equally real if AI is misused:
- Quality issues: AI suggestions, if not vetted, can introduce bugs or security vulnerabilities. There have been cases of AI-generated code that looks legit but fails edge cases or has unsafe practices (like SQL injection or poor error handling; see the example after this list). Without manual review, such issues slip through. That’s why no serious team runs AI-written code into production untested. The IBM AI team actually frames it positively: AI can improve code quality by catching mistakes early (ibm.com), which is true if used in a review capacity. But if you rely on AI to write code and don’t check it, quality can suffer. The net effect depends entirely on your process.
- False confidence & dependency: Some less experienced devs may over-rely on AI and not build their own skills. If a developer always uses AI for certain tasks, they might struggle to do them manually when needed (or to debug the AI’s output). This is why tech leads are encouraging practices like “shadow the AI”: have juniors compare the AI’s solution to a manual one, to understand the why, not just accept whatever the tool gives (davidlozzi.com). Without that intentional learning, there’s a risk of a generation of engineers who know how to ask the AI but not what the code really means.
- Process overhead: Counterintuitively, using an AI agent can sometimes slow down a task if the prompts or outputs go awry. Developers have shared war stories of spending an hour fighting with an AI that kept misunderstanding the request, whereas writing the code from scratch might’ve taken 30 minutes. Tuning prompts, re-running agents, or cleaning up messy AI code can eat time. The key is recognizing when to not use the AI – knowing its sweet spots. Seasoned users learn to quickly gauge if the AI is helping or if it’s faster to do it manually. In a way, that judgment is a new skill in itself.
- Security and IP concerns: Some companies are cautious about AI because of data governance – e.g. not wanting to paste proprietary code into a third-party AI service. There’s also worry about the AI regurgitating licensed code without attribution. These concerns have slowed adoption in places with strict policies. Over time, solutions like self-hosted models or on-prem AI services are addressing this, but it’s a non-trivial hurdle for many large enterprises (hence some lag in adoption at companies with heavy compliance requirements).
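To make the SQL injection point from the quality bullet concrete, here’s the contrast a human reviewer is scanning for; a made-up illustration, not captured AI output.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
user_input = "alice' OR '1'='1"  # attacker-controlled value

# Risky: string-built SQL, the kind of code a model may emit if not told
# otherwise. The input above escapes the quoting and matches every row.
rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: a parameterized query; the driver treats the input as data only.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
```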
All told, the current state is that most teams that approach AI agents with clear goals and guardrails see a net productivity boost, while those who jumped in blindly have sometimes pulled back until they develop better policies. This aligns with the idea that we’ve passed the hype peak and are now iterating on making the technology truly useful. As one Gartner analyst observed in late 2024, “we are moving past the peak of inflated expectations, a little into the trough of disillusionment.” AI coding hasn’t revolutionized everything overnight, and “no one’s firing half their developers” due to miraculous productivity; that simply hasn’t happened (itbrew.com). But step by step, it is improving workflows and outcomes when integrated thoughtfully.
How Engineering Roles and Team Dynamics Are Evolving
Perhaps the most intriguing aspect of agentic engineering’s rise is how it’s starting to reshape the roles and skills on software teams. If AI agents handle a chunk of the coding and quality checks, what do human engineers focus on? Early signals suggest a shift in emphasis from typing out code to directing, validating, and integrating code – essentially, more thinking and reviewing, less routine typing.
A common viewpoint is that the role of a senior engineer is morphing into more of an “AI orchestrator” or Agent Director. Experienced engineers are becoming the ones who design how AI agents fit into the development process, set the standards, and ensure the outputs align with the product vision and quality bar. Instead of manually writing every line, a senior might outline an approach, let the AI draft some parts, then refine and make high-level decisions. This is analogous to how a tech lead might delegate implementation to junior devs and focus on reviewing and integrating their work, except now the juniors are partly silicon. In fact, some companies have begun to explicitly hire or train for “AI engineering” roles. For example, the healthcare AI startup Hippocratic AI notes that an “agentic engineer” blends software development with prompt design, system evaluation, and product management (hippocraticai.com). They’ve defined titles like Agent Architect, responsible for designing systems of multiple AI agents working together, and Agent Engineer, focused on implementing and tuning these agent components (hippocraticai.com). These roles spend less time writing traditional code and much more time testing AI behaviors, crafting prompts, and building evaluation frameworks to ensure the AI-driven systems are reliable (hippocraticai.com). While such job titles are still rare, they signal where things are heading: a specialization around AI-driven development practices.
Meanwhile, junior developers in an AI-rich environment face a new kind of career path. They have powerful tools at their disposal that can do in seconds what might have taken them hours. Great for productivity; potentially bad for learning. As discussed, there’s a concerted push to give early-career engineers chances to solve problems the hard way too, so they build foundational skills. We may see structured onboarding that includes “AI-free” sprints or using AI as a teacher rather than just a crutch – e.g. juniors asking the AI why a piece of code is written a certain way, not just asking it to write it. Paradoxically, the presence of AI means soft skills and higher-level understanding become even more important for new engineers: debugging, architectural thinking, and the ability to validate outputs. In the past, those skills were acquired through trial-and-error and long hours of coding. Now, teams might have to simulate that grind or find new ways to impart deep experience. Organizations that neglect this could find in a few years that they have a gap: a generation of devs who never honed their “sixth sense” for code because the AI always answered for them. Smart engineering orgs are alert to this risk and are adjusting mentorship and training accordingly.
Another shift is cross-functional collaboration. With AI agents capable of pulling tasks that blur lines (coding, testing, ops), we might see developers, QA, and DevOps roles collaborating more fluidly around these tools. For instance, a QA engineer might use an AI to generate test cases for a new feature and work with developers to implement any code changes the AI suggests from test results. Or a DevOps specialist might set up an AI agent that developers can use to deploy their own services safely. The traditional handoffs could become more of a continuous loop with AI bridging gaps (though we’re still in early days of that vision).
One concrete example: pull request workflows now sometimes include AI as an official “participant.” Microsoft’s case, where an AI reviewer is in the loop for every PR, shows how an AI can become just another team member, albeit one that doesn’t count towards headcount (devblogs.microsoft.com). It reviews code alongside humans. We may soon see more such “AI team members” with various specialties: an AI security auditor that scans every commit for vulnerabilities, an AI documentation bot that writes release notes, etc. Each would slot into the process at the appropriate point, always with a human final say. This could make teams more efficient, but it also means engineers will need to learn how to work alongside AI, e.g. interpreting the AI’s feedback and knowing when to trust it versus when to override it. Those who master this collaboration will likely be in high demand.
The Road Ahead: Continued Integration (Not Replacement)
As of mid-2025, agentic engineering is maturing from wild experimentation to a more disciplined practice. The trajectory seems clear: AI agents will become a standard part of the developer toolbox, embedded in many stages of the SDLC, but as assistants, not autonomous overlords. The industry consensus is that usage will only grow: over 90% of engineering leaders in one survey said they plan to expand their use of AI coding tools soon (opslevel.com). Analyst forecasts agree: Gartner projects that by 2028, perhaps 75–90% of software engineers will be using AI code assistants as part of their regular work, a massive jump from under 15% just a year or two ago (ibm.com). This suggests we’re on the cusp of widespread adoption, moving from early adopters to the majority.
But “using AI” doesn’t mean handing over the keys entirely. It likely means every engineer will have an AI pair programmer, every codebase will have some AI-enhanced CI checks, every project will have some scripts or agents to take care of repetitive chores, much like how virtually every developer today uses version control and automated testing. AI will be woven in as another layer of automation. The nature of coding work will adjust: devs might spend less time typing out boilerplate and more time validating, integrating, and deciding what to build next. In effect, the creative and architectural aspects of software engineering will become even more paramount. As one tech observer noted, it’s similar to pilots with modern autopilot systems: the plane can fly itself in routine situations, but the pilot must handle the complexities and be ready to take over in an instant (davidlozzi.com). Likewise, tomorrow’s engineers will leverage AI to handle the straightforward 80% of coding, while they focus on the hard 20% and, crucially, step in when things go off-script.
In the near term, expect to see improvements addressing current pain points of agentic tools: better reliability (reducing those hallucinations and mistakes), better alignment with team conventions (perhaps your AI assistant will be trained on your company’s codebase and style guides), and more seamless integration into dev environments. The leading platforms are already moving this way – for instance, GitHub is rolling out “Copilot X” features that integrate chat and task automation in the IDE, and offering configuration so it follows a project’s coding styles. There’s also work on fine-tuning models for specific languages or frameworks to increase accuracy. All this will gradually increase trust. We might soon get to a point where an AI agent can confidently handle, say, a routine CRUD module implementation with minimal fixes needed, because it has been specialized for that domain.
Crucially, engineering culture will need to evolve hand-in-hand. Code reviews might start to include checking the prompts used to generate code (did we ask the AI the right thing?). Continuous integration might include a step where an AI explains the changes it made, so humans can quickly approve. Senior devs might be measured not just on the code they write, but how effectively they can direct AI to get the job done (a new kind of productivity metric?). And ethical guidelines will firm up around AI usage – for example, making sure any third-party code the AI inserts is properly licensed, or that sensitive code isn’t leaked into an AI’s training data. These are all active areas of discussion.
One thing we are not seeing, despite the fears, is a wave of developer layoffs attributable to AI. On the contrary, demand for good engineers remains high, but the skill set is shifting. Familiarity with AI tools is becoming an expected skill for new hires. Already, some job descriptions mention experience with AI coding assistants as a plus. It’s less “the AI took our jobs” and more “the AI is becoming part of our jobs, so you better learn to use it.” In that sense, agentic engineering is simply becoming engineering, just with more advanced tools. As a contemporary framing: a decade ago, “DevOps” blurred the lines between dev and ops; now “AI Dev” or “prompt-coding” is blurring the lines between coding and directing an intelligent machine. The best engineers will be those who harness both human creativity and machine efficiency.
Next Steps: Topics to Explore from Here
Agentic engineering is a rapidly evolving field. Today’s reality will keep shifting as both the tech and practices improve. To continue our exploration, here are a few next-step topics and questions emerging from the current state:
- 🔍 The Rise of the Agent Director – What does leadership and project management look like when a team includes AI agents? (Exploring the new “AI orchestrator” role for senior engineers in depth, and how to mentor junior devs in an AI-rich environment.)
- 🔐 AI Coding Assistants and Software Quality – A deep dive on maintaining code quality, security, and consistency in a codebase partially written by AI. How do teams set up linting, testing, and governance to keep AI-generated code in check?
- ⚖️ Ethics and Risks of Autonomous Coding – From licensing and IP concerns to bias and security vulnerabilities, what ethical guidelines should organizations follow for responsible use of AI in development? (And how do we audit an AI’s contributions?)
- 🚀 Beyond Coding: Agents in DevOps and Testing – The future of CI/CD with AI in the loop, and case studies of companies using AI for infrastructure management, monitoring, or automated testing at scale. Are fully self-healing systems on the horizon?
- 🎓 Training the Next Generation of Engineers – Strategies for educating and upskilling developers alongside AI. How can universities and companies ensure new engineers still learn critical problem-solving skills when “there’s an AI for that”?
Exploring these topics will help us prepare for an era where working with AI is just a normal part of software engineering. The buzz around agentic engineering is exciting, but understanding the reality – the nuanced, practical integration of AI into our workflows – is even more crucial. By staying grounded in real-world data and experiences, we can cut through hype and focus on making this collaboration between human and machine truly effective. Agentic engineering isn’t about ceding control to machines; it’s about leveling up what we humans can build by working with them. And that future, now steadily materializing, is one where software teams wield AI as a powerful new tool – while keeping eyes wide open to both its promise and its pitfalls.
Subscribe to my blog and get posts like this in your inbox. Share your email below, or follow me on Threads, LinkedIn, or BlueSky.
