Leadership in the Age of Agentic AI
Your Team Just Got Bigger — And Half of Them Don’t Have a Pulse
AI agents are joining your workforce. The technology is the easy part. The real challenge? Nobody’s told managers how to lead a team that’s half human, half machine.
Your team lead — let’s call her Sarah — manages a team of six. She knows them well. She knows Tom does his best thinking before lunch, that Priya needs structured briefs or she’ll spiral into perfectionism, and that the whole team performs better when Mondays are blocked for focused work instead of meetings.
Then three AI agents get added to Sarah’s workflow. One handles first-draft property descriptions. Another triages client enquiries. A third generates weekly campaign reports.
Nobody told Sarah how to manage them. Nobody gave her a dashboard. Nobody explained what would happen when the property description agent started producing subtly off-brand copy — because someone updated the style guide and forgot to update the agent’s instructions. Nobody warned her that her human team would quietly start fixing the AI’s mistakes rather than flagging them, because raising concerns felt like admitting the technology wasn’t working.
Sarah isn’t a bad manager. She’s a good one who’s been handed a fundamentally new job without anyone acknowledging that it changed.
This is happening in organisations everywhere right now. And the management conversation hasn’t caught up to the technology.
The human side needs more attention, not less
Here’s the part that gets lost in the AI capability conversation: managing humans alongside increasingly capable AI requires more emotional intelligence from leaders, not less.
Your team members are working next to systems that never tire, never complain, and grow more capable every quarter. Whether they voice it or not, they’re asking themselves uncomfortable questions. Am I still valuable here? Is my role shrinking? Does my manager understand what I contribute versus what the AI contributes?
Left unaddressed, those questions compound into one of two outcomes — passive resistance, where people quietly undermine AI workflows, or disengagement, where they mentally check out because they feel replaceable. Both are damaging, and both are preventable.
The manager’s job is to be specific and consistent about why human roles matter. Not vague reassurances about “the human touch” — teams see through that. What’s needed are clear, defensible articulations of where human judgment, creativity, and relationship-building create value that AI genuinely can’t replicate. And because AI capabilities keep shifting, that articulation needs continuous updating. Set and forget won’t work here.
AI agents need management, not just maintenance
Managing AI agents isn’t IT administration and it isn’t traditional project management. It sits somewhere between the two, and the sooner we treat it as a distinct management discipline, the better.
From direct experience deploying agents in production, I’ve found that AI agents need five categories of management input. Most organisations are doing one or two of these. The ones seeing real value are doing all five.
Clear tasking and scope definition. The quality of an AI agent’s output is directly proportional to the quality of its brief. Sounds obvious, but I’ve repeatedly seen teams blame the AI when the real problem is that nobody invested in writing a proper brief. Managers need to define explicit success criteria, set clear boundaries, and specify escalation triggers — the conditions under which the agent should stop and hand off to a human.
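As an illustration, such a brief can be written down as a structured object rather than left in someone's head; every field value below is hypothetical, and the shape is a sketch, not a standard:

```python
from dataclasses import dataclass

@dataclass
class AgentBrief:
    """Hypothetical brief defining one AI agent's task scope."""
    task: str
    success_criteria: list[str]   # what "done well" means, explicitly
    out_of_scope: list[str]       # boundaries the agent must not cross
    escalation_triggers: list[str]  # conditions that hand work back to a human

brief = AgentBrief(
    task="Draft first-pass property descriptions from listing data",
    success_criteria=[
        "Matches the current style guide",
        "All mandatory fields present: price, location, key features",
    ],
    out_of_scope=["Legal disclaimers", "Pricing advice"],
    escalation_triggers=[
        "Listing data missing or contradictory",
        "Any mandatory field cannot be filled confidently",
    ],
)
```

The format matters far less than the fact that scope and escalation conditions are written down somewhere they can be reviewed and updated.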
Performance monitoring and benchmarking. You can’t manage what you can’t measure, and AI agents won’t volunteer their own performance data. At a minimum, track task completion rates, accuracy, processing times, escalation frequency, output quality as rated by the humans who use the work, drift indicators, and cost per task. These metrics are the structured equivalent of the informal awareness good managers carry about their human team — except with AI, you have to build the system to surface them.
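A minimal sketch of that measurement layer, assuming each completed task leaves behind a simple record (all field names here are illustrative):

```python
from dataclasses import dataclass

@dataclass
class TaskRecord:
    """One task's outcome, as logged by the surrounding workflow."""
    completed: bool
    accurate: bool
    seconds: float
    escalated: bool
    human_rating: int  # 1-5, from the person who used the output
    cost: float

def snapshot(records: list[TaskRecord]) -> dict:
    """Aggregate per-task records into the metrics a manager reviews."""
    n = len(records)
    return {
        "completion_rate": sum(r.completed for r in records) / n,
        "accuracy": sum(r.accurate for r in records) / n,
        "avg_seconds": sum(r.seconds for r in records) / n,
        "escalation_rate": sum(r.escalated for r in records) / n,
        "avg_rating": sum(r.human_rating for r in records) / n,
        "cost_per_task": sum(r.cost for r in records) / n,
    }
```

None of this is sophisticated analytics; the hard part is instrumenting the workflow so the records exist at all.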
Continuous improvement. This is the critical one. Left unmanaged, an AI agent will perform at the same level in month twelve as it did in month one — or worse, if the data it depends on has shifted while its instructions stayed static. The fix is a deliberate cycle: regularly sample and review outputs, identify recurring failure patterns, refine the agent’s instructions, test the changes, document what you did, and repeat. This isn’t a setup task. It’s the ongoing core of the role.
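The sampling-and-review step of that cycle can be sketched in a few lines; the `failure_tags` field, the sample size, and the idea of reviewers tagging outputs are all assumptions for illustration:

```python
import random
from collections import Counter

def review_cycle(outputs: list[dict], sample_size: int = 20, seed: int = 0):
    """One pass of the improvement loop: sample recent outputs, tally
    the failure tags reviewers attached, and surface the recurring
    patterns worth fixing first."""
    rng = random.Random(seed)  # seeded so reviews are reproducible
    sample = rng.sample(outputs, min(sample_size, len(outputs)))
    failures = Counter(
        tag for o in sample for tag in o.get("failure_tags", [])
    )
    return failures.most_common(3)
```

What happens next — refining the instructions, testing the change, documenting it — is human work; the code only makes the recurring patterns visible.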
Integration management. The hardest problems aren’t in the humans or the AI — they’re in the handoffs between them. Every hybrid workflow has seams where work passes from person to machine or machine to person, and those seams are where quality breaks down. Getting handoff protocols, trust calibration, and feedback loops right is where most of the operational effort lives.
Strategic workforce planning. The most senior-level skill: deciding what should be done by humans, what should be done by AI, and what should be done collaboratively. This means understanding real costs, recognising where human judgment adds irreplaceable value, and planning for how AI capabilities will evolve over the next six to twelve months.
Stop caring about the AI’s feelings — start caring about these three things instead
AI agents don’t have feelings. They don’t experience frustration, satisfaction, or pride. The temptation to anthropomorphise them is natural, but it leads to poor management decisions. You skip performance reviews because the agent “hasn’t caused any problems.” You avoid retiring an underperforming agent because it feels like firing someone.
What you should care about instead is operational health (is the agent functioning correctly, with stable integrations and clean inputs?), contextual freshness (are the agent’s instructions still aligned with current business priorities, or are they running on last quarter’s playbook?), and ethical governance (is the agent producing outputs you’d be comfortable defending publicly?).
None of this is emotional care. It’s systems maintenance with accountability attached. The car doesn’t have feelings either, but you still service it on schedule.
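Those three checks translate directly into a routine checkup. The thresholds below (a 5% error rate, a 90-day staleness limit, zero flagged outputs) are illustrative placeholders, not recommendations:

```python
from datetime import date, timedelta

def agent_checkup(last_instruction_update: date,
                  error_rate: float,
                  flagged_outputs: int,
                  staleness_limit_days: int = 90) -> dict:
    """Three non-emotional checks on an AI agent: is it working,
    is its context current, and would you defend its outputs?"""
    return {
        "operational_health": error_rate < 0.05,
        "contextual_freshness": (date.today() - last_instruction_update)
                                < timedelta(days=staleness_limit_days),
        "ethical_governance": flagged_outputs == 0,
    }
```

Run on a schedule, like the car service, not when something breaks.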
What this actually looks like on a Monday morning
Frameworks are useful, but here’s the practical version.
Monday, you’re reviewing the AI performance dashboard. Two agents are flagged — one has a rising error rate, the other has slowed down. You schedule time to investigate.

Tuesday, you’re in a team standup explaining a new human-AI workflow and addressing a team member’s concern that their analytical role is being sidelined — it’s a fair point, and you reframe their position as the strategic layer that makes the AI’s raw analysis actually useful.

Wednesday, you’re elbow-deep in an agent that’s been silently producing incorrect outputs for three weeks because a data source changed format and nobody noticed. You fix the instructions, test the change, and log it.

Thursday, you’re running one-on-ones that now include conversations about how each person’s work intersects with the AI agents.

Friday, you’re reviewing cost-value analysis and drafting a business case to retire one agent and expand another.
None of it is glamorous. All of it is the difference between AI that delivers sustained value and AI that becomes an expensive problem nobody owns.
The bottom line
The fundamental challenge most organisations face with AI isn’t technical. It’s organisational. We have the technology. We have the use cases. What we’re missing is the management layer — the people, the skills, the processes, and the reporting that turns AI deployment from a project into a capability.
The managers who develop this competence first will build teams that are greater than the sum of their very different parts. The ones who don’t will spend considerable money on AI agents that quietly underperform while their human teams gradually disengage.
Both outcomes are the result of choices — choices about how seriously you take the management dimension of AI, how much you invest in developing your leaders, and whether you acknowledge that the job description for every manager in your organisation has already changed.
Whether anyone told them or not.
