Your AI Agent Doesn’t Have a Salary — So Stop Managing It Like an Employee

Most executives are still measuring AI through the lens of headcount and hourly cost. But agentic AI doesn’t work that way. The real metric isn’t time — it’s cost per outcome. Here’s what that shift actually looks like in practice.

Every business leader knows the formula. An employee costs $80,000 a year. That buys you roughly 2,080 hours. Efficiency means getting the most out of those hours. Need more output? Hire more people. Need it faster? Extend the hours. It’s the model we’ve built entire organisations around.

Then you deploy an AI agent, and the formula stops making sense.

An AI agent doesn’t have contracted hours. It doesn’t work 40 hours a week. It can run continuously, spin up multiple instances of itself, and scale from ten tasks to ten thousand in minutes. The constraints that define the economics of a human workforce – finite hours, sequential task execution, fixed capacity – don’t apply.

What does apply is a fundamentally different cost model: the cost of an AI agent is tied to the outcome it produces, not the time it takes to produce it.

The Same Technology, Different Economics

Consider two agents running side by side in the same organisation.

A customer service chatbot powered by an open-source language model handles thousands of enquiries daily at near-zero marginal cost. Each additional conversation costs fractions of a cent. Scale is effectively unlimited.

A video generation agent producing property marketing content runs through computationally intensive multimodal models. Each output might cost $5 to $15. Still dramatically cheaper than human production, but a completely different cost profile from the chatbot.

Same category of technology. Vastly different cost-per-outcome characteristics. This is why the old headcount-based budgeting model doesn’t translate — you’re not paying for time anymore, you’re paying for what gets produced.

What This Means for How You Lead

This shift has practical implications that go well beyond the technology team.

Budgeting changes. You need to model cost-per-outcome for your major workflows, not just plan headcount. What does each deliverable cost today in human time? What would it cost with an agent handling the repeatable components?
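As a back-of-the-envelope sketch of that comparison, using the $80,000 / 2,080-hour salary from earlier (all other figures are illustrative assumptions, not benchmarks):

```python
# A minimal cost-per-outcome comparison. Numbers are illustrative only.

HOURLY_RATE = 80_000 / 2_080  # salary spread over contracted hours (~$38.46/hr)

def human_cost_per_outcome(hours_per_deliverable: float) -> float:
    """Cost of one deliverable produced entirely by a person."""
    return hours_per_deliverable * HOURLY_RATE

def hybrid_cost_per_outcome(agent_cost: float, review_hours: float) -> float:
    """Cost when an agent produces the draft and a person reviews it."""
    return agent_cost + review_hours * HOURLY_RATE

# e.g. a marketing video: 4 hours of human production, versus a $10 agent
# run plus 30 minutes of human review
human_only = human_cost_per_outcome(4.0)
agent_plus_review = hybrid_cost_per_outcome(10.0, 0.5)
```

Even a rough model like this changes the budgeting conversation: the question stops being "how many people do we need" and becomes "what does each outcome cost under each configuration".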

Measurement changes. AI agents need their own performance dashboards — cost per outcome, quality, throughput, error rates. These metrics will shift as models improve and pricing evolves. What costs $5 today might cost $0.50 in a year.

The real advantage is orchestration. A single chatbot automates a task. An orchestrated system of specialised agents, each handling a different part of a workflow alongside your human team, transforms a business capability. That’s where the compounding returns sit.

Augmentation, Not Elimination

At The Agency, where we support nearly 400 real estate agents and 900 staff with AI-powered systems, the strongest results come from human-AI configurations that are deliberately complementary. The AI handles volume, speed, and data processing. The human handles relationships, judgment, and the strategic advice that only comes from genuine experience.

When a buyer can get property data and market analytics from an AI in seconds, the human agent’s value isn’t in having the information — it’s in knowing what to do with it. The AI handles the data. The human handles the relationship. Together, they deliver something neither could achieve alone.

The Shift Is Already Underway

This isn’t theoretical. We’re seeing AI agents process hundreds of thousands of data events monthly, automate workflows that used to absorb significant team capacity, and enable our people to operate as strategic advisors rather than information processors.

The leaders who learn to think in outcomes rather than hours — who build the measurement systems and orchestration capabilities to manage this new model — will define how their industries operate for the next decade.

The question worth asking isn’t whether this transition is happening. It’s whether you’re building the infrastructure to lead it.

Leadership in the Age of Agentic AI

Your Team Just Got Bigger — And Half of Them Don’t Have a Pulse

AI agents are joining your workforce. The technology is the easy part. The real challenge? Nobody’s told managers how to lead a team that’s half human, half machine.

Your team lead — let’s call her Sarah — manages a team of six. She knows them well. She knows Tom does his best thinking before lunch, that Priya needs structured briefs or she’ll spiral into perfectionism, and that the whole team performs better when Mondays are blocked for focused work instead of meetings.

Then three AI agents get added to Sarah’s workflow. One handles first-draft property descriptions. Another triages client enquiries. A third generates weekly campaign reports.

Nobody told Sarah how to manage them. Nobody gave her a dashboard. Nobody explained what would happen when the property description agent started producing subtly off-brand copy — because someone updated the style guide and forgot to update the agent’s instructions. Nobody warned her that her human team would quietly start fixing the AI’s mistakes rather than flagging them, because raising concerns felt like admitting the technology wasn’t working.

Sarah isn’t a bad manager. She’s a good one who’s been handed a fundamentally new job without anyone acknowledging that it changed.

This is happening in organisations everywhere right now. And the management conversation hasn’t caught up to the technology.


The human side needs more attention, not less

Here’s the part that gets lost in the AI capability conversation: managing humans alongside increasingly capable AI requires more emotional intelligence from leaders, not less.

Your team members are working next to systems that never tire, never complain, and grow more capable every quarter. Whether they voice it or not, they’re asking themselves uncomfortable questions. Am I still valuable here? Is my role shrinking? Does my manager understand what I contribute versus what the AI contributes?

Left unaddressed, those questions compound into one of two outcomes — passive resistance, where people quietly undermine AI workflows, or disengagement, where they mentally check out because they feel replaceable. Both are damaging, and both are preventable.

The manager’s job is to be specific and consistent about why human roles matter. Not vague reassurances about “the human touch” — teams see through that. What’s needed are clear, defensible articulations of where human judgment, creativity, and relationship-building create value that AI genuinely can’t replicate. And because AI capabilities keep shifting, that articulation needs continuous updating. Set and forget won’t work here.


AI agents need management, not just maintenance

Managing AI agents isn’t IT administration and it isn’t traditional project management. It sits somewhere between the two, and the sooner we treat it as a distinct management discipline, the better.

From direct experience deploying agents in production, I’ve found that AI agents need five categories of management input. Most organisations are doing one or two of these. The ones seeing real value are doing all five.

Clear tasking and scope definition. The quality of an AI agent’s output is directly proportional to the quality of its brief. Sounds obvious, but I’ve repeatedly seen teams blame the AI when the real problem is that nobody invested in writing a proper brief. Managers need to define explicit success criteria, set clear boundaries, and specify escalation triggers — the conditions under which the agent should stop and hand off to a human.

Performance monitoring and benchmarking. You can’t manage what you can’t measure, and AI agents won’t volunteer their own performance data. At a minimum, track task completion rates, accuracy, processing times, escalation frequency, output quality as rated by the humans who use the work, drift detection, and cost per task. These are the structured equivalent of the informal awareness good managers carry about their human team — except with AI, you have to build the system to surface it.

Continuous improvement. This is the critical one. Left unmanaged, an AI agent will perform at the same level in month twelve as it did in month one — or worse, if the data it depends on has shifted while its instructions stayed static. The fix is a deliberate cycle: regularly sample and review outputs, identify recurring failure patterns, refine the agent’s instructions, test the changes, document what you did, and repeat. This isn’t a setup task. It’s the ongoing core of the role.

Integration management. The hardest problems aren’t in the humans or the AI — they’re in the handoffs between them. Every hybrid workflow has seams where work passes from person to machine or machine to person, and those seams are where quality breaks down. Getting handoff protocols, trust calibration, and feedback loops right is where most of the operational effort lives.

Strategic workforce planning. The most senior-level skill: deciding what should be done by humans, what should be done by AI, and what should be done collaboratively. This means understanding real costs, recognising where human judgment adds irreplaceable value, and planning for how AI capabilities will evolve over the next six to twelve months.
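The second category, performance monitoring, is the most mechanical of the five. Here is one way a per-agent metrics record might look; the field names and the 5% drift tolerance are assumptions for illustration, not a standard schema:

```python
# Sketch of a per-agent metrics record and a simple drift check.
# Field names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AgentMetrics:
    tasks_completed: int
    tasks_attempted: int
    errors: int
    escalations: int
    total_cost: float     # summed spend for the reporting period
    avg_quality: float    # 0-1, rated by the humans who use the output

    @property
    def error_rate(self) -> float:
        return self.errors / self.tasks_attempted

    @property
    def cost_per_task(self) -> float:
        return self.total_cost / self.tasks_completed

def needs_review(current: AgentMetrics, baseline: AgentMetrics,
                 drift_tolerance: float = 0.05) -> bool:
    """Flag the agent when error rate or cost drifts past tolerance."""
    return (current.error_rate > baseline.error_rate + drift_tolerance
            or current.cost_per_task > baseline.cost_per_task * (1 + drift_tolerance))
```

The point of the drift check is the managerial one made above: the agent will not volunteer that it has degraded, so the comparison against a baseline has to be built deliberately.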


Stop caring about the AI’s feelings — start caring about these three things instead

AI agents don’t have feelings. They don’t experience frustration, satisfaction, or pride. The temptation to anthropomorphise them is natural, but it leads to poor management decisions. You skip performance reviews because the agent “hasn’t caused any problems.” You avoid retiring an underperforming agent because it feels like firing someone.

What you should care about instead is operational health (is the agent functioning correctly, with stable integrations and clean inputs?), contextual freshness (are the agent’s instructions still aligned with current business priorities, or are they running on last quarter’s playbook?), and ethical governance (is the agent producing outputs you’d be comfortable defending publicly?).
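Those three checks can be expressed as a scheduled report rather than a feeling. A minimal sketch, where the field names, the 2% input-error threshold, and the 90-day staleness window are all hypothetical placeholders for thresholds you would set from your own baselines:

```python
# Sketch of the three checks as a periodic health report.
# All field names and thresholds are illustrative assumptions.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AgentStatus:
    integrations_up: bool         # operational health
    input_error_rate: float
    instructions_updated: date    # contextual freshness
    flagged_outputs: int          # ethical governance

def health_report(status: AgentStatus, today: date,
                  max_staleness_days: int = 90) -> dict:
    return {
        "operational": status.integrations_up and status.input_error_rate < 0.02,
        "fresh": today - status.instructions_updated
                 <= timedelta(days=max_staleness_days),
        "governable": status.flagged_outputs == 0,
    }
```

Each check is a concrete, inspectable property with an owner, which is exactly the difference between systems maintenance and anthropomorphising.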

None of this is emotional care. It’s systems maintenance with accountability attached. The car doesn’t have feelings either, but you still service it on schedule.


What this actually looks like on a Monday morning

Frameworks are useful, but here’s the practical version.

Monday, you’re reviewing the AI performance dashboard. Two agents are flagged — one has a rising error rate, the other has slowed down. You schedule time to investigate.

Tuesday, you’re in a team standup explaining a new human-AI workflow and addressing a team member’s concern that their analytical role is being sidelined — it’s a fair point, and you reframe their position as the strategic layer that makes the AI’s raw analysis actually useful.

Wednesday, you’re elbow-deep in an agent that’s been silently producing incorrect outputs for three weeks because a data source changed format and nobody noticed. You fix the instructions, test the change, and log it.

Thursday, you’re running one-on-ones that now include conversations about how each person’s work intersects with the AI agents.

Friday, you’re reviewing cost-value analysis and drafting a business case to retire one agent and expand another.

None of it is glamorous. All of it is the difference between AI that delivers sustained value and AI that becomes an expensive problem nobody owns.


The bottom line

The fundamental challenge most organisations face with AI isn’t technical. It’s organisational. We have the technology. We have the use cases. What we’re missing is the management layer — the people, the skills, the processes, and the reporting that turns AI deployment from a project into a capability.

The managers who develop this competence first will build teams that are greater than the sum of their very different parts. The ones who don’t will spend considerable money on AI agents that quietly underperform while their human teams gradually disengage.

Both outcomes are the result of choices — choices about how seriously you take the management dimension of AI, how much you invest in developing your leaders, and whether you acknowledge that the job description for every manager in your organisation has already changed.

Whether anyone told them or not.