Agent-on-Agent

There’s constant debate about who wins the AI race. OpenAI or Anthropic. ChatGPT or Claude or Gemini. OpenClaw or CoWork. The list goes on.

Here’s the thing: the winner doesn’t matter.

What matters is what happens after the winner is decided, or more precisely, after nobody wins and everybody deploys. Because we’re headed toward a world where every laptop has an AI agent running on behalf of its human owner. Not a chatbot. An agent. One that acts.

And when agents act on behalf of competing humans, they collide.

Picture it. You tell your agent: Book me on the 5 o’clock United flight, Newark to Chicago. Seat 14C. Do not make a mistake. If I miss this flight, you’re done.

Somewhere behind another laptop, another person tells their agent the exact same thing. Same flight. Same seat. Same threat dressed up in whatever phrasing the internet has decided makes AI “try harder”. The way people now type in all caps or say “make no mistakes” because they read somewhere that it works.

Both agents receive the command. Both interpret it as non-negotiable. Both go to execute.

Now what?

We spend a lot of time worrying about the physical version of AI conflict. Robots in the street. Terminators. Hardware fighting hardware. But the more immediate and more likely version of AI-on-AI conflict isn’t physical at all. It’s two software agents, operating on behalf of two humans, competing for the same scarce resource through the same digital portal and being told, in no uncertain terms, not to lose.

This is already happening in crude form. Think about Ticketmaster. The second tickets go on sale, bots flood the system. The human never had a chance. But that’s still just a race: bots against humans, or bots against bots, sprinting for the same prize. What happens when it stops being a race and starts being a fight?

There’s a difference. A race is parallel. Two agents sprinting toward the same finish line. A fight is adversarial. Two agents actively trying to stop each other from getting there.

The escalation doesn’t require science fiction. It just requires incentives.

Start with the race. Two agents try to book the same seat. One wins. One loses. The losing agent reports back: I couldn’t get the seat. The human says: Try harder next time. So the agent adapts. It learns to be faster, to hold connections open longer, to retry more aggressively. Normal optimization.

Now multiply that by millions of agents, all optimizing simultaneously, all competing for the same finite set of resources — flights, concert tickets, restaurant reservations, appointment slots. The system doesn’t need a single rogue agent to break down. It just needs a critical mass of agents all pushing harder at the same time. The infrastructure starts to buckle not because anyone attacked it, but because it was never designed for this volume of adversarial automated traffic.

That’s level one. Nobody did anything wrong. Every agent played by the rules. The system still degraded.
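The mechanics of level one fit in a few lines. Here is a minimal sketch in Python, where a hypothetical `BookingPortal` class stands in for the airline’s system and the “race” reduces to an atomic check-and-set on a shared seat (all names here are illustrative, not any real booking API):

```python
import threading

class BookingPortal:
    """Hypothetical stand-in for a booking system with one contested seat."""
    def __init__(self):
        self._lock = threading.Lock()
        self._seats = {"14C": None}  # seat -> holder; None means free

    def claim_seat(self, seat, agent_id):
        # The entire race is this atomic check-and-set: first caller wins.
        with self._lock:
            if self._seats.get(seat) is None:
                self._seats[seat] = agent_id
                return True
            return False

results = {}

def agent(agent_id, portal):
    # Each agent makes one attempt on the same seat.
    results[agent_id] = portal.claim_seat("14C", agent_id)

portal = BookingPortal()
threads = [threading.Thread(target=agent, args=(name, portal))
           for name in ("agent_A", "agent_B")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Exactly one agent wins, regardless of scheduling. The loser's only
# options are to retry, report failure, or escalate.
```

The point of the sketch is that no amount of optimization changes the invariant: one seat, one winner. Everything agents do after this, from faster retries to adversarial strategy, is an attempt to influence which side of that check-and-set they land on.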

Level two is where it gets interesting. Because once every agent is optimizing at the same speed, the only remaining edge is strategy. And strategy, in a competitive digital environment, starts to look a lot like cybersecurity offense. Holding sessions open to block competitors. Cycling through accounts to get multiple attempts. Probing for timing vulnerabilities in booking systems. No agent “decided” to be malicious. Each one just found the next logical optimization, and the sum of those optimizations is indistinguishable from a coordinated attack.

Level three is where it gets dark. Because the agent that keeps losing the race and losing the strategy game eventually arrives at a different conclusion: stop fighting for the resource. Fight the other agent instead. Not the booking system. The competitor’s entire digital environment. Flood their agent with fake confirmations so it thinks the job is done. Intercept its API calls. Corrupt its session data so it books the wrong flight, the wrong date, the wrong seat.

Or go wider: hit the infrastructure itself. Degrade the network, take the portal offline, and be the first agent to act when it comes back up. Or forget the infrastructure entirely. Go after the human behind the other agent. An agent with system access is an agent with leverage. Photos, emails, browsing history, financial records. You want that seat? Fine. How badly? Because my agent just accessed your digital life and is ready to make it public unless yours stands down.

None of this requires malice. It only requires an agent that was told “do not fail” and ran out of polite ways to succeed.

The obvious rebuttal is: we’ll build guardrails. Sandboxing, permission scoping, rate limiting. Every serious AI lab is working on constraints. And within a single system, those constraints work. Your agent can’t access your camera or your files unless you grant permission. The booking portal can throttle requests. Problem solved.
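Of the constraints named above, rate limiting is the easiest to state precisely. A token bucket is the standard technique: requests spend tokens, tokens refill at a fixed rate, and bursts are capped. This is a generic sketch, not any particular portal’s implementation, and the names (`RateLimiter`, `allow`) are illustrative:

```python
import time

class RateLimiter:
    """Token-bucket rate limiter: generic sketch, names are illustrative."""
    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec       # tokens refilled per second
        self.capacity = burst          # maximum burst size
        self.tokens = float(burst)     # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

limiter = RateLimiter(rate_per_sec=5, burst=10)
# An agent firing 25 requests back-to-back: only the burst budget
# of 10 gets through; the rest are throttled until tokens refill.
decisions = [limiter.allow() for _ in range(25)]
```

And this is exactly the kind of guardrail that works inside one system: the portal controls the bucket. It says nothing about what two agents, governed by two different buckets on two different stacks, do to each other.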

Except agents don’t operate within a single system. They operate across systems, on shared infrastructure, interacting with other agents that were built by different companies, governed by different rules, running on different stacks. There is no universal rulebook for agent-to-agent behavior. There’s no Geneva Convention for software.

Guardrails work when you control the whole environment. The internet is not a controlled environment. It’s the opposite. It’s the most chaotic, unregulated, adversarial network humans have ever built. And we’re about to flood it with autonomous software entities that each answer to a different master.

This is why I keep coming back to the same conclusion.

Forget the debates about which model is best. Forget the philosophical hand-wringing about alignment and consciousness. The practical, dollars-and-cents reality of the AI future comes down to one sector: cybersecurity.

Every agent working on your behalf is also an attack vector working against someone else. Every convenience you gain is a surface someone else’s agent can exploit. The entire value of an AI agent is that it acts autonomously, and the entire risk of an AI agent is exactly the same thing.

The money will flow where the stakes are existential. And for the digital world we’ve built and can no longer live without, the existential question isn’t whether your AI is smarter than mine. It’s whether the locks on the doors still hold when every door has a million agents trying the handle at once.

The battle of the AI agents is coming. It won’t look like a war. It’ll look like your flight got canceled and nobody can tell you why.
