Should AI Agents Require a License?
We’re entering a world of a billion AI agents transacting, negotiating, and making decisions.
As everyone debates when AGI will arrive (Tonight? A decade from now?), a more immediate shift is about to radically transform our economies: AI agents. Tech companies insist this is the next big frontier, and while we should sometimes take their claims with a few grains of salt, they’re not wrong.
AI agents aren’t just chatbots or glorified task managers—they act on our behalf. Instead of suggesting flights, they’ll book the trip, upgrade your seat, and negotiate an early check-in. We’ll each have an army of AI agents transacting, negotiating, and making decisions for us. Businesses will be deploying AI agents to manage inventory, place orders, and oversee full-scale projects.
But the legal and financial systems we rely on today weren’t built for a world where software acts like a billion synthetic people. In the 20th century, we invented the corporation, and with it, corporate accountability—if a company did or said something, it was responsible. That’s why so many people on panels today start with, “These views are my own, not my employer’s.” Companies wanted to make sure there was no confusion—if an employee went rogue, that was on them, not the corporation.
Now, imagine a world where companies aren’t just represented by human employees but by millions of AI agents acting on their behalf. If an agent misleads a customer, breaches a contract, or makes an unauthorized transaction, who is responsible? The company? The AI provider? The employee who set it up? Or does the company claim, “That wasn’t us, that was our AI agent”?
The Accountability Problem
For now, let’s set aside the busload of challenges still plaguing LLMs. AI agents (in most scenarios) should be a welcome innovation—I, for one, would happily outsource the four hours I just spent ordering a lamp, stocking up on winter skincare, and booking flights.
But today, when I book a flight, there’s an entire system of rules I have to follow and checks in place to make sure I do. If I use a stolen credit card, there are consequences if I’m caught. Air Canada has to verify my payment details. My bank already confirmed I’m a real person (thanks to “know your customer” laws). And I personally double-check that I’m flying to Toronto—not Abu Dhabi—before clicking “confirm.”
So what happens when an AI agent makes these decisions for me? Who’s verifying it?
Now scale that up to businesses. We’re entering a world where companies could delegate entire workstreams to thousands of AI agents, which then interact with other AI agents across industries. At what point does responsibility blur? Where do the boundaries of the firm end?
A Licensing System for AI Agents?
Not every AI agent needs a license. An agent that autofills forms or helps debug code? No problem. But an AI agent that spends money, signs contracts, or moves financial assets (let’s call this level 5 autonomy)? That may be a different story.
A back-of-the-envelope framework for how a licensing market could take shape: One group of companies (like OpenAI and Anthropic) builds the AI models. Another group—let’s call them BISA, the AI version of Visa—acts as a licensing authority for fully autonomous, transaction-capable AI agents. And then there’s you and me, subscribing to or purchasing these agents.
If I want an AI agent that can transact, it would work similarly to a Visa card. I have to verify I'm a legal person with BISA—just like opening a bank account. My licensed agent then acts as my financial proxy, meaning I’m responsible for its actions, just like I am with my credit card, and its license is traceable back to me. If it books the wrong flight, that’s on me. But it's on the airline to ensure the agent it’s transacting with has a valid license. If I misuse my agent for fraud, its license can be revoked. But if the agent is hacked? That’s where BISA steps in—shutting it down and handling fraud protections, just like Visa does today.
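To make the handshake concrete, here is a minimal sketch of how a BISA-style registry could issue and verify agent licenses. Everything here is hypothetical: the class and method names (AgentLicense, BisaRegistry, verify, revoke) are illustrative placeholders, not an existing API or standard.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from uuid import uuid4


@dataclass
class AgentLicense:
    """A transaction license tying one agent back to a verified, accountable holder."""
    license_id: str
    holder_id: str          # the verified legal person or entity behind the agent
    agent_id: str
    expires_at: datetime
    revoked: bool = False


class BisaRegistry:
    """Hypothetical BISA-style authority: verifies holders, issues and revokes licenses."""

    def __init__(self) -> None:
        self._licenses: dict[str, AgentLicense] = {}

    def issue_license(self, holder_id: str, agent_id: str) -> AgentLicense:
        # In practice this step would sit behind KYC checks, like opening a bank account.
        lic = AgentLicense(
            license_id=str(uuid4()),
            holder_id=holder_id,
            agent_id=agent_id,
            expires_at=datetime.utcnow() + timedelta(days=365),
        )
        self._licenses[lic.license_id] = lic
        return lic

    def verify(self, license_id: str, agent_id: str) -> bool:
        """What a merchant (say, an airline) would call before accepting a transaction."""
        lic = self._licenses.get(license_id)
        return (
            lic is not None
            and not lic.revoked
            and lic.agent_id == agent_id
            and lic.expires_at > datetime.utcnow()
        )

    def revoke(self, license_id: str) -> None:
        """Invoked on fraud or a confirmed compromise of the agent."""
        if license_id in self._licenses:
            self._licenses[license_id].revoked = True
```

The design point is that verify() answers only one narrow question: is this agent currently licensed and traceable to an accountable holder? Liability for what the agent actually buys stays with the holder, just as it does with a credit card today.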
For businesses, a similar but slightly different logic applies. A company deploying 100,000 AI agents wouldn’t need a license for every agent—an AI reviewing résumés, for example, wouldn’t require one. But under certain conditions, such as handling high-value transactions or negotiating outside the firm, those agents would need to be tied to a corporate license to establish clear accountability, whether the company developed them in-house or purchased them from a third party.
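One way to picture that threshold, purely as a sketch: a predicate deciding whether a given task pushes an agent into licensed territory. The task fields and the dollar cutoff below are assumptions for illustration, not proposed rules.

```python
from dataclasses import dataclass

HIGH_VALUE_THRESHOLD = 10_000  # illustrative cutoff, in dollars


@dataclass
class AgentTask:
    description: str
    transaction_value: float     # 0 for non-financial tasks
    crosses_firm_boundary: bool  # negotiating or contracting with outside parties
    signs_contracts: bool


def requires_corporate_license(task: AgentTask) -> bool:
    # Screening résumés internally: no license needed.
    # Negotiating a supplier contract or moving money: tie it to a corporate license.
    return (
        task.transaction_value >= HIGH_VALUE_THRESHOLD
        or task.signs_contracts
        or task.crosses_firm_boundary
    )
```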
Who’s responsible when something goes wrong?
If an AI agent makes a bad trade or places an incorrect order, that’s on the company—it’s their agent, their liability.
But if the agent is hacked, responsibility depends on whether the company built the agent in-house or licensed it from a third-party provider. If the breach happened because the company failed to secure its systems, that’s their problem. But if the hack stemmed from a flaw in the third-party provider’s security infrastructure, the provider could be on the hook.
And if the problem traces back to the AI model itself? That’s murkier—AI companies today tend to push liability onto businesses using their models.
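Collapsing those cases into a rough decision table (a sketch of the allocation described above; real liability would of course turn on contracts and jurisdiction):

```python
def responsible_party(failure_origin: str) -> str:
    """Map where a failure originated to who likely carries the liability."""
    table = {
        "agent_error":            "deploying company",     # bad trade, wrong order
        "company_security_lapse": "deploying company",     # the firm failed to secure its systems
        "provider_security_flaw": "third-party provider",  # flaw in the vendor's infrastructure
        "model_defect":           "unsettled",             # AI labs currently push this onto deployers
    }
    return table.get(failure_origin, "unsettled")
```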
Building Regulatory Markets for AI Agents
This doesn’t even begin to touch on AI agents acting as medical providers, legal advisors, therapists—or conducting high-risk scientific research. How will professional licensing systems adapt when AI is making life-altering decisions?
We should also consider licensing or tracking requirements for AI agents shaping online discourse. As AI-generated content floods social media, distinguishing between human and AI contributions will be critical to preserving trust, transparency, and a healthy information ecosystem.
As economist and legal scholar Gillian Hadfield argues, we need to invent new regulatory markets for the age of AI. The corporation was a 20th-century legal innovation, designed to establish accountability and structure for economic activity.
In the 21st century, AI agents will take on financial, legal, and operational roles that corporations never envisioned. Over time, emerging infrastructures like blockchain could provide a powerful framework for AI accountability and traceability.
The question isn’t if we need a new system—it’s how fast we can build one that works.