WHY THIS MATTERS IN BRIEF
AI agents bring revolutionary capabilities, but they also introduce risks that organisations need to understand and manage.
In the race to automate the workforce, the tech world may have overlooked a crucial step: managing the digital equivalents of employees with the same scrutiny and safeguards as human staff. Now the cybersecurity industry is scrambling to catch up as autonomous AI agents proliferate across corporate networks, and in some cases successfully hack military-grade systems, bringing with them a new category of risk and a striking identity crisis.
These AI agents, powered by generative AI models, are no longer confined to test environments or limited backend tasks. They are booking meetings, drafting emails, querying databases, and, in some cases, making operational decisions without human oversight.
But here’s the rub: they’re doing it all without the digital equivalent of a passport.
“Every AI agent needs an identity — and that identity must be credentialed,” warns David Bradbury, Chief Security Officer at identity giant Okta. “Without that, you lose trust, compliance, and control.”
Unlike traditional user accounts, AI agents can’t click verification links or enter MFA codes. They don’t sleep, don’t hesitate, and — crucially — they don’t always flag when something’s gone wrong.
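In practice, that means agent identities tend to rely on machine-to-machine credentials rather than interactive logins. Below is a minimal sketch of one common pattern, the OAuth 2.0 client-credentials grant, in which an agent holds its own client ID and secret and exchanges them for a short-lived, narrowly scoped access token. The identity-provider URL, client ID, and scope names are hypothetical placeholders for illustration, not any specific vendor's API.

```python
# Sketch: an AI agent authenticating as its own credentialed identity via the
# OAuth 2.0 client-credentials grant, instead of borrowing a human account.
import requests

TOKEN_URL = "https://idp.example.com/oauth2/token"  # hypothetical identity provider
AGENT_CLIENT_ID = "agent-calendar-bot"              # the agent's own identity
AGENT_CLIENT_SECRET = "change-me"                   # in practice, pulled from a secrets manager

def fetch_agent_token(scopes: list[str]) -> str:
    """Exchange the agent's credentials for a short-lived, scoped access token."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "scope": " ".join(scopes),  # least privilege: request only what this task needs
        },
        auth=(AGENT_CLIENT_ID, AGENT_CLIENT_SECRET),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]  # typically expires in minutes, not days

# Example: the agent receives a token that can read calendars and nothing else.
token = fetch_agent_token(["calendar.read"])
```

Because the token is scoped and short-lived, a misbehaving agent can only touch the resources it was explicitly granted, and only until the token expires, which is the machine-identity equivalent of the trust, compliance, and control Bradbury describes.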
At last week’s RSA Conference in San Francisco, one message rang clear across sessions and vendor booths alike: autonomous agents must be governed by a new set of security protocols.
1Password, a firm known for password management, unveiled two tools specifically aimed at developers and IT leaders looking to secure AI identities. Okta, OwnID, and CyberArk have also launched similar offerings in recent months.
Jeff Shiner, CEO of 1Password, put it bluntly: “An agent acts and reasons, and as a result of that, you need to understand what it’s doing — all the time.”
Kevin Bocek, VP of innovation at CyberArk, is advocating for an “AI kill switch” as standard. “If an agent — or worse, its many duplicates — goes rogue, security teams need a one-click way to revoke access instantly.”
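What such a kill switch might look like in code is sketched below, assuming a simple internal denylist that gateways consult on every agent request; the function names and in-memory store are illustrative assumptions, not a description of CyberArk's or any other vendor's product.

```python
# Sketch: a one-call "kill switch" that revokes an agent identity, and every
# duplicate running under it, by adding it to a denylist checked per request.
import time

REVOKED_AGENTS: dict[str, float] = {}  # agent_id -> revocation timestamp (in-memory for this sketch)

def kill_agent(agent_id: str) -> None:
    """One-click revocation: every copy of this agent loses access immediately."""
    REVOKED_AGENTS[agent_id] = time.time()
    # In practice you would also invalidate cached tokens, terminate live
    # sessions, and alert the security team for audit.

def is_request_allowed(agent_id: str) -> bool:
    """Gateways call this on every agent request before touching company data."""
    return agent_id not in REVOKED_AGENTS

# Example: a rogue agent and all of its duplicates are cut off with one call.
kill_agent("agent-calendar-bot")
assert not is_request_allowed("agent-calendar-bot")
```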
In some ways, this isn’t new. Enterprises have long managed so-called nonhuman identities — bot accounts, VPN gateways, file servers. These entities need usernames, passwords, and access controls. But the stakes are different now.
“These agents are fast, tireless, and potentially have access to enormous swathes of company data,” said Jason Clinton, CISO at Anthropic, speaking at the Coalition for Secure AI event. “If they go wrong, they go wrong at scale.”
Security professionals are worried that companies are deploying agents faster than they can secure them — and often without involving cybersecurity teams in the planning.
“They’re not in the room,” Bocek said. “And these conversations are happening very quickly.”
The urgency isn’t theoretical. Deloitte predicts that a quarter of companies using generative AI will begin piloting autonomous agents in 2025. That figure jumps to 50% by 2027.
Bradbury argues this demands a reimagining of trust: “You can’t just copy-paste human security onto nonhuman agents. You need identity frameworks designed from the ground up for machines that think.”
And that thinking may soon extend to managing one another.
Clinton floated a provocative future scenario: “We could be looking at a future workplace where AI agents manage other agents. That means your junior staff may need management training — not for people, but for virtual coworkers.”
Cybersecurity isn’t just about building walls anymore — it’s about defining and defending who gets through the gates. As AI agents evolve from digital assistants into autonomous actors, the companies deploying them face a stark choice: secure their identities now, or risk a future where their most efficient workers become their greatest vulnerabilities.