AI Market Logo
BTC $43,552.88 -0.46%
ETH $2,637.32 +1.23%
BNB $312.45 +0.87%
SOL $92.40 +1.16%
XRP $0.5234 -0.32%
ADA $0.8004 +3.54%
AVAX $32.11 +1.93%
DOT $19.37 -1.45%
MATIC $0.8923 +2.67%
LINK $14.56 +0.94%
HAIA $0.1250 +2.15%
BTC $43,552.88 -0.46%
ETH $2,637.32 +1.23%
BNB $312.45 +0.87%
SOL $92.40 +1.16%
XRP $0.5234 -0.32%
ADA $0.8004 +3.54%
AVAX $32.11 +1.93%
DOT $19.37 -1.45%
MATIC $0.8923 +2.67%
LINK $14.56 +0.94%
HAIA $0.1250 +2.15%
Mistaken Identity? AI Agent Oversight Key To Success
identity-access-management

Mistaken Identity? AI Agent Oversight Key To Success

Experts warn AI agent autonomy demands robust identity and access oversight to prevent major security risks and unlock productivity gains.

August 7, 2025
5 min read
Kyle Alspach

Experts warn AI agent autonomy demands robust identity and access oversight to prevent major security risks and unlock productivity gains.

Mistaken Identity? AI Agent Oversight Key To Success

By Kyle Alspach — August 6, 2025 Granting AI increased autonomy poses critical security risks, opening the door for solution providers to tackle access privileges and more. If the tech industry gets things right with AI agents, it could soon be possible to grant significant autonomy to entire teams of virtual assistants to go out and achieve “very complex goals” all on their own, according to Accenture’s Damon McDougald. “That’s where everyone has the vision and desire for agents to go,” said McDougald, global cyber protection lead at Dublin, Ireland-based Accenture, No. 1 on CRN’s 2025 Solution Provider 500. However, if industry efforts come up short—especially when it comes to security—the AI agent revolution and its promise of unprecedented productivity gains could hit major roadblocks, cybersecurity experts and industry executives told CRN. This is especially the case when it comes to security for identities and management of access privileges, already a notoriously difficult area for organizations when it comes to managing human workers. The issue could be exponentially riskier when it comes to autonomous AI workers. In other words, to truly turn the industry vision for AI agents into reality, identity and access considerations should be paramount, experts said.
“You need to be thinking from an identity standpoint of how do [agents] get access to things at different times for different time lengths?” McDougald said. “That’s different than what we usually do today for humans.
“[However], if we get agents right, we’ll see a scale [of productivity improvements] that we haven’t seen before,” he said. “At some point in time, there will potentially be billions of agents running around the internet. There will be a marketplace of agents, and we’re only limited by our compute and imagination on how agents will work. And so the identity tools need to fit in that reality today.”
The potential risks from insecure or misconfigured AI agents are not hard to envision, particularly from an identity and access perspective. The entire purpose of agents is to connect to many different systems and data sources to autonomously accomplish tasks. But if such access is not actually supposed to be authorized, a data breach is the probable result.
“[With AI agents], the need for the access to data is going to typically be greater,” said Matt Shufeldt, chief solutions officer at San Diego-based systems integrator Evotek, No. 92 on CRN’s 2025 Solution Provider 500. “And because of that, you need to solve for those data management issues more quickly.”
Exacerbating the issue is the fact that because agents are charged with executing tasks without constant human oversight, breaches could go entirely unnoticed for a period of time. The risks may sound familiar—akin to those that the industry grappled with from the arrival of GenAI—but are magnified several times over when it comes to agents, experts said.
“Whatever we thought were the problems in the GenAI or LLM world, agentic is multiple times that,” said Ankur Shah, cofounder and CEO at Straiker, a Sunnyvale, Calif.-based startup focused on AI security.
“Agents are basically LLMs that can reason, make decisions and take action on [those decisions]. They’re chained together, so they are nothing but GenAI on steroids,” Shah said. “And so you also have to think about putting your security on steroids in the agentic world.”
Solution providers that can help organizations deal with the challenges around agentic AI will find no shortage of opportunity going forward, solution provider executives told CRN. The bottom line is that the idea of granting AI greater autonomy than ever before raises new security questions that most organizations are not equipped to answer on their own, executives said.
“That’s where I see us just having a ton of opportunity,” said Kevin Lynch, CEO of Denver-based Optiv, No. 28 on CRN’s 2025 Solution Provider 500.
“I think it starts in our advisory motion by looking at helping the client to think holistically [about agentic AI],” Lynch said. “We’re helping that client to think through the implications of their operational choices.”
Experts told CRN that oversight of identity and access issues will be central to any strategy around enabling AI agents. Whereas traditional identities have been tied to humans, the larger scale on which agents will operate means major changes ahead for management, governance and authorization of identities, according to Alex Bovee, co-founder and CEO of ConductorOne, an identity governance startup focused on agentic AI with headquarters in Portland, Ore., and San Francisco.
“It’s just completely different patterns and paradigms for how you would manage those identities at scale,” Bovee said.
Indeed, the maxim that security is only possible when you first have visibility seems truer than ever when it comes to agentic technologies, according to vendor and solution provider executives. Real-time oversight around the actions taken by AI agents will be pivotal, which will likely mean adding an extra layer of enforcement on top of AI agents that will ensure they are not going astray from what is expected, Bovee said.
“[With AI agents], now you have not just a user visibility and transparency challenge,” said Ben Prescott, head of AI solutions at Irvine, Calif.-based Trace3, No. 34 on CRN’s 2025 Solution Provider 500. “Now you [have to know] what is the agentic solution itself actually planning and executing? And how do we understand what the right output is that is actually generating within that agentic workflow?”
One identity security startup working with Trace3 is Descope, which is seeking to become a go-to agentic identity provider for the coming era of AI agents, according to Descope co-founder Rishi Bhargava. Los Altos, Calif.-based Descope is working to give security teams the ability to manage which agents are authorized to connect to which tools inside their organization, as well as the level of permissions the agents receive from the tools, Bhargava said.
“We are able to do pretty much a full life-cycle management on the agent: creating an agent, on-boarding an agent, revoking an agent, removing permissions of an agent,” he said.
The idea, Bhargava said, is that security teams will put in controls and policies for agentic while configuring the dozens of tools involved with creating AI agents. Then, the developers can use Descope’s SDKs to build the agents, he said. If this type of process is not followed, the security risks can be substantial, according to Bhargava.
“The alternative is the security team has no idea about what agents got deployed, what tools they’re connected to, what level of permission these agents have, and they have no control and no visibility,” he said. “We are starting to see customers already engage [around this issue]. They’re saying, ‘We are blocking our developers from deploying agents, but we want to enable them.’ This is the way to securely enable them.”
The larger, well-established identity security vendors—including SailPoint, Okta and Ping Identity—also have quickly embraced the opportunity to help solve some of the foremost agentic security challenges, according to the CEOs of the companies. A recent survey conducted by SailPoint found 80 percent of respondents reporting that AI agents had taken actions that were not intended, such as accessing an unauthorized system or sharing sensitive data. In an interview with CRN, SailPoint founder and CEO Mark McClain said that the Austin, Texas-based vendor is heavily focused right now on addressing many of the “really hard problems in the world of agentic.”
For instance, AI agents will need to be allotted specific levels of controlled access to certain systems or data, McClain said, something that SailPoint has long specialized in for human workers.
“We have to understand an incredibly wide array of [factors] in these large-scale enterprises—and we have to go deep in all of those things to do what we do, to control what you can actually do inside that application,” he said.
San Francisco-based Okta, meanwhile, is putting significant focus into helping developers to build agents that will have strong authentication while using APIs in a secure way, according to Okta co-founder and CEO Todd McKinnon.
There’s no question that it would pose a massive security risk if an agent were compromised and exploited, but there are a number of basic steps that can be taken before even getting to that issue, McKinnon told CRN.
“The basics are you have a bunch of API access tokens strewn all over your company, whether it’s in emails or in Slack or in source code control. You’ve got to get those under control and clean those up,” he said. “Because once you start having these agents, the number of those things is going to be far greater, so the risk is going to be higher.”
For Denver-based Ping Identity, capabilities aimed at enabling adoption of AI agents include helping companies to build automation into their identity services, taking many manual steps out of the process, according to founder and CEO Andre Durand.
This is crucial because the huge scale that agents will be operating on in the future will make it impossible for humans to manage the associated identity and authentication risks without significant automation, Durand said.
“[Ultimately], all of these agents will have incredible access to data on our behalf and will need to be authenticated, authorized and governed,” he said.
The identity and access challenges around AI agents have also received significant attention from Microsoft, whose Entra ID technology is ubiquitous in the business world. In May, Redmond, Wash.-based Microsoft said it is seeking to proactively eliminate top security issues related to the growing adoption of AI agents with the unveiling of its Entra Agent ID offering. Entra Agent ID enables organizations to gain improved visibility into agents while also allowing for application of identity and access policies through Microsoft’s Conditional Access capabilities, according to Alex Simons, corporate vice president for product management and identity security at Microsoft. The capabilities simplify management and security for AI agents and are crucial because “the scale of the sprawl is going to be so big [and happen] so fast” within many organizations, Simons said. The goal of Entra Agent ID, initially available in public preview, will ultimately be to allow customers to “confidently start adopting agentic AI,” he said. On the other end of the spectrum, Microsoft has also been focusing on uncovering security risks and vulnerabilities that are specific to agents, including through red-team assessments, according to Ram Shankar Siva Kumar, head of Microsoft’s AI Red Team. The unique challenges posed by AI agents include their ability to remember information and make decisions, as well as their interactions with other agents, he noted. Key threats include the potential for attackers to “poison” agents, extract sensitive data from an agent’s memory or launch a prompt-injection attack, where the threat actor seeks to manipulate an agent by inserting malicious instructions, Kumar said. The stakes are high when it comes to rooting out security risks posed by agents, in part because organizations will need to know they can trust agents to maximize their value, he said.
“Proactively red-teaming [the technology] before it reaches the hands of the customer is so vital for us,” Kumar said.
The arrival of agentic technology also creates scenarios that will make transparency and security even more critical, experts said. For instance, at Accenture, “our perspective is that you’re going to quickly see hundreds and thousands of agents being created—not only because the technology is very powerful, but by nature, for an agent to complete a task, sometimes it will have to spawn a new agent,” said Daniel Kendzior, global data and AI security practice leader at Accenture.
“For your ability to do that, you have to now create, effectively, a new layer in your stack, which we think of as an agentic platform,” Kendzior said. Notably, such a platform should at the same time provide a mechanism for management, security and control of the agents, he said.
A key piece of the puzzle is undoubtedly also the emergence of standards focused on driving agent interaction, experts said. Anthropic’s Model Context Protocol has become a popular framework for communications between AI models and other systems since its introduction in November 2024, but it has also lacked security specifications until recently. Another protocol for agentic communications, Agent2Agent, originally introduced by Google Cloud, was built with a secure-by-default approach from the start, according to the tech giant. The Agent2Agent protocol is “designed to support enterprise-grade authentication and authorization,” Google Cloud said in its post announcing the protocol in April. To use the protocol, authentication and authorization between agents is required, noted McDougald of Accenture, which was among the partners that worked on the Agent2Agent project. From there, “you can then create a secure tunnel between the agents so the communication between them is secure as well,” he said. The protocol “exposes an agent on the internet and says, ‘This is what the functionality of this agent can [provide].’ So agents can discover agents,” he added. “It looks like it’s the beginning of establishing an agent market.” It’s now abundantly clear that the use cases with AI agents will be far more complex than with the typical LLM-powered applications available so far, according to Evotek’s Shufeldt.
“And because they’re more complex,” he said, “your risks are going to be greater.”

Frequently Asked Questions (FAQ)

What are the primary security risks associated with autonomous AI agents?

The primary security risks involve mistaken identity and improper management of access privileges. If an AI agent's identity is not properly secured or if it's granted excessive access, it can lead to data breaches and unauthorized access to sensitive systems. The autonomous nature of these agents means breaches could go unnoticed for extended periods.

How do AI agents differ from traditional human users in terms of identity and access management?

AI agents require a more dynamic approach to identity and access management. Unlike humans, agents may need varying levels of access for different durations, depending on the complex goals they are tasked with achieving. This necessitates new patterns and paradigms for managing identities at scale.

What are the potential consequences of insecure AI agents?

Insecure AI agents can result in significant data breaches, as their core function is to connect to numerous systems and data sources. If unauthorized access is granted, sensitive information can be compromised. Furthermore, the autonomous operation of these agents means that breaches might not be immediately detected, exacerbating the potential damage.

What is the role of solution providers in securing AI agents?

Solution providers play a crucial role in helping organizations navigate the complexities of AI agent security. They can assist with establishing robust identity and access management strategies, ensuring proper governance and authorization for AI agents, and implementing real-time oversight mechanisms to prevent agents from going astray.

How can organizations prepare for the security challenges posed by AI agents?

Organizations need to prioritize identity and access considerations from the outset of AI agent implementation. This involves thinking holistically about how agents obtain and manage access to resources, ensuring visibility into their actions, and potentially implementing additional layers of security and enforcement to govern their behavior.

Crypto Market AI's Take

The increasing autonomy of AI agents, as highlighted in this article, presents a significant shift in how we interact with and secure digital systems. From a cryptocurrency market perspective, this has profound implications. The ability of AI agents to autonomously execute complex tasks could revolutionize trading strategies, market analysis, and even the management of digital assets. However, as the article emphasizes, the security and identity management of these agents are paramount. If not properly secured, compromised AI agents could pose a substantial risk to individual and institutional crypto holdings. Our platform at Crypto Market AI focuses on providing secure and intelligent solutions for navigating the cryptocurrency landscape, including advanced AI-driven tools for market analysis and trading. Understanding and mitigating these new security challenges is crucial for harnessing the full potential of AI in the digital asset space. We believe robust identity verification and access control are as critical for AI agents as they are for human traders.

More to Read:


Originally published at CRN on August 6, 2025