Mistaken Identity? AI Agent Oversight Key To Success

By Kyle Alspach | August 6, 2025 | 5 min read

Experts warn AI agent autonomy demands robust identity and access oversight to prevent major security risks and unlock productivity gains.
Granting AI increased autonomy poses critical security risks, opening the door for solution providers to tackle access privileges and more.

If the tech industry gets things right with AI agents, it could soon be possible to grant significant autonomy to entire teams of virtual assistants to achieve “very complex goals” independently, according to Accenture’s Damon McDougald.

“That’s where everyone has the vision and desire for agents to go,” said McDougald, global cyber protection lead at Dublin-based Accenture, No. 1 on CRN’s 2025 Solution Provider 500.

However, if industry efforts come up short, especially regarding security, the AI agent revolution and its promise of unprecedented productivity gains could face major roadblocks, cybersecurity experts and industry executives told CRN.

This is particularly true for security around identities and management of access privileges, an already challenging area for organizations managing human workers. The risks multiply exponentially when it comes to autonomous AI workers. In other words, to truly realize the industry vision for AI agents, identity and access considerations must be paramount.

“You need to be thinking from an identity standpoint of how do [agents] get access to things at different times for different time lengths?” McDougald said. “That’s different than what we usually do today for humans.

“[However], if we get agents right, we’ll see a scale [of productivity improvements] that we haven’t seen before,” he added. “At some point in time, there will potentially be billions of agents running around the internet. There will be a marketplace of agents, and we’re only limited by our compute and imagination on how agents will work. And so the identity tools need to fit in that reality today.”

The potential risks from insecure or misconfigured AI agents are easy to envision, especially from an identity and access perspective. AI agents connect to many systems and data sources autonomously to accomplish tasks. If that access goes beyond what was authorized, a data breach is likely.
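To make McDougald’s point about time-bound access concrete, here is a minimal sketch of what short-lived, scoped credentials for an agent might look like. Everything in it (the AgentGrant record, the issue_grant and authorize helpers, and the scope names) is illustrative for this article, not any vendor’s actual API.

```python
# Minimal sketch of time-boxed, scoped credentials for an AI agent.
# All names here are illustrative, not taken from a real vendor SDK.
import secrets
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class AgentGrant:
    agent_id: str
    scopes: frozenset[str]          # e.g. {"crm:read", "tickets:write"}
    expires_at: datetime            # grants are short-lived by default
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))

def issue_grant(agent_id: str, scopes: set[str], ttl_minutes: int = 15) -> AgentGrant:
    """Mint a credential that covers only the task at hand and expires quickly."""
    return AgentGrant(
        agent_id=agent_id,
        scopes=frozenset(scopes),
        expires_at=datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    )

def authorize(grant: AgentGrant, required_scope: str) -> bool:
    """Deny if the grant has expired or never covered the requested scope."""
    if datetime.now(timezone.utc) >= grant.expires_at:
        return False
    return required_scope in grant.scopes

# Usage: an agent gets 15 minutes of read access to the CRM, nothing more.
grant = issue_grant("invoice-reconciler-01", {"crm:read"})
assert authorize(grant, "crm:read")
assert not authorize(grant, "crm:delete")
```

The point of the design is the one McDougald raises: unlike a human employee’s standing account, the agent’s access exists only for a defined window and a defined purpose.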
“[With AI agents], the need for the access to data is going to typically be greater,” said Matt Shufeldt, chief solutions officer at San Diego-based systems integrator Evotek, No. 92 on CRN’s 2025 Solution Provider 500. “And because of that, you need to solve for those data management issues more quickly.”

Compounding the issue is that agents execute tasks without constant human oversight, meaning breaches could go unnoticed for some time. The risks echo those from the GenAI era but are magnified several times over for agents.
“Whatever we thought were the problems in the GenAI or LLM world, agentic is multiple times that,” said Ankur Shah, co-founder and CEO at Straiker, an AI security startup in Sunnyvale, Calif.
“Agents are basically LLMs that can reason, make decisions and take action on [those decisions]. They’re chained together, so they are nothing but GenAI on steroids,” Shah said. “And so you also have to think about putting your security on steroids in the agentic world.”

Solution providers that help organizations address agentic AI challenges will find abundant opportunity.
“That’s where I see us just having a ton of opportunity,” said Kevin Lynch, CEO of Denver-based Optiv, No. 28 on CRN’s 2025 Solution Provider 500.
“I think it starts in our advisory motion by looking at helping the client to think holistically [about agentic AI],” Lynch added. “We’re helping that client to think through the implications of their operational choices.”

Experts agree that identity and access oversight will be central to any AI agent strategy. Traditional identities tied to humans will give way to new paradigms for managing, governing, and authorizing identities at scale, according to Alex Bovee, co-founder and CEO of ConductorOne, an identity governance startup focused on agentic AI.
“It’s just completely different patterns and paradigms for how you would manage those identities at scale,” Bovee said.

Security starts with visibility, which is more critical than ever for agentic technologies. Real-time oversight of AI agent actions will be pivotal, likely requiring an additional enforcement layer to keep agents aligned with expectations.
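As a rough illustration of that enforcement layer, the sketch below checks every action an agent proposes against a policy table and logs the decision for audit. The Action shape, the POLICY table, and the enforce function are assumptions made for this example, not a real product’s API.

```python
# Illustrative runtime enforcement layer: every action an agent proposes
# is checked against policy and logged before it is allowed to execute.
import logging
from dataclasses import dataclass
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guard")

@dataclass(frozen=True)
class Action:
    agent_id: str
    tool: str        # e.g. "tickets", "database"
    operation: str   # e.g. "read", "delete"

# Policy: the set of (tool, operation) pairs each agent may perform.
POLICY: dict[str, set[tuple[str, str]]] = {
    "support-bot": {("tickets", "read"), ("tickets", "update")},
}

def enforce(action: Action, execute: Callable[[], str]) -> str | None:
    """Run the action only if policy allows it; log either way for audit."""
    allowed = (action.tool, action.operation) in POLICY.get(action.agent_id, set())
    log.info("agent=%s tool=%s op=%s allowed=%s",
             action.agent_id, action.tool, action.operation, allowed)
    if not allowed:
        return None  # blocked; in practice this would also raise an alert
    return execute()

# A permitted action runs; an out-of-policy one is blocked and logged.
enforce(Action("support-bot", "tickets", "read"), lambda: "ticket #123")
enforce(Action("support-bot", "database", "delete"), lambda: "dropped table")
```

The audit log matters as much as the block itself: because agents act without constant human oversight, the trail of allowed and denied actions is what makes after-the-fact review possible.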
“[With AI agents], now you have not just a user visibility and transparency challenge,” said Ben Prescott, head of AI solutions at Irvine-based Trace3, No. 34 on CRN’s 2025 Solution Provider 500. “Now you [have to know] what is the agentic solution itself actually planning and executing? And how do we understand what the right output is that is actually generating within that agentic workflow?”

One startup working with Trace3 is Descope, which aims to be a go-to agentic identity provider. Descope enables security teams to manage which agents connect to which tools and their permission levels.
“We are able to do pretty much a full life-cycle management on the agent: creating an agent, onboarding an agent, revoking an agent, removing permissions of an agent,” said Descope co-founder Rishi Bhargava.

Security teams set controls and policies while developers use Descope’s SDKs to build agents.
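Bhargava’s life-cycle list maps naturally onto a small registry abstraction. The sketch below is a hypothetical illustration of those operations (onboard, remove permissions, revoke); the names are invented for this example and do not reflect Descope’s actual SDK.

```python
# Hypothetical sketch of agent life-cycle management: create/onboard,
# strip individual permissions, and revoke. Names are illustrative only.
from dataclasses import dataclass, field
from enum import Enum

class AgentStatus(Enum):
    ONBOARDED = "onboarded"
    REVOKED = "revoked"

@dataclass
class AgentRecord:
    agent_id: str
    tools: set[str] = field(default_factory=set)   # tools this agent may call
    status: AgentStatus = AgentStatus.ONBOARDED

class AgentRegistry:
    """Central inventory so security teams can see and control every agent."""

    def __init__(self) -> None:
        self._agents: dict[str, AgentRecord] = {}

    def onboard(self, agent_id: str, tools: set[str]) -> AgentRecord:
        record = AgentRecord(agent_id, set(tools))
        self._agents[agent_id] = record
        return record

    def remove_permission(self, agent_id: str, tool: str) -> None:
        self._agents[agent_id].tools.discard(tool)

    def revoke(self, agent_id: str) -> None:
        self._agents[agent_id].status = AgentStatus.REVOKED
        self._agents[agent_id].tools.clear()  # no residual access after revocation

registry = AgentRegistry()
registry.onboard("expense-auditor", {"erp:read", "slack:post"})
registry.remove_permission("expense-auditor", "slack:post")
registry.revoke("expense-auditor")
```

The alternative Bhargava describes, agents deployed with no inventory at all, is exactly what a registry like this is meant to prevent: if an agent isn’t in the table, the security team has no way to see or revoke it.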
“The alternative is the security team has no idea about what agents got deployed, what tools they’re connected to, what level of permission these agents have, and they have no control and no visibility,” Bhargava said. “We are starting to see customers already engage [around this issue]. They’re saying, ‘We are blocking our developers from deploying agents, but we want to enable them.’ This is the way to securely enable them.”

Established identity security vendors such as SailPoint, Okta, and Ping Identity have also embraced the challenge. A recent SailPoint survey found 80% of respondents reported AI agents taking unintended actions, such as accessing unauthorized systems or sharing sensitive data.

SailPoint CEO Mark McClain said the company is focused on solving “really hard problems in the world of agentic.” AI agents will require controlled access levels, a specialty of SailPoint.

Okta CEO Todd McKinnon emphasized securing API access tokens, which are often scattered and vulnerable, as a foundational step before agent deployment. Ping Identity CEO Andre Durand highlighted the need for automation in identity services to manage the scale of agent operations.
“[Ultimately], all of these agents will have incredible access to data on our behalf and will need to be authenticated, authorized and governed,” Durand said.

Microsoft has also prioritized AI agent security through its Entra ID technology. In May 2025, Microsoft unveiled Entra Agent ID to improve visibility and apply identity and access policies via Conditional Access, helping organizations manage rapid agent proliferation. Alex Simons, Microsoft’s corporate VP for product management and identity security, said the goal is to let customers “confidently start adopting agentic AI.”

Microsoft’s AI Red Team, led by Ram Shankar Siva Kumar, focuses on uncovering agent-specific security risks such as memory extraction, prompt-injection attacks, and poisoning.
“Proactively red-teaming [the technology] before it reaches the hands of the customer is so vital for us,” Kumar said.

Agentic technology also demands new transparency and security layers. Daniel Kendzior, global data and AI security practice leader at Accenture, explained that agents often spawn new agents to complete tasks, requiring an “agentic platform” for management, security, and control.

Standards for agent interaction are emerging. Anthropic’s Model Context Protocol is popular but lacked security specifications until recently. Google Cloud’s Agent2Agent protocol was designed with secure-by-default, enterprise-grade authentication and authorization. McDougald of Accenture noted that Agent2Agent enables agents to authenticate, authorize, and create secure communication tunnels, facilitating an emerging “agent market.”

Evotek’s Shufeldt summed up the complexity:
“The use cases with AI agents will be far more complex than with the typical LLM-powered applications available so far. And because they’re more complex, your risks are going to be greater.”
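To ground the protocol discussion above, here is a toy sketch of the kind of message authentication an Agent2Agent-style handshake is meant to standardize: each agent signs what it sends so the receiving agent can verify the sender before acting. This is a simplified HMAC scheme for illustration only, not the actual Agent2Agent specification, which defines enterprise-grade authentication and authorization.

```python
# Toy illustration of agent-to-agent message authentication: the sender
# signs each message so the receiver can verify it before acting on it.
import hashlib
import hmac
import json

# In practice, per-agent keys would come from an identity provider;
# a single shared secret is used here only to keep the sketch short.
SECRET = b"provisioned-out-of-band"

def sign(sender: str, payload: dict) -> dict:
    """Serialize the message deterministically and attach an HMAC tag."""
    body = json.dumps({"sender": sender, "payload": payload}, sort_keys=True)
    mac = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "mac": mac}

def verify(message: dict) -> dict | None:
    """Return the parsed message if the tag checks out, else reject it."""
    expected = hmac.new(SECRET, message["body"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["mac"]):
        return None  # unauthenticated or tampered; refuse to act
    return json.loads(message["body"])

msg = sign("scheduler-agent", {"task": "book_meeting", "when": "2025-08-07T10:00"})
assert verify(msg) is not None

msg["body"] = msg["body"].replace("book_meeting", "wire_funds")  # tampering
assert verify(msg) is None
```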
Originally published at CRN on August 6, 2025.