New Survey Reveals AI Agents Are Operating as “Digital Employees” While 85% of Organizations Lack Proper Security Controls
BeyondID, a leading AI-powered Managed Identity Solutions Provider (MISP), released a new report warning that AI agents may be emerging as the next major insider threat to enterprise security. The report surveyed US-based IT leaders and uncovered a stark contradiction in how organizations approach AI security.
The report, “AI Agents: The New Insider Threat?”, reveals a widening gap between AI deployment and cybersecurity preparedness. While 85% of organizations say they are “ready for AI in security,” fewer than half monitor access or behavior for the AI systems they deploy. This highlights a dangerous blind spot in enterprise cybersecurity as AI agents operate like digital employees across networks. The survey found that companies are embracing AI for threat detection but ignoring AI as a potential threat itself.
“AI is no longer just a tool; it’s acting like a user. But most security teams aren’t treating it like one,” said Arun Shrestha, CEO of BeyondID. “AI agents are logging in, accessing sensitive systems, and making decisions just like human employees, but most security teams are still treating them like static infrastructure. This disconnect is creating a massive security vulnerability that’s hiding in plain sight.”
Key findings include:
- AI agents are performing sensitive tasks like logging in, accessing protected systems, and triggering actions. Yet only 30% of organizations regularly map these agents to critical assets.
- Over 50% use AI to detect threats, but few apply access controls or behavioral monitoring to AI agents themselves.
- Only 6% of security leaders rank securing non-human identities as their most difficult challenge, despite AI impersonation being their top concern.
When looking specifically at healthcare organizations, the data shows worrisome risks, as the industry rapidly adopts AI for diagnostics, scheduling, and patient engagement.
- 61% of healthcare organizations reported at least one identity-related attack in the past year.
- 42% of healthcare companies failed an identity-related compliance audit, yet only 17% list compliance as a top concern despite handling sensitive patient data.
- 34% of healthcare organizations name AI impersonation of users as their top emerging threat.
- Only 23% of healthcare organizations offer passwordless authentication, well below other sectors.
BeyondID urges security leaders to treat AI agents as they would any high-risk user, enforcing least-privilege access, continuously monitoring behavior, and incorporating non-human identities into the full IAM lifecycle.
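As a rough illustration of that recommendation, the sketch below treats an AI agent as a first-class identity with an explicit least-privilege scope list and an audited action log. It is a minimal, hypothetical example; the names (AgentIdentity, ALLOWED_SCOPES, request_action) and the permission strings are assumptions made for illustration, not part of BeyondID's report or any specific IAM product.

```python
# Minimal illustrative sketch: governing an AI agent like a high-risk user.
# All identifiers here are hypothetical and not drawn from any IAM vendor's API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Least privilege: the agent gets an explicit allow-list of scopes, nothing more.
ALLOWED_SCOPES = {"tickets:read", "tickets:comment"}

@dataclass
class AgentIdentity:
    agent_id: str
    scopes: set = field(default_factory=set)
    audit_log: list = field(default_factory=list)

    def request_action(self, scope: str, target: str) -> bool:
        """Check a requested action against the agent's scopes and record the attempt."""
        allowed = scope in self.scopes
        # Behavioral monitoring hook: every attempt is logged, allowed or not,
        # so the agent's access patterns can be reviewed like a human user's.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "agent": self.agent_id,
            "scope": scope,
            "target": target,
            "allowed": allowed,
        })
        return allowed

# Usage: register the agent with only the scopes it needs.
agent = AgentIdentity("support-bot-01", scopes=ALLOWED_SCOPES)
agent.request_action("tickets:read", "TICKET-123")          # permitted and logged
agent.request_action("billing:export", "ALL_CUSTOMERS")     # denied and logged
```

The point of the sketch is simply that a non-human identity can be enrolled, scoped, and audited with the same lifecycle controls applied to employees, rather than left as unmanaged infrastructure.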
“AI agents don’t need to be malicious to be dangerous,” the report concludes. “Left unchecked, they can become shadow users with far-reaching access and no accountability.”
These findings are based on a 2025 BeyondID survey of US-based IT leaders, including vice presidents, directors, and managers across industries including healthcare, finance, and technology.
Source: PR Newswire