In a fresh cybersecurity warning, LevelBlue has raised concerns over a growing threat posed by unauthorised artificial intelligence agents operating within enterprise environments. The company has termed this emerging risk “GhostOps,” highlighting how rapidly deployed AI tools are outpacing governance and security controls across organisations.

Many enterprises are embracing AI agents to streamline operations and automate repetitive workflows, but this rapid adoption often occurs before proper oversight frameworks are in place. As a result, security teams are left with limited visibility into systems that no longer just process data but actively perform tasks within corporate environments.

Moreover, research referenced by LevelBlue underscores how widespread the issue has already become. Insights from Microsoft Cyber Pulse reveal that 29% of employees have already used unsanctioned AI agents for work-related tasks. At the same time, 80% of Fortune 500 companies are actively running AI agents, further emphasising the scale of adoption.

Unlike traditional shadow IT, where employees use unapproved tools, GhostOps introduces a more complex risk. These AI agents can retain sensitive prompts, connect to APIs, and execute multi-step processes autonomously without direct human intervention. Consequently, this evolution significantly alters the enterprise risk landscape.

In addition, the deployment of such agents across departments—from developers to business teams—can scale quickly without formal approval or architectural review. In some cases, organisations may unknowingly be operating dozens or even hundreds of such agents. This creates vulnerabilities such as exposed credentials, unmanaged integrations, and susceptibility to prompt injection attacks that can manipulate agent behaviour.

Growing Blind Spot

Furthermore, the rise of open-source AI agent ecosystems adds another layer of complexity. Many frameworks allow plug-ins or integrations that directly connect to enterprise systems, thereby increasing software supply chain risks. As these dependencies grow, tracking and managing them becomes increasingly difficult.

As a result, security teams often struggle to answer critical questions: Who deployed the agent? What systems did it access? What actions did it perform? This lack of clarity complicates incident response and weakens accountability.

“Unlike traditional shadow IT, the risk is bigger this time because AI agents don’t just store or share data; they can take actions inside company systems. When an AI agent interacts with tools and data, it becomes an operational actor inside the environment. If organisations cannot see that activity clearly, they lose visibility not just of information but of the actions taking place inside their systems,” said Grant Hutchons, Director of Security Solution Engineering and Architecture, APAC, LevelBlue.

Governance Over Restrictions

At the same time, LevelBlue stresses that banning AI agents entirely is not a viable solution. Businesses are under constant pressure to improve efficiency, and restricting AI tools may drive employees to use personal devices or unmanaged accounts, thereby increasing risk rather than reducing it.

Therefore, the company recommends a balanced approach that combines governance with continuous monitoring. Establishing clear deployment guidelines, identity controls, and data protection policies can help organisations manage AI adoption more effectively. Simultaneously, monitoring tools can detect unauthorised agents across endpoints, identities, and cloud environments.

“Security teams often cannot determine who deployed the agent, what systems it accessed, or what actions it performed. This makes the investigation much harder when something does go wrong. Organisations can’t simply ban AI agents because the productivity benefits are too significant, and strict restrictions often see employees experimenting instead on their personal devices or using unmanaged accounts. Instead, organisations must integrate governance models that ensure they can maintain oversight with AI adoption,” Hutchons said.

A Shift in Enterprise Risk

Ultimately, the warning reflects a broader transformation in how technology risks evolve. Where earlier concerns around shadow IT focused on decentralised software usage, GhostOps introduces decentralised autonomy—systems independently executing decisions within business processes.

For organisations, particularly those actively encouraging AI experimentation, the risks may already exist. Autonomous or semi-autonomous agents could be operating across functions such as finance, customer support, and software development without full visibility.

“Organisations should work on the assumption that GhostOps already exist in their environment and start by measuring it. Once it is known where agents are operating and what they are doing, it becomes easier to put the right guardrails in place.

“The emergence of GhostOps signals a broader shift in how technology risk develops for many organisations. Shadow IT reflected decentralised technology adoption, whereas AI agents introduce decentralised autonomy,” Hutchons said.
