Shadow AI is now widespread within organizations, and its use often goes unmonitored. AI adoption is accelerating faster than security teams can keep up, sensitive data is being exposed, and in many cases there is no clear ownership of AI risk.

Even security teams are going rogue with AI.

In a survey of over 500 cybersecurity professionals at RSA Conference and InfoSecurity Europe 2025, AI security testing company Mindgard uncovered a striking trend: security staff are using AI without approval. This rise in Shadow AI is creating a serious blind spot inside the very teams meant to protect the enterprise.

Shadow AI refers to the use of generative AI tools such as ChatGPT or GitHub Copilot without formal oversight. Much like Shadow IT, this informal adoption bypasses security controls. But with AI, the risks are more acute; these tools can ingest sensitive code, internal documentation, and regulated customer data, significantly increasing the risk of data leakage, privacy violations, and compliance breaches.

Security teams are part of the problem

A full 86% of practitioners report using AI, and 24% admit to doing so via personal accounts or unapproved browser extensions. A substantial 76% of respondents also suspect that their cybersecurity teammates are using AI tools in their workflows to write detection rules, generate training materials, or review code.

The risk is compounded by the type of data being entered into AI systems. Around 30% of security professionals said internal documentation and emails were being fed into AI tools within their organizations, and a similar number acknowledged the use of customer or confidential business data. One in five admitted to entering sensitive information, while 12% said they didn’t know what data was being submitted at all.

Oversight is inconsistent—or missing entirely

Monitoring and oversight lag far behind adoption. Only 32% of organizations have systems in place to track AI use. Another 24% rely on manual processes like surveys or manager reviews, which often miss unauthorized use. Alarmingly, 14% of respondents say there is no monitoring at all, leaving their organizations exposed to silent and unmitigated risk.

Peter Garraghan, CEO and Co-founder of Mindgard, said: “AI is already embedded in enterprise workflows, including within cybersecurity, and it’s accelerating faster than most organizations can govern it. Shadow AI isn’t a future risk. It’s happening now, often without leadership awareness, policy controls, or accountability. Gaining visibility is a critical first step, but it’s not enough. Organizations need clear ownership, enforced policies, and coordinated governance across security, legal, compliance, and executive teams. Establishing a dedicated AI governance function is not a nice-to-have. It is a requirement for safely scaling AI and realizing its full potential.”

Source: PRWeb