The pace at which new Generative AI (GenAI) applications are introduced and adopted has accelerated sharply. Gartner now predicts that by 2026 more than 80% of enterprises will have deployed GenAI applications, up from less than 5% in 2023. This growth comes as no surprise as organizations look to reap the benefits of the technology, from increased operational efficiency to significant cost reductions. While OpenAI was the initial trailblazer in the GenAI race, dozens of new GenAI applications now emerge on the market weekly in a bid to capitalize on demand, both from established enterprises such as Microsoft and Salesforce and from aspiring startups. DeepSeek, an OpenAI rival that made headlines last month, exemplifies this trend.
There are now thousands of GenAI applications, many of which are actively used by organizations. The sheer volume of these tools creates complexity, and many organizations are struggling to manually keep track of the GenAI tools their employees are using. This creates numerous security and compliance challenges: organizations lack visibility into, and control over, the sensitive data that may be fed into these largely unregulated public systems, activity that often results in data privacy and compliance violations.
GenAI applications pose risks of both data exfiltration and AI content proliferation, drastically increasing organizations' attack surface and risk of data loss. GenAI risks include unlicensed training data, the incorporation of protected data into outputs, the leakage of sensitive data into other organizations' generated content, and the production of hallucinations and biased outputs. With an increased sense of urgency to secure GenAI technology, many organizations have turned to existing security solutions for protection.
Traditional Security Solutions Cannot Keep Pace with GenAI Complexities
A common approach taken by many organizations is to attempt to block all GenAI services via firewall, SASE, or CASB solutions. While they may start with the likes of ChatGPT or Anthropic, they quickly realize that they are barely scratching the surface. These services can be accessed not only through their primary URLs and mobile apps but also through numerous third-party applications and browser extensions. To make matters worse, nearly 20 new GenAI services are introduced each week, making it almost impossible to stay ahead. Finally, with more and more applications embedding GenAI capabilities, it is becoming easier to inventory the applications that do not include GenAI capabilities than those that do.
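To illustrate why the blocking approach falls behind, consider a minimal sketch of the kind of static deny-list check a firewall or SASE policy implements. The domain list here is a placeholder assumption; real GenAI services span far more hostnames, third-party wrappers, and browser extensions than any static list can capture.

```python
# Hypothetical sketch: a static deny-list check against known GenAI hosts.
# The domain set is an illustrative assumption, not a complete feed.
from urllib.parse import urlparse

KNOWN_GENAI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host matches a known GenAI domain."""
    host = urlparse(url).hostname or ""
    return host in KNOWN_GENAI_DOMAINS

# The check catches the primary URL...
print(is_blocked("https://chat.openai.com/"))                 # True
# ...but misses a third-party wrapper exposing the same model.
print(is_blocked("https://some-genai-wrapper.example/chat"))  # False
```

The second call shows the gap: any wrapper app, alternate hostname, or newly launched service passes straight through until the list is manually updated.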
Another approach often taken is to leverage traditional data loss prevention (DLP) solutions to implement a data-centric approach to security. Unfortunately, GenAI presents new challenges that traditional DLP simply cannot support. Traditional DLP focuses exclusively on preventing sensitive data from leaving the organization, whereas securing GenAI services requires monitoring and moderating both the data uploaded to a service and the data returned from it. Extending legacy architecture to monitor both directions of the data transmission has proven difficult. The multi-modal nature of GenAI services is another hurdle. It is not uncommon for an employee to request a summary based on an image of a whiteboard containing confidential data in multiple languages and a diagram of sensitive information. The amount of analysis needed to decipher such a transmission and make intelligent, safe decisions is beyond the scope of most DLP solutions.
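The bidirectional requirement can be sketched as follows. This is a hypothetical illustration, not a production detection set: the two regex patterns are placeholder examples of sensitive-data signatures, and a real control would inspect images and other modalities as well as text.

```python
import re

# Placeholder sensitive-data signatures (illustrative assumptions only).
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-like pattern
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),   # AWS access-key-like token
]

def contains_sensitive(text: str) -> bool:
    return any(p.search(text) for p in SENSITIVE_PATTERNS)

def moderate_exchange(prompt: str, response: str) -> dict:
    """Screen BOTH directions of a GenAI exchange, unlike legacy DLP,
    which inspects only outbound data."""
    return {
        "block_upload": contains_sensitive(prompt),      # outbound leak
        "block_download": contains_sensitive(response),  # inbound leakage
    }
```

Legacy DLP implements only the first half of `moderate_exchange`; the second half, screening model output for sensitive content leaking in from elsewhere, is the new requirement.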
The Path to Securing GenAI Applications
As you cannot secure what you cannot see, effective GenAI security starts with organizations conducting thorough audits of all AI-powered applications in use across their IT infrastructure, including applications adopted by individual departments or employees. As shadow AI can pose significant risks, it is essential that organizations maintain a comprehensive, accurate inventory of these services to gain visibility into the potential vulnerabilities and inherent risks associated with GenAI providers. Such a task simply cannot be conducted manually. Instead, organizations should consider adopting GenAI-specific security solutions that automatically discover applications in operation and keep the inventory up to date as new tools are deployed.
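The discovery step above could be sketched as a scan of egress proxy logs against a maintained GenAI-domain feed. Everything here is an assumption for illustration, including the log format (`user host path`) and the domain feed; a real GenAI security product would keep the feed current automatically as new tools appear.

```python
# Hypothetical sketch of automated GenAI discovery from proxy logs.
# The log format and domain feed are illustrative assumptions.
from collections import Counter

GENAI_DOMAIN_FEED = {"chat.openai.com", "claude.ai", "api.deepseek.com"}

def discover_genai_usage(log_lines):
    """Return a count of requests per GenAI host seen in proxy logs.

    Each log line is assumed to be 'user host path'.
    """
    inventory = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] in GENAI_DOMAIN_FEED:
            inventory[parts[1]] += 1
    return inventory

logs = [
    "alice chat.openai.com /chat",
    "bob claude.ai /new",
    "alice chat.openai.com /chat",
    "carol intranet.corp /home",
]
print(discover_genai_usage(logs))  # Counter({'chat.openai.com': 2, 'claude.ai': 1})
```

Shadow AI adopted by a single employee surfaces in such an inventory the first time it generates network traffic, without anyone having to declare it.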
Once organizations have determined the specific AI services being leveraged across the enterprise, security teams must conduct detailed risk assessments for each application. This includes assessing the security, compliance, and data privacy implications of each tool and implementing guardrails and security controls to mitigate the identified risks. The deployment of AI-specific cybersecurity controls can also help in this domain, alongside data encryption and stringent access controls.
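One way to make such per-application assessments repeatable is a simple weighted rubric. The criteria, weights, and tier thresholds below are illustrative assumptions, not an industry standard; each organization would tune them to its own compliance and privacy requirements.

```python
# Hypothetical risk-assessment rubric for a discovered GenAI application.
# Criteria, weights, and thresholds are illustrative assumptions only.
RISK_WEIGHTS = {
    "trains_on_user_data": 3,    # provider may fold inputs into training
    "no_enterprise_tier": 2,     # no contractual data-handling guarantees
    "unclear_data_residency": 2, # unknown storage jurisdiction
    "no_sso_support": 1,         # weak access controls
}

def risk_score(app_profile: dict) -> int:
    """Sum the weights of every criterion the application fails."""
    return sum(w for crit, w in RISK_WEIGHTS.items() if app_profile.get(crit))

def risk_tier(score: int) -> str:
    if score >= 5:
        return "block pending review"
    if score >= 3:
        return "allow with guardrails"
    return "approved"
```

Scoring each inventoried application this way turns the assessment into a routine that can be re-run whenever a provider changes its terms or a new tool is discovered.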
It is also important that organizations consider establishing an AI governance board to oversee the adoption and use of AI tools. Composed of a representative from each department, an AI governance team can help develop AI policies and guidelines and ensure these are effectively enforced across the organization. These teams can also provide guidance to employees on the appropriate use of these technologies to avoid data exfiltration.
While organizations should encourage employees to experiment with and adopt AI tooling to streamline tasks, employees must also be educated on the risks of shadow AI and required to seek approval before adopting new AI tools.
Securing GenAI While Maximizing Benefits
Securing GenAI requires a new, GenAI-focused approach and an effective security strategy that encompasses several elements. Organizations must establish clear policies and guidelines for the adoption and use of such tools, which can be achieved through the creation of AI governance boards and employee training. In addition, deploying purpose-built technology that can analyze and inventory GenAI applications in real time will help organizations keep pace with the ever-changing landscape of new AI tools.