The euphoria surrounding AI has evolved into an overwhelming fear of the unknown. The reality should be more balanced. AI enables individual and organizational innovation and productivity on a level that has never been seen before. When its mission, goal, and guardrails are well-defined, the results are stunning and will only get better.
But, of course, these advancements are available to everyone. The same LLMs that, in one person’s hands, can cut the time to build a new application from weeks to hours can, in another’s, be used to build sophisticated cyberattacks that are harder to detect.
In cybersecurity, AI can exacerbate existing vulnerabilities or introduce net-new threats.
As with all AI-fueled development, there are core capabilities that bad actors leverage. Knowing them helps in understanding the threats AI introduces and how companies need to prepare for them.
First is the ability to automate at scale. Earlier this year, a global brute force campaign that lasted for weeks involved as many as 2.8 million IP addresses daily. Its goal was to discover valid login credentials for critical security infrastructure – firewalls, VPNs, and secure gateways – from trusted vendors, including Palo Alto Networks and SonicWall. A relentless attack on the infrastructure protecting so many companies is significant on its own, but the volume of IP addresses involved and the duration of the daily attacks were unprecedented. This is AI at work.
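To make the scale problem concrete, here is a minimal sketch of how a defender might flag that kind of distributed pattern: many source IPs, each making a handful of failed logins against the same gateway. The log field names and the threshold are assumptions for illustration, not a real product schema.

```python
# Minimal sketch: flag distributed brute-force patterns in authentication logs.
# Field names ('timestamp', 'src_ip', 'target', 'outcome') and the threshold are illustrative.
from collections import defaultdict
from datetime import timedelta

WINDOW = timedelta(hours=1)
DISTINCT_IP_THRESHOLD = 5000  # tune to your environment's normal baseline

def flag_distributed_bruteforce(events):
    """events: iterable of dicts with 'timestamp' (datetime), 'src_ip', 'target', 'outcome'."""
    failures = defaultdict(list)  # target -> list of (timestamp, src_ip)
    for e in events:
        if e["outcome"] == "failure":
            failures[e["target"]].append((e["timestamp"], e["src_ip"]))

    alerts = []
    for target, attempts in failures.items():
        attempts.sort()
        start = 0
        ip_counts = defaultdict(int)
        for ts, ip in attempts:
            ip_counts[ip] += 1
            # Slide a one-hour window over the attempts against this target.
            while attempts[start][0] < ts - WINDOW:
                old_ip = attempts[start][1]
                ip_counts[old_ip] -= 1
                if ip_counts[old_ip] == 0:
                    del ip_counts[old_ip]
                start += 1
            if len(ip_counts) >= DISTINCT_IP_THRESHOLD:
                alerts.append((target, ts, len(ip_counts)))
                break  # one alert per target is enough for triage
    return alerts
```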
An additional danger of large-scale brute force attacks is that, by their nature, they are distracting. Even when they aren’t sophisticated, they force security professionals to analyze and respond to swarms of activity, and other attacks can be coordinated to hit the organization at the same time. The noise of an AI-automated attack is an effective distraction for more sophisticated activities such as malicious code injected into trusted software updates or the use of deepfakes impersonating company executives.
AI’s ability to learn and evolve adds another significant obstacle for enterprises that rely on after-incident forensics to inform how they fine-tune their detection and response capabilities. An ongoing issue for enterprise cybersecurity response is its failure to adapt. This is exacerbated when bad actors use AI to develop new attack tactics.
As LLMs continue to evolve, attacks will become more refined or, over time, evolve into entirely new tactics. An organization reliant only on standard security platforms carries higher risk, because attackers’ LLMs can be trained against a known structure and process to find attack vectors and vulnerabilities.
Lastly, for now, there are the vulnerabilities around AI-generated code. By itself, this code isn’t a security risk. From a quality control standpoint, if AI is generating code, best practices dictate that there is a way to easily roll back the code, and a human may be added to the process to assess it. While both measures have security benefits, they also help ensure that AI drift hasn’t been introduced into the LLMs and that the code represents what was expected. From a security perspective, ensuring the right person is kicking off the AI code generation goes a long way toward protecting the integrity of the code. Similarly, it is paramount to ensure the Copilot or AI Assistant user is who they are expected to be.
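As a sketch of what such a control could look like in practice, the snippet below gates AI-generated changes on a recorded rollback point, a verified initiator, and sign-off from a named human reviewer. The change-metadata fields and reviewer list are hypothetical; map them to whatever your pipeline actually records.

```python
# Illustrative policy gate for AI-generated changes in a CI pipeline.
# The 'change' fields (ai_generated, author_verified, rollback_tag, reviewer)
# are assumptions for this example, not a real schema.
APPROVED_REVIEWERS = {"alice", "bob"}  # example list of human reviewers

def ai_change_allowed(change: dict) -> tuple[bool, str]:
    if not change.get("ai_generated"):
        return True, "not AI-generated; normal review rules apply"
    if not change.get("author_verified"):
        return False, "could not verify who triggered the AI code generation"
    if not change.get("rollback_tag"):
        return False, "no rollback point recorded for this change"
    if change.get("reviewer") not in APPROVED_REVIEWERS:
        return False, "AI-generated code requires sign-off from an approved human reviewer"
    return True, "ok"

# Example use in a CI step:
ok, reason = ai_change_allowed({
    "ai_generated": True,
    "author_verified": True,
    "rollback_tag": "release-2025-06-01",
    "reviewer": "alice",
})
print(ok, reason)
```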
We’ve found that automation and AI-driven actions require even more confirmation that the correct person is setting everything in motion. This is where credentials and traditional MFA show their weaknesses. MFA isn’t verifying a person; it’s validating a specific device at a point in time. While this may sound like a nuance to some, to a bad actor it represents additional points of entry for malicious activity.
In addition to verifying the human interacting with the AI, security solutions that can adapt themselves have an advantage: bad actors’ LLMs can’t easily train against them to find a bypass, and they are less likely to have known security holes.
Ideally, security solutions, as well as an organization’s protocols, are dynamic and offer graduated response options. This is where organizations can apply their own LLMs to enhance their purple teams’ strategies or to stand up their first purple team. In the AI age, organizations must introduce tools that not only analyze potential security threats as they surface but also provide context for them. A tidal wave of alerts from anomaly detection isn’t helpful if there’s no way to prioritize them. Real-time context reduces noise in a very noisy SOC and enables teams to respond to threats before any damage is done.
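One simple way to picture "context over raw alerts": score each anomaly by the sensitivity of the asset, whether the identity behind the action was verified, and whether it coincides with a known noisy campaign, then triage by score. The sketch below is illustrative only; the field names and weights are assumptions, not a real product schema.

```python
# Illustrative alert prioritization: attach context to anomaly alerts so a flood
# of detections can be triaged instead of handled first-in, first-out.
# Field names (anomaly_score, asset_criticality, identity_verified,
# during_active_campaign) are assumptions for this example.

def score_alert(alert: dict) -> float:
    score = alert.get("anomaly_score", 0.0)  # raw detector output, 0..1
    score *= {"low": 0.5, "medium": 1.0, "high": 2.0}.get(
        alert.get("asset_criticality", "medium"), 1.0)
    if not alert.get("identity_verified", True):
        score *= 1.5   # unverified human behind the action raises priority
    if alert.get("during_active_campaign"):
        score *= 0.75  # likely part of the distracting background noise
    return score

alerts = [
    {"id": 1, "anomaly_score": 0.9, "asset_criticality": "low", "during_active_campaign": True},
    {"id": 2, "anomaly_score": 0.6, "asset_criticality": "high", "identity_verified": False},
]
for a in sorted(alerts, key=score_alert, reverse=True):
    print(a["id"], round(score_alert(a), 2))
```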