Orca Security, the leading provider of agentless cloud security, has released its inaugural 2024 State of AI Security Report, providing insights into current AI utilization trends and their impact on organizations’ security postures, along with recommendations to mitigate risk. The report highlights that, as organizations invest in AI innovation, most are doing so without regard for security.

Compiled by the Orca Research Pod, the State of AI Security Report is a detailed study based on data from billions of cloud assets on AWS, Azure, Google Cloud, Oracle Cloud, and Alibaba Cloud scanned by the Orca Cloud Security Platform in 2024.

The report uncovers a wide range of AI risks, including exposed API keys, overly permissive identities, misconfigurations, and more. Orca researchers trace many of these risks back to cloud providers’ default settings, which often grant wide access and broad permissions. For example, 45% of Amazon SageMaker buckets use easily discoverable, non-randomized default bucket names, and 98% of organizations have not disabled default root access for Amazon SageMaker notebook instances.
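
To make those two SageMaker defaults concrete, here is a minimal sketch using boto3 and the SageMaker Python SDK; the bucket, notebook, and role names are placeholders for illustration, not values from the report:

    # Hardening the two SageMaker defaults called out above:
    # 1) avoid the guessable "sagemaker-{region}-{account-id}" bucket name,
    # 2) disable root access on notebook instances.
    import boto3
    import sagemaker

    # Use an explicitly named artifact bucket (placeholder name) instead of
    # letting the SDK fall back to the predictable default bucket.
    session = sagemaker.Session(default_bucket="example-ml-artifacts-x7q2")

    sm = boto3.client("sagemaker")
    sm.create_notebook_instance(
        NotebookInstanceName="example-notebook",  # placeholder
        InstanceType="ml.t3.medium",
        RoleArn="arn:aws:iam::123456789012:role/ExampleSageMakerRole",  # placeholder
        RootAccess="Disabled",  # SageMaker enables root access by default
    )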

“Eagerness to adopt AI tooling is leading organizations to needlessly increase their risk level by overlooking simple security steps,” said Gil Geron, CEO and co-founder at Orca Security.

Gil added, “The heavy reliance on default settings, and willingness to deploy packages with known vulnerabilities, is telling. The rush to take advantage of AI has organizations skipping the security basics and leaving clear paths to attack open to adversaries.”

Report Key Findings

The Orca Security 2024 State of AI Security Report finds that:

  • 56% of organizations have adopted their own AI models to build custom applications and integrations specific to their environment(s). Azure OpenAI is currently the front-runner among cloud provider AI services (39%), Scikit-learn is the most used AI package (43%), and GPT-3.5 is the most popular AI model (79%).
  • 62% of organizations have deployed an AI package with at least one CVE. AI packages enable developers to create, train, and deploy AI models without writing brand-new routines, but a clear majority of organizations are running packages with known vulnerabilities (a lightweight way to check for them is sketched after this list).
  • 98% of organizations using Google Vertex AI have not enabled encryption at rest with self-managed encryption keys. This leaves sensitive data exposed to attackers, increasing the chances that a bad actor could exfiltrate, delete, or alter an AI model (see the configuration sketch after this list).
  • Cloud AI tooling is surging in popularity. Nearly four in 10 organizations using Azure also leverage Azure OpenAI, a service Microsoft first introduced in November 2021. Amazon SageMaker and Google Vertex AI are also growing in popularity.
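
On the vulnerable-package finding, one lightweight check is to query the public OSV vulnerability database (https://osv.dev) for each installed AI package. The following is a minimal Python sketch assuming the requests library is available; the package list is illustrative, and a dedicated scanner such as pip-audit is the more robust option in practice:

    # Query OSV (https://api.osv.dev) for known advisories affecting the
    # installed versions of a few common AI packages (names are examples).
    from importlib.metadata import PackageNotFoundError, version

    import requests

    OSV_QUERY_URL = "https://api.osv.dev/v1/query"

    def known_vulns(package: str) -> list[str]:
        """Return OSV/CVE identifiers affecting the installed version of `package`."""
        try:
            installed = version(package)
        except PackageNotFoundError:
            return []  # package not installed in this environment
        resp = requests.post(
            OSV_QUERY_URL,
            json={"package": {"name": package, "ecosystem": "PyPI"}, "version": installed},
            timeout=10,
        )
        resp.raise_for_status()
        return [vuln["id"] for vuln in resp.json().get("vulns", [])]

    for pkg in ("scikit-learn", "torch", "transformers"):
        advisories = known_vulns(pkg)
        if advisories:
            print(f"{pkg}: {len(advisories)} known advisories, e.g. {advisories[0]}")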
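
On the Vertex AI finding, encryption at rest under a self-managed key is configured by pointing the SDK at a Cloud KMS key. Here is a minimal sketch assuming the google-cloud-aiplatform SDK; the project, region, and key names are placeholders:

    # Route Vertex AI resources to a customer-managed Cloud KMS key so data
    # at rest is encrypted under a key the organization controls.
    from google.cloud import aiplatform

    CMEK = (  # placeholder key resource name
        "projects/example-project/locations/us-central1/"
        "keyRings/example-ring/cryptoKeys/example-key"
    )

    # Resources created through this session (datasets, models, endpoints, ...)
    # now default to the customer-managed key instead of Google-managed keys.
    aiplatform.init(
        project="example-project",
        location="us-central1",
        encryption_spec_key_name=CMEK,
    )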

“Orca’s 2024 State of AI Security Report provides valuable insights into how prevalent the OWASP Machine Learning Security Top 10 risks are in actual production environments,” said Shain Singh, Project Co-Lead of the OWASP ML Security Top 10. “By understanding more about the occurrence of these risks, developers and practitioners can better defend their AI models against bad actors. Anyone who cares about AI or ML security will find tremendous value in this study.”
