ReversingLabs (RL), the trusted name in file and software security, revealed a novel ML malware attack technique targeting the AI community Hugging Face. Dubbed “nullifAI,” the technique impacted two ML models hosted on Hugging Face and used deliberately corrupted files to evade detection on the AI platform. The discovery is outlined in RL’s latest research post, “Malicious ML models discovered on Hugging Face platform,” and is accompanied by a new white paper, “AI is the Supply Chain,” which highlights the larger cybersecurity challenges AI creates for software development.

In its research post, RL examines how threat actors are seeking hard-to-detect ways to insert and distribute ML malware via unsuspecting hosts such as the AI platform Hugging Face. The research details how attackers used corrupt Pickle files to evade detection and bypass Hugging Face’s security protections while still achieving execution of malicious code. Hugging Face has been notified, and the ML models in question were taken down.
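
RL’s post walks through the mechanics in detail. As a rough illustration only (a minimal sketch, not one of the discovered samples), the following Python snippet shows the underlying property that makes broken Pickle files dangerous: deserialization executes opcodes sequentially as the stream is read, so a payload placed before the corruption still runs even though loading ultimately fails. The `Payload` class here is hypothetical, and a harmless `print` stands in for attacker code.

```python
import pickle

class Payload:
    """Hypothetical stand-in for a malicious object; a real attack
    would invoke os.system, exec, or similar instead of print."""
    def __reduce__(self):
        # pickle calls this callable with these arguments during
        # deserialization, regardless of what comes later in the stream
        return (print, ("payload executed during unpickling",))

stream = pickle.dumps(Payload(), protocol=2)  # protocol 2: no framing
broken = stream[:-1]  # drop the trailing STOP opcode: the file is now corrupt

try:
    pickle.loads(broken)
except Exception as exc:
    print(f"deserialization failed: {exc}")

# The payload message prints *before* the failure is reported, because
# pickle executes opcodes as it reads them. A scanner that gives up on
# the malformed file reports nothing, yet the runtime already ran the code.
```

Under this sketch’s assumptions, a security scanner that only validates complete, well-formed Pickle streams would flag nothing, while any consumer that actually loads the model would execute the payload before hitting the error.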

“While the files discovered by our researchers appear to be ‘proof of concept’ rather than active threats, the failure to detect their presence points to a larger set of issues that are going to grow significantly and become more problematic as the use of AI coding tools grows,” said Tomislav Peričin, Chief Software Architect and co-founder, ReversingLabs. “Right now, AI is fueling modern software development, populating libraries and emboldening attackers. In fact, it’s safe to say AI is the supply chain, and while the benefits are vast, the security risks that come with it are alarming. To mitigate these new risks, organizations must embrace new modern software supply chain security solutions.”

Securing AI platforms and communities is critical. nullifAI is an example of an evolving category of software supply chain risk involving AI, in this case ML models hosted in an AI community. In its new white paper, “AI is the Supply Chain,” RL examines how AI is transforming software development, altering software supply chains, and creating significant new cybersecurity challenges for businesses. According to Gartner, 75% of enterprise software engineers will use AI code assistants by 2028. These include assistants offered by Hugging Face, GitHub (Copilot), Tabnine, and others.

While fueling incredible new innovation, AI-generated code will introduce new cybersecurity challenges for software development organizations. Examples include the growing use of outdated code and, more concerning, compromised code containing exploitable vulnerabilities or malicious features that traditional security measures such as static code analysis cannot detect.

Address AI Risks in Software Development with Spectra Assure
ReversingLabs works with some of the leading AI companies to help secure their LLM and ML models. Backed by the industry’s largest threat repository and RL’s advanced complex binary analysis, Spectra Assure offers the most comprehensive SBOM and risk assessment for applications, identifying malware, tampering, exposed secrets, vulnerabilities, weak mitigations, and more, in minutes and without requiring source code. As AI-generated code continues to proliferate, Spectra Assure gives software vendors and AI platforms a critical examination of their builds before they ship software or include AI models in it.

Source – Globenewswire