Representatives from leading AI security organizations have announced the formation of MOSAIC (Multi-Organization Secure AI Coordination), a new collaborative effort aimed at reducing fragmentation in AI security standards and improving consistency across the industry. The initiative was established on April 21 during the AI Security Policy Forum, held alongside the SANS AI Cybersecurity Summit and convened by the OWASP AI Exchange with co-host SANS Institute.
MOSAIC brings together major organizations shaping AI security frameworks, including OWASP, the National Institute of Standards and Technology (NIST), the Cloud Security Alliance, the Center for Internet Security, the Coalition for Secure AI, and the Berryville Institute of Machine Learning. Additional contributors include representatives from the International Telecommunication Union and the Aspen Institute.
The formation of MOSAIC addresses a growing concern among security professionals: the rapid proliferation of AI security guidance without sufficient coordination. Conflicting definitions and frameworks have made it difficult for organizations to implement effective defenses, contributing to skill gaps and increased exposure to threats.
Rob T. Lee of the SANS Institute noted that inconsistent guidance has become a major obstacle for defenders, particularly those protecting critical infrastructure. He emphasized that MOSAIC represents the first coordinated effort among standards bodies to address this issue collectively.
Rather than creating a new framework, MOSAIC aims to connect and harmonize existing standards, making them more practical and usable for organizations. Rob van der Veer of the OWASP AI Exchange said the initiative focuses on aligning definitions, improving communication, and enabling collaboration without adding unnecessary complexity or bureaucracy.
Initial priorities for the group include establishing shared definitions for key concepts such as AI risk and AI security, creating a common communication platform for participating organizations, and defining operating principles for collaboration. The initiative will use an open, GitHub-based coordination model aligned with OWASP's principles of transparency and inclusivity.
As part of its launch, the OWASP AI Exchange introduced a shared taxonomy built on the OpenCRE platform, designed to map terms, controls, and concepts across different AI security standards.
MOSAIC is structured as an open-membership initiative, allowing additional organizations working on AI security to participate. The effort reflects a broader industry push to standardize approaches to AI risk management as adoption accelerates and security challenges grow more complex.