The world’s first comprehensive Artificial Intelligence (AI) law, the AI Act, was published in the European Union (E.U.) Official Journal this summer. Shortly afterwards, the European AI Office invited eligible individuals, academics and organizations to apply for the opportunity to help create the first general-purpose AI Code of Practice (the Code). The Code of Practice will detail rules for general-purpose AI models and models that present a systemic risk under the AI Act, and is designed to bridge the gap between the Act coming into force and the development of Europe-wide standards. ISC2 has been selected to contribute to this effort, leveraging its expertise as the world’s leading member association for cybersecurity professionals. The AI Act creates obligations for providers to ensure proper cybersecurity protection for models, so it is vital that the Code reflects cybersecurity expertise.
Taming the Wild West
Since the introduction of DALL-E in 2021 and ChatGPT in 2022, Generative AI (GenAI) has expanded exponentially, quickly becoming integrated into our work and personal lives in ways that seemed like science fiction only a few years ago. The proliferation of GenAI has created a Wild West situation, with regulations, rules and a common understanding of its ethical use lagging behind.
With the Code, the European AI Office seeks to “facilitate the proper application of the rules of the AI Act for general-purpose AI models.” While it will not be legally binding, the Code will help AI providers prepare for compliance with the obligations in the Act. The Code could become the model for other governments’ regulation of AI, so great effort has been made to include experts from around the world in its development.
Code of Practice Plenary
The drafting process is expected to run from September 2024 to April 2025. Once the participants who will collaborate on the Code had been selected, an initial meeting was held for all of them in September. There will be three online plenaries to work on drafts, with discussions organized into four working groups:
- Working Group 1: Transparency and copyright-related rules
- Working Group 2: Risk identification and assessment measures
- Working Group 3: Risk mitigation measures
- Working Group 4: Internal risk management and governance for general-purpose AI (GPAI) providers
The final version of the first Code of Practice will be published and then presented at a Closing Plenary in April 2025. During this meeting, companies that create general-purpose AI models can share their thoughts on whether they plan to follow these guidelines. Following publication of the Code, the AI Office and AI Board will review it. If it is deemed adequate, it will be officially adopted across the European Union. If not, the European Commission will create common rules to follow instead.
ISC2 Involvement in Development of the Code
Jon France, CISSP, Chief Information Security Officer (CISO) at ISC2, will represent the organization in Working Groups 2 and 3.
“It’s truly an honor to have been selected to help craft these landmark guidelines for the safe and ethical use of GenAI,” France said. “The eyes of the world will be upon us as we work to create guidance that will influence global GenAI practices for years to come.”
As CISO at ISC2, France advocates for security, risk management, skills development and awareness within ISC2 and across industry. In 2023, he was recognized with a CSO 30 UK Award, given to the top 30 IT leaders in the U.K., along with special recognition for his outstanding hiring and talent retention work.
Source – isc2