The EU is uniting global experts to create the first "General-Purpose AI Code of Practice" under its AI Act, focusing on transparency, risk management, and responsible AI development.
The European Union is taking significant steps to shape the future of artificial intelligence by developing the world’s first “General-Purpose AI Code of Practice.” This initiative, part of the EU’s AI Act, aims to establish new standards for transparency, copyright, and risk management within AI models.
In a September 30 announcement, the European AI Office revealed that hundreds of experts from academia, industry, and civil society are collaborating on this project. This extensive effort will culminate in a comprehensive framework addressing crucial issues such as risk assessment, internal governance, and the overall impact of general-purpose AI systems.
Nearly 1,000 Participants Shaping EU’s AI Future
The journey toward creating this Code of Practice began with an online plenary session attended by nearly 1,000 participants. These experts will draft the Code, with a final version expected in April 2025. Once completed, it will play a central role in applying the AI Act to general-purpose AI models, including large language models (LLMs) and AI systems integrated across a wide range of sectors.
The development process has been divided into four working groups, each focused on a different aspect of the Code of Practice. The groups are led by renowned experts such as Nuria Oliver, a leading artificial intelligence researcher, and Alexander Peukert, a specialist in German copyright law. Their efforts will center on four areas: transparency and copyright, risk identification, technical risk mitigation, and internal governance.
Between October 2024 and April 2025, these working groups will meet regularly to draft provisions, gather input from stakeholders, and refine the Code through a series of consultations.
A Global Standard for AI Governance
The AI Act, passed by the European Parliament in March 2024, is a landmark piece of legislation designed to regulate AI technology across the European Union. It categorizes AI systems into different risk levels—ranging from minimal to unacceptable—and imposes strict compliance measures for higher-risk systems, especially general-purpose AI models that could significantly impact society.
Although some major AI companies, including Meta, have expressed concerns about the regulations being overly restrictive, the EU is determined to strike a balance between innovation and safety. The collaborative drafting of the Code of Practice reflects this approach, aiming to foster innovation while ensuring responsible AI development.
The EU’s multi-stakeholder consultation process has already attracted more than 430 submissions, which will inform the final Code of Practice. By April 2025, the EU aims to set a global precedent for the responsible development, deployment, and management of general-purpose AI systems.
As AI continues to evolve rapidly, the EU’s efforts could shape the future of AI governance not only in Europe but globally. Countries worldwide are increasingly looking to the EU for guidance on regulating emerging technologies, and this initiative could serve as a blueprint for AI policies in other regions.