Elon Musk’s artificial intelligence company, xAI, has announced its commitment to sign the Safety and Security chapter of the European Union’s AI Code of Practice, a voluntary framework designed to guide companies in complying with the bloc’s recently enacted AI regulations.
The EU’s Code of Practice is structured around three key chapters: transparency, copyright, and safety and security. The safety and security chapter is particularly targeted at providers of the most advanced general-purpose AI models. Signing onto the code offers companies increased legal clarity under the EU AI Act, although it is not mandatory.
xAI confirmed its support for AI safety, stating it will participate in the safety and security component of the code. However, the company raised concerns about other aspects of the EU framework. While expressing support for safety efforts, it warned that certain requirements in the broader AI Act could pose significant risks to innovation. In particular, xAI took issue with the copyright chapter, calling its provisions excessive and potentially damaging to technological progress.
The company has yet to commit to the other two chapters of the code, transparency and copyright, which apply to all developers of general-purpose AI. It remains unclear whether xAI will eventually support these sections or limit its cooperation to safety-related initiatives.
The EU AI Code of Practice was developed by 13 independent experts and aims to serve as a practical implementation guide for companies navigating the new regulatory landscape. By signing any part of the code, AI developers can align themselves with the EU’s push for safe, transparent, and ethically grounded AI development.
Reactions from other major tech firms have been mixed. Some leading players in the AI industry have welcomed the code and embraced it in full, while others have raised concerns over legal ambiguities and regulatory overreach or declined to sign altogether, arguing that certain measures extend far beyond the scope of the AI Act itself.
xAI’s selective endorsement highlights the ongoing debate within the AI sector over how to balance regulatory compliance, innovation, and ethical responsibility as Europe leads the charge in shaping global AI governance.