Meta Refuses to Sign EU’s AI Code of Practice, Warning It Will Throttle AI Innovation in Europe

Meta's Joel Kaplan announced the company will not sign the EU's voluntary AI Code of Practice, calling its guidelines an overreach that risks stifling AI innovation and deployment in Europe. Kaplan warned that the Code's broad requirements create legal uncertainty and compliance burdens that could slow AI development and harm Europe's competitiveness in a rapidly evolving field.

In a significant development in the ongoing debate over artificial intelligence regulation, Joel Kaplan, Meta Platforms’ Chief Global Affairs Officer, announced that Meta will not sign the European Union’s voluntary AI Code of Practice. Published in July 2025, the Code is designed to help companies comply with the EU’s landmark AI Act, which came into force in 2024. However, Kaplan criticized the Code’s guidelines as an “overreach” that would impose excessive constraints on AI developers and ultimately hinder the development and deployment of AI models across Europe.

Background: The EU's AI Regulatory Landscape

The European Union has positioned itself as a global leader in AI regulation. Its AI Act, one of the world’s most comprehensive frameworks, aims to address ethical, safety, transparency, and societal risks posed by AI systems. The legislation categorizes AI applications into risk tiers—prohibiting certain uses outright, requiring risk assessments and compliance for high-risk systems, and encouraging voluntary standards for lower-risk applications.

Complementing the AI Act is the EU’s AI Code of Practice, a voluntary set of guidelines co-created by the European Commission, industry stakeholders, and experts. The Code is intended to provide clarity and foster alignment among organizations developing AI on transparency, fairness, security, accountability, and respect for fundamental rights. The EU hopes that broad adoption of the Code will promote public trust and facilitate compliance with the AI Act’s legal requirements.

Meta’s Concerns: Overreach and a Chilling Effect on Innovation

Despite the EU’s intentions, Kaplan argued that the Code’s provisions extend beyond what is reasonable or necessary, creating “legal uncertainties” for companies developing advanced AI models like Meta’s Llama series. Kaplan stated that many of the Code’s requirements are “vague, excessively broad, and could lead to onerous compliance burdens” that would stifle innovation and slow down product deployment for AI developers. He framed the EU’s approach as too restrictive compared to more adaptive regulatory models seen in other parts of the world.

Specifically, Kaplan highlighted concerns about overly stringent transparency rules that could expose proprietary model architectures, heavy documentation and auditing requirements that slow model updates, and potential mandatory pre-market risk assessments that lack clear criteria or consistent enforcement. He warned that such requirements could create a “chilling effect,” deterring investment and research in AI within the European market.

Broader Industry Pushback

Meta is not alone in its hesitation to embrace the EU’s voluntary Code. Several prominent European industrial conglomerates—including Airbus, Bosch, and Siemens—have also expressed reservations, fearing the guidelines could impose bureaucratic overhead and hinder competitiveness globally. These companies share a broader concern that Europe's regulatory landscape risks putting European AI developers at a disadvantage relative to counterparts in the US, China, and other regions with more flexible regulatory regimes.

The tension encapsulates a fundamental debate: whether AI governance should prioritize strict oversight to mitigate risks or favor lighter-touch controls that encourage rapid innovation. Advocates of regulation emphasize risks such as algorithmic bias, privacy violations, disinformation, and safety hazards. Opponents argue that overly cautious policies risk slowing technological progress, economic growth, and the European Union’s strategic autonomy in AI.

Meta’s Strategic Positioning

By refusing to sign the AI Code of Practice, Meta is making a clear statement that it favors regulatory environments that balance risk management with innovation incentives. Kaplan suggested that European policymakers should reconsider the Code’s design to avoid an overly fragmented or punitive ecosystem that impairs technology development. Instead, he advocates for international cooperation and “regulatory frameworks that promote transparency and safety without sacrificing competitiveness.”

Meta’s position puts it at odds with EU regulatory ambitions yet aligns with broader calls from major technology companies for harmonized, innovation-friendly AI rules. The move also reflects Meta’s strategic push to lead in AI, including large language models and generative AI systems, which demand agile development cycles and the freedom to experiment with new architectures and training methods.

Implications for Europe’s AI Future

Meta’s rejection of the EU’s AI Code underscores the challenges Europe faces as it seeks to regulate a rapidly evolving technology while maintaining global leadership in AI. If major industry players decline to participate in voluntary frameworks designed to complement formal legislation, the effectiveness of such initiatives may be undermined. This could lead to fragmented compliance approaches, legal uncertainty, and potential competitive disadvantages for European AI startups and multinationals alike.

The debate highlights the delicate balance regulators must strike between ensuring AI safety, fundamental rights, and ethical standards, and enabling innovation, investment, and rapid deployment. It also illustrates the broader tensions between the European regulatory model—often characterized by precaution and rights-centric governance—and the more market-driven approaches favored in the United States and parts of Asia.

Final Thoughts

Joel Kaplan’s announcement that Meta will not sign the EU’s AI Code of Practice marks a pivotal moment in the evolving landscape of AI regulation. It exposes the fault lines between industry giants and European policymakers over how best to govern transformative technologies without stifling innovation. The coming months will show whether the EU adapts its regulatory framework to address industry concerns, or whether companies like Meta chart alternative paths that shape the global future of AI development and deployment.