
AI CERTS
Microsoft Likely to Sign EU AI Code While Meta Declines Participation
As Europe sharpens its regulatory approach toward artificial intelligence, Microsoft is expected to sign the EU AI Code of Conduct, aligning with the European Union’s push for responsible AI governance. Meanwhile, Meta Platforms has declined to participate, raising questions about industry unity in addressing ethical AI development.

Microsoft Steps Toward Compliance and Transparency
Microsoft’s support for the EU AI Code reflects its proactive stance on responsible innovation and AI ethics. According to EU officials, the company has been "constructively engaged" with the European Commission and is finalizing commitments to uphold transparency, fairness, and risk mitigation in AI systems.
Brad Smith, Vice Chair and President of Microsoft, recently stated:
“We believe voluntary commitments, backed by clear legislation, are the path forward for building trust in AI.”
This move is seen as a strategic alignment with Europe's AI Act, which will soon come into full effect. By voluntarily endorsing the code, Microsoft is positioning itself as a leader in AI responsibility, especially important as generative AI tools like Copilot and Azure OpenAI continue to scale across industries.
Meta Refuses to Sign the EU AI Code
In contrast to Microsoft, Meta has decided not to sign the EU AI Code of Conduct at this time. Meta has not issued an official statement, but insiders suggest the company is wary of voluntary commitments that could later shape regulatory enforcement.
This refusal has sparked debate among EU policymakers and digital rights advocates. They argue that large tech companies must lead by example, especially as AI systems increasingly affect elections, content moderation, and data privacy.
Thierry Breton, EU Commissioner for Internal Market, remarked:
“We welcome companies that take responsibility. Others will need to explain their absence.”
Implications for AI Governance in Europe
The EU AI Code is part of the Commission’s broader effort to implement the AI Act, aiming to make Europe a global standard-bearer for AI regulation. It sets expectations around:
- Transparency of AI model outputs
- Protection from algorithmic bias
- Clear labeling of AI-generated content
- Risk classification for AI systems
Microsoft’s willingness to participate builds trust in its AI development, whereas Meta’s non-participation raises compliance questions. This divide could shape how tech giants are perceived in future AI regulation and in the eyes of the public.
Industry Momentum Behind the Code
More than 150 companies have been approached to sign the EU AI Code, including Google, IBM, and OpenAI. Some, like Anthropic and Mistral, have signaled cautious interest but await clarity on enforcement mechanisms.
For professionals looking to understand AI regulations and their career implications, check out our article:
👉 Learn How to Become an AI Policy Expert
What’s Next?
With Microsoft likely joining the EU initiative, pressure is mounting on other AI leaders to take a stand. The Commission has also warned that while the code is voluntary for now, regulatory consequences may follow once the AI Act becomes enforceable in 2026.
For a deeper look at AI laws and frameworks shaping the tech industry, read:
👉 Explore the Future of AI Regulations Worldwide
Conclusion
Microsoft's readiness to sign the EU AI Code marks a pivotal moment in AI accountability, while Meta’s reluctance highlights the ongoing tension between innovation and regulation. As the AI landscape evolves, public trust and ethical responsibility will increasingly define success for tech giants.