Over 100 Companies Sign EU AI Pact Pledges to Drive Trustworthy and Safe AI Development

For organisations seeking compliance with the EU AI Act, adopting best practices is essential. By signing the AI Pact, or by mirroring and implementing its principles, you align your organisation with industry standards that promote trustworthy and ethical AI development.

The IST accreditation supports this journey by advancing AI literacy, enabling organisations to meet compliance requirements effectively. Accredited professionals are rigorously trained in these principles, ensuring they are equipped to implement and uphold the standards necessary for responsible AI governance.

Click here to find out more about our professional accreditation.

Click here to find out about our AI training.


The European Commission has announced a major milestone in its journey towards promoting safe and ethical artificial intelligence (AI) development. Over 100 companies have become the first signatories of the EU AI Pact, a voluntary initiative designed to support the principles of the EU’s landmark AI Act before it fully takes effect. The signatories range from multinational corporations to small and medium enterprises (SMEs) across diverse sectors, including IT, telecoms, healthcare, banking, automotive, and aeronautics.

This initiative represents a significant step in fostering collaboration between industry, civil society, academia, and the newly established EU AI Office. By signing the Pact, companies voluntarily commit to incorporating principles of trustworthy AI into their operations while preparing for the legal requirements of the AI Act.


Core Commitments of the EU AI Pact

Participating organisations commit to at least three core actions:

  1. Developing an AI Governance Strategy
    Signatories commit to creating a governance framework that fosters the adoption of AI while ensuring future compliance with the provisions of the AI Act.
  2. Mapping High-Risk AI Systems
    Companies pledge to identify and map AI systems that are likely to be classified as high-risk under the AI Act. This is a proactive measure to ensure compliance with the legislation’s stringent requirements for these systems.
  3. Promoting AI Literacy and Awareness
    Companies are encouraged to promote ethical and responsible AI development by increasing staff awareness and understanding of AI technologies and their societal impact.

In addition to these core actions, more than half of the signatories have gone a step further, committing to additional measures. These include ensuring human oversight of AI systems, mitigating potential risks, and transparently labelling AI-generated content—such as deepfakes—to bolster public trust.

The EU AI Pact remains open to new signatories, with companies able to join and commit to these voluntary pledges until the AI Act becomes fully applicable.


Boosting EU Leadership in AI Innovation

Beyond the Pact, the European Commission is undertaking initiatives to solidify the EU’s position as a global leader in AI innovation. A cornerstone of this effort is the AI Factories initiative, launched on 10 September 2024. AI Factories will serve as innovation hubs, offering start-ups and industries access to vital resources such as data, talent, and computing power. They are designed to accelerate the development and validation of AI applications across critical sectors, including:

  • Healthcare
  • Energy
  • Automotive and transport
  • Defence and aerospace
  • Robotics and manufacturing
  • Cleantech and agritech

AI Factories are part of the Commission’s broader AI innovation package, unveiled in January 2024. This comprehensive strategy includes measures such as venture capital and equity support, the deployment of Common European Data Spaces, and the ‘GenAI4EU’ initiative aimed at advancing generative AI capabilities. Start-ups also stand to benefit from the Large AI Grand Challenge, which provides financial backing and access to the EU’s supercomputers.

Further strengthening the EU’s AI ecosystem, the Commission will establish a European AI Research Council to harness the potential of data, alongside an Apply AI Strategy to encourage innovative industrial applications of AI.


The AI Act: A Timeline for Implementation

The EU AI Act officially came into force on 1 August 2024, marking the beginning of a new regulatory era for artificial intelligence. While the Act will not be fully applicable for two years, its provisions will take effect in phases:

  • Prohibitions on certain AI uses will be enforced after six months.
  • Governance rules and obligations for general-purpose AI models become applicable after 12 months.
  • Regulations for AI systems embedded in regulated products will apply after 36 months.

These phased timelines provide organisations with the necessary runway to adapt to the legislation while fostering a culture of ethical and trustworthy AI development.


A Collaborative Path Towards Responsible AI

The EU AI Pact exemplifies the importance of voluntary collaboration in advancing safe and trustworthy AI. With over 100 companies already on board, and more expected to join, this initiative is setting a global standard for responsible AI practices. At the same time, the Commission’s investment in AI innovation ensures that Europe remains at the forefront of technological advancement.

As the EU continues its journey towards the full implementation of the AI Act, these collective efforts highlight a shared commitment to balancing innovation with ethical responsibility. Together, industry, academia, and policymakers are shaping a future where AI benefits society as a whole.