The AI Act
What is the purpose of the AI Act?
The AI Act is an EU Regulation (Regulation (EU) 2024/1689), meaning it takes direct effect across all EU Member States without the need for national implementing legislation. The AI Act aims to create a robust framework to ensure the safe and ethical development and deployment of AI technologies in the EU.
It seeks to foster innovation while ensuring that AI systems are developed and used under conditions that protect health, safety, fundamental rights, and intellectual property rights. The Act establishes a risk-based approach to the classification and regulation of AI systems, ensuring that higher-risk AI systems are subject to stricter controls while minimising the regulatory burden on low-risk systems.
Key provisions
The AI Act categorises AI systems based on risk levels:
- Unacceptable risk: AI systems that pose an unacceptable risk are prohibited.
- High risk: these systems are regulated and must meet stringent requirements to access the EU market.
- Limited risk: these systems are subject to transparency obligations.
- Minimal risk: these systems are largely unregulated.
The AI Act also recognises the concept of general purpose AI (GPAI): AI systems that can be used for multiple purposes and are typically generative in nature, ie creating output content based on user inputs or 'prompts'. Specific rules apply to GPAI models, particularly those with high-impact capabilities, which are subject to stricter requirements due to their potential systemic risks and significant market impact.
The AI Act provides a legislative framework to regulate entry into the EU internal market and mandates conformity assessments to verify AI systems' compliance with established standards. These assessments can be carried out in two ways: self-assessments by AI system providers, or third-party assessments conducted by designated notified bodies. Notified bodies also have auditing powers to ensure ongoing conformity.
The AI Act creates a number of new advisory and regulatory bodies including:
- The AI Office: attached to the European Commission, the AI Office co-ordinates the implementation of the AI Act across Member States and oversees the compliance of general purpose AI providers.
- The European Artificial Intelligence Board: comprising representatives from each Member State, the Board advises the Commission and facilitates the application of the AI Act, fulfilling a similar role to that of the European Data Protection Board under the GDPR.
- The Advisory Forum: this forum, consisting of a balanced selection of stakeholders from industry, start-ups, SMEs, civil society, and academia, provides technical expertise and advice to the Board and the Commission.
- National competent authorities: Member States must designate national competent authorities responsible for implementing the AI Act and conducting market surveillance. They verify AI systems' compliance, oversee conformity assessments, and designate the notified bodies that carry out third-party conformity assessments.
Where will the AI Act apply?
The AI Act applies extraterritorially, covering any provider or distributor of AI whose services or products reach the EU market. This includes providers and deployers located outside the EU where the output of their AI system is used within the EU.
Who will have obligations under the AI Act?
The Act imposes regulatory obligations on providers, deployers, importers, distributors, and product manufacturers of AI systems with a link to the EU market. AI systems developed or used exclusively for military, defence, or national security purposes are explicitly excluded from the scope of the AI Act.
Are there sanctions for non-compliance?
Organisations can face significant fines for non-compliance, which vary depending on the infringement and the nature of the organisation found liable.
- The highest fines are reserved for the use of prohibited AI systems and can be as much as EUR35 million or 7% of total worldwide annual turnover for the preceding financial year, whichever is higher.
- Non-compliant GPAI providers can be fined up to EUR15 million or 3% of their total worldwide annual turnover for the preceding financial year, whichever is higher.
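To illustrate the 'whichever is higher' mechanic in the two fine tiers above, the following minimal Python sketch computes the upper fine limit for a hypothetical organisation. The turnover figure and function name are illustrative assumptions; only the EUR and percentage thresholds are taken from the Act, and the sketch is not legal guidance.

```python
def max_fine(worldwide_turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Return the upper fine limit: the greater of the fixed cap and the turnover-based cap."""
    return max(fixed_cap_eur, turnover_pct * worldwide_turnover_eur)

# Hypothetical organisation with EUR 2 billion worldwide turnover in the preceding financial year
turnover = 2_000_000_000

# Prohibited AI systems tier: up to EUR 35 million or 7% of turnover, whichever is higher
print(max_fine(turnover, 35_000_000, 0.07))  # 140000000.0 -> the 7% figure is higher, so it sets the cap

# GPAI provider tier: up to EUR 15 million or 3% of turnover, whichever is higher
print(max_fine(turnover, 15_000_000, 0.03))  # 60000000.0 -> the 3% figure is higher, so it sets the cap
```

For smaller organisations the fixed EUR amount will typically exceed the percentage figure, so the fixed cap becomes the relevant ceiling instead.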
Key dates and deadlines
- 1 August 2024 – the AI Act entered into force.
- 2 February 2025 – the provisions on prohibited AI systems apply.
- 2 May 2025 – codes of practice must be ready.
- 2 August 2025 – the provisions on GPAI apply.
- 2 August 2026 – all remaining provisions (except those relating to harmonised products) apply.
- 2 August 2027 – the provisions relating to harmonised products apply.