The AI Act: new regulations and their significance for companies
1. Who does the AI Act apply to?
The AI Act applies primarily to providers (organisations that develop an AI system and place it on the market or put it into service) and operators (in the official English text "deployers": organisations that use AI systems under their own authority in a professional context) of AI systems that are placed on the market or used in the EU, regardless of whether those organisations are established in the EU or in a third country. As a rule, open-source AI systems that do not fall into the two highest risk categories are exempt (see below). In addition, importers and distributors are also subject to obligations under the AI Act.
In short, the AI Act imposes obligations on all actors involved in the development, manufacture, import, distribution or use of AI systems.
2. Classification of AI systems
The AI Act essentially follows a risk-based approach. It categorises so-called single-purpose AI systems (AI systems with a specific intended use) into four risk categories according to their area of application; each category carries specific prohibitions or compliance and information requirements:
Unacceptable risk (Art. 5): Systems that are considered a clear threat to people's rights and safety will be prohibited in the European Union. Examples include social scoring systems, manipulative AI, and emotion recognition in the workplace and in educational institutions.
High risk (Art. 6-27): High-risk systems, which are the main focus of the regulation, are considered to have a significant impact on the rights and safety of citizens. They are subject to a variety of compliance requirements (see below for details). High-risk systems are divided into two categories:
- Systems for products that are subject to EU safety regulations, e.g. toys, medical devices or lifts (Art. 6(1) in conjunction with Annex I).
- Systems that are operated in certain areas, for example in critical infrastructure, human resources management, education or medical diagnosis (Art. 6(2) in conjunction with Annex III).
Limited risk (Art. 50): The risks of these systems stem from a lack of transparency, which is why specific disclosure obligations apply to them. For example, before using a chatbot, users must be informed that they are interacting with an AI. Providers must also ensure that AI-generated content, in particular deepfakes, can be identified as such.
Minimal risk (Art. 4, 95): Many current AI applications, such as AI-powered video games, spam filters and recommendation services, fall into this category. No special regulations apply to them and they can continue to be used freely.
3. Requirements for providers and operators of high-risk AI systems
The majority of the obligations in the AI Act relate to high-risk AI systems, whose development and use will be subject to a wide range of compliance requirements in the future. These obligations include, for example:
Conformity assessment and registration: The system must undergo a conformity assessment before being placed on the market and must also be registered in an EU database.
Risk management: Providers must establish an appropriate risk management system and carry out a risk assessment over the entire life cycle of the AI system.
Data governance: The training, validation and test data sets must be of high quality in order to avoid bias and discriminatory results.
Technical documentation: Before the system is placed on the market or put into service, technical documentation must be drawn up that allows the competent authorities to verify clearly and comprehensibly that the system fulfils the requirements of the regulation.
Human oversight: High-risk AI systems must be designed in such a way that effective supervision by a human is ensured for the duration of their use.
4. Special requirements for “General Purpose AI” (GPAI)
General-purpose AI models that can be used for a wide variety of purposes ("GPAI models") have the potential for widespread use because of their flexibility, in particular through integration via APIs into other applications. At the same time, the full range of capabilities of these models is difficult to oversee. Examples of such GPAI models include GPT-4, DALL-E and Midjourney.
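Purely by way of illustration, the following sketch shows how a downstream application might integrate such a GPAI model via an API. It assumes the provider's Python client (here the openai package, with an API key set in the environment) and a hypothetical prompt; it is not part of the AI Act itself.

```python
# Illustrative only: integrating a GPAI model into another application via an API.
# Assumes the openai Python package (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4",  # one of the GPAI models mentioned above
    messages=[{"role": "user",
               "content": "Summarise the risk categories of the EU AI Act in three sentences."}],
)
print(response.choices[0].message.content)
```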
These models are subject to a separate classification framework (Art. 51 et seq.), which again follows a tiered approach. All GPAI models are subject to transparency requirements, including, for example, technical documentation and instructions for use. More extensive risk management requirements apply to particularly powerful and influential models posing a "systemic risk" (Art. 55), including a comprehensive risk analysis, regular monitoring and cybersecurity requirements.
5. Implementation and transition periods
The AI Act will enter into force 20 days after its publication in the Official Journal of the EU, which is expected by the end of June 2024. Its provisions will then become applicable in stages:
- 6 months after entry into force (approx. end of December 2024): AI systems with unacceptable risk are banned and must be withdrawn from the market.
- 12 months (approx. June 2025): The provisions on GPAI take effect.
- 24 months (approx. June 2026): All provisions for which no other date is stipulated become applicable.
- 36 months (approx. June 2027): The requirements for high-risk systems under Art. 6(1) (products subject to EU safety legislation, Annex I) become applicable.
6. Sanctions and compliance
The AI Act is enforced through a dual administrative system. At EU level, the new "AI Office", an authority within the European Commission, is responsible for the supervision of high-impact GPAI models and for coordinating implementation in the Member States. In addition, the Member States will establish national authorities responsible for enforcing the regulation.
Depending on their severity, violations of the AI Act can lead to significant fines. These can amount to up to EUR 35 million or 7 % of the company's worldwide annual turnover, whichever is higher, for infringements involving prohibited AI systems, and up to EUR 15 million or 3 % of annual turnover for other infringements. The regulation provides for lower maximum fines for SMEs and start-ups. It should also be noted that the supervisory authorities can require providers to withdraw non-compliant AI systems from the market.
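For illustration purposes only: the applicable ceiling is the higher of the fixed amount and the turnover-based amount. The following minimal sketch, using an assumed turnover figure, shows the calculation.

```python
def maximum_fine(annual_turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
    # The applicable ceiling is the higher of the fixed amount and the turnover-based amount.
    return max(fixed_cap_eur, turnover_share * annual_turnover_eur)

# Hypothetical example: infringement involving a prohibited AI system
# (up to EUR 35 million or 7 % of worldwide annual turnover, whichever is higher),
# assuming a worldwide annual turnover of EUR 2 billion.
turnover = 2_000_000_000
print(maximum_fine(turnover, 35_000_000, 0.07))  # -> 140000000.0, i.e. EUR 140 million
```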
7. Need for action and advice for companies
Companies should use the remaining transition periods to prepare for the new requirements at an early stage. The following steps are of particular importance:
- Inventory: Companies should first check internally which AI systems they use, develop or purchase from external providers. If not yet available, it is advisable to create a continuously updated directory, as the use of AI is expected to increase in the future.
- Classification: The identified AI systems can then be categorised according to their risk. This categorisation can sometimes be complex, but must be carried out carefully because of the widely diverging requirements; a simplified sketch of how such an inventory and classification might be recorded follows after this list.
- Preparation: Once the requirements have been clarified, the concrete implementation of the respective obligations can begin. This includes, for example, the creation of technical documentation, the introduction of a risk management system and the establishment of a governance structure. In addition, it may be advisable to draw up internal guidelines for dealing with AI systems and to raise employee awareness of the new regulations by providing information and training.
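The following is a minimal, purely illustrative sketch of how an internal AI inventory with risk classification could be recorded. The system names, fields and entries are hypothetical and are not prescribed by the AI Act.

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices (Art. 5)
    HIGH = "high"                  # high-risk systems (Art. 6 et seq.)
    LIMITED = "limited"            # transparency obligations (Art. 50)
    MINIMAL = "minimal"            # no specific obligations

@dataclass
class AISystemRecord:
    name: str                 # internal name of the system
    purpose: str              # intended use within the company
    supplier: str             # internal development or external provider
    risk_category: RiskCategory

# Hypothetical entries in a continuously updated inventory
inventory = [
    AISystemRecord("CV screening tool", "pre-selection of job applicants",
                   "external provider", RiskCategory.HIGH),
    AISystemRecord("Customer service chatbot", "answering customer enquiries",
                   "internal development", RiskCategory.LIMITED),
]

# Example: list all systems that trigger high-risk compliance obligations
high_risk = [r.name for r in inventory if r.risk_category is RiskCategory.HIGH]
print(high_risk)  # -> ['CV screening tool']
```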
8. Conclusion
The AI Act is a milestone in AI regulation, but also a very extensive and complex set of regulations, the content of which can only be touched on in this article. Companies are required to carefully review their AI systems and processes and adapt them to the new legal requirements within the transition periods. Early preparation and close collaboration with technical and legal experts are crucial in order to ensure compliance and utilise the innovative potential within the new legal framework.
Our law firm is here to advise you. We support you in mastering the scope and complexity of the AI Act and in preparing optimally for the upcoming requirements.