Business, the media and the general public are electrified by the topic of artificial intelligence. Development in the field of generative artificial intelligence (AI), which has accelerated rapidly since the launch of ChatGPT, is generating enthusiasm, but also concern about its consequences. Even Pope Francis recently commented on the possible consequences of AI development at the G7 summit in Italy.

The Artificial Intelligence Act of the European Union (“EU AI Act”), adopted by the EU member states on 21 May 2024, comes at just the right time. The technological leap in generative AI has made the mass distribution of AI-generated content and false information in words, images and sound easier than ever before. The EU AI Act, the first draft of which was published in 2021, is the world’s first comprehensive regulation of artificial intelligence. It aims to safeguard the security, values and fundamental rights – such as civil liberties and equality rights – of EU citizens wherever artificial intelligence is used. In addition, the Act is intended to promote investment and innovation in Europe: thanks to the harmonised legal framework and clear guidelines, companies in the field of artificial intelligence now have planning certainty and can make targeted investments.

However, there are also critical voices warning against over-regulation of this still young technology, because fulfilling the requirements of the AI Act involves additional effort and ties up resources that could otherwise be put to productivity-enhancing use. As a common economic area, Europe has so far been more of an observer than a shaper of international competition in the development of artificial intelligence, and it must hurry to close the gap with the leading nations, such as the USA. Even so, the EU AI Act should be seen not as a brake on innovation, but as an opportunity.

EU AI Act applies a targeted approach to AI risk assessment

The EU AI Act follows a risk-based approach and divides AI applications into four risk classes. AI systems that fall into the highest risk class, “unacceptable risk”, will be banned; this class covers systems whose potential for harming users is considered intolerable, such as social scoring systems. The remaining risk classes are subject to requirements that become stricter as the risk increases. Providers of high-risk AI systems, for example, must set up structures for quality and risk management. Separate rules have been drawn up for general-purpose AI, which is particularly important for generative artificial intelligence: among other things, providers must ensure that they respect copyright when using training data, and they must label AI-generated content.
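
To make the tiered logic concrete, the following minimal Python sketch models the four risk classes and a small sample of the obligations attached to each. The tier names are paraphrased and the obligations are simplified illustrations, not legal text.

```python
from enum import Enum

class RiskClass(Enum):
    """The four risk tiers of the EU AI Act (names paraphrased, not legal text)."""
    UNACCEPTABLE = "unacceptable risk"  # banned outright
    HIGH = "high risk"                  # strict obligations before and after market entry
    LIMITED = "limited risk"            # mainly transparency duties
    MINIMAL = "minimal risk"            # essentially no specific obligations

# Simplified, illustrative sample of obligations per tier.
SAMPLE_OBLIGATIONS = {
    RiskClass.UNACCEPTABLE: ["prohibited - may not be operated in the EU"],
    RiskClass.HIGH: [
        "quality management system",
        "risk management system",
        "technical documentation",
    ],
    RiskClass.LIMITED: ["transparency duties, e.g. labelling AI-generated content"],
    RiskClass.MINIMAL: ["no specific obligations"],
}

for tier in RiskClass:
    print(f"{tier.value}: {'; '.join(SAMPLE_OBLIGATIONS[tier])}")
```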

The implementation deadlines for the measures required under the EU AI Act vary depending on the risk class. Following the imminent publication of the law in the EU Official Journal, the deadlines will range from six to 36 months. AI systems posing an “unacceptable risk” are subject to the shortest deadline: after just six months, they may no longer be operated in the EU.
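
As a back-of-the-envelope illustration of these transition periods, the sketch below computes hypothetical applicability dates from an assumed publication date. The publication date is an assumption for illustration only, and the exact milestones depend on the Act's entry-into-force rules.

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamping the day to the month's end."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

# Assumed publication date - for illustration only, not the official date.
publication = date(2024, 7, 1)

# The six-to-36-month range described above, keyed by milestone.
transition_periods = {
    "ban on 'unacceptable risk' systems": 6,
    "longest transition period": 36,
}

for label, months in transition_periods.items():
    print(f"{label}: applies from {add_months(publication, months).isoformat()}")
```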

How should the EU AI Act be assessed in terms of its impact on the capacity for innovation?

Although the AI Act, as an EU regulation, applies directly in the member states, supervisory structures still have to be put in place at national level, and it is already clear that compliance with the comprehensive rules will involve considerable effort. In future, providers, distributors and deployers of AI systems will have to take the necessary precautions to avoid violating the provisions of the EU AI Act and being penalised. To avoid putting small and medium-sized enterprises and start-ups at a competitive disadvantage compared to the big tech players, the EU has made special provisions for them in the EU AI Act: they are to be given access to so-called regulatory sandboxes and real-world testing so that they can drive innovation within a protected framework.

On the other hand, the EU AI Act can be seen as a compass that points to where artificial intelligence has significant weaknesses and needs improvement. This is currently evident, for example, in the ubiquitous large language models. AI models such as GPT-4o from OpenAI can summarise content in response to written or spoken prompts, generate new text in a particular style, or create new images. These models are trained on data from which patterns are extracted using machine learning algorithms. Many companies adopt existing large language models, as developing an in-house AI model is very costly. However, “off-the-shelf” AI models are black boxes: the user knows neither which data the model was trained on nor what happens to the data they themselves enter into the system. A company that uses a generative AI model can therefore infringe copyright without knowing it and place its own sensitive data in unauthorised hands. The EU AI Act puts a stop to this questionable development. Providers and B2B users of general-purpose AI must produce technical documentation and make it available to the supervisory authorities on request. End users of general-purpose AI should also be able to consult a publicly accessible summary of the training data used.
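
To illustrate how such a model is typically consumed in practice, the following minimal sketch calls a hosted LLM to summarise a document. It assumes the OpenAI Python SDK and an API key in the environment; the prompt wording and model choice are purely illustrative.

```python
# Minimal summarisation sketch. Assumes the OpenAI Python SDK
# (`pip install openai`) and an OPENAI_API_KEY in the environment;
# prompt wording and model choice are purely illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

document = "... long report, contract or article to be condensed ..."

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You summarise documents concisely."},
        {"role": "user", "content": f"Summarise the following text in three sentences:\n\n{document}"},
    ],
)

print(response.choices[0].message.content)
```

Precisely because such a call sends the document to an external service, the black-box concern described above applies: the user has no direct visibility into how the provider stores or reuses that input, which is exactly the gap the Act's documentation and transparency duties are meant to close.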

EU AI Act does important pioneering work and promotes good AI

The EU AI Act is a pioneering piece of work in the regulation of artificial intelligence. It provides a good compass for addressing the existing weaknesses of AI systems, and it obliges companies to introduce sensible risk management in order to protect users from harm and to safeguard fundamental rights in the EU. Companies operating in the field of AI in turn gain planning certainty from the harmonised legal framework. The EU AI Act should therefore be seen not as a stumbling block, but as a guide for the European AI industry.