Artificial intelligence has recently played an infamous role in the spread of misleading content. New applications such as OpenAI’s recent AI video tool Sora are making it increasingly easy to create authentic-looking, AI-generated content. The deceptively realistic falsification or manipulation of existing content, known as deepfakes, is a downside of this technological progress. However, artificial intelligence, with its ability to identify patterns across billions of data records in a matter of seconds, can also be used as ‘good’ AI to detect critical and prohibited content – even when content generated by ‘bad’ AI floods the internet en masse.

The EU Digital Services Act (DSA), which has applied in full since 17 February 2024, makes this topic even more pressing. Together with the EU Digital Markets Act, applicable since 2023, it forms a kind of basic law for the internet. Alongside safeguarding fundamental rights, it aims to protect consumers and users from illegal and misleading content. With the adoption of the Digitale-Dienste-Gesetz (DDG) in the German Bundestag on 21 March 2024, the DSA was implemented in German law. As a result, online service providers such as digital marketplaces, social media providers, community forums, music and video streaming platforms and booking portals – but also, to a limited extent, cloud and web hosting providers as well as internet service providers and domain registries – now fall within its scope. The guiding principle: the more content an online provider stores or processes, the more comprehensive the regulation. The most strictly regulated are 19 very large online service providers, each with more than 45 million active users per month. Among other things, the Digital Services Act obliges online service providers to set up contact points for supervisory authorities and users, publish transparency reports, comply with fundamental rights and information obligations towards users, take measures against abusive content reports and counter-notifications, disclose their recommendation algorithms to users, and refrain from displaying advertising to minors or advertising based on personal data.

Online providers that fail to comply with the rules under the DSA or the DDG face fines of up to six per cent of their global annual turnover. To avoid being targeted by the Bundesnetzagentur – the supervisory authority designated under the DDG – online providers must now also act more swiftly against illegal content. The contact points they are required to set up will make it easier for users to report offensive content.

Artificial intelligence as part of the solution

AI can already be used today to check any type of content in real time for prohibited labelling – which includes not only symbols but also wording. Hate speech, nudity and other illegal formats can likewise be identified immediately, before they spread on a platform and cause harm. Artificial intelligence can also support biometric procedures and identity verification, thereby contributing to the protection of children and young people. For example, artificial intelligence can be trained to estimate a person’s age very accurately. If a video streaming platform offers age-restricted films, AI could in future determine the age of viewers using a video sensor. If the person being screened turns out to be younger than the age limit, playback of the video can be blocked automatically.
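The age-gating logic described above can be sketched in a few lines. Everything here is hypothetical and for illustration only: `estimate_age` is a stand-in for a trained computer-vision model, and the safety margin is an invented parameter, not part of any real platform’s implementation.

```python
def estimate_age(frame: bytes) -> int:
    """Stand-in for an ML model that estimates a viewer's age from a
    camera frame. A real system would run a trained vision model here;
    this placeholder simply returns a fixed value for the example."""
    return 17


def should_block(estimated_age: int, age_rating: int, margin: int = 2) -> bool:
    """Block playback if the estimated age falls below the film's age
    rating. The safety margin errs on the side of blocking, to absorb
    the model's estimation error."""
    return estimated_age < age_rating + margin


# Example: a title rated 18, viewer estimated at 17 -> playback is blocked.
blocked = should_block(estimate_age(b""), age_rating=18)
```

Whether to add such a margin, and how large it should be, is a policy choice: a conservative gate blocks some of-age viewers, while a lax one risks showing restricted content to minors.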

One area in which AI can provide valuable services is content moderation. To prevent illegal content such as scenes of violence from being published, social media companies rely in part on human content moderators. These moderators often work for subcontractors in emerging and developing countries, under poor working conditions and high psychological stress. Last year, content moderators in Kenya filed a lawsuit against Facebook and its contractor Sama over their precarious working conditions. Artificial intelligence has the potential to identify and block dangerous content fully autonomously in the future, making human review of such content obsolete.
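Until models are reliable enough to decide every case alone, a common interim pattern – a simplified assumption on my part, not a description of any specific platform – combines automatic decisions for clear-cut cases with human review for uncertain ones. The classifier below is a trivial word-count stand-in for a real ML model; all names, word lists and thresholds are invented.

```python
BLOCK_THRESHOLD = 0.9    # auto-block above this confidence
APPROVE_THRESHOLD = 0.2  # auto-approve below this confidence


def violation_score(content: str) -> float:
    """Stand-in for an ML classifier returning the estimated probability
    that the content violates platform rules. A real system would run a
    trained text or image model here."""
    banned_terms = {"violence", "hate"}  # illustrative word list only
    hits = sum(term in content.lower() for term in banned_terms)
    return min(1.0, hits * 0.5)


def moderate(content: str) -> str:
    """Route content: confident violations are blocked automatically,
    clearly harmless cases are approved, and uncertain cases are sent
    to a human moderator."""
    score = violation_score(content)
    if score >= BLOCK_THRESHOLD:
        return "blocked"
    if score <= APPROVE_THRESHOLD:
        return "approved"
    return "human_review"
```

The two thresholds control how much work remains for humans: the closer they move toward each other, the more decisions the model takes autonomously – and the higher the cost of its mistakes.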

Artificial intelligence can offer extensive support to all service providers that fall under the new ‘digital basic law’. Online providers can use AI today and in the future to implement the stricter requirements of the Digital Services Act and the Digitale-Dienste-Gesetz efficiently. When it comes to the possibilities of AI, we are only at the beginning of the development cycle.