The new AI Act is causing a stir in the EU - and not always for the right reasons. While politicians and companies are working hard to implement the complex regulation, existing laws threaten to get in the way. The AI Act is intended to regulate AI systems and ensure that high-risk applications are used safely and responsibly in the EU. However, many experts warn that the regulation will clash with existing rules and thus create legal uncertainty. The EU must now act quickly to resolve these conflicts before implementation becomes a hurdle for companies.

The hurdles in implementing the AI Act

The AI Act takes an innovative approach: it classifies AI systems according to their risk potential and imposes stricter requirements the higher the risk. However, this risk-based approach is not perfect. Many AI applications that fall under the AI Act are already covered by existing laws - such as the General Data Protection Regulation (GDPR) or the Digital Services Act (DSA). The problem is that these laws are often not coordinated with one another, which can lead to overlaps and even contradictions.

This is particularly problematic in areas such as the financial sector, healthcare and the automotive industry. One example: AI-based credit assessment tools or cancer detection systems could face diverging legal requirements that hinder their development. Here, the AI Act could come into conflict with existing regulations and slow down the adoption and use of innovative AI technologies.

The danger of uncertainty: legislation without a clear direction

What does this mean for companies that offer AI-based products and services? A legal minefield is emerging in which it is unclear which laws will ultimately apply. Data protection rights in particular could fall by the wayside. The civil rights organization Privacy International, for example, criticizes that AI models such as ChatGPT or Gemini were trained on personal data without a sufficient legal basis - in its view, a clear violation of the GDPR.

Another example: In the automotive industry, where AI is increasingly used in driver assistance systems, the AI Act could come into conflict with existing product safety and liability regulations. Companies are left asking which rules they must comply with to stay out of legal difficulties.

What needs to happen now?

The solution could lie in better dovetailing the existing legal frameworks. The AI Act should not be viewed in isolation, but in conjunction with other digital regulations such as the DSA and the GDPR. Closer coordination could help clarify legal gray areas and avoid duplicate requirements.

In the short term, national supervisory authorities should issue clear guidelines on the practical application of the AI Act - especially in specific sectors such as finance or healthcare. In the long term, a comprehensive revision and adaptation of existing regulations is needed to ensure lasting harmonization between the AI Act and other laws. Regular review and adaptation of the rules to new technological developments is also urgently required.

With the AI Act, the EU has taken a bold step towards the future. However, without precise, coordinated legislation, the regulation could have exactly the opposite effect and drown companies in a thicket of contradictory regulations. There is an urgent need for improvement here before we lose control of the AI market and nip valuable innovation in the bud. One thing is clear: the EU has a huge task ahead of it - and time is running out!
