Published on: 14 December 2023

The new AI Act: what it means for entrepreneurs

Developments in Artificial Intelligence (AI) are happening at a rapid pace. AI systems can now make their own decisions, generate content and assist people, among other things. Such developments offer many opportunities, but they also pose risks to society. For instance, companies can use AI to improve productivity by automating tasks, but this can put groups of (vulnerable) people at risk of discrimination. Take the Dutch childcare benefits scandal, for example. In addition, training AI systems consumes a great deal of energy and thus generates considerable CO2 emissions, which is harmful to the environment.

These developments and risks prompted the European Commission to propose new AI rules for Europe in early 2021: the AI Act. But what exactly does this regulation entail, and what does it mean for you as a business owner? In this article, you will read more about the AI Act.

What is the purpose of the AI Act, and to whom will it apply?

The purpose of the AI Act is to make the use and development of AI applications in the EU safer, more transparent, traceable, non-discriminatory and environmentally friendly. Citizens and other users must be able to trust that AI is used safely and that fundamental rights, such as the right to privacy, non-discrimination and freedom of expression, are guaranteed.

At the same time, the AI Act should encourage companies to continue developing AI systems by facilitating investment and innovation in AI. In this way, society should continue to enjoy the benefits of AI.

As you may already sense, the AI Act will soon restrict your freedom as an entrepreneur: it imposes obligations on any company that develops, sells or uses AI systems.

Which obligations will my company soon have to comply with?

The specific obligations your company will soon have to comply with depend on the level of risk posed by the AI system in question. What are the risk levels?

The risk levels

The AI Act distinguishes four risk levels:

  • Unacceptable: think of government AI systems that award citizens points based on whether their behaviour is deemed desirable (social scoring). Such systems pose such a great risk to safety and fundamental rights that they are banned outright.
  • High: this includes systems used for automated decision-making, for example in the allocation of benefits. Systems in this risk level will soon have to comply with the AI Act obligations set out below.
  • Limited: examples include chatbots. Only a few transparency obligations will soon apply to these types of systems. For instance, users must be clearly informed that they are dealing with an AI system.
  • Minimal: these include systems such as spam filters. No additional obligations will apply to these systems.

How do you know which risk level your AI system falls into? The AI Act has a list of unacceptable-risk systems in Article 5 for this purpose.

For high-risk systems, you can consult Annexes II and III of the AI Act. Annex II lists AI systems that are covered by specific EU regulations and are therefore at least high risk. Annex III, on the other hand, only lists categories of systems that may be high risk; it is up to you to assess this yourself. Do this carefully: if you misclassify your AI system, you will soon run the risk of a fine.

The obligations at the high risk level

Do you develop, sell or use an AI system that falls within the high risk level? Then, in brief, the following obligations will soon apply to you:

  • Risk management: Risks must be identified, assessed, managed and mitigated before the AI system is placed on the market and during the system’s lifetime.
  • Data management: Will data or data sets be used to train the AI system? If so, the data must be unbiased, of high quality and representative.
  • Technical documentation: The technical documentation of the AI system should demonstrate compliance with the AI Act in such a way that competent authorities can verify it.
  • Transparency: There must be full transparency about the use of AI systems. This includes providing detailed information on how the system operates and what types of data it collects. It is also mandatory to be transparent about the algorithms used and to explain how they work.
  • Human oversight: It must be ensured that the AI system remains under human supervision while in use.
  • Conformity assessment: Before an AI system is placed on the market, it must be assessed by you as the provider or importer. This means checking whether the system meets the obligations mentioned above. Note: will the system be used as a safety component for public infrastructure or for biometric identification? Then you must have the assessment carried out by an external party.

From when will the AI Act apply?

The AI Act is not expected to apply until 2026. The European member states and the European Parliament reached a provisional agreement on the proposed AI Act on 8 December 2023. The agreement still needs to be formally approved by the full European Parliament and the individual member states, but this is usually just a formality.

Once the AI Act is approved, AI systems with an unacceptable risk must be taken off the market within just 6 months. For the remaining obligations, there is a 2-year transition period.

We therefore advise you to take stock now of which AI systems your company uses and what level of risk they involve. Can't figure it out? Then call in a specialist.

Any questions?

Do you have any questions? Then contact one of our lawyers by e-mail or telephone, or fill in the contact form for a free initial consultation. We will be happy to think along with you.
