Compliance by design: How can the requirements of the EU AI Regulation be met?


With the EU AI Regulation (AI Act), the European Union is creating a comprehensive legal framework for the use of artificial intelligence for the first time. Providers of high-risk AI systems in particular are thus faced with the challenge of complying with a multitude of requirements – from risk management and data governance to CE marking. A compliance-by-design approach is crucial to meeting these requirements efficiently and sustainably. This means that regulatory requirements are not only checked at the end of a project, but are systematically integrated into the development process from the outset.

From quality by design to compliance by design

While ‘quality by design’ aims to ensure consistent quality objectives such as accuracy, robustness and fairness, ‘compliance by design’ extends this approach to include regulatory requirements. However, there is no traditional approval process for AI systems in the EU and, accordingly, no dedicated approval authority. Instead, providers themselves declare in the form of a declaration of conformity that their product meets the legal requirements. In certain cases, a notified body checks whether this is the case and issues a certificate. Only then may the CE mark be affixed and the system marketed. Depending on the type of AI system, complying with these legal requirements can entail considerable effort for providers and operators.

Key requirements for high-risk AI systems

The EU AI Regulation prescribes a series of measures that providers must implement. These include, first and foremost, a risk management system (Art. 9) that systematically identifies and assesses potential risks. Another focus is on data and data governance (Art. 10) to ensure that training data is of high quality, representative and free from discriminatory bias. In addition, there is an obligation to provide technical documentation (Art. 11) and records (Art. 12) to ensure full traceability. Transparency obligations (Art. 13) and appropriate human oversight (Art. 14) are also required to ensure that decisions remain verifiable. Finally, the regulation sets out requirements for accuracy, robustness and cybersecurity (Art. 15) and formulates labelling requirements. All these points are incorporated into a comprehensive quality management system that must be maintained on an ongoing basis.

Declaration of conformity and market surveillance

Before the AI system can be marketed, the EU AI Regulation requires a conformity assessment (Art. 43), which results in the EU declaration of conformity (Art. 47) and the affixing of the CE marking (Art. 48). If necessary, providers may need to involve a notified body. Providers must also enter their systems in a register (Art. 49) and are obliged to initiate corrective measures and inform the authorities if problems arise (Art. 20). Appropriate processes must be established in advance for this purpose. At the same time, market surveillance authorities keep an eye on the systems (Art. 74), and providers must ensure continuous monitoring after placing the product on the market (Art. 72, 73). This makes it clear that compliance does not end with the product launch, but remains an ongoing process.

Roadmap: 10 steps to EU AI compliance

To bring an AI system to the EU market in compliance with the rules, a structured approach with appropriate planning is recommended. We suggest a 10-step approach:

1. Roadmap: How do I proceed?

Drawing up a schedule for development and market launch. A structured project plan with clear milestones is the basis for incorporating regulatory requirements in good time.

2. Purpose: What exactly should the AI do?

Precisely defining the tasks the system is to perform. Only when the intended use is clearly described can the appropriate regulatory requirements be determined.

3. Qualification: Is it an AI system?

Checking whether the application actually qualifies as an AI system according to the definition in the EU AI Regulation. This is crucial in order to follow the correct compliance paths.

4. Risks: Which risk category applies?

Classifying the system in the appropriate risk category. The EU AI Regulation takes a risk-based approach. The lower the risks, the lower the regulatory requirements, and vice versa.

5. Laws: Which laws, directives and standards apply?

There are many, not just the EU AI Regulation. Therefore, all relevant legal frameworks are identified here, from EU regulations and national laws to industry-specific standards and guidelines.

6. Stakeholders: Who bears what responsibility?

Clarifying who assumes which roles as manufacturer, provider, operator or supplier. Responsibilities must be clearly defined in contractual and organisational terms, as the different roles entail different rights, obligations and (liability) risks.

7. Obligations: What requirements must be met in detail?

Only now do we get to the actual specifications: deriving the specific obligations and technical requirements from the characteristics of the product, such as risk management, data governance, technical documentation, transparency obligations or technical features. Standards and other (technical) specifications play an important role here as evidence of conformity.

8. Conformity: How do I obtain the CE marking?

Carrying out the conformity assessment, drawing up the EU declaration of conformity and affixing the CE marking. This is a prerequisite for placing the product on the market. If necessary, a notified body must be involved for the relevant certification.

9. Monitoring: What happens after the product is placed on the market?

Implementing a monitoring system that tracks the product's performance, safety and incidents. Providers may need to take corrective action and cooperate with the authorities.

10. Market: What special aspects need to be considered when entering the market?

Consideration of industry-specific or national particularities that may go beyond the EU AI Regulation. These include, for example, additional certifications or requirements from supervisory authorities.
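For readers who want to track the roadmap in their own project tooling, the 10 steps above can be sketched as a simple checklist structure. This is purely illustrative: the step titles and questions come from this roadmap, while the `Step` and `Roadmap` classes and their fields are hypothetical names chosen for the example, not part of the EU AI Regulation or any standard tool.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One step of the compliance roadmap (illustrative only)."""
    number: int
    title: str
    question: str
    done: bool = False

@dataclass
class Roadmap:
    steps: list[Step] = field(default_factory=list)

    def next_open_step(self):
        """Return the first step not yet completed, or None if all are done."""
        return next((s for s in self.steps if not s.done), None)

    def progress(self) -> float:
        """Fraction of completed steps, between 0.0 and 1.0."""
        return sum(s.done for s in self.steps) / len(self.steps)

# The 10 roadmap steps as described in this article
ROADMAP = Roadmap([
    Step(1, "Roadmap", "How do I proceed?"),
    Step(2, "Purpose", "What exactly should the AI do?"),
    Step(3, "Qualification", "Is it an AI system?"),
    Step(4, "Risks", "Which risk category applies?"),
    Step(5, "Laws", "Which laws, directives and standards apply?"),
    Step(6, "Stakeholders", "Who bears what responsibility?"),
    Step(7, "Obligations", "What requirements must be met in detail?"),
    Step(8, "Conformity", "How do I obtain the CE marking?"),
    Step(9, "Monitoring", "What happens after placing on the market?"),
    Step(10, "Market", "What market-entry particularities apply?"),
])
```

Because the steps build on one another, working through `next_open_step()` in order mirrors the sequential logic of the roadmap: a system's risk category (step 4), for example, must be settled before its concrete obligations (step 7) can be derived.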

Advantages and benefits of the roadmap

Providers of AI systems currently take on considerable risks when bringing innovations to market: regulations and bureaucracy hinder market access, and the EU AI Regulation has further complicated the situation. The cost of marketing AI systems will continue to rise. This makes it all the more important to plan in detail what costs and methodological issues a provider will face. Many problems and undesirable developments can be avoided at an early stage if regulatory requirements are taken into account from the outset by means of ‘compliance by design’.

At this point, the roadmap provides essential information and helps to assess the economic risks of the project correctly. It makes it possible to estimate the expected workload and costs for the entire compliance process, as well as additional costs such as licences, registrations, testing services or the involvement of a notified body, and thus the total cost of the project.

The roadmap offers a further advantage for smaller and younger technology companies, whose projects are often financed by external investors. It helps to make the right assumptions when presenting the financing plan and to build trust among investors, because the compliance effort is often significantly underestimated.

Conclusion

The EU AI Regulation places additional requirements on the use of AI systems. Companies that adopt a compliance-by-design approach early on and take a planned approach benefit twice over: not only do they avoid regulatory risks, they also build trust among customers, financiers and authorities. Integrating compliance into development from the outset saves on subsequent rework, shortens time-to-market and positions the company as a responsible AI provider.