More than three years after the draft was presented, the European Artificial Intelligence Act (AIA) was published in the Official Journal of the European Union on July 12, 2024 as Regulation (EU) 2024/1689. The AIA entered into force on August 1, 2024, and most of its provisions will apply to AI systems in the EU from August 2, 2026 (Art. 113 AIA). Individual parts of the AIA have different dates of application:
- Chapter I (General Provisions) and Chapter II (Prohibited AI Practices) from February 2, 2025,
- Section 4 (Notifying Authorities and Notified Bodies) in Chapter III (High-Risk AI Systems), Chapter V (General-Purpose AI Models), Chapter VII (Governance), Chapter XII (Penalties) with the exception of Art. 101 AIA (Fines for providers of general-purpose AI models), and Art. 78 AIA (Confidentiality) from August 2, 2025, and
- Art. 6 (1) AIA (Classification rules for high-risk AI systems) and the corresponding obligations from August 2, 2027.
The AIA creates a uniform, horizontally applicable legal framework, in particular for the development, placing on the market and use of artificial intelligence. The 144-page regulation is divided into 113 articles and 13 annexes.
According to Art. 6 (1) AIA, AI systems that are covered by Regulation (EU) 2017/745 on medical devices (Medical Device Regulation, MDR) or Regulation (EU) 2017/746 on in-vitro diagnostics (In-vitro Diagnostics Regulation, IVDR) and that must undergo a third-party conformity assessment are classified as high-risk AI systems. Put simply, these are all AI-based medical devices in risk classes IIa to III and in-vitro diagnostics in risk classes B to D.
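This simplified classification rule can be expressed as a short decision function. The following sketch is purely illustrative (the class and field names are our own assumptions, and it is not a legal decision procedure):

```python
# Illustrative sketch of the simplified Art. 6 (1) AIA reading above:
# an AI-based product covered by the MDR or IVDR that requires third-party
# conformity assessment counts as a high-risk AI system.
from dataclasses import dataclass

@dataclass
class AiMedicalDevice:
    regulation: str               # "MDR" or "IVDR" (assumed labels)
    risk_class: str               # MDR: "I", "IIa", "IIb", "III"; IVDR: "A".."D"
    third_party_assessment: bool  # Notified Body involved?

def is_high_risk_ai_system(device: AiMedicalDevice) -> bool:
    """Simplified rule: MDR/IVDR product plus third-party conformity assessment."""
    return device.regulation in {"MDR", "IVDR"} and device.third_party_assessment

# Example: an MDR class IIa device with Notified Body involvement
print(is_high_risk_ai_system(AiMedicalDevice("MDR", "IIa", True)))  # True
```

A class I MDR device assessed under the manufacturer's sole responsibility would, by the same simplified rule, fall outside the high-risk category.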
In the following, we provide an overview of the content of the AIA and explain its impact on medical device manufacturers.
Definitions and legal scope of application
According to Art. 3 (1) AIA, an AI system is defined as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. This definition is aligned with the definition used by the OECD. In recital 12, the legislator explains what is meant by the adaptiveness of an AI system. This “refers to self-learning capabilities, allowing the system to change while in use. AI systems can be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded)”. The conformity assessment requirements associated with adaptiveness are discussed in more detail in the section “Requirements in relation to medical devices” below.
The AIA applies to the following groups and persons (Art. 2 AIA):
- providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country;
- deployers of AI systems that have their place of establishment or are located within the Union;
- providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the Union;
- importers and distributors of AI systems;
- product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;
- authorised representatives of providers, which are not established in the Union;
- affected persons that are located in the Union.

The other paragraphs of Art. 2 AIA contain restrictions on the scope of application, e.g. for military purposes.
In the context of the scope of application, the term “provider” is used instead of “manufacturer”, and this is defined in Art. 3 (3) AIA as follows: “a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge”. Furthermore, the “deployer” in the sense of a user is defined as a “natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity” (Art. 3 (4) AIA). Consequently, both medical device manufacturers and clinics, doctors or other users of AI systems are affected by the AIA.
Distributors, importers, deployers or other third parties can also become providers if, for example, they make a substantial modification to a high-risk AI system that has already been placed on the market or put into service. Further details are set out in Art. 25 AIA.
Requirements in relation to medical devices
In contrast to the MDR and IVDR, the AIA does not place the product requirements in an annex but sets them out in detail in the legal text itself. Providers of high-risk AI systems are generally subject to the provisions of Art. 16 AIA. The following table lists the requirements that apply to medical devices as high-risk systems within the meaning of the AIA and, where available, the respective equivalents in the MDR:
| Requirement | Reference AIA | Reference MDR | Degree of consistency between AIA and MDR |
| --- | --- | --- | --- |
| Risk management system | Art. 9 | Art. 10 (2) | Mainly |
| Data and data governance | Art. 10 | — | No |
| Technical documentation | Art. 11 | Art. 10 (4), Annex II/III | Partially |
| Mandatory automatic recording during operation | Art. 12 | — | No |
| Transparency and provision of information to deployers | Art. 13 | Art. 10 (11) | Partially, in relation to instructions for use |
| Mandatory human oversight (human-machine interface) | Art. 14 | — | Partially |
| Accuracy, robustness and cybersecurity | Art. 15 | Annex II/III | Partially |
| Labeling provisions | — | Annex I | No |
| Quality management system | Art. 17 | Art. 10 (9) | Partially |
| Documentation keeping | — | Art. 10 (8) | Partially |
| Automatic generation of logs | Art. 19 | — | No |
| Conformity assessment | Art. 43 | Art. 10 (6) | Partially |
| EU declaration of conformity | Art. 47 | Art. 10 (6) | Partially |
| CE marking | Art. 48 | Art. 10 (6) | Mainly |
| Registration obligations | Art. 49 (1) | Art. 10 (7) | Partially |
| Necessary corrective actions and corresponding information | Art. 20 | Art. 10 (12) | Mainly |
| Demonstration of conformity to national competent authorities | — | Art. 10 (14) | Partially |
| Additional obligations for deployers | Art. 26 | — | No |
| Additional obligations for market surveillance authorities | Art. 74 | Art. 93 | Partially, with the exception of the extensive powers under the AIA |
| Post-market surveillance and vigilance (providers) | Art. 72, 73 | Art. 10 (10, 12, 13) | Mainly |
Providers of AI-based medical devices have the option of integrating the implementation of the provisions of the AIA into their existing MDR processes and documentation (Art. 8 (2) AIA).
In principle, providers of high-risk AI systems must establish and maintain a risk management system and a quality management system (QMS) (Art. 9 and 17 AIA). For medical devices and IVDs, the respective requirements can be implemented in existing systems.
Art. 10 AIA specifies data and data governance requirements, and Art. 15 AIA essentially relates to the development and technical evaluation of AI systems. The contents of both articles largely overlap with the requirements of the questionnaire “Artificial Intelligence (AI) in Medical Devices” of the Interest Group of Notified Bodies for Medical Devices in Germany (IG-NB). It is therefore all the more important for providers (manufacturers) to implement the requirements of the questionnaire consistently and thus also be prepared for the AIA (“AIA-ready”). However, some of the requirements in the questionnaire go beyond the legal requirements, which are ultimately decisive.
For AI-based medical devices, the technical documentation demonstrating compliance with the legal requirements must satisfy both Annex IV AIA and Annexes II and III MDR and must be kept available by the provider for a certain period (Art. 11 and 18 AIA). For SMEs and start-ups, a simplified form of technical documentation in accordance with Annex IV AIA is provided for (Art. 11 (1) AIA); however, this requires a form that the European Commission has yet to provide. For medical devices and IVDs, a single technical documentation is to be drawn up that simultaneously meets the requirements of the sector-specific legal acts and the AIA (Art. 11 (2) AIA).
Furthermore, providers of high-risk AI systems must implement an “automatic recording of events [...] over the lifetime of the system” (logging) to ensure the traceability of system functions (processes and events) and post-market surveillance (Art. 12 AIA).
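To make the logging obligation concrete, the following is a minimal sketch of automatic, timestamped event recording. The event schema and field names are our own assumptions; a real system must follow the applicable legal requirements and standards:

```python
# Minimal sketch of automatic event recording ("logging") in the spirit of
# Art. 12 AIA: timestamped, machine-readable events recorded over the
# lifetime of the system. Schema and field names are illustrative assumptions.
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_system_events")
logging.basicConfig(level=logging.INFO)

def record_event(event_type: str, detail: dict) -> dict:
    """Record one traceable system event (e.g. inference, model update, error)."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,
        "detail": detail,
    }
    # Emit as a structured log line so events can support traceability
    # and post-market surveillance analyses.
    logger.info(json.dumps(event))
    return event

record_event("inference", {"input_id": "case-001", "output": "recommendation"})
```

In practice, such logs would additionally need tamper protection and the retention periods discussed below.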
In Art. 13 AIA, the legislator provides for extensive transparency obligations. For “AI systems intended to interact directly with natural persons”, it is also required that the “natural persons concerned are informed that they are interacting with an AI system” (Art. 50 (1) AIA). Mentioning AI technology as an operating principle in the intended purpose of a medical device should fulfill this requirement.
The safety of AI-based medical devices is the joint responsibility of providers and users. The AIA takes this into account through separate requirements for users in Art. 26 AIA and mandatory human oversight in Art. 14 AIA.
The legislator provides for certain retention periods for the provider for all documentation (Art. 18 AIA). This also applies to the automatically generated logs (Art. 19 (1) AIA).
The application of harmonized standards (Art. 40 (1) AIA) by the provider of AI systems triggers a presumption of conformity with the respective legal requirements. A corresponding provision can also be found in Art. 8 MDR. In addition, the EU Commission reserves the right to issue common specifications (Art. 41 AIA), as already provided for in Art. 9 of the MDR. The recently published “Standardization request to the European Committee for Standardisation and the European Committee for Electrotechnical Standardisation in support of Union policy on artificial intelligence” will ensure the creation of the corresponding standards.
In principle, providers of high-risk AI systems must undergo a conformity assessment procedure based either on internal control (Annex VI AIA) or on an assessment of the quality management system and the technical documentation by a Notified Body (Annex VII AIA) (Art. 43 (1) AIA). A provider may only use the less complex internal-control procedure if it has applied harmonized standards and, where applicable, common specifications. If the high-risk AI system is a medical device, the manufacturer must undergo the relevant conformity assessment procedure in accordance with the MDR and include the specified requirements for high-risk AI systems in that assessment (Art. 43 (3) AIA). Points 4.3, 4.4 and 4.5 as well as point 4.6, fifth paragraph, of Annex VII must also be applied.

The handling of changes to high-risk AI systems that have already undergone a conformity assessment procedure is regulated in Art. 43 (4) AIA. In the event of a substantial modification, such systems must undergo a new conformity assessment procedure, irrespective of whether the modified system is to be distributed further or continue to be used by the current deployer. It also states: “For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance that were pre-determined by the provider at the time of the initial conformity assessment and are included in the information in the technical documentation referred to in point 2(f) of Annex IV shall not be considered a substantial change”. Providers of these AI systems must pay particular attention to eliminating or reducing as far as possible the risk of “possibly biased outputs influencing input for future operations (‘feedback loops’)” and to ensuring that such feedback loops are addressed with appropriate mitigation measures (Art. 15 (4) AIA).
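The choice between the two procedures can be summarized in a small decision function. This is a hedged, simplified illustration of Art. 43 (1) AIA, not legal advice, and it deliberately ignores the medical device route via Art. 43 (3) AIA described above:

```python
# Hedged, simplified sketch of the conformity assessment route under
# Art. 43 (1) AIA: internal control (Annex VI) is only available to providers
# that have applied harmonized standards or, where applicable, common
# specifications; otherwise the Notified Body route (Annex VII) applies.
# Illustration only; AI-based medical devices follow the MDR procedure
# per Art. 43 (3) AIA instead.
def assessment_route(harmonised_standards_applied: bool,
                     common_specifications_applied: bool) -> str:
    """Return the (simplified) applicable conformity assessment annex."""
    if harmonised_standards_applied or common_specifications_applied:
        return "Annex VI (internal control)"
    return "Annex VII (Notified Body assesses QMS and technical documentation)"
```

For example, a provider that has applied neither harmonized standards nor common specifications would, under this simplified reading, be routed to the Annex VII procedure.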
The determination of the changes to the high-risk AI system and its performance at the time of the initial conformity assessment and the identification of associated risks could be carried out according to the procedure recently presented in a VDE-DGBMT recommendation.
For high-risk AI systems that are medical devices, the provider issues a single EU declaration of conformity, which “shall be drawn up in respect of all Union law applicable to the high-risk AI system” (for example, for software as a medical device: the MDR and the AIA). The declaration contains “all the information required to identify the Union harmonisation legislation to which the declaration relates” (Art. 47 (3) AIA).
Similar to the regulatory life cycle of medical devices, the obligations of the provider of an AI system do not end with the placing on the market but continue in the context of post-market surveillance and vigilance. Accordingly, Art. 72 AIA requires the provider to establish a post-market monitoring system and plan, and Art. 73 AIA requires the reporting of serious incidents to the competent authorities.
Applicability to legacy high-risk AI systems
High-risk AI systems that were placed on the market or put into service before the date of application of the AIA only have to comply with the provisions of the AIA if “those systems are subject to significant changes in their designs” (Art. 111 (2) AIA).
General-purpose AI models
A general-purpose AI system is “an AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems” (Art. 3 (66) AIA); the underlying general-purpose AI model (GPAI model) is defined in Art. 3 (63) AIA. Section 2 of Chapter V AIA contains general obligations for providers of GPAI models, namely the creation of technical documentation, the provision of information to other manufacturers who wish to integrate a GPAI model into their own AI system, the establishment of a copyright policy based on the relevant legislation, and the publication of a summary of the training data used. In addition, providers of GPAI models with systemic risk must perform model evaluations including adversarial testing, assess and mitigate systemic risks, ensure an adequate level of cybersecurity, and report serious incidents.
AI regulatory sandboxes
The establishment of AI regulatory sandboxes is intended to enable manufacturers to develop, train, test and validate AI systems for a certain period of time before they are placed on the market (Art. 57 AIA). This is intended to promote innovation and competitiveness and to facilitate market access for start-ups and small and medium-sized enterprises (SMEs). In addition, under certain conditions, providers can test their systems under real-world conditions outside the AI regulatory sandboxes. These conditions include authorization by the competent national authority on the basis of a testing plan submitted in advance by the provider. Future implementing acts of the European Commission are expected to define the detailed arrangements for the establishment and operation of AI regulatory sandboxes in the individual member states (Art. 58 AIA).
Summary and recommendations
Manufacturers of AI-based medical devices should promptly account for the additional effort in technical documentation and the extended conformity assessment, in terms of both organizational and financial resources. In this regard, we recommend participating in our hands-on training course “Artificial Intelligence (AI) in Medical Devices”, which we are also happy to conduct as an in-house event at your premises. Furthermore, any relevant guidance documents for the AIA and MDR published in the future should be observed.