Artificial intelligence (AI) systems with a medical purpose are subject to regulation under the European Medical Device Regulation (MDR) 2017/745 and the In Vitro Diagnostic Medical Devices Regulation (IVDR) 2017/746 in the same way as conventional software. Furthermore, with regard to the life cycle of medical AI systems, it is recommended to apply regulatory processes in accordance with established standards for medical technology. In this blog post, we take a closer look at which further requirements currently apply to these products and where manufacturers can find assistance in achieving compliance. The focus here is on medical devices within the scope of the MDR.
Approval of AI-based medical devices in Europe
Current legal and normative requirements
As with all software, it must first be clarified whether a medical AI system is a medical device within the meaning of the MDR. This so-called “qualification as a medical device” is carried out in relation to the intended purpose of the respective medical AI system according to the criteria described in document MDCG 2019-11. The second step is risk classification, which is also carried out in accordance with the procedure in MDCG 2019-11. For software, the application of Rule 11 in Annex VIII MDR is particularly relevant here. The “Health AI Register” website lists numerous examples of medical AI systems for the field of radiology, with corresponding information on risk classification under the MDR and by the FDA, as well as certificates and product specifications.
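To make the logic of Rule 11 more tangible, the following Python sketch encodes a simplified reading of its classification criteria. The function name and the categorical inputs are our own illustrative constructs, not part of the MDR or the MDCG guidance, and an actual classification always requires a documented assessment of the intended purpose.

```python
def classify_rule_11(provides_decision_info: bool,
                     worst_case_impact: str,
                     monitors_physiology: bool,
                     monitors_vital_parameters: bool) -> str:
    """Simplified sketch of Rule 11, Annex VIII MDR (illustrative only).

    worst_case_impact: consequence of a wrong decision informed by the
    software: "death_or_irreversible" | "serious_deterioration_or_surgery"
    | "other".
    """
    if provides_decision_info:
        # Software providing information for diagnostic or therapeutic
        # decisions is at least class IIa; the worst-case impact of the
        # decision can raise it to IIb or III.
        if worst_case_impact == "death_or_irreversible":
            return "Class III"
        if worst_case_impact == "serious_deterioration_or_surgery":
            return "Class IIb"
        return "Class IIa"
    if monitors_physiology:
        # Monitoring vital physiological parameters whose variation could
        # result in immediate danger to the patient leads to class IIb.
        return "Class IIb" if monitors_vital_parameters else "Class IIa"
    return "Class I"  # all other software

# Example: AI whose erroneous output could trigger a surgical intervention
print(classify_rule_11(True, "serious_deterioration_or_surgery", False, False))
# -> Class IIb
```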
Special requirements for software as a medical device can be found in sections 14.2(d), 14.5 and 17.1 to 17.4 of Annex I Chapter II MDR; these are also relevant for medical AI systems. With regard to the regulatory processes during the software life cycle, the EN 62304 standard is applied to medical AI systems. Note that only the AI model itself and those parts of the software code that apply the AI model, and are therefore used in the medical device, must fully comply with the requirements for the life cycle of medical device software in EN 62304. Software libraries or AI models that were not originally developed as medical device software are treated as software of unknown provenance (SOUP) (clauses 5.3.3, 5.3.4, 5.3.6 c), 6.1 f), 7.1.2 c), 7.1.3, 7.4.1, 7.4.2 and 8.1.2 in EN 62304). Software libraries used for data management, AI model development and evaluation fall within the scope of section 7.5.6 of EN ISO 13485, i.e. these software libraries must be validated. For stand-alone AI-based software that is not incorporated into a hardware medical device, the EN 82304-1 standard is also applied. Neither the MDR nor the aforementioned standards contain AI-specific requirements with regard to safety and performance.
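To illustrate what the SOUP documentation required by EN 62304 can look like in practice, here is a minimal sketch in Python. The field names, the chosen library and all content are illustrative assumptions on our part and are not prescribed by the standard.

```python
from dataclasses import dataclass, field

@dataclass
class SoupItem:
    """One entry in a SOUP register (illustrative sketch).

    EN 62304 clause 8.1.2 asks for title, manufacturer and a unique
    designator; clauses 5.3.3 and 5.3.4 ask for functional/performance
    requirements and the hardware/software the SOUP item needs.
    """
    title: str
    manufacturer: str
    version: str                    # unique SOUP designator
    functional_requirements: str    # clause 5.3.3
    hw_sw_requirements: str         # clause 5.3.4
    known_anomalies_reviewed: bool  # clause 7.1.3
    risk_control_measures: list[str] = field(default_factory=list)

# Hypothetical example entry for an inference runtime used in the device
onnx_runtime = SoupItem(
    title="ONNX Runtime",
    manufacturer="Microsoft",
    version="1.17.0",
    functional_requirements="Deterministic CPU inference of the frozen AI model",
    hw_sw_requirements="x86-64, 4 GB RAM, Linux kernel >= 5.x",
    known_anomalies_reviewed=True,
    risk_control_measures=[
        "Pin the exact library version",
        "Verify model outputs against reference cases in integration tests",
    ],
)
```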
To assist manufacturers with regard to AI-specific content in the technical documentation, IG-NB and Team NB published the document “Questionnaire: Artificial Intelligence in Medical Devices”. This contains numerous questions for the manufacturer relating to responsibilities, competencies, intended purpose, software requirements, data management, AI model development, product development and post-market surveillance. The document “Good practices for health applications of machine learning: Considerations for manufacturers and regulators” from the International Telecommunication Union (ITU) is very similar to the IG-NB/Team NB document and can be used to explain individual requirements.
In addition to the requirements under medical device law, AI-based medical devices must also comply with the provisions for high-risk AI systems in Regulation (EU) 2024/1689 (Artificial Intelligence Act, AIA). Particular reference should be made here to Art. 10 and Art. 15 AIA, which contain specific requirements for data management and for the development and technical assessment of AI systems. We have discussed the corresponding challenges in detail in another blog article.
The international standards organizations International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) as well as the Association for the Advancement of Medical Instrumentation (AAMI) and the Institute of Electrical and Electronics Engineers (IEEE) have published standards relating to AI technologies. Current standardization activities can be followed on the ISO/IEC JTC 1/SC 42 website. Although most of these documents currently contain neither specific product requirements nor a specific relation to medical devices, manufacturers can find important guidance in them for establishing their regulatory processes.
Data protection aspects
If personal data is processed during the development and use of AI systems, the European General Data Protection Regulation (GDPR) applies; it has a significant impact on AI systems. A few important aspects are discussed below. Manufacturers and users must observe the data processing principles set out in Art. 5 GDPR. Furthermore, the manufacturer must implement technical and organizational measures for data protection as early as the development phase (Art. 25 GDPR). As Art. 22 GDPR essentially prohibits automated individual decision-making, including profiling, manufacturers of autonomous AI systems must take the measures required by law. For new technologies such as AI that pose a high risk to the rights and freedoms of natural persons, a data protection impact assessment is required as a precautionary measure (Art. 35(1) GDPR). A detailed legal discussion of the topic has been published by Schreitmüller.
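As a minimal sketch of a technical measure in the sense of Art. 25 GDPR, the following Python snippet pseudonymizes a patient identifier with a keyed hash before a record enters an AI training pipeline. The field names and the choice of HMAC-SHA-256 are our assumptions, and it should be remembered that pseudonymized data remains personal data under the GDPR.

```python
import hashlib
import hmac

# Secret key held separately from the training data (e.g., in a key
# management system); whoever holds it could link pseudonyms back to
# patients, so access to it must be strictly restricted.
PSEUDONYMIZATION_KEY = b"replace-with-securely-managed-secret"

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a stable, keyed, non-reversible token."""
    return hmac.new(PSEUDONYMIZATION_KEY,
                    patient_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"patient_id": "PAT-2024-0815", "age": 57, "finding": "nodule"}
record["patient_id"] = pseudonymize(record["patient_id"])
# The training pipeline now only ever sees the stable pseudonym.
```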
CE conformity assessment of AI systems
AI systems can evolve during their life cycle by going through the learning process again. This so-called continuous learning is defined as “incremental training of an AI system that takes place on an ongoing basis during the operation phase of the AI system life cycle” (source: ISO/IEC 22989). A distinction can be made between deterministic and non-deterministic continuous learning. The first variant is also referred to as batch learning, which is defined as “training that leads to the change of a Machine Learning-enabled Medical Device (MLMD) that involves discrete updates based on defined sets of data that take place at distinct points prior to or during the operation phase of the MLMD life cycle” (source: IMDRF). With regard to medical device software, deterministic continuous learning can take place, for example, with data from other clinics or from additional patient groups.
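The following Python sketch illustrates the deterministic (batch) variant: the model changes only through discrete, planned updates on defined data sets, and each accepted update is frozen as a new, traceable version after passing a predefined acceptance criterion. The class, the estimator and the accuracy threshold are illustrative assumptions on our part.

```python
import copy

import joblib
from sklearn.linear_model import SGDClassifier

class BatchLearnedModel:
    """Sketch of deterministic (batch) continuous learning (illustrative).

    The deployed model is only ever replaced at discrete, planned points
    by a candidate trained on a defined data set; rejected candidates are
    discarded and the deployed version remains unchanged.
    """

    def __init__(self):
        self.version = 0
        self.model = SGDClassifier(loss="log_loss", random_state=42)

    def planned_update(self, X_batch, y_batch, X_val, y_val,
                       min_accuracy=0.90):
        """One discrete update step with a predefined acceptance criterion."""
        candidate = copy.deepcopy(self.model)
        candidate.partial_fit(X_batch, y_batch, classes=[0, 1])
        accuracy = candidate.score(X_val, y_val)
        if accuracy < min_accuracy:
            # Acceptance criterion failed: the candidate never goes live.
            raise RuntimeError(f"Update rejected: accuracy {accuracy:.2f}")
        self.model = candidate
        self.version += 1
        # Freeze the accepted model as a versioned, traceable artifact.
        joblib.dump(self.model, f"model_v{self.version}.joblib")
        return self.version
```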
The IG-NB/Team NB document comments on continuous-learning AI systems as follows: “Practice has shown that it is difficult for manufacturers to sufficiently prove conformity for AI devices, which update the underlying models using in-field self-learning mechanisms. Currently, notified bodies do not consider medical devices based on such models to be “certifiable”, unless the manufacturer takes measures to ensure the safe operation of the device within the scope of the validation described in the technical documentation”.
The “Regulatory Affairs” expert committee of the VDE-DGBMT has issued the recommendation “Market access of continuous-learning AI systems in medicine”. This states that under the current legal framework of the MDR, there is no legal reason not to certify continuous-learning AI systems. The decisive factor is that the intended purpose does not change as part of the deterministic continuous learning and that the manufacturer plans the corresponding procedure at the time of the conformity assessment and has it assessed by the notified body. The VDE-DGBMT recommendation therefore calls for the Predetermined Change Control Plan (PCCP) proposed by the FDA to also be adopted in Europe as part of an anticipatory conformity assessment.
The BAIM approach to meeting regulatory requirements
VDE has developed the regulatory approach “BAIM – Boost AI to Market” to help manufacturers meet the complex legal and regulatory requirements. BAIM essentially adds AI-specific aspects to a manufacturer’s existing processes for the software life cycle, risk management, usability engineering, clinical evaluation and follow-up, and post-market surveillance and vigilance.
Conclusion
As the technical and regulatory state of the art for medical AI systems is continuously evolving, manufacturers need to monitor these changes and integrate them into their regulatory processes in a timely manner.