Navigating the EU AI Act: What Medical Device & In Vitro Diagnostic Manufacturers Need to Know


The European Artificial Intelligence Act (AI Act) aims to create a comprehensive legal framework for artificial intelligence within the European Union. It is the first regulation of its kind in the world and is designed to ensure that AI systems used in the EU are safe, transparent, and respect fundamental rights and values. The Act takes a risk-based approach, categorizing AI systems into four tiers based on the level of risk they pose to individuals and society.

As the AI Act is a horizontal regulation that applies across all sectors and industries, the medical device & diagnostic sector is significantly affected by this regulation. If AI is part of medical devices or IVDs, the AI Act is likely to apply. The manufacturer is called a provider under the AI Act and must comply with additional requirements when placing their medical device or IVD on the EU market or putting it into service in the EU.

Whether a product qualifies as an AI system is determined by the definition in Article 3 of the AI Act. The EU Commission has already published guidelines on how this definition should be applied.

The AI Act introduces four risk categories for AI systems: minimal risk, limited risk, high risk, and unacceptable risk.
  1. Minimal risk: Examples are AI-enabled spam filters or video games.
  2. Limited risk: AI systems with specific transparency obligations, such as informing users when they interact with a chatbot.
  3. High risk: Medical devices and IVDs that require a notified body to be involved in the conformity assessment fall into this risk category.
  4. Unacceptable risk: Harmful manipulation, deception and exploitation of vulnerabilities, social scoring and any other actions that violate fundamental rights and Union values, as listed in Article 5(1) of the AI Act, are prohibited.

AI systems are considered “high-risk” under the AI Act if they are used as safety components of products, or are themselves products, that require third-party conformity assessment under other Union harmonisation legislation. While there is no direct correspondence between the medical device risk class and the AI Act risk class, many AI applications that support moderate-risk or high-risk medical devices or IVDs will likely fall into the “high-risk” AI category.

Authorised Representative for High-Risk AI Systems
For all the high-risk AI systems referenced in this article, the regulation imposes additional requirements on providers regardless of whether they are based in the EU or in a third country, if the AI system’s output is used within the EU. A key requirement is that providers of high-risk AI systems without an establishment in the EU must appoint an EU authorised representative in accordance with Art. 22 of the AI Act. The authorised representative must be in place no later than the applicable regulatory deadline (outlined in the timeline section below). For lower-risk AI systems, the regulation does not preclude providers from voluntarily nominating an authorised representative (Recital 82 of the AI Act).

MDSS will offer the EU Authorised Representative service for AI systems to ensure compliance with the AI Act. If you have questions, please contact us.


Requirements for High-Risk AI Systems
The provider of Medical Device AI (MDAI) or In Vitro Diagnostic AI (IVDAI) must ensure appropriate documentation is provided as part of the MDR or IVDR Technical Documentation. This includes the following, as per Annex IV of the AI Act:

  • A general description of the AI system
  • A detailed description of the elements of the AI system and the development process
  • Detailed information about the monitoring, functioning, and control of the AI system
  • A description of the appropriateness of the performance metrics

Because the AI Act aligns with the EU’s New Legislative Framework, there are clear overlaps with the requirements of other EU-harmonised legislation. The AI Act recognises this and allows providers to integrate the additional AI Act requirements into documentation already maintained under other EU-harmonised legislation (e.g., the MDR and IVDR). The table below summarises the additional requirements for high-risk AI systems relevant to MDAI and IVDAI in the EU.

The conformity assessment for MDAI or IVDAI should align with the MDR or IVDR conformity assessment, and the relevant components of Chapter III, Section 2 of the AI Act for high-risk AI systems should be incorporated. A Declaration of Conformity is required to demonstrate compliance with the AI Act. The provider will receive a separate certificate called a Union technical documentation assessment certificate from its third-party conformity assessment body, such as a notified body.

Relevant requirements for High-Risk AI System
Quality Management (QM) System
The QM requirements under the AI Act are complementary to the QM System established under other harmonized legislation. Here are the key components that must be included in the QMS:
– Regulatory Strategy for compliance
– Design control, including design verification
– Quality Assurance & Control
– Testing & Validation
– Data Management systems
– Technical Documentation & Standards
– Risk Management
– Post-Market surveillance
– Serious incident reporting
– Regulatory Communication
– Record Management
Data and Data Governance
– Training, validation, and testing data sets are required for high-risk AI systems.
– The data must be sufficiently representative of the target population and the intended purpose.
– Manufacturers must implement appropriate measures to detect, prevent, and mitigate possible biases that affect the health and safety of persons, have a negative impact on fundamental rights, or lead to discrimination prohibited under Union law. This includes the associated record-keeping and logging requirements.
Technical Documentation
– The AI Act requires additional documentation focused on transparency and accountability, including data governance practices, cybersecurity measures, and performance testing outcomes of the high-risk AI system.
– Important: The AI Act supports small and medium-sized enterprises (SMEs) by permitting a simplified form of technical documentation when the conformity assessment procedure is based on internal control.
Record keeping – Automatic recording of events over the lifetime of the High-Risk AI System, which aims to facilitate traceability regarding potential bias.
Transparency and provision of information
– Accompanying instructions for use
– Sufficient information on the behaviour of the AI components and their limitations, so that the deployer can understand how the system functions and reaches its outputs
– Information on data processing, including the data used for training, validation, and testing of the AI component in the system
Human oversight – Design and develop the High-Risk AI System with appropriate human oversight mechanisms to allow for meaningful human intervention, ensuring that users can monitor, interpret, and override the system’s decisions when necessary
Accuracy, Robustness and Cybersecurity
– The regulation mandates robust cybersecurity measures to prevent unauthorized access, cyberattacks, exploitation, and manipulation of the high-risk AI system.
– High-risk AI systems must come with information on how well they perform and how that accuracy was measured.

Timeline for High-Risk AI Systems under the AI Act:

There are two groups of high-risk AI systems defined in the AI Act:
  1. AI systems that could affect product safety under the EU harmonized legislation on product safety, as listed in Annex I. This category includes medical devices and IVDs.
    • Mandatory implementation date for high-risk AI Systems under Annex I, which include MDAI and IVDAI: August 2, 2027
  2. AI systems used in certain areas listed in Annex III that could affect safety or fundamental rights, such as critical infrastructure, education, employment, law enforcement, and more.
    • Mandatory implementation date for high-risk AI systems under Annex III: August 2, 2026

High-risk AI systems already placed on the EU market or put into service before August 2, 2026, must comply with the AI Act if there are any significant changes in their design after that date. The MDCG 2025-6 guidance has already clarified that this provision does not apply to high-risk AI Systems under Annex I, as they are not considered high-risk before August 2, 2027.

Request for a dialogue

We would be happy to connect with you on LinkedIn, where you can stay updated with the latest news and insights.

You can also easily book a free 30-minute consultation to explore compliance options and solutions tailored to your business.

Source:
– AI Act
– BSI whitepaper
– MDCG 2025-6
– https://artificialintelligenceact.eu/high-level-summary/