
Article 9 — Risk Management System practitioner guide.

By Elshan Musayev · Published 2026-05-06 · 14 min read
TL;DR. Article 9 of Regulation (EU) 2024/1689 requires every provider of high-risk AI to establish, implement, document, and maintain a Risk Management System as a continuous iterative process running for the entire lifecycle of the system. It is not a one-off document. Without a defensible RMS, conformity assessment cannot be completed, the technical file under Annex IV is incomplete, and the system cannot be CE-marked or placed on the EU market. This guide covers what Article 9 actually requires, how to operationalise the four-step cycle, what evidence supervisory authorities and notified bodies look for, and how to keep the RMS alive after deployment.

Contents

  1. Why Article 9 is the load-bearing wall of the AI Act
  2. The four-step iterative cycle defined by Article 9(2)
  3. Step 1 — Identification of known and reasonably foreseeable risks
  4. Step 2 — Estimation and evaluation of risks
  5. Step 3 — Risk management measures
  6. Step 4 — Testing throughout the lifecycle
  7. Documentation: what goes into the RMS file
  8. Lifecycle nature: RMS is not a deliverable, it is a process
  9. Where the RMS sits in conformity assessment
  10. Substantial modification triggers RMS re-execution
  11. Practical setup checklist
  12. Common pitfalls observed in scoping conversations
  13. DACH-specific considerations
  14. Next step

Why Article 9 is the load-bearing wall of the AI Act

Of the obligations placed on providers of high-risk AI under Regulation (EU) 2024/1689, Article 9 is the one whose absence breaks every downstream requirement. Article 11 technical documentation under Annex IV cannot be completed without RMS outputs. Article 17 quality management system explicitly references RMS as a constituent component. Article 43 conformity assessment, whether by internal control under Annex VI or third-party assessment by a notified body under Annex VII, requires the assessor to verify that an RMS has been established and is being maintained. Article 72 post-market monitoring requires a post-market monitoring plan grounded in the RMS — without an RMS, there is nothing to monitor against.

Penalties for non-compliance with the obligations on high-risk AI under Articles 9, 10, 13, 14, 15, and 16 are tiered up to EUR 15 million or 3% of total worldwide annual turnover, whichever is higher (Article 99(4)). The supervisory mechanism is not a self-attestation regime. Annex VII assessment by a notified body is itself a documentary and procedural audit; Annex VI internal control still requires the provider to retain the documentation for ten years after placing the system on the market or putting it into service, available to national competent authorities on request.

The practical implication: a provider that intends to place a high-risk AI system on the EU market on or after 2 August 2026 cannot defer Article 9. The RMS must already have produced evidence by the time of conformity assessment.

For a checklist that determines whether your AI system is high-risk under Annex III at all, see Annex III high-risk AI categories: scoping checklist for DACH organizations.


The four-step iterative cycle defined by Article 9(2)

Article 9(2) prescribes the structural form of the RMS. It must be:

"a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating."

The four steps that constitute the cycle are listed in Article 9(2)(a)–(d):

  1. Identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety, or fundamental rights when the system is used in accordance with its intended purpose.
  2. Estimation and evaluation of the risks that may emerge when the system is used in accordance with its intended purpose, and under conditions of reasonably foreseeable misuse.
  3. Evaluation of other risks possibly arising, based on the analysis of data gathered from the post-market monitoring system referred to in Article 72.
  4. Adoption of appropriate and targeted risk management measures designed to address the risks identified.

The cycle is iterative. The output of step four feeds back into step one as the system, its environment, its data, or its use cases evolve. This is the structural reason an RMS that exists as a single static document on a SharePoint site does not satisfy Article 9: by the second month of deployment, the document is out of date and the RMS has effectively ceased to exist.
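The feedback structure can be sketched as a loop. This is a purely illustrative sketch, not anything prescribed by the Act: the scores, the threshold, and the "a measure reduces a score by 3" rule are toy assumptions, and every name is hypothetical.

```python
# Illustrative sketch of the Article 9(2)(a)-(d) feedback loop.
# Scores, threshold, and the score-reduction rule are toy assumptions.

def rms_iteration(register, telemetry, threshold):
    """One pass through the cycle. register and telemetry map hazard -> score."""
    identified = {**register, **telemetry}        # (a) identification; (c) folds in Art. 72 data
    to_treat = {h: s for h, s in identified.items() if s >= threshold}  # (b) evaluate vs threshold
    accepted = {h: s for h, s in identified.items() if s < threshold}
    treated = {h: max(s - 3, 0) for h, s in to_treat.items()}           # (d) measures lower scores
    return {**accepted, **treated}                # output feeds the next iteration's step (a)
```

Run repeatedly: the returned register is the next review cycle's input, which is exactly the iterative structure Article 9(2) demands and a frozen document lacks.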

ISO/IEC 23894:2023 is the closest aligned standard for AI-specific risk management and is increasingly used as the practitioner baseline. ISO/IEC 42001:2023 sits one layer above and prescribes the management system within which the RMS lives. Neither is yet a formally harmonised standard under the AI Act, but applying them is the most efficient route to a defensible RMS today.


Step 1 — Identification of known and reasonably foreseeable risks

The risk identification step has two analytical dimensions. The first is the risks under intended use — the risks that emerge when the AI system is used exactly as the provider documented it and the deployer follows the instructions for use. The second is the risks under reasonably foreseeable misuse — the risks that emerge when the deployer or end user departs from intended use in ways that the provider could anticipate. Both dimensions must be in the RMS file.

The risks in scope are those to health, safety, and fundamental rights (Article 9(2)(a)). Article 9(3) then narrows the exercise: the RMS need concern only risks that can reasonably be mitigated or eliminated through the design and development of the system or the provision of adequate technical information; it is not required to chase risks the provider cannot influence.

The identification artefact in the RMS file is typically a hazard register: each row is a hazard, each column is the source of the hazard (data, model, deployment, human factor, integration), the affected stakeholder, the lifecycle stage at which the hazard manifests, and the linked test that exercises it. Coverage of Annex III categories that apply to the system must be visible — a credit-decisioning system, for example, must show that fairness, transparency, contestability, and Article 22 GDPR automated decision-making concerns are all in the register, not just model accuracy.
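A register row of this shape can be kept machine-checkable. The sketch below is a hypothetical representation whose field names simply mirror the columns described above; the Act mandates the content, not any particular format.

```python
from dataclasses import dataclass, field

# Hypothetical hazard-register row; field names mirror the columns above.

@dataclass
class Hazard:
    hazard_id: str
    description: str
    source: str                 # data | model | deployment | human factor | integration
    stakeholder: str            # e.g. credit applicant, operator, data subject
    lifecycle_stage: str        # design | training | validation | operation
    linked_tests: list = field(default_factory=list)

    def has_test_coverage(self) -> bool:
        # a register row with no linked test is a gap an assessor will flag
        return bool(self.linked_tests)
```

For the credit-decisioning example above, rows for fairness, transparency, and contestability would sit alongside the accuracy rows, each with its own linked tests.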

For systems involving processing of personal data, the RMS hazard identification step must be coordinated with the GDPR Article 35 Data Protection Impact Assessment. A separate DPIA does not satisfy Article 9; an Article 9 risk identification that ignores GDPR-relevant hazards does not satisfy GDPR. The two must be coherent.


Step 2 — Estimation and evaluation of risks

Once hazards are identified, each must be estimated for probability of occurrence and severity of consequence, and evaluated against an acceptability threshold defined by the provider. Article 9 does not prescribe a specific scoring scheme. Practitioners typically use either a qualitative matrix (low / medium / high / critical) or a semi-quantitative scale calibrated against historical evidence and analogous deployments.

What must be defensible is the scoring apparatus itself: the probability and severity scales and the evidence calibrating them, the rationale recorded for each individual score, and the provenance of the acceptability threshold (who set it, at what management level, and on what basis).

Article 9(2)(c) explicitly requires the evaluation to incorporate post-market monitoring data once available — meaning that for the first version of the RMS pre-deployment, post-market data is by definition empty, but the evaluation framework must be built so that real-world telemetry can flow into it after Article 72 monitoring kicks in.

The output of step two is a prioritised list of hazards: which require treatment before placing on market, which require ongoing monitoring, and which fall below the acceptability threshold and require only documentation.
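A qualitative matrix of the kind mentioned above can be sketched as follows. The 4x4 scales, band names, and acceptability set are illustrative assumptions; Article 9 leaves the scoring scheme to the provider.

```python
# Hypothetical 4x4 qualitative risk matrix and the three-way triage
# described above. All scales and band names are illustrative.

BANDS = {1: "low", 2: "low", 3: "medium", 4: "medium",
         6: "high", 8: "high", 9: "critical", 12: "critical", 16: "critical"}

def band(probability: int, severity: int) -> str:
    """Both inputs on a 1-4 scale; their product indexes the band."""
    return BANDS[probability * severity]

def triage(probability: int, severity: int, acceptable=("low",)) -> str:
    b = band(probability, severity)
    if b in ("high", "critical"):
        return "treat_before_market"
    return "document_only" if b in acceptable else "monitor"
```

The three return values correspond to the prioritised list described above: treat before placing on market, monitor on an ongoing basis, or document as below the acceptability threshold.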


Step 3 — Risk management measures

Article 9(5) prescribes the hierarchy of measures: the RMS must adopt appropriate and targeted measures in this priority order:

"(a) elimination or reduction of risks identified and evaluated pursuant to paragraph 2 in as far as technically feasible through adequate design and development of the high-risk AI system;

(b) where appropriate, implementation of adequate mitigation and control measures addressing risks that cannot be eliminated;

(c) provision of information required pursuant to Article 13, and, where appropriate, training to deployers."

This priority order is binding. Documentation alone is not a substitute for engineering mitigation where technically feasible. Training the deployer is not a substitute for designing safer behaviour into the system. The RMS file must show, hazard by hazard, why each measure was selected, why higher-priority measures were not feasible (where applicable), and what residual risk remains after the measure is in place.

Examples of measure categories practitioners deploy: training data curation and design-time constraints on model behaviour (level (a)); input validation, output filtering, confidence-based abstention, and human-in-the-loop checkpoints (level (b)); and instructions for use under Article 13, documented limitations, and structured deployer training (level (c)).

The output of step three is a measures register: each measure linked to one or more hazards, the responsible owner, the verification method, and the post-implementation residual risk score that flows back into the cycle.
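A measures-register check can encode the hierarchy directly. The sketch below is hypothetical: the level names and field names are illustrative, but the rule it enforces (a measure below the top level must record why higher levels were infeasible) is the documentation duty described above.

```python
# Hypothetical measures-register check encoding the binding priority order.
# Level and field names are illustrative assumptions.

HIERARCHY = ["design_elimination",     # level (a): remove the risk by design
             "mitigation_control",     # level (b): controls for what remains
             "information_training"]   # level (c): Article 13 info / deployer training

def measure_is_defensible(measure: dict) -> bool:
    level = HIERARCHY.index(measure["level"])
    if level > 0 and not measure.get("why_higher_levels_infeasible"):
        return False                   # hierarchy level skipped without rationale
    return bool(measure.get("hazard_ids")) and bool(measure.get("owner"))
```

Running such a check on every register row before a review sign-off catches the "training as the only mitigation" pitfall discussed later in this guide.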

For deployers reading this — Article 26 places obligations on you that the provider's RMS does not absorb. See Article 26 deployer obligations: what banks and insurers actually owe for the deployer side of the same conversation.


Step 4 — Testing throughout the lifecycle

Article 9(6)–(8) requires testing of the high-risk AI system to identify the most appropriate and targeted risk management measures and to ensure consistent performance for the intended purpose and compliance with the requirements of Chapter III, Section 2. Three subsections are operative: Article 9(6) ties testing to the selection of risk management measures and to consistent performance for the intended purpose; Article 9(7) permits testing in real-world conditions in accordance with Article 60; and Article 9(8) requires testing at appropriate points throughout development, and in any event before placing on the market or putting into service, against prior-defined metrics and probabilistic thresholds appropriate to the intended purpose.

Practitioners face several distinct testing surfaces: functional performance against the intended purpose, robustness under distribution shift and adversarial input, bias and fairness across affected groups, security of the model and its serving path, effectiveness of human oversight measures under Article 14, and integration behaviour in the deployment environment.

Each test class produces evidence. Evidence is what the conformity assessor reviews. The RMS file must link every identified risk to the test (or tests) that exercise it, with results recorded and pass/fail criteria stated in advance — not retrofitted to the result.
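The risk-to-test linkage with criteria stated in advance can itself be checked mechanically. A hypothetical sketch, assuming ISO-format date strings (which compare correctly as plain strings) and illustrative record shapes:

```python
# Hypothetical traceability check over the RMS file: a hazard counts as
# covered only by a test whose pass/fail criteria predate its result.

def untraced_hazards(hazards: dict, tests: dict) -> list:
    """hazards: {hazard_id: [test_ids]};
    tests: {test_id: {"criteria_date": ..., "result_date": ...}}."""
    missing = []
    for hazard_id, test_ids in hazards.items():
        valid = [t for t in test_ids if t in tests
                 and tests[t]["criteria_date"] < tests[t]["result_date"]]
        if not valid:
            missing.append(hazard_id)
    return missing
```

A hazard whose only test has criteria dated after the result is flagged exactly as a conformity assessor would flag a retrofit.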


Documentation: what goes into the RMS file

Article 9(1) mandates documentation as part of the system; Annex IV point 5 requires a detailed description of the RMS as content of the technical documentation submitted for conformity assessment. At minimum, the RMS file contains: the hazard register; the scoring methodology and the approved acceptability threshold; the measures register with hierarchy rationale and residual-risk scores; the test plan, prior-defined pass/fail criteria, and results; dated review records with sign-offs; the substantial modification register; and the linkage to the Article 72 post-market monitoring plan.

A defensible practical pattern: the RMS file is structured as a living record with version control (typically in the same configuration management system as the technical file under Annex IV), reviewed at minimum quarterly and on every substantial modification, with the most recent review summary visible to senior management.


Lifecycle nature: RMS is not a deliverable, it is a process

The most consequential single line in Article 9(2) is that the RMS is "a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system." A document produced once for conformity assessment and never updated does not satisfy Article 9. The process must run.

Operationally this requires: a named RMS owner with authority to convene reviews; a scheduled review cadence (quarterly at minimum, per the pattern above); defined triggers for out-of-cycle review (retraining, new data sources, incident reports, Article 72 telemetry crossing thresholds); and version control so that every review produces a dated, signed-off revision of the RMS file.

For systems with substantial change frequency (continuous learning, frequent retraining, frequent feature updates), the review cadence may need to be monthly or even continuous, with automated telemetry feeding the hazard register and an escalation protocol when scores cross thresholds.
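The escalation protocol for telemetry-fed review reduces to a small check. A hypothetical sketch with illustrative names: the register holds the provider-defined threshold per hazard, the telemetry feed holds live scores.

```python
# Hypothetical escalation hook for telemetry-fed review: any hazard whose
# live score reaches its registered threshold triggers an out-of-cycle review.

def escalations(register: dict, telemetry: dict) -> list:
    """register: {hazard_id: threshold}; telemetry: {hazard_id: live score}."""
    return sorted(h for h, limit in register.items()
                  if telemetry.get(h, 0) >= limit)
```

Wired into the monitoring pipeline, a non-empty return is the signal that the review cadence alone is no longer sufficient and the cycle must be re-entered now.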


Where the RMS sits in conformity assessment

The conformity assessment, whether under Annex VI internal control or Annex VII third-party assessment, evaluates the RMS as part of the technical file under Annex IV. The assessor will look for: a version-controlled RMS file that predates the assessment and shows iteration rather than a single authoring event; traceability from each hazard to its score, its measures, and its tests; pass/fail criteria dated before the corresponding results; evidence that the hierarchy of measures was applied in order or that skipping a level was justified; review records and management sign-off on the acceptability threshold; and the substantial modification register.

Which conformity assessment route applies to your specific system depends on the system type and the harmonised standards available. See EU AI Act conformity assessment: Annex VI internal vs Annex VII notified body for the route decision and what each entails.


Substantial modification triggers RMS re-execution

Article 43(4) requires a new conformity assessment whenever the system undergoes a substantial modification: a change not foreseen in the initial conformity assessment that affects compliance with Chapter III, Section 2 or alters the intended purpose (Article 3(23)). For the RMS, this means that a new training data source, a changed model architecture, or a new deployment context re-opens the cycle: identification, evaluation, measures, and testing must be re-run for the affected scope, and the RMS file updated before the modified system is placed on the market.

Many providers under-document substantial modification triggers and find on the next conformity assessment that the RMS file lags reality by twelve to eighteen months. The practical countermeasure is a substantial modification register maintained as part of the RMS file, with entries dated, decided by a named accountable individual, and accompanied by the assessment of whether re-execution is required.
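The countermeasure described above can be sketched as a register entry. The record shape and names below are hypothetical; the Act supplies only the definition (Article 3(23)) and the consequence (Article 43(4)).

```python
# Hypothetical substantial-modification register entry: dated, attributed
# to a named decision-maker, with the conformity consequence recorded.

def log_modification(register: list, date: str, change: str,
                     decided_by: str, substantial: bool) -> dict:
    entry = {"date": date, "change": change, "decided_by": decided_by,
             "substantial": substantial,
             "action": ("re-run conformity assessment" if substantial
                        else "document in RMS file")}
    register.append(entry)
    return entry
```

The value of the pattern is less the code than the discipline: every engineering change passes through a named accountable individual, so the file cannot silently lag reality.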


Practical setup checklist

For organizations setting up an RMS for the first time, the following sequence is the minimum viable path:

  1. Confirm the system is high-risk — Article 6 read in conjunction with Annex III. If unclear, run the free 10-minute EU AI Act Quick Scan for a first-pass classification.
  2. Define the system scope — what is in the RMS, what is out. Single high-risk AI system with a clear intended purpose.
  3. Define the RMS owner — named individual within the provider organization with sufficient authority to approve risk management measures.
  4. Establish the risk register format — hazard, source, affected stakeholder, lifecycle stage, scoring, linked measures, linked tests.
  5. Establish the acceptability threshold — approved at appropriate management level, documented in the RMS file.
  6. Run the first identification round — hazards under intended use, hazards under reasonably foreseeable misuse, hazards from data, hazards from human factors. Cover Annex III-specific concerns and rights under the Charter.
  7. Estimate and evaluate each hazard against the threshold.
  8. Adopt measures following the Article 9(5) priority order. Document why higher-priority measures were not selected where applicable.
  9. Define and run the test plan — functional, robustness, bias, security, oversight, integration.
  10. Document the RMS file in the structure prescribed in section 7.
  11. Define the review cadence and the substantial modification register.
  12. Define the post-market monitoring plan under Article 72 and how it feeds back into the RMS.
  13. Integrate with the quality management system under Article 17 — the RMS lives inside the QMS, not next to it.
  14. Submit for conformity assessment under the route applicable to the system.

Most organizations find that steps 6 through 9 take longer than the engineering work that produced the system in the first place. This is not a sign of inefficiency; it is the actual cost of the regulatory regime.


Common pitfalls observed in scoping conversations

The following patterns recur in pre-deployment scoping conversations with DACH-regulated organizations and tend to fail conformity assessment if uncorrected:

The single-document RMS. A document produced for an internal audit, then frozen. Article 9(2) requires a continuous process; the file is the artefact, the process is the obligation. A document without a review cadence and a substantial modification register is not an RMS.

The DPIA-as-RMS. A GDPR Article 35 Data Protection Impact Assessment covers data protection risks. It does not cover safety risks, fairness risks beyond data, robustness risks, or post-market monitoring as defined in Article 72. The DPIA is necessary where personal data is processed; it is not sufficient as the RMS.

The model card-as-RMS. A model card describes the model. It does not constitute risk management. Treating the model card as the RMS file misses the entire process dimension of Article 9.

The retrofitted test results. Testing under Article 9(6)–(8) is evidence — the criteria must be defined before the test, not selected after the result. Conformity assessors increasingly look for the test plan version date relative to the test results to detect retrofit.

Ignoring the Article 9(5) priority order. Recording disclosures and training as the only mitigation when design-time elimination or architectural mitigation was technically feasible. Article 9(5) makes the order binding; the RMS must justify why each level of the hierarchy was selected or skipped.

Treating substantial modification as a vague concept. Without a register and a named decision-maker, substantial modifications are made informally by engineering teams without conformity-assessment consequence visible at the right level of the organization. By the next conformity audit, the file no longer reflects reality.


DACH-specific considerations

For German, Austrian, and Swiss organizations subject to additional sector-specific regulation:

Banking and financial services — BaFin's MaRisk AT 9 outsourcing requirements overlay the AI Act when the AI is provided by a third party. The RMS must address the outsourcing dimension explicitly. For credit scoring and life and health insurance pricing under Annex III point 5, Article 27 of the AI Act requires a deployer-side Fundamental Rights Impact Assessment, which must be coordinated with the provider's RMS outputs.

Cloud and infrastructure — BSI C5 cloud computing compliance criteria, where the AI runs on cloud infrastructure. The RMS hazards relating to cloud sub-processors, residency, and incident handling must align with the C5 obligations the provider or deployer has separately assumed.

Public sector — additional FRIA requirement under Article 27(1) for bodies governed by public law and private operators providing services of public interest. The deployer-side FRIA is a separate document from the provider's RMS but interacts with it.

Healthcare and medical devices — overlap with the Medical Device Regulation. An AI system that is a safety component of, or is itself, a product covered by the MDR and subject to third-party conformity assessment is high-risk under Article 6(1) of the AI Act, because the MDR is listed in Annex I of the AI Act; coordinated assessment is required and the RMS must reference both regulatory baselines.

Data protection — the Bundesbeauftragte für den Datenschutz und die Informationsfreiheit (BfDI) and the Länder data protection authorities have published guidance on AI and personal data that should be reviewed and reflected in the RMS where personal data processing is in scope.

The DACH supervisory environment is not lenient. Article 9 RMS evidence is expected to be specific, documented, current, and signed off at appropriate seniority. Generic templates do not survive supervisory review.


Next step

The high-risk obligations of Regulation (EU) 2024/1689, including Article 9, apply from 2 August 2026. Organizations that intend to place high-risk AI systems on the EU market on or after that date should already be running the RMS cycle, with documented results in hand; conformity assessment cannot be initiated without them.

For a first-pass classification of whether your AI system is in scope, the EU AI Act Quick Scan at app.ekmgc.de produces a personalised risk tier and a 30-day plan in approximately ten minutes. Anonymous, browser-only, no sign-up. Bilingual EN / DE.

For organizations that need to move from classification to a documented RMS, conformity assessment, and Annex IV technical file — including those with sector-specific overlays in financial services, healthcare, public sector, or cloud — the EU AI Compliance practice of EKM Global Consulting GmbH provides scoped advisory engagements. See ekmgc.de/eu-ai-act.html or contact directly via elshan.musayev@ekmgc.de.

Operated by EKM Global Consulting GmbH (Baden-Baden, Germany; founded January 2013). Founder: Elshan Musayev.


Informational summary of Regulation (EU) 2024/1689. Not legal advice. The AI Act is a complex and evolving regulatory regime; specific applicability and obligations require formal scoping against your organization's actual systems and use cases.

