
Annex III scoping checklist for DACH organizations.

By Elshan Musayev · Published 2026-05-06 · 8 min read
TL;DR. Annex III of the EU AI Act defines eight high-risk AI categories that trigger a full compliance regime — Article 9 risk management, Article 10 data governance, Article 14 human oversight, and more. This article explains what each category covers, where DACH organizations most commonly carry hidden exposure, and provides a 14-question scoping checklist to determine whether a system qualifies before the obligations become enforceable on 2 August 2026.




Table of Contents

  1. What Annex III Actually Does
  2. The Eight High-Risk Categories, Expanded
  3. The Scoping Checklist
  4. Provider vs. Deployer: Why the Distinction Matters
  5. What to Do Once an Annex III System Is Identified
  6. DACH-Specific Regulatory Overlay
  7. Closing: Start with the Quick Scan
  8. FAQ
  9. Sources

What Annex III Actually Does

Article 6(2) of Regulation (EU) 2024/1689 establishes that an AI system falling under one of the use cases listed in Annex III is classified as high-risk — irrespective of the risk profile of any individual deployment. This is a list-based classification mechanism, not a consequence-based one. The system does not need to have caused harm. The question is whether it fits a category. The only escape route is the narrow derogation in Article 6(3) for systems that do not pose a significant risk of harm to health, safety, or fundamental rights — and reliance on it must be documented and kept available to the authorities; it is not a self-serve exemption.

Annex III currently lists eight high-risk domains. Article 7 grants the European Commission authority to expand or narrow the list by delegated act, meaning scope can shift during your compliance programme without legislative process.

The eight categories, in order of their appearance in Annex III, are:

  1. Biometric identification and categorisation — remote identification of natural persons, real-time or post-event, and categorisation by protected attributes.
  2. Critical infrastructure — AI used in the management and operation of road traffic, water, gas, heating, electricity, and digital infrastructure.
  3. Education and vocational training — systems determining access to, progression within, or evaluation of educational institutions and programmes.
  4. Employment, workers' management, and access to self-employment — recruitment, selection, performance monitoring, promotion, contract termination, and task allocation affecting workers.
  5. Access to and enjoyment of essential private and public services — credit scoring, insurance risk pricing, eligibility for public benefits, emergency services dispatching.
  6. Law enforcement — risk assessment of individuals, polygraph-style tools, deepfake detection used by police, crime analytics predicting recidivism.
  7. Migration, asylum, and border control — risk profiling of third-country nationals, document authenticity assessment, border surveillance.
  8. Administration of justice and democratic processes — AI assisting judicial authorities in researching and interpreting facts and law, alternative dispute resolution, and systems intended to influence the outcome of elections or referenda.

The Eight High-Risk Categories, Expanded

1. Biometrics

The trigger is automated processing of biometric data to remotely identify or categorise a natural person, or to infer emotions. Real-time remote biometric identification in publicly accessible spaces for law-enforcement purposes is prohibited under Article 5 (with narrow exceptions), as are biometric categorisation to deduce attributes such as ethnicity or political opinion and emotion recognition in the workplace and in education. Post-remote identification systems, emotion recognition in other contexts, and biometric categorisation by sensitive or protected attributes outside the Article 5 prohibitions fall under Annex III as high-risk.

DACH exposure. Workplace time-tracking systems that verify identity by facial image, voice authentication in contact centres, or attendance monitoring in healthcare facilities. If a system compares a person's live image or voice to a database — for any purpose — begin the scoping questions.

Borderline. Liveness detection used only for fraud prevention in a KYC flow where no persistent identifier is created may fall outside Annex III under narrow interpretations. Legal opinion should be sought; do not self-exclude.


2. Critical Infrastructure

The trigger is use in managing or operating road traffic, water, gas, heating, electricity, or digital infrastructure in ways that could affect the safety of persons.

DACH exposure. Grid-stability AI used by energy suppliers in Germany under the EnWG framework; predictive maintenance AI for district heating in Austria; adaptive traffic control systems in Swiss cantonal road authorities. If the system can influence whether a physical resource reaches end users safely, Annex III is relevant.

Borderline. Back-office scheduling tools that optimise maintenance windows but cannot interrupt supply are less clearly within scope. The decisive question is whether incorrect output could reasonably lead to interruption of essential service.


3. Education and Vocational Training

The trigger is automated determination of access to, progression through, or assessment within an educational or vocational training institution.

DACH exposure. Proctoring software used by Fachhochschulen and universities for remote examinations; AI systems scoring German Abitur or Austrian Matura essays at scale; systems ranking applications to apprenticeship programmes (Ausbildungsplatz). Vocational training platforms that automatically assign learners to tracks or certify competence levels also fall within scope.

Borderline. Recommendation engines that suggest supplementary reading or practice exercises, without gating access or affecting grades, are generally outside scope.


4. Employment, Workers' Management, and Self-Employment

The trigger is automation affecting recruitment, selection, promotion, performance evaluation, task allocation, or contract termination of workers or candidates for self-employment.

DACH exposure. This is among the highest-exposure categories for DACH organisations. CV-screening systems, video interview sentiment analysers, algorithmic performance scoring in call centres, gig-economy dispatch platforms that rate and deactivate delivery workers — all fall squarely within Annex III. Even a system that produces a ranked shortlist for human review is within scope if humans routinely follow its output without meaningful independent reassessment.

Borderline. Scheduling tools that allocate shifts based solely on worker preferences stated in a system, without inferring characteristics of the worker, are generally outside scope.


5. Access to Essential Services

The trigger is an AI system that determines eligibility for or restricts access to credit, insurance, public benefits, emergency dispatch, or other services described as essential.

DACH exposure. Credit scoring models used by German Sparkassen and cooperative banks; insurance underwriting engines that price health or liability risk; automated benefit-eligibility assessors in German Jobcenter or Austrian AMS; emergency call triage systems. This category intersects significantly with BaFin MaRisk and EBA ML guidelines for credit institutions.

Borderline. Marketing personalisation that influences which product is shown — but does not determine access or pricing — is less clearly in scope.


6. Law Enforcement

The trigger is AI used by police or security authorities for risk assessment of individuals, crime analytics, polygraph-adjacent affect assessment, or deepfake detection in criminal proceedings.

DACH exposure. Predictive policing systems piloted in several Länder; threat-assessment tools used at Bundespolizei; lie-detection software in asylum hearings. This category primarily concerns public-sector organisations with enforcement mandates, though private security firms providing AI services to such bodies become providers under Article 3(3).


7. Migration, Asylum, and Border Control

The trigger is AI used to assess risk of irregular immigration, authenticate travel documents, or monitor borders.

DACH exposure. Systems used by BAMF (Bundesamt für Migration und Flüchtlinge) or cantonal migration offices; document verification AI deployed at airport border gates. Private organisations that provide document authentication AI to such authorities are providers of a high-risk system regardless of their own sector classification.


8. Administration of Justice and Democratic Processes

The trigger is AI assisting courts, arbitration bodies, or electoral administration, or AI deployed to influence electoral behaviour.

DACH exposure. AI-assisted legal research tools sold to courts or Notare; campaign targeting systems used by political parties. For commercial entities, the primary exposure is as provider of such tools rather than as deployer.


The Scoping Checklist

The following 14 questions form a sequential decision procedure. Answer each in order. Where a question is answered affirmatively, document your reasoning and the supporting evidence. A single affirmative answer in questions 1–7 is sufficient to initiate a full Annex III classification assessment.

Each question follows the same pattern: question → decision criterion → suggested evidence/artefact.

  1. Does the system process biometric data to identify or categorise a natural person?
     Decision: Yes → proceed to classification; No → continue.
     Evidence: data-flow diagram; records of processing under Art. 30 GDPR.
  2. Does the system manage, control, or monitor any element of road traffic, water, gas, heat, electricity, or digital infrastructure in a way that affects service continuity or physical safety?
     Decision: Yes → proceed; No → continue.
     Evidence: system architecture document; scope-of-use contract.
  3. Does the system determine or materially influence access to, progression within, or evaluation in an educational or vocational training programme?
     Decision: Yes → proceed; No → continue.
     Evidence: feature specification; output usage policy.
  4. Does the system screen, rank, evaluate, or take an action affecting any worker or candidate for employment or self-employment?
     Decision: Yes → proceed; No → continue.
     Evidence: HR process map; system vendor documentation.
  5. Does the system determine or materially influence eligibility for credit, insurance, a public benefit, or an emergency response?
     Decision: Yes → proceed; No → continue.
     Evidence: credit policy documentation; actuarial model description.
  6. Does the system assist law enforcement, security authorities, or border control in assessing risk or authenticity?
     Decision: Yes → proceed; No → continue.
     Evidence: contract with public authority; use-case description.
  7. Does the system influence electoral or voting behaviour, or assist in judicial or arbitration proceedings?
     Decision: Yes → proceed; No → continue.
     Evidence: customer agreement; deployment context statement.
  8. Is the output of the system used as input to a human decision that significantly affects a natural person?
     Decision: Yes → high-risk likelihood elevated; No → lower likelihood.
     Evidence: decision-process flowchart; escalation policy.
  9. Can the system's output be individually attributed to a specific natural person?
     Decision: Yes → elevated risk; No → lower risk, but continue.
     Evidence: output schema; logging specification.
  10. Is there meaningful human review of individual outputs before consequential action is taken?
     Decision: No → risk elevated; Yes → document the robustness of the review.
     Evidence: SOP for human review; override logs.
  11. Is the system off-the-shelf (procured) or custom-built (developed internally or to your specification)?
     Decision: Off-the-shelf → likely deployer role; custom → likely provider role.
     Evidence: procurement contract; development mandate.
  12. Does the vendor supply an EU Declaration of Conformity or indicate CE marking for this system?
     Decision: No → your organisation may need to conduct conformity assessment as deployer/co-provider.
     Evidence: vendor compliance statement; contract clause.
  13. Is the system already registered in the EU AI Act public database under Article 71?
     Decision: No (after August 2026) → obligation gap; Yes → verify accuracy.
     Evidence: database query; registration certificate.
  14. Has a Data Protection Impact Assessment under Art. 35 GDPR been completed for this system, and does it reflect the AI-specific risks?
     Decision: No → immediate gap; Yes → cross-reference with the planned AI Act risk assessment.
     Evidence: DPIA documentation.
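For teams that track scoping results in an inventory tool, the sequential procedure above can be sketched in code. This is a minimal illustration of the triage logic only — the rule that any affirmative answer to questions 1–7 triggers a full classification assessment, and the documentation-gap checks in questions 12–14, come from the checklist; all identifiers are hypothetical, and the sketch does not substitute for the legal analysis.

```python
from dataclasses import dataclass

CATEGORY_QUESTIONS = range(1, 8)    # Q1-Q7 map to the Annex III categories
DOCUMENTATION_QUESTIONS = (12, 13, 14)  # vendor DoC, registration, DPIA

@dataclass
class Answer:
    question: int       # 1..14, per the checklist numbering
    affirmative: bool
    evidence: str       # e.g. "HR process map; vendor documentation"

def scope(answers: list[Answer]) -> dict:
    """Return a triage outcome for one AI system from recorded answers."""
    by_q = {a.question: a for a in answers}
    # Any "yes" to a category question initiates a full assessment.
    triggered = [q for q in CATEGORY_QUESTIONS
                 if q in by_q and by_q[q].affirmative]
    # A "no" to Q12-Q14 marks a documentation or registration gap.
    gaps = [q for q in DOCUMENTATION_QUESTIONS
            if q in by_q and not by_q[q].affirmative]
    return {
        "annex_iii_candidate": bool(triggered),
        "triggered_questions": triggered,
        "documentation_gaps": gaps,
        "next_step": ("full classification assessment" if triggered
                      else "record negative scoping result"),
    }
```

Recording the evidence string alongside each answer mirrors the checklist's instruction to document reasoning: an affirmative Q4 answer backed by an HR process map yields a candidate flagged for full assessment, while a purely contextual "yes" (Q8–Q10) does not by itself trigger classification.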

Provider vs. Deployer: Why the Distinction Matters

Article 3(3) of Regulation (EU) 2024/1689 defines a provider as any natural or legal person who develops an AI system — or has one developed — and places it on the market or puts it into service under its own name or trademark. Article 3(4) defines a deployer as any natural or legal person, authority, or body that uses an AI system under its own authority, except where the use is personal and non-professional.

For Annex III systems, the distinction determines which Article 9 and associated obligations fall on whom.

A bank that purchases a credit-scoring model from a vendor is a deployer under Article 26. It must operate the system in accordance with the provider's instructions for use, assign human oversight to competent staff, monitor the system's operation, and retain the automatically generated logs under its control. It does not build the Article 9 risk management system itself — that obligation sits with the provider — but it should verify through due diligence that the provider's Annex IV documentation and Declaration of Conformity exist, and under Article 25 it can itself become a provider if it rebrands or substantially modifies the system.

The same bank, if it commissions a bespoke model developed to its own specification and puts it into service under its own name, is likely a provider under Article 3(3) even if the code is written by a third party — the definition expressly covers having an AI system developed. This is a common source of misclassification in DACH procurement contexts where development is outsourced but requirements are authored internally.

A single system can put your organisation in the provider role for some deployments and the deployer role for others. A hospital that develops a triage AI and puts it into service for its own clinical use is already both provider and deployer; if it then licences the tool to another clinic, it remains the provider for that external deployment while the clinic becomes the deployer. Both roles should be mapped separately.
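The per-deployment role mapping can be sketched as a small helper. This is a hypothetical illustration of the rule of thumb in this section — in-house development or specification control points to the provider role, use under one's own authority to the deployer role — and deliberately simplifies the statutory tests in Articles 3(3) and 3(4); the function name and parameters are assumptions.

```python
def determine_role(developed_or_specified_in_house: bool,
                   used_under_own_authority: bool) -> set[str]:
    """Map one deployment of a system to the AI Act roles it attracts."""
    roles: set[str] = set()
    if developed_or_specified_in_house:
        roles.add("provider")   # cf. Art. 3(3): develops, or has developed,
                                # and places on the market under own name
    if used_under_own_authority:
        roles.add("deployer")   # cf. Art. 3(4): uses under own authority
    return roles

# A bank commissioning a bespoke scoring model to its own specification
# and running it internally holds both roles for that deployment:
both = determine_role(developed_or_specified_in_house=True,
                      used_under_own_authority=True)
```

Mapping each deployment separately, as the text recommends, means the same system can return `{"provider", "deployer"}` for internal use and `{"provider"}` for a licensed external deployment.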


What to Do Once an Annex III System Is Identified

Identification is not the end point; it is the starting gate. The following obligations activate for every confirmed Annex III system and apply from 2 August 2026; for systems already on the market before that date, the Article 111 transition rules (see FAQ Q7) determine when compliance is triggered.

Article 9 — Risk Management System. A documented, iterative process for identifying, estimating, evaluating, and mitigating risks throughout the system lifecycle. This is the subject of the pillar article: EU AI Act Article 9 — Risk Management System: practitioner guide.

Article 10 — Data and Data Governance. Training, validation, and testing data must meet relevance, representativeness, and freedom-from-error requirements. Data governance practices must be documented. For DACH organisations already operating under GDPR Art. 22 restrictions, the Article 10 requirements layer on top of existing data lawfulness obligations.

Article 14 — Human Oversight. The system must be designed, and deployed with procedures, ensuring that natural persons can understand, monitor, and where necessary halt or override the system's output. Delegation of oversight to an AI-assisted checker does not satisfy this requirement.

Article 17 — Quality Management System. Providers must implement a quality management system covering responsibilities, resource allocation, post-market monitoring, and incident handling. Deployers should verify this exists via contractual due diligence.

Annex IV — Technical Documentation. A structured dossier describing system purpose, capabilities, limitations, training data, performance metrics, and risk management measures. This must exist prior to market placement and be kept up to date.

Registration (Articles 49 and 71). Providers of high-risk systems must register themselves and the system in the EU database established under Article 71 before placing the system on the market or putting it into service (Article 49).

Penalties. Non-compliance with Annex III obligations carries a maximum penalty of EUR 15 million or 3 percent of global annual turnover (whichever is higher) under Article 99.


DACH-Specific Regulatory Overlay

DACH organisations do not operate in a single-regulation environment. Three national or pan-European frameworks create additional obligations that interact with Annex III classification.

BaFin MaRisk AT 9 (German banks and insurers). MaRisk AT 9 governs outsourcing risk management and requires banks to assess and monitor risks arising from third-party relationships, including technology providers. An Annex III AI system procured from a vendor is also an outsourced function under AT 9 if it is material to regulated activities. The risk assessment required under AT 9 and the Article 9 RMS can be aligned but are not identical — the AT 9 focus is operational and contractual; the Article 9 focus is on the AI system's own risks. Both must be satisfied.

BfDI guidance on HR-AI. The Federal Commissioner for Data Protection and Freedom of Information (BfDI) has issued interpretive guidance indicating that AI systems used in employment contexts — specifically those producing scores or rankings affecting workers — require a DPIA under Art. 35 GDPR and are likely to engage Art. 22 automated decision-making restrictions. This guidance predates the EU AI Act but is consistent with Annex III Category 4 classification. DACH compliance officers should treat a BfDI-flagged HR system as presumptively Annex III and scope accordingly.

BSI C5 for public-sector cloud AI. The BSI Cloud Computing Compliance Criteria Catalogue (C5) applies to cloud services used by German federal and Länder authorities. Where an Annex III AI system is hosted on a cloud service, the deploying authority must confirm that the cloud infrastructure is C5-attested. The AI Act's Article 14 human oversight requirement and the C5 availability and incident-response controls should be reviewed jointly.

Austria and Switzerland. Austrian organisations additionally operate under the Datenschutzgesetz 2018 (DSG) and, for financial institutions, FMA circulars that parallel BaFin AT 9 in structure. Swiss organisations operating under nFADP and preparing for AI Act extraterritorial scope (systems affecting EU persons) should apply the same Annex III scoping logic; FINMA's 2024 guidance on algorithmic models in financial services is the closest Swiss analogue to MaRisk AT 9.


Closing: Start with the Quick Scan

Annex III classification is not a one-time exercise. Systems evolve, use cases expand, and Article 7 gives the Commission authority to update the list. Organisations that build a repeatable scoping discipline — rather than a one-off audit — will manage this with materially less risk of non-compliance at enforcement dates.

If your organisation has not yet completed an inventory of AI systems against the Annex III categories, the structured 10-minute EU AI Act Quick Scan at app.ekmgc.de provides a baseline assessment with category-level output.

For organisations that have identified potential Annex III systems and require a structured gap analysis against Articles 9, 10, 14, and 17, the advisory engagement overview is at ekmgc.de/eu-ai-act.html.


FAQ

Q1. Does an AI system need to be making fully automated decisions to fall under Annex III? No. Article 3(1) of Regulation (EU) 2024/1689 defines an AI system as a machine-based system designed to operate with varying levels of autonomy. A system that produces a recommendation or ranking for human review still qualifies as an AI system. The decisive question for Annex III is whether it falls within a listed category and whether the output influences a significant decision affecting natural persons — not whether the final action is taken without human involvement.

Q2. We purchased a system from a vendor who says it is not high-risk. Are we protected? A vendor's self-declaration does not transfer legal responsibility. Article 26 places independent obligations on deployers, including the obligation to verify that the provider's documentation is complete and that the system as deployed meets the requirements of Chapter III, Section 2. If the vendor's assessment is incorrect, your organisation as deployer remains exposed to enforcement. Contractual indemnification does not substitute for regulatory compliance.

Q3. Our HR software vendor added AI features in an update. Does this trigger Annex III? Potentially yes. Where a previously non-AI system is materially modified to include AI functionality falling within an Annex III category, the updated system may require fresh conformity assessment. Article 6 applies to the system as deployed. Organisations should review vendor update notes against the Annex III categories and request updated compliance documentation when AI features are introduced.

Q4. We are a Swiss company. Does Annex III apply to us? Switzerland is not an EU member state and the EU AI Act does not apply directly. However, any Swiss organisation deploying or providing AI systems that are used within the EU — including systems used by EU subsidiaries, EU customers, or EU workers — is subject to the extraterritorial scope provisions. The same Annex III categories apply to those deployments.

Q5. How does Annex III interact with the GDPR Article 22 prohibition on solely automated decisions? The two frameworks are complementary but distinct. GDPR Art. 22 restricts automated decisions producing legal or similarly significant effects without human review; it applies now. Annex III classification under the EU AI Act imposes a broader set of technical and governance obligations; it applies from August 2026. A system in Category 4 (employment) or Category 5 (essential services) is likely to engage both frameworks simultaneously. DPIA documentation prepared under GDPR Art. 35 should be updated to reflect Annex III classification and the Article 9 RMS requirements.

Q6. What is the difference between prohibited AI under Article 5 and high-risk AI under Annex III? Article 5 lists practices that are banned outright — including real-time remote biometric identification in public spaces (with narrow exceptions), social scoring by public authorities, and manipulation of vulnerable persons. These carry the highest penalty: EUR 35 million or 7 percent of global turnover. Annex III systems are not banned; they are regulated. They may remain in use provided the full compliance regime is satisfied. Any system that might qualify under both Article 5 and Annex III should be assessed for the prohibition first.

Q7. We have a legacy AI system in production before August 2024. Do Annex III obligations apply? Eventually, subject to the transition provisions in Article 111. High-risk AI systems placed on the market or put into service before 2 August 2026 are brought into scope only once they undergo significant changes in their design after that date — although systems intended for use by public authorities must be brought into compliance by 2 August 2030 regardless. The related concept of "substantial modification" is defined in Article 3(23): a change not foreseen in the provider's initial conformity assessment that affects compliance with the high-risk requirements or modifies the system's intended purpose.

Q8. What is the minimum documentation we should prepare before August 2026? At minimum: (1) a complete inventory of AI systems in use or provided, mapped against the eight Annex III categories; (2) a role determination for each system (provider, deployer, or both); (3) for each Annex III system, a draft risk management framework under Article 9; (4) Annex IV technical documentation, either produced by your organisation or obtained and verified from the vendor; (5) human oversight procedures under Article 14. This is not exhaustive — it is the floor for a credible compliance posture.


Sources

  1. Regulation (EU) 2024/1689 of the European Parliament and of the Council on artificial intelligence (EU AI Act). EUR-Lex: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689
  2. EU AI Act Annex III — High-risk AI systems referred to in Article 6(2). EUR-Lex (same document, Annex III).
  3. EU AI Office — High-risk AI systems overview. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
  4. BaFin — Mindestanforderungen an das Risikomanagement (MaRisk), AT 9 Outsourcing. https://www.bafin.de/DE/Aufsicht/BankenFinanzdienstleister/Anforderungen/MaRisk/maRisk_node.html
  5. BfDI — Stellungnahme zu KI-Systemen im Beschäftigungskontext. https://www.bfdi.bund.de/DE/Home/home_node.html
  6. BSI — Cloud Computing Compliance Criteria Catalogue (C5:2020). https://www.bsi.bund.de/EN/Themen/Unternehmen-und-Organisationen/Informationen-und-Empfehlungen/Empfehlungen-nach-Angriffszielen/Cloud-Computing/Kriterienkatalog-C5/kriterienkatalog-c5_node.html
  7. European Data Protection Board — Guidelines on automated decision-making and profiling (updated). https://www.edpb.europa.eu/our-work-tools/our-documents/guidelines/guidelines-automated-individual-decision-making-and_en
  8. FINMA — Guidance 05/2024 on the use of AI models in financial services (Switzerland). https://www.finma.ch/en/




EKM Global Consulting GmbH — Baden-Baden, founded 2013.