
Article 26 — what deployers actually owe.

By Elshan Musayev · Published 2026-05-06 · 10 min read




TL;DR

The widespread assumption that AI vendor contracts transfer regulatory burden from deployer to provider does not hold under Regulation (EU) 2024/1689. Article 26 imposes a distinct set of obligations on every organisation that operates a high-risk AI system under its own authority — regardless of who built it. Banks deploying credit-scoring AI, insurers using underwriting models, and HR teams running automated shortlisting tools each carry direct, non-delegable duties. A technically compliant vendor contract is a necessary condition; it is not a sufficient one, and it does not survive supervisory review on its own.


Table of Contents

  1. Provider vs deployer — the Article 3 framing
  2. Article 26 sub-obligations — a full breakdown
  3. The "we are only the deployer" misconception
  4. DACH financial services overlay
  5. Use-case examples
  6. Procurement contract implications
  7. What deployers must commission
  8. Frequently Asked Questions

1. Provider vs deployer — the Article 3 framing

Regulation (EU) 2024/1689 (the EU AI Act) draws a precise boundary between two roles. A provider (Art. 3(3)) is any natural or legal person that develops a high-risk AI system and places it on the market or puts it into service under its own name. A deployer (Art. 3(4)) is any natural or legal person that uses a high-risk AI system under its own authority in the course of a professional activity.

The critical consequence: when a German Landesbank buys an automated credit-scoring engine from a technology vendor and operates it within its own loan origination process, the bank is the deployer and the vendor is the provider. Both carry obligations. The provider's obligations — technical documentation, CE marking, logging capability, instructions for use — do not dissolve the bank's obligations under Art. 26. They operate in parallel.

This distinction matters because the Annex III list of high-risk AI use cases in financial services includes AI systems used for creditworthiness evaluation and credit scoring (Annex III, item 5b), as well as systems used for risk assessment and pricing in life and health insurance (Annex III, item 5c). HR recruitment and worker management systems (Annex III, item 4) are also in scope, which affects any large regulated employer deploying automated shortlisting or performance-evaluation tooling.


2. Article 26 sub-obligations — a full breakdown

Art. 26(1): Operate per the provider's instructions for use

The deployer must use the high-risk AI system in accordance with the instructions for use that the provider is required to supply under Art. 13. This is not merely a contractual obligation: it is a direct regulatory requirement. If a bank's operations team runs a credit-scoring model in a configuration that the provider did not document or did not test, the bank bears the compliance failure.

This sub-obligation ties directly to the Article 9 Risk Management System: the deployer's own RMS must incorporate the risk profile and operational boundaries the provider has defined, then identify residual risks that arise from the deployer's specific deployment context.

Art. 26(2): Assign Article 14 human oversight

Every deployer of a high-risk AI system must assign oversight to natural persons who have the necessary competence, training, and authority. Article 14 (human oversight) is one of the most operationally demanding provisions in the Act. It requires that the assigned personnel are enabled to:

- understand the relevant capacities and limitations of the system and monitor its operation for anomalies;
- remain aware of automation bias, the tendency to over-rely on automated output;
- correctly interpret the system's output;
- decide not to use the system, or disregard, override, or reverse its output;
- intervene in the operation or interrupt the system through a stop control.

Competence and training cannot be outsourced to the provider. The deployer must specify the role, the qualifying criteria, and the override authority in its internal governance documentation.
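
As a minimal sketch, the oversight assignment can be captured as structured governance data rather than prose. The field names and criteria below are illustrative assumptions, not a prescribed template:

```python
from dataclasses import dataclass

@dataclass
class OversightRole:
    """Illustrative record of an Art. 26(2) / Art. 14 oversight assignment."""
    role_name: str                  # internal role title (hypothetical)
    system_id: str                  # deployer's identifier for the AI system
    competence_criteria: list[str]  # qualifying criteria the deployer defines
    completed_training: list[str]   # evidence of model-specific training
    override_authority: bool        # may disregard, override or reverse outputs
    interrupt_authority: bool       # may interrupt the system (stop control)

    def is_assignable(self) -> bool:
        # Only assign the role if training evidence exists and the role
        # carries genuine override and interrupt authority.
        return bool(self.completed_training) and self.override_authority and self.interrupt_authority

role = OversightRole(
    role_name="Credit Decisioning Oversight Officer",
    system_id="credit-scoring-v3",
    competence_criteria=["5+ years credit risk", "model risk training"],
    completed_training=["vendor model training, 2026-03"],
    override_authority=True,
    interrupt_authority=True,
)
assert role.is_assignable()
```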

Art. 26(4): Input data must be relevant and sufficiently representative

Where the deployer controls the data fed into the system, that data must be relevant and sufficiently representative for the intended purpose. For a credit institution, this means that input data pipelines — customer financial history, bureau scores, behavioural signals — must meet a representativeness standard. Feeding a model with structurally biased or temporally stale inputs creates deployer-level liability regardless of the model's intrinsic quality.
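
One common screening technique for the representativeness question, offered here as a sketch rather than anything the Act mandates, is the Population Stability Index (PSI) between a reference population and live input data. The score distributions and the conventional ~0.25 threshold below are assumptions:

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference population and live input data."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    actual = np.clip(actual, edges[0], edges[-1])       # keep live data inside the reference range
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    exp_pct = np.clip(exp_counts / len(expected), 1e-6, None)  # avoid log(0)
    act_pct = np.clip(act_counts / len(actual), 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Hypothetical bureau-score distributions: reference population vs this month's applicants.
rng = np.random.default_rng(0)
reference = rng.normal(650, 80, 50_000)
live = rng.normal(620, 90, 5_000)
psi = population_stability_index(reference, live)
print(f"PSI = {psi:.3f}")  # values above ~0.25 are conventionally read as a material shift
```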

Art. 26(5): Monitor operation on an ongoing basis

Deployers must monitor the operation of the high-risk AI system on an ongoing basis, in accordance with the instructions for use, and must suspend its use where they have reason to consider that operation may pose a risk to health, safety, or fundamental rights (the suspension-and-notify limb of Art. 26(5) is treated separately below). This monitoring obligation implies a formal programme — not ad hoc review — with defined KPIs, escalation criteria, and decision authority for suspension.
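
A minimal sketch of what the KPI-and-escalation layer of such a programme could look like; the metric names and threshold values are deployer-defined assumptions, not figures from the Regulation:

```python
# Hypothetical thresholds: the Act requires a monitoring programme but does not
# prescribe metrics or numbers, so these values are deployer-defined assumptions.
THRESHOLDS = {
    "accuracy_drift": 0.05,           # absolute drop vs the validation baseline
    "false_positive_rate": 0.10,
    "demographic_parity_gap": 0.08,   # bias indicator across protected groups
}

def breached_kpis(kpis: dict[str, float]) -> list[str]:
    """Return the KPIs that exceed their escalation threshold this cycle."""
    return [name for name, limit in THRESHOLDS.items() if kpis.get(name, 0.0) > limit]

breaches = breached_kpis(
    {"accuracy_drift": 0.07, "false_positive_rate": 0.04, "demographic_parity_gap": 0.03}
)
if breaches:
    # Escalate to the designated decision authority. Suspension, and the
    # notifications Art. 26(5) requires, remain a documented human decision.
    print(f"Escalating breached KPIs: {breaches}")
```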

Art. 26(6): Retain automatically generated logs — minimum 6 months

Where the deployer controls the log infrastructure, automatically generated logs must be retained for a minimum of six months, unless Union or national law specifies otherwise. For regulated financial institutions subject to record-keeping requirements under DORA (Reg. (EU) 2022/2554), MaRisk, or sector-specific supervision, the applicable retention period may be longer. The important point is that log retention is a deployer obligation, not merely a vendor feature.
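
The practical consequence is that the binding retention period is the longest of all applicable floors. A minimal sketch, with the sector figure as a placeholder assumption to be confirmed per institution and regime:

```python
from datetime import timedelta

# The AI Act floor is six months (Art. 26(6)); the sector figure below is a
# placeholder assumption and must be confirmed per institution and regime.
RETENTION_FLOORS = {
    "ai_act_art_26_6": timedelta(days=183),
    "sector_record_keeping_example": timedelta(days=5 * 365),
}

def applicable_retention() -> timedelta:
    """The binding retention period is the longest applicable floor."""
    return max(RETENTION_FLOORS.values())

print(applicable_retention())  # 1825 days under these assumptions
```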

Art. 27: Fundamental Rights Impact Assessment (FRIA)

Deployers that are either:

- (a) bodies governed by public law, or
- (b) private entities providing public services

must conduct a Fundamental Rights Impact Assessment (FRIA) before putting a high-risk AI system into service (Art. 27(1)). The FRIA must identify the affected persons, assess risks to fundamental rights, and document mitigation measures.

Crucially, Art. 27(1) itself extends the FRIA requirement beyond the public sector: it applies explicitly to deployers of the systems listed in Annex III, items 5b and 5c — credit institutions conducting creditworthiness assessment or credit scoring, and operators of life and health insurance risk-assessment and pricing models. These are private-sector entities that might otherwise believe the FRIA does not apply to them. It does.

The FRIA is a pre-deployment document. Completing it after go-live is not compliant, and an incomplete FRIA is a gap that does not survive supervisory review.
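
As an illustration of how a FRIA can be tracked as a structured record alongside the narrative document, here is a sketch whose field names are the author's assumptions, not a prescribed template:

```python
from dataclasses import dataclass

@dataclass
class FRIARecord:
    """Illustrative skeleton of Art. 27 FRIA content; field names are assumptions."""
    system_id: str
    deployment_context: str         # the deployer's processes in which the system is used
    affected_groups: list[str]      # categories of persons likely to be affected
    rights_at_stake: list[str]      # e.g. non-discrimination, data protection
    risks: list[str]                # identified risks of harm to those rights
    mitigations: list[str]          # oversight measures, complaint channels, etc.
    completed_before_go_live: bool  # Art. 27 requires the FRIA pre-deployment

    def is_complete(self) -> bool:
        return self.completed_before_go_live and all(
            [self.affected_groups, self.rights_at_stake, self.risks, self.mitigations]
        )
```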

Art. 26(5), second limb: Suspend and inform when risk emerges

Where a deployer identifies, during operation, that continued use may pose a risk to health, safety, or fundamental rights, the deployer must, without undue delay, inform the provider (or distributor) and the relevant market surveillance authority, and suspend use of the system. This is a compound obligation: suspension alone, without notification, is not compliant. The provision also creates a contractual need: the deployer must have a documented notification channel to the provider, with agreed response times.
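
A sketch of how the compound obligation can be encoded so that suspension and notification cannot be performed separately; the notification helpers stand in for the deployer's actual channels and are hypothetical:

```python
import logging
from datetime import datetime, timezone

log = logging.getLogger("ai-governance")

def notify_provider(system_id: str, reason: str) -> bool:
    # Placeholder for the contractually agreed provider notification channel.
    return True

def notify_authority(system_id: str, reason: str) -> bool:
    # Placeholder for notifying the relevant market surveillance authority.
    return True

def suspend_and_notify(system_id: str, reason: str) -> dict:
    """Art. 26(5) treated as one procedure: suspension and notifications together."""
    record = {
        "system_id": system_id,
        "reason": reason,
        "suspended_at": datetime.now(timezone.utc).isoformat(),
        "provider_notified": notify_provider(system_id, reason),
        "authority_notified": notify_authority(system_id, reason),
    }
    log.warning("High-risk AI system %s suspended: %s", system_id, reason)
    return record
```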

Art. 26(7): Inform workers before deploying AI in the workplace

Before deploying a high-risk AI system that affects workers — including AI used for recruitment, performance evaluation, task allocation, or monitoring — deployers must inform workers' representatives and affected workers, in accordance with applicable national law. In Germany, this intersects with works council (Betriebsrat) co-determination rights under the Betriebsverfassungsgesetz. In Austria, similar requirements apply under the Arbeitsverfassungsgesetz. Deployers that skip this step face dual exposure: under the AI Act and under domestic labour law.

Art. 26(12): Cooperate with national competent authorities

Deployers must cooperate with national competent authorities and provide all information and access necessary for supervisory purposes. For German banks and insurers, the relevant authority is BaFin. For healthcare providers, it is the competent authority designated under national implementation. This is not a passive obligation — it requires that deployers maintain records in a form that can be produced on request.


3. The "we are only the deployer" misconception

A pattern recurs in procurement discussions: the assumption that a well-drafted vendor contract — specifying that the provider is responsible for CE marking, technical documentation, and conformity assessment — transfers the regulatory burden and insulates the deployer from supervisory exposure.

This assumption is incorrect on three counts.

First, Art. 26 obligations apply to the deployer directly by operation of law. They are not displaced by contract. A national supervisory authority inspecting a bank's AI governance will assess whether the bank has complied with Art. 26 independently of whether the bank has a contract that says the vendor bears responsibility.

Second, many Art. 26 obligations cannot be physically performed by the provider. The provider cannot assign the bank's staff to human oversight roles (Art. 26(2)). The provider cannot control the representativeness of the bank's proprietary input data (Art. 26(4)). The provider cannot notify the bank's workers' representatives (Art. 26(7)). These obligations sit structurally with the deployer.

Third, the FRIA (Art. 27) must be completed by the deployer. It requires knowledge of the deployer's specific operational context, the specific population affected, and the specific fundamental rights at stake. A vendor cannot produce this document on the deployer's behalf because the vendor does not have this knowledge.

The contract remains important — as a means of obtaining the provider's instructions for use, securing log access, establishing change-notification procedures, and structuring the Art. 26(5) information channel. But it is a means, not a substitute.


4. DACH financial services overlay

BaFin MaRisk AT 9

The Mindestanforderungen an das Risikomanagement (MaRisk) AT 9 module governs third-party risk for German credit institutions. MaRisk AT 9 requires documented risk analysis prior to outsourcing, ongoing monitoring of material service providers, contractual rights of audit and information, and exit strategies. Where an AI vendor qualifies as a material outsourcing partner — which is the case for any credit-scoring or fraud-detection system integral to the institution's risk processes — MaRisk AT 9 and Art. 26 obligations run in parallel and must be addressed in the same governance framework.

DORA ICT third-party risk

For entities in scope of DORA (Reg. (EU) 2022/2554), the ICT third-party risk management requirements under Articles 28 to 30 DORA must be read alongside Art. 26 EU AI Act. DORA requires contractual provisions covering audit rights, service levels, incident notification, and business continuity. The Art. 26(5) suspension-and-notify obligation should be embedded in the DORA incident notification framework.

NIS2 supply-chain security

Directive (EU) 2022/2555 (NIS2) requires that entities implement measures to manage risks arising from the supply chain. Where an AI system is part of a digital service or network, NIS2 supply-chain security assessments and the EU AI Act deployer obligations are complementary due diligence requirements.

GDPR Art. 22 automated decision-making

Where a high-risk AI system produces decisions that affect individuals — credit refusals, insurance pricing — GDPR Art. 22 is engaged. The Art. 22 right not to be subject to solely automated decisions with significant effects creates an additional documentation and human-review layer that must be integrated with the Art. 14 human oversight specification. The BfDI and Landesdatenschutzbehörden have enforcement authority here.
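
A minimal sketch of how adverse automated outcomes can be routed to documented human review, integrating the GDPR Art. 22 layer with the Art. 26(2) oversight roles; the routing rule, threshold, and field names are illustrative assumptions:

```python
def decide_credit(model_score: float, threshold: float = 0.5) -> dict:
    """Route adverse automated outcomes to documented human review."""
    if model_score >= threshold:
        return {"outcome": "approve", "decided_by": "automated", "human_review": False}
    # Adverse outcome: refer to a trained oversight officer before any refusal
    # is communicated, and record the review for supervisory production.
    return {
        "outcome": "refer",
        "decided_by": "pending_human_review",
        "human_review": True,
        "reviewer_role": "Credit Decisioning Oversight Officer",  # hypothetical role
    }

print(decide_credit(0.31))  # an adverse score is referred, not auto-refused
```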

Art. 27(1): FRIA for credit and insurance

As noted in Section 2, Art. 27(1) explicitly extends the FRIA requirement to deployers of the credit-scoring and life/health insurance systems listed in Annex III, items 5b and 5c. German banks and insurers conducting automated credit scoring or life/health pricing are in scope. The FRIA must be completed before go-live, and its results notified to the market surveillance authority (Art. 27(3)).


5. Use-case examples

Credit decisioning AI at a German Sparkasse

A regional savings bank deploys a vendor model for automated consumer credit assessment. The Annex III, item 5b classification means this is a high-risk system. The Art. 27(1) FRIA must be completed before deployment. Human oversight (Art. 26(2)) must be assigned to loan officers with documented competence criteria. Input data — local bureau data feeds — must meet the Art. 26(4) representativeness standard. Automatically generated decision logs must be retained under Art. 26(6). GDPR Art. 22 human review must be documented for every adverse decision.

HR shortlisting AI at a large Austrian insurer

An insurer deploys an AI shortlisting tool for high-volume recruitment. Annex III, item 4 classifies this as high-risk. Art. 26(7) requires prior notification to the Betriebsrat under the Arbeitsverfassungsgesetz before deployment. Art. 26(2) requires trained HR reviewers with documented authority to override shortlisting outputs. The insurer cannot rely on the vendor's model card as a substitute for its own FRIA where the insurer qualifies as a private entity providing public services in that jurisdiction.

Fraud detection AI at a Swiss reinsurer

A reinsurance entity deploys a transaction-fraud detection model across its claims intake process and has documented its classification as high-risk in that deployment context (fraud-detection systems are not listed in Annex III as such, so the classification analysis must be established first). Art. 26(5) monitoring obligations require a formal operational monitoring programme — not quarterly reports. The suspension-and-notify obligation (also Art. 26(5)) must be embedded in operational incident procedures. Log retention (Art. 26(6)) must be coordinated with DORA ICT record-keeping requirements for its EU-regulated entities. The Article 9 Risk Management System must document how residual risks from the fraud model's false-positive rate are managed.

Document classification AI in insurance claims handling

An insurer deploys a document classification model to triage incoming claims. If the classification outputs trigger consequential decisions — priority queuing, fraud flags, coverage assessment — the system is likely in scope as a high-risk AI system affecting access to services. Art. 26(1) compliance requires adherence to the provider's operational instructions; any deviation must be documented and risk-assessed. Art. 26(4) requires that the input document corpus the insurer feeds into the system is representative of the actual claims population.


6. Procurement contract implications

The deployer's contract with the AI provider is the primary instrument for obtaining what Art. 26 requires. The following provisions are operationally necessary — not optional best practice.

Instructions for use (Art. 13 / Art. 26(1)): The contract must specify that the provider will deliver, maintain, and update instructions for use as a contractual deliverable, with version control and notification of material changes.

Log access and retention support (Art. 26(6)): Where logs are held on provider infrastructure, the contract must guarantee deployer access, export capability, and a retention period consistent with Art. 26(6) and applicable sector rules.

Substantial modification notice: The EU AI Act treats substantial modification as an event that may re-trigger conformity obligations. The deployer must receive advance notice of any planned substantial modification to the model, scoring logic, or data inputs, with sufficient lead time to reassess the FRIA and Art. 26(4) representativeness.

Art. 26(5) information channel: The contract must specify a documented escalation path by which the deployer notifies the provider of identified risks, with agreed response timelines. This is a regulatory obligation, not merely a service-level arrangement.

Authority cooperation (Art. 26(12)): The contract should require provider cooperation in the event of a supervisory inquiry, including access to technical documentation and audit logs.

For the structure and documentation requirements that underpin these contractual provisions, see the Article 9 Risk Management System practitioner guide.


7. What deployers must commission

Deployers in the DACH financial services and healthcare sectors typically need to commission the following before going live with a high-risk AI system.

  1. FRIA per use case — a structured assessment per Art. 27, covering affected populations, fundamental rights at stake, risk likelihood and severity, and mitigation measures. For credit and insurance use cases, this is mandatory regardless of whether the deployer is a public body.

  2. Article 14 human oversight specification — a documented specification naming the oversight role(s), qualifying competence criteria, required training (including model-specific training), decision authority, and override procedures. This specification must be integrated with the deployer's operational procedures and staff training records.

  3. Log retention configuration — a documented configuration specifying which logs are generated, where they are stored (provider vs deployer infrastructure), who holds the retention obligation, and the applicable minimum retention period.

  4. Monitoring KPIs and suspension protocols — a formal monitoring programme with defined metrics (accuracy drift, bias indicators, false positive/negative rates), escalation thresholds, and a documented suspension protocol that specifies decision authority and the Art. 26(5) notification procedure towards the provider and the market surveillance authority.

  5. Workforce notification documentation (Art. 26(7)) — for workplace-affecting AI, evidence that works council consultation or worker notification has occurred in accordance with national law, prior to deployment.

  6. Operational Risk Management System integration — the deployer's Art. 9 RMS must incorporate all of the above. The Article 9 RMS practitioner guide sets out the structure for integrating deployer-specific risk identification, evaluation, and mitigation measures into a documented system.


8. Frequently Asked Questions

Q: If our AI vendor holds a CE mark for the system, are we as deployer automatically compliant?

A CE mark issued by the provider demonstrates that the system met conformity requirements at the point of market placement. It does not transfer or discharge any of the deployer obligations under Art. 26. The deployer's FRIA, human oversight assignment, log retention, and monitoring programme remain required independently.

Q: Does Article 26 apply to us if we use the AI system only internally, not to make decisions about customers?

Scope under the EU AI Act is determined by the Annex III classification of the use case, not by whether outputs are customer-facing. An HR shortlisting tool used internally to screen job applicants is an Annex III, item 4 system. Internal use does not exclude Art. 26 obligations.

Q: How long must deployers retain automatically generated logs under Art. 26(6)?

The minimum retention period is six months from the date of generation, where the deployer controls the logging infrastructure. National law or sector-specific regulation — including DORA for financial entities and MaRisk for German banks — may require longer periods. The applicable period must be documented in the deployer's log retention policy.

Q: Can a deployer rely on the provider's instructions for use as its FRIA?

No. The FRIA is a deployer-specific document. It must assess the fundamental rights impacts of the specific deployment — the specific population affected, the specific operational context, and the specific mitigation measures available to the deployer. The provider's instructions for use inform the FRIA but cannot substitute for it.

Q: Does Art. 26(7) apply to all AI systems, or only high-risk ones?

Article 26(7) applies to deployers of high-risk AI systems that affect workers — for example, systems used for recruitment, performance evaluation, task allocation, or workplace monitoring. The obligation runs alongside, and is coordinated with, applicable national co-determination and worker-information requirements.

Q: What is the relationship between Art. 26 and GDPR Art. 22 for automated credit decisions?

They operate in parallel. GDPR Art. 22 grants data subjects the right not to be subject to solely automated decisions with significant effects, and requires the existence of human review on request. Art. 26(2) of the EU AI Act independently requires the deployer to assign trained human oversight to the system. Both requirements must be addressed in the governance design for any automated credit decision system.

Q: Who enforces Art. 26 obligations against deployers in Germany?

The EU AI Act requires Member States to designate national competent authorities. For financial services deployers in Germany, BaFin is the expected primary supervisory authority. The BfDI and Landesdatenschutzbehörden retain concurrent jurisdiction over personal data processing. Enforcement timelines depend on the transitional provisions in Regulation (EU) 2024/1689, with obligations for high-risk AI systems in Annex III becoming applicable from 2 August 2026.

Q: What does "suspend" mean in practice under Art. 26(4) and Art. 26(7)?

Suspension means halting the operational use of the system for the decision-making purpose in question — not merely pausing automated output delivery. The deployer must document the suspension decision, the reason, the date, and the alternative process in place. Art. 26(5) additionally requires informing the provider (or distributor) and the relevant market surveillance authority. Both steps should be codified in the deployer's incident and business-continuity procedures before deployment.


Ready to assess your Art. 26 compliance posture?

EKM Global Consulting GmbH (Baden-Baden) works with banks, insurers, asset managers, and healthcare entities across the DACH region on structured EU AI Act deployer assessments, FRIA development, and Article 14 human oversight specifications.

Quick Scan — run a structured self-assessment at app.ekmgc.de to identify which Art. 26 obligations apply to your current AI deployments.

Advisory — for a practitioner-led review of your deployer obligations across Annex III use cases, visit ekmgc.de/eu-ai-act.html.


Citations and primary sources

  1. Regulation (EU) 2024/1689 of the European Parliament and of the Council on Artificial Intelligence (EU AI Act) — EUR-Lex: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:L_202401689
  2. Art. 3 (definitions), Art. 13 (transparency), Art. 14 (human oversight), Art. 26 (deployer obligations), Art. 27 (FRIA, including deployers of systems under Annex III, points 5(b) and (c)), Annex III (high-risk AI systems) — Regulation (EU) 2024/1689
  3. Regulation (EU) 2022/2554 on digital operational resilience for the financial sector (DORA) — EUR-Lex: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32022R2554
  4. Directive (EU) 2022/2555 on measures for a high common level of cybersecurity across the Union (NIS2) — EUR-Lex: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32022L2555
  5. Regulation (EU) 2016/679 (GDPR), Art. 22 — EUR-Lex: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32016R0679
  6. BaFin Mindestanforderungen an das Risikomanagement (MaRisk) — Rundschreiben 05/2023 (BA): https://www.bafin.de/SharedDocs/Veroeffentlichungen/DE/Rundschreiben/2023/rs_2305_ba_MaRisk.html
  7. EU AI Office — guidance and regulatory resources: https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence