
EU AI Act vs GDPR: scoping both regimes.

By Elshan Musayev · Published 2026-05-06 · 10 min read
TL;DR. GDPR (Regulation (EU) 2016/679) protects personal data. The EU AI Act (Regulation (EU) 2024/1689) protects health, safety, and fundamental rights from AI-driven risk — regardless of whether personal data is involved. High-risk AI systems that also process personal data fall under both regimes simultaneously. The Article 35 GDPR DPIA and the Article 9 AI Act risk management system (RMS) address overlapping but legally distinct obligations and must be coordinated, neither collapsed into one document nor duplicated as two disconnected exercises. Automated decision-making with significant effects on individuals triggers Article 22 GDPR and, where Annex III classification applies, the deployer's fundamental rights impact assessment (FRIA) under Article 27 AI Act. Running either regime in isolation creates audit gaps.

Compliance officers and DPOs in DACH-regulated organisations ask: does a GDPR DPIA replace the EU AI Act risk management system? This guide clarifies where the two regimes overlap, where they diverge, and how to scope both in a single coordinated workflow.



Contents

  1. What each regulation governs
  2. Where they overlap
  3. Where they diverge
  4. Side-by-side comparison
  5. Practical scoping flow
  6. Common mistakes
  7. DACH-specific overlay
  8. Next steps

What Each Regulation Governs

GDPR — Regulation (EU) 2016/679

GDPR has applied since 25 May 2018. Its object is the protection of natural persons with regard to the processing of personal data and the free movement of such data. Foundational obligations include a lawful basis for every processing operation under Article 6, data subject rights under Articles 15–22, records of processing activities under Article 30, security of processing under Article 32, and a data protection impact assessment under Article 35 where processing is likely to result in a high risk to data subjects.

GDPR's scope is defined by the presence of personal data. A system that never touches data attributable to an identified or identifiable natural person sits outside its perimeter entirely.

EU AI Act — Regulation (EU) 2024/1689

The AI Act entered into force on 1 August 2024. High-risk AI obligations under Annex III apply from 2 August 2026. Its object is the protection of health, safety, and fundamental rights from AI systems classified by the risk they pose — independent of whether those systems process personal data.

Obligations for high-risk AI systems include a continuous risk management system under Article 9, data governance under Article 10, technical documentation under Article 11, automatic event logging under Article 12, transparency and instructions for use under Article 13, human oversight under Article 14, and accuracy, robustness, and cybersecurity under Article 15.

The AI Act's scope is defined by the presence of an AI system as defined in Article 3(1) and its risk classification. A human analyst making manual credit decisions with no AI inference layer is outside AI Act scope even if the decision has the same effect as an AI-assisted one.


Where They Overlap

The densest dual-regime area is high-risk AI systems that process personal data — a category that covers a large share of enterprise AI deployments in regulated sectors.

Article 22 GDPR and Annex III(5) AI Act

Article 22 GDPR grants data subjects the right not to be subject to decisions based solely on automated processing that produce significant effects. Annex III(5) AI Act classifies AI systems used for access to essential services — including creditworthiness and insurance risk assessment — as high-risk. Any system that sits in Annex III(5) and produces a decision with legal or similarly significant effect on a natural person activates both Article 22 GDPR and the full Annex III high-risk regime. Neither provision displaces the other.

Article 35 GDPR DPIA and Article 27 FRIA / Article 9 RMS

Article 35 GDPR requires a data protection impact assessment before high-risk processing. Article 27 AI Act requires deployers of certain Annex III systems to conduct a fundamental rights impact assessment (FRIA) before deployment. Article 9 AI Act requires providers to establish and maintain a continuous risk management system.

These three instruments address overlapping factual territory — the same AI system, the same data, some of the same affected persons — but they serve legally distinct purposes and are owned by distinct actors (controller vs. provider vs. deployer). See the Article 9 RMS practitioner guide for a structured approach to integrating these assessments without producing redundant documentation.
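
The shared factual base can be maintained as a single structured record from which each instrument draws its own document. Below is a minimal sketch in Python; the dataclass and field names are illustrative assumptions, not terms drawn from either regulation or from the linked guide.

```python
from dataclasses import dataclass

@dataclass
class SharedFactualBase:
    # Facts maintained once and reused by the DPIA, the FRIA, and the Article 9 RMS
    system_description: str        # what the AI system does, for whom, and in which context
    data_flows: list[str]          # sources, categories, and recipients of data
    affected_persons: list[str]    # data subjects and persons affected by system outputs
    identified_risks: list[str]    # risks to health, safety, and fundamental rights
    mitigations: list[str]         # technical and organisational measures

@dataclass
class DPIA:                        # Article 35 GDPR view, owned by the controller
    base: SharedFactualBase
    lawful_basis: str              # Article 6 GDPR basis for the processing
    residual_risk_high: bool       # if True, prior consultation under Article 36 GDPR

@dataclass
class RMSRecord:                   # Article 9 AI Act view, owned by the provider
    base: SharedFactualBase
    lifecycle_stage: str           # the RMS is iterative across the system lifecycle
    last_reviewed: str             # reviewed at every significant change to the system

@dataclass
class FRIA:                        # Article 27 AI Act view, owned by the deployer
    base: SharedFactualBase
    deployment_context: str        # categories of persons affected and frequency of use
    oversight_measures: list[str]  # human oversight and complaint-handling arrangements
```

Each assessment still produces its own deliverable for its own supervisory audience; only the underlying facts are captured once.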

Article 28 GDPR and Article 26 AI Act Deployer Obligations

Article 28 GDPR governs the controller–processor relationship through a data processing agreement. Article 26 AI Act assigns obligations specifically to the deployer — the natural or legal person using a high-risk AI system under its own authority. These are not the same relationship. A vendor may be both a GDPR processor and an AI Act provider; a deployer may be a GDPR controller and simultaneously bear AI Act deployer obligations. The Article 28 DPA does not transfer AI Act compliance from deployer to vendor. Both instruments must be addressed in the supplier governance framework. See Article 26 deployer obligations for a clause-level breakdown.


Where They Diverge

Understanding where the regimes do not intersect prevents over-scoping, which wastes resources, and under-scoping, which creates regulatory gaps.

AI Act only (no GDPR relevance): Industrial robotics operating on shop floors without any personal data processing, infrastructure monitoring systems that process only sensor telemetry, and AI-enabled quality control systems that never generate or consume personal data are subject to AI Act classification and, where Annex III applies, full high-risk obligations — without any GDPR obligation arising.

GDPR only (no AI Act relevance): Manual case management by human analysts, traditional rule-based decision engines without machine-learning inference, and data warehousing or archiving workflows that involve no AI component are subject to GDPR without any AI Act obligation arising. A compliance officer reviewing loan applications using a structured checklist, however sophisticated, is outside AI Act scope.

This divergence is significant for scoping: an organisation may operate systems that require GDPR compliance programmes with no AI Act work, AI Act compliance programmes with no GDPR work, and systems requiring coordinated programmes under both — all simultaneously.


Side-by-Side Comparison

Dimension | GDPR (Regulation (EU) 2016/679) | EU AI Act (Regulation (EU) 2024/1689)
Scope trigger | Processing of personal data by a controller or processor | Placing on the market, putting into service, or deploying an AI system as defined in Article 3(1)
Risk classification | Processing operations assessed by risk to data subjects (Article 35(3) criteria and supervisory authority lists) | AI systems classified by tier: unacceptable, high-risk (Annex III), limited, minimal
Required documentation | Records of processing activities (Article 30); DPIA where required (Article 35) | Technical documentation (Article 11); logs (Article 12); Article 9 RMS records
Pre-deployment assessment | DPIA before high-risk processing; prior consultation with the supervisory authority where residual risk remains high (Article 36) | Article 9 RMS (provider); FRIA under Article 27 (deployer of certain Annex III systems)
Lawful basis | Lawful basis required under Article 6; special categories under Article 9 GDPR | No analogous lawful-basis requirement; compliance assessed against risk classification and the Article 9 RMS
Data subject / affected person rights | Articles 15–22: access, rectification, erasure, portability, objection, human review of automated decisions | Right to explanation of individual decisions (Article 86); transparency through instructions for use (Article 13)
Transparency obligations | Privacy notice (Articles 13–14); meaningful information about automated-decision logic (Article 15(1)(h)) | Transparency and instructions for use (Article 13); human oversight measures (Article 14)
Penalties | Up to EUR 20 million or 4% of global annual turnover, whichever is higher | Up to EUR 35 million or 7% of turnover (prohibited AI practices); up to EUR 15 million or 3% (most other infringements)
Supervisory authority (DACH) | BfDI (federal) and the Landesdatenschutzbehörden; sectoral oversight by BaFin for financial services | National market surveillance authorities; Germany designating competent authorities under Article 70 AI Act
Effective date | 25 May 2018 | Prohibited AI practices: 2 February 2025; high-risk obligations under Annex III: 2 August 2026

Practical Scoping Flow

Use the following sequence to determine which obligations a given system triggers and how to coordinate assessment activities.

Step 1 — Does the system process personal data? If yes, GDPR applies: identify the controller, establish a lawful basis under Article 6 GDPR, determine whether Article 9 GDPR special category data is involved, and assess whether an Article 35 DPIA is required. If no, GDPR does not apply. In either case, proceed to Step 2 for AI Act scoping.

Step 2 — Is the system an AI system per Article 3(1) AI Act? Article 3(1) defines an AI system as a machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment and which, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Rule-based decision engines without an inference component do not meet this definition. If the system does not qualify, the AI Act does not apply.

Step 3 — Does it fall within an Annex III high-risk category? Annex III lists eight domains: biometrics; critical infrastructure; education and vocational training; employment and workers' management; access to essential private and public services; law enforcement; migration, asylum, and border control; and administration of justice and democratic processes. If the system falls within a listed category and the conditions of Article 6 AI Act are met, full high-risk obligations apply. If not, assess whether the limited transparency obligations under Article 50 apply.

Step 4 — Coordinate Article 35 DPIA with Article 9 RMS. Where both GDPR and high-risk AI Act obligations apply, the DPIA and the Article 9 RMS address overlapping subject matter. Organise documentation to allow each instrument to be read independently (for the respective supervisory authority) while drawing on a shared factual base: system description, data flows, risk identification, and mitigation measures. The Article 9 RMS practitioner guide provides a mapping table of shared input fields.

Step 5 — Apply Article 22 GDPR and the Article 27 FRIA where automated decision-making produces significant effects. If the system produces decisions with legal or similarly significant effect on natural persons and operates with no meaningful human intervention, Article 22 GDPR applies and the controller must implement safeguards including the right to human review. Where the system is also Annex III-classified, Article 27 AI Act requires certain deployers to conduct a FRIA before deployment. Review Article 26 deployer obligations for the procedural requirements of each instrument. The FRIA and the Article 22 GDPR safeguards may be documented in a single structured assessment provided each regime's requirements are met explicitly.
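
The five-step sequence above can be summarised as a simple decision flow. The sketch below is a schematic illustration in Python; the function name, input flags, and output strings are hypothetical simplifications and do not replace the legal analysis in Steps 1–5.

```python
from typing import Optional

def scope_obligations(processes_personal_data: bool,
                      is_ai_system: bool,
                      annex_iii_category: Optional[str],
                      solely_automated_significant_effect: bool) -> list[str]:
    """Schematic version of Steps 1-5; inputs simplify the Article 3(1), Article 6, and Annex III analysis."""
    obligations: list[str] = []

    # Step 1: personal data triggers the GDPR baseline
    if processes_personal_data:
        obligations += ["GDPR: lawful basis (Article 6)",
                        "GDPR: DPIA screening (Article 35)"]

    # Step 2: no AI system within Article 3(1) means no AI Act obligations
    if not is_ai_system:
        return obligations

    # Step 3: Annex III classification triggers the high-risk regime;
    # otherwise only the Article 50 transparency check remains
    if annex_iii_category is None:
        obligations.append("AI Act: check limited transparency obligations (Article 50)")
        return obligations

    obligations += ["AI Act: risk management system (Article 9, provider)",
                    "AI Act: documentation, logging, transparency, oversight (Articles 11-14)",
                    "AI Act: deployer obligations (Article 26)",
                    "AI Act: FRIA for deployers in scope of Article 27"]

    # Step 4: both regimes apply, so coordinate the DPIA and the RMS
    if processes_personal_data:
        obligations.append("Coordinate: DPIA and Article 9 RMS on a shared factual base")

    # Step 5: solely automated decisions with significant effects add Article 22 safeguards
    if processes_personal_data and solely_automated_significant_effect:
        obligations.append("GDPR: Article 22 safeguards, including human review")

    return obligations

# Example: an Annex III(5) credit-scoring system processing personal data
print(scope_obligations(True, True, "essential services (Annex III(5))", True))
```

In practice each flag is itself the outcome of a documented assessment (for example, the Article 3(1) qualification in Step 2), so the sketch only illustrates how the obligations combine, not how each threshold is determined.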


Common Mistakes

Running two disconnected compliance programmes. Organisations frequently establish a GDPR programme under the DPO and an AI Act programme under a separate workstream with no formal handoff. This produces conflicting system descriptions, duplicated risk registers, and inconsistent mitigation measures — all of which create audit exposure under both regimes. A single cross-functional governance structure with defined ownership is more defensible.

Treating the DPIA as a substitute for the AI Act RMS. The Article 35 DPIA is conducted before high-risk processing begins and is revisited only when the risk profile changes (Article 35(11)). The Article 9 RMS is a continuous, lifecycle-long risk management process that must be reviewed at every significant change to the AI system. They are structurally different instruments. A DPIA completed at onboarding does not satisfy the iterative obligations of Article 9.

Assuming the vendor's GDPR DPA covers AI Act compliance. Article 28 data processing agreements assign processor obligations with respect to personal data. They do not transfer AI Act provider obligations to the vendor, and they do not satisfy the deployer's own obligations under Article 26 AI Act. Each organisation deploying a high-risk AI system bears Article 26 obligations regardless of the contractual structure with the provider. Supplier governance frameworks must address both instruments separately.

Conflating the AI Act's transparency obligations with GDPR's privacy notice requirements. Article 13 AI Act requires providers to ensure their systems are sufficiently transparent to enable deployers to fulfil their obligations. GDPR Articles 13–14 require controllers to provide information to data subjects about processing. These serve different purposes, are addressed to different audiences, and are assessed by different supervisory authorities.


DACH-Specific Overlay

German-speaking organisations operate under additional layers of regulatory expectation that sit above both instruments.

BfDI guidance on AI and personal data. The Bundesbeauftragte für den Datenschutz und die Informationsfreiheit has published guidance on the intersection of AI applications and data subject rights, with particular attention to the safeguards for automated decision-making under Article 22 GDPR and their interaction with automated profiling. Compliance officers should monitor BfDI publications as the national AI Act supervisory structure is finalised.

Landesdatenschutzbehörden. Where processing is conducted by Länder public authorities or by private organisations subject to Länder supervisory jurisdiction, the relevant Landesbehörde (for example, the Bayerisches Landesamt für Datenschutzaufsicht or the Berliner Beauftragte für Datenschutz und Informationsfreiheit) may issue its own guidance on AI applications. Programme design should account for the applicable Landesbehörde.

BaFin MaRisk AT 9. Regulated financial institutions subject to the Minimum Requirements for Risk Management (MaRisk) face an additional compliance layer under AT 9, which governs outsourcing arrangements and the assessment of third-party risk. AT 9 requirements apply to AI systems procured from external vendors and create obligations that are structurally analogous to — but legally independent of — both GDPR processor obligations and AI Act deployer obligations. A coordinated governance framework must map all three sets of requirements against the same vendor and system.

BSI C5 and technical controls. The Federal Office for Information Security (BSI) Cloud Computing Compliance Criteria Catalogue (C5) establishes baseline security expectations for cloud services that underpin many AI deployments. Where an AI system is delivered via a cloud service attested under C5, the technical controls documentation can be leveraged as input to both the Article 9 RMS and the Article 32 GDPR security assessment — reducing duplication without conflating the instruments.


Next Steps

Organisations approaching this scoping exercise for the first time benefit from an initial classification of in-scope AI systems before committing to programme design. The EU AI Act Quick Scan provides a structured entry point for classifying a system by risk tier and identifying which GDPR and AI Act obligations apply in combination.

For organisations that have completed initial classification and need to design coordinated DPIA and RMS documentation, or address the deployer FRIA under Article 27, EKM Global Consulting GmbH provides structured advisory engagements. Further detail on the advisory scope is available at ekmgc.de/eu-ai-act.html.


EKM Global Consulting GmbH, Baden-Baden. Advisory on EU AI Act compliance, GDPR programme integration, and third-party risk management for regulated organisations in the DACH region. Founded January 2013. Principal: Elshan Musayev.