DPIA vs FRIA: The Complete AI Impact Assessment Guide
If your organisation deploys an AI system in the EU that processes personal data and falls within an Annex III high-risk category, there is a strong chance you are required to conduct two separate impact assessments: a Data Protection Impact Assessment (DPIA) under GDPR and a Fundamental Rights Impact Assessment (FRIA) under the EU AI Act. Completing one does not satisfy the other. Missing either exposes you to enforcement action under different penalty regimes — simultaneously.
This guide explains what each assessment requires, where they overlap, where they diverge, and how to build a practical combined methodology that satisfies both in a single workflow.
TL;DR — DPIA vs FRIA key differences
- Two distinct legal obligations: The DPIA is required by GDPR Article 35. The FRIA is required by AI Act Article 27. They come from different regulations with different enforcement authorities, different scopes, and different penalty ceilings.
- Different focus: A DPIA assesses risks to data protection and privacy rights. A FRIA assesses risks to all fundamental rights under the EU Charter — including non-discrimination, human dignity, freedom of expression, access to justice, and more.
- Different triggers: A DPIA is triggered by high-risk personal data processing (profiling, large-scale special categories, systematic monitoring). A FRIA is triggered when an in-scope deployer — a public body, a private entity providing public services, or any deployer of the Annex III credit-scoring or life and health insurance systems — puts a high-risk Annex III AI system into use.
- Different responsible parties: The DPIA falls on the data controller. The FRIA falls on the AI deployer. These may be different legal entities.
- Different notification rules: A DPIA is an internal document — no registration required (only prior consultation with the DPA in some cases). FRIA results must be notified to the market surveillance authority, and public-body deployers must also register the system's use in the EU database.
- Can be combined: Article 27(4) of the AI Act explicitly allows the FRIA to be conducted together with a DPIA. But the combined document must satisfy the requirements of both.
- Both required before deployment: Neither assessment is a post-deployment exercise. Both must be completed before the system goes live.
Two assessments, two legal bases — why you may need both
The EU's regulatory framework for AI creates a dual-assessment obligation that catches many organisations by surprise. When you deploy an AI system that processes personal data and falls within the AI Act's high-risk classification, two distinct regulations impose two distinct impact assessment requirements.
The DPIA obligation under GDPR
The Data Protection Impact Assessment has been a requirement since May 2018. Under Article 35 of the GDPR, data controllers must conduct a DPIA before carrying out processing that is "likely to result in a high risk to the rights and freedoms of natural persons." For AI systems, this is nearly always triggered — automated decision-making, profiling, and large-scale processing of personal data are all listed as high-risk indicators.
The DPIA focuses on data protection: what personal data is collected, why, how it is processed, what risks arise for data subjects, and what safeguards mitigate those risks.
The FRIA obligation under the AI Act
The Fundamental Rights Impact Assessment is a newer requirement, introduced by the EU AI Act (Regulation (EU) 2024/1689). Under Article 27, specific categories of deployers — public bodies, private entities providing public services, and any deployer of the credit-scoring or life and health insurance systems in Annex III, point 5 — must conduct a FRIA before putting a high-risk AI system into service.
The FRIA goes further than data protection. It requires assessment of risks to the full spectrum of fundamental rights recognised under the EU Charter: human dignity, non-discrimination, freedom of expression, children's rights, disability rights, access to effective remedy, consumer protection, and social security rights, among others.
Completing one does NOT satisfy the other
This is the critical point. A DPIA does not cover the breadth of fundamental rights analysis required by a FRIA. A FRIA does not include the specific data protection risk analysis methodology required by a DPIA. Even if you conduct both assessments in a single document (as permitted by Article 27(4)), you must ensure that every requirement of both frameworks is addressed.
Organisations that assume their existing DPIA practice is sufficient for AI Act compliance are exposed to enforcement risk under the AI Act's penalty regime — up to EUR 15 million or 3% of global annual turnover for non-compliance with deployer obligations, in addition to any GDPR penalties.
For more on how GDPR and the AI Act interact, see our detailed comparison: EU AI Act vs GDPR: Differences and Overlap Guide.
What is a DPIA?
A Data Protection Impact Assessment is a structured process for identifying, assessing, and mitigating risks that personal data processing poses to individuals. It is one of the core accountability mechanisms under the GDPR.
Legal basis: GDPR Article 35
Article 35(1) requires the data controller to carry out a DPIA "prior to the processing" whenever a type of processing — "in particular using new technologies" — is "likely to result in a high risk to the rights and freedoms of natural persons."
This obligation is supplemented by Article 35(3) (specific mandatory triggers), Article 35(4) (national supervisory authority lists of high-risk processing), and Article 36 (prior consultation with the DPA when residual risk remains high). The Article 29 Working Party Guidelines on DPIAs (WP 248 rev.01, endorsed by the EDPB) provide additional implementation guidance.
When a DPIA is required
A DPIA is mandatory when processing is likely to result in a high risk to individuals. Article 35(3) lists three situations where a DPIA is always required:
- Systematic and extensive profiling with legal or significant effects — This covers most AI-powered decision-making that affects individuals: credit scoring, hiring decisions, insurance underwriting, welfare eligibility, academic assessments.
- Large-scale processing of special categories of data — Health data, biometric data, genetic data, racial or ethnic origin, political opinions, trade union membership, religious beliefs, sexual orientation. AI systems in healthcare, HR (diversity data), or law enforcement frequently process these categories.
- Systematic monitoring of publicly accessible areas — CCTV with facial recognition, crowd analytics, behavioural monitoring in public spaces.
Beyond these three, the Article 29 Working Party identified nine criteria indicating high risk (evaluation/scoring, automated decision-making with significant effects, systematic monitoring, sensitive data, large-scale processing, dataset matching, vulnerable data subjects, innovative technology use, and processing that restricts rights). If a processing activity meets two or more, a DPIA should be conducted.
Most AI systems processing personal data meet at least two criteria. In practice, if you are deploying an AI system that makes or informs decisions about individuals, assume a DPIA is required.
Who must conduct it
The obligation falls on the data controller — the entity that determines the purposes and means of processing (GDPR Article 4(7)). In the AI context, this is typically the organisation deploying the AI system, but not always. If an employer uses a third-party recruitment AI but determines which candidates to assess and what criteria apply, the employer is the controller. Joint controllership can also arise when multiple entities share those determinations.
The controller must involve the DPO (if designated) when carrying out the DPIA (Article 35(2)) and seek data subjects' views where appropriate (Article 35(9)).
What a DPIA must cover
Article 35(7) sets out the minimum content requirements:
- Systematic description of the processing operations — What data is collected, from whom, how it is processed, by what technology, for what purpose, and on what legal basis.
- Assessment of necessity and proportionality — Is the processing necessary for the stated purpose? Could the same objective be achieved with less data, less intrusive means, or no AI? Is the legal basis valid and sufficient?
- Assessment of risks to the rights and freedoms of data subjects — What could go wrong? What is the likelihood and severity of harm? Types of harm include: discrimination, identity theft, financial loss, reputational damage, loss of confidentiality, re-identification of pseudonymised data, and inability to exercise rights.
- Measures to address the risks — Technical and organisational safeguards: encryption, pseudonymisation, access controls, data minimisation, purpose limitation, retention policies, human oversight, transparency measures, and data subject rights mechanisms.
DPIA for AI systems — specific considerations
AI processing introduces additional DPIA considerations:
- Training data provenance and bias — does the data represent the target population?
- Model opacity — can the logic be explained to data subjects under Articles 13-14 and Article 22?
- Accuracy and differential error rates across demographic groups.
- Data retention for training vs. inference — which may require separate legal bases.
- Automated decision-making safeguards under Article 22.
- Processor and transfer risks when training or inference uses third-party cloud infrastructure.
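The differential-error-rates point is the one most often left unquantified. Below is a minimal sketch of one way to surface it, assuming you hold labelled outcome data in a flat record format; the field names 'group', 'predicted', and 'actual' are hypothetical, not a prescribed schema.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute per-group false positive and false negative rates.

    records: iterable of dicts with hypothetical keys 'group' (str),
    'predicted' (bool) and 'actual' (bool).
    """
    stats = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for r in records:
        s = stats[r["group"]]
        if r["actual"]:
            s["tp" if r["predicted"] else "fn"] += 1
        else:
            s["fp" if r["predicted"] else "tn"] += 1
    rates = {}
    for group, s in stats.items():
        negatives = s["fp"] + s["tn"]   # actual negatives
        positives = s["tp"] + s["fn"]   # actual positives
        rates[group] = {
            "false_positive_rate": s["fp"] / negatives if negatives else None,
            "false_negative_rate": s["fn"] / positives if positives else None,
            "sample_size": negatives + positives,
        }
    return rates
```

Material gaps between groups on either rate are exactly the kind of concrete, evidence-based finding supervisory authorities expect a DPIA to record, and they feed directly into the FRIA's non-discrimination analysis.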
For a broader view on AI bias and fairness testing, see: AI Bias Testing and Fairness Under the EU AI Act.
What is a FRIA?
The Fundamental Rights Impact Assessment is a mandatory pre-deployment assessment under the EU AI Act. It requires deployers in specific categories to evaluate how their use of a high-risk AI system may affect fundamental rights — going well beyond data protection into dignity, non-discrimination, access to justice, and more.
For a deep dive into the FRIA specifically, see our dedicated guide: AI Act FRIA: Fundamental Rights Impact Assessment Guide.
Legal basis: AI Act Article 27
Article 27 establishes the FRIA obligation. Key provisions: Art. 27(1) requires the assessment prior to first use and sets out its minimum content; Art. 27(2) allows deployers to rely on a previously conducted FRIA in similar cases, updated where relevant elements change; Art. 27(3) requires the deployer to notify the market surveillance authority of the results; and Art. 27(4) provides that where any of these obligations is already met through a DPIA, the FRIA shall complement that DPIA.
When a FRIA is required
A FRIA is required when all of the following conditions are met:
- The system is classified as high-risk under Article 6(2), i.e., listed in Annex III (systems in the critical-infrastructure area of Annex III, point 2, are exempt from the FRIA).
- The deployer falls into one of the categories specified in Article 27(1): a body governed by public law, a private entity providing public services, or a deployer of Annex III point 5(b) or 5(c) systems (credit scoring and life and health insurance risk assessment and pricing).
- The system is being put into use (first deployment or significant change in deployment context).
The FRIA must be completed before the system is put into service. It is a precondition for lawful deployment, not a retroactive exercise.
Unsure if your system is high-risk? Use our classification guide: Is My AI System High-Risk? Classification Guide.
Who must conduct it
The FRIA falls on the deployer — the entity using the AI system under its authority (AI Act Article 3(4)). Article 27(1) specifies three categories:
- Bodies governed by public law — government agencies, public hospitals, universities, welfare administrations, courts, immigration offices, and any entity performing functions under public law.
- Private entities providing public services — for example, private operators of education, healthcare, social services, or other services of public interest.
- Deployers of specific credit and insurance systems — any entity, public or private, deploying high-risk AI for creditworthiness assessment and credit scoring, or for risk assessment and pricing in life and health insurance (Annex III, points 5(b) and 5(c)).
The deployer may differ from the data controller under GDPR. A public hospital deploying a third-party emergency triage AI (Annex III, point 5(d)) is both deployer and controller — it conducts both the FRIA and the DPIA. The AI provider has separate obligations (conformity assessment, technical documentation) but does not conduct the FRIA.
Specific domains requiring a FRIA
Article 27(1) captures high-risk AI deployments across Annex III areas such as education, employment, access to essential public and private services (including credit scoring and life and health insurance), law enforcement, and migration, wherever the deployer is a public body, provides public services, or operates the credit or insurance systems in points 5(b) and 5(c).
For sector-specific guidance, see:
- AI Act HR & Recruitment Compliance Guide
- AI Act Financial Services Compliance Guide
- AI Act Education & EdTech Compliance Guide
What a FRIA must cover
Article 27(1)(a) to (f) specifies the minimum content. The deployer must document:
- The deployer's processes in which the system will be used and the decisions it informs or automates.
- The period of time and frequency of intended use (continuous, periodic, or event-triggered).
- The categories of natural persons and groups likely to be affected, including vulnerable populations and the scale of impact.
- The specific risks of harm to those groups, mapped to specific EU Charter rights, assessed for likelihood and severity.
- Human oversight measures — how reviewers are integrated, their override authority, and escalation procedures.
- Remediation measures in case identified risks materialise — complaint mechanisms, redress procedures, monitoring protocols.
The FRIA must assess risks against the EU Charter of Fundamental Rights, including human dignity (Art. 1), private life (Art. 7), data protection (Art. 8), equality (Art. 20), non-discrimination (Art. 21), children's rights (Art. 24), disability rights (Art. 26), social security (Art. 34), healthcare (Art. 35), access to services (Art. 36), and effective remedy (Art. 47).
Notification requirements
Unlike the DPIA, which is an internal accountability document, the FRIA has mandatory notification obligations:
- Market surveillance authority: The deployer must notify the relevant national market surveillance authority of the FRIA results, using the template to be developed by the AI Office (Article 27(3) and 27(5)).
- EU database: Deployers that are public authorities or bodies must also register their use of the high-risk system in the EU database established under Article 71 (Article 49(4)). The FRIA results themselves go to the market surveillance authority, not the database.
Failure to conduct a FRIA or to notify the relevant authorities constitutes a breach of deployer obligations under the AI Act, subject to administrative fines of up to EUR 15 million or 3% of total worldwide annual turnover, whichever is higher (Article 99(4)).
Side-by-side comparison: DPIA vs FRIA
| Dimension | DPIA | FRIA |
|---|---|---|
| Legal basis | GDPR Article 35 | AI Act Article 27 |
| Focus | Data protection and privacy risks | Full spectrum of EU Charter fundamental rights |
| Trigger | Processing likely to result in high risk to individuals | Annex III high-risk AI deployed by an in-scope deployer |
| Responsible party | Data controller | AI deployer |
| Timing | Before processing begins | Before the system is put into use |
| Notification | Internal; DPA consultation only if residual risk stays high (Art. 36) | Results notified to the market surveillance authority (Art. 27(3)); EU database registration for public-body deployers (Art. 49(4)) |
| Maximum fine | EUR 10 million or 2% of global turnover (Art. 83(4) GDPR) | EUR 15 million or 3% of global turnover (Art. 99(4) AI Act) |
Where DPIA and FRIA overlap
Despite different legal bases and scopes, the DPIA and FRIA share significant common ground. Recognising these overlaps is the key to building an efficient combined methodology.
Both assess risks to individuals before deployment
At their core, both ask: what harm could this system cause to the people it affects? The DPIA frames this as data protection risk; the FRIA frames it as fundamental rights risk. But the analytical exercise — identify affected individuals, map potential harms, assess likelihood and severity, define mitigations — is structurally identical. Both must be completed before the system goes live, creating a natural opportunity for a single pre-deployment process.
Shared factual basis
Both assessments require the same foundation: the system description (what it does, inputs, outputs), data flows (collection, storage, access), deployment context (processes, decisions, operators), affected population (including vulnerable groups), and human oversight arrangements. Documenting these facts once and referencing them in both assessment streams eliminates redundancy.
Explicit legislative permission to combine
Article 27(4) endorses a combined approach: where any FRIA obligation is already met through an existing DPIA, the FRIA shall complement that DPIA rather than duplicate it, provided the combined document covers all requirements of both frameworks. Article 27(2) adds that a deployer may rely on a previously conducted FRIA in similar cases and must update it when relevant elements change.
Critical differences you cannot ignore
While the overlaps create efficiency opportunities, the differences between the DPIA and FRIA are substantial and cannot be papered over with a single generic assessment. Understanding these differences is essential for compliance.
Scope: privacy rights vs. all fundamental rights
This is the most important distinction. A DPIA is laser-focused on data protection — the right to privacy under Article 7 of the EU Charter and the right to the protection of personal data under Article 8. A FRIA must assess the system's impact on the entire catalogue of fundamental rights recognised in the Charter.
Consider an AI system used for employment screening. A DPIA would assess: what personal data is collected from candidates, is the legal basis valid, are candidates informed, can they exercise their rights, is the data accurate, is it retained appropriately?
The FRIA must go further: does the system discriminate on grounds of gender, ethnicity, age, or disability (Article 21 CFR)? Does it respect human dignity (Article 1 CFR)? Does it impair freedom of expression — for example, by penalising candidates for social media activity (Article 11 CFR)? Does it respect the rights of persons with disabilities (Article 26 CFR) — for example, could candidates with certain speech patterns or physical characteristics be systematically disadvantaged by video interview analysis?
A DPIA that does not address non-discrimination, dignity, or other Charter rights will not satisfy the FRIA requirement, regardless of how thorough its data protection analysis may be.
Responsible party: data controller vs. AI deployer
In many cases, the data controller and the AI deployer are the same entity — a bank that deploys a credit scoring AI and controls the personal data is both. But this is not always the case:
- A staffing agency (deployer) may use a recruitment AI tool while the hiring company (controller) determines the purposes of data processing.
- A managed service provider (deployer) may operate an AI system on behalf of a government agency (controller).
- A joint venture may create situations where the deployer and controller are different legal entities within the same group.
When the data controller and AI deployer are different entities, both must conduct their respective assessments independently. The controller conducts the DPIA; the deployer conducts the FRIA. Coordination between them is essential but does not transfer the obligation.
Notification: internal document vs. public registration
A DPIA is a private accountability document — no publication or registration required. The only external obligation is prior consultation with the DPA under Article 36 when residual risk remains high.
A FRIA has mandatory external reporting: the deployer must notify the market surveillance authority of the results, and public-body deployers must additionally register the system's use in the EU database. The FRIA is therefore subject to scrutiny by regulators and, through the database, by affected individuals, civil society, and the press. Documentation standards must reflect this visibility.
Penalty regimes: different authorities, different ceilings
A failure to conduct a DPIA is enforceable by the Data Protection Authority under GDPR Article 83(4), with fines up to EUR 10 million or 2% of global annual turnover.
A failure to conduct a FRIA is enforceable by the market surveillance authority under AI Act Article 99(4), with fines up to EUR 15 million or 3% of global annual turnover.
These penalties are cumulative, not alternative. An organisation that deploys a high-risk AI system without conducting either assessment could face enforcement from two different regulators imposing separate fines. The total potential exposure is EUR 25 million or 5% of global turnover — before considering any additional penalties for the underlying violations the assessments would have identified.
For a comprehensive overview of AI Act penalties, see: EU AI Act Penalties and Fines Explained.
Practical combined methodology
The following methodology allows organisations to satisfy both DPIA and FRIA requirements through a single coordinated process, while maintaining the distinct analysis streams each regulation demands.
Step 1 — Determine which assessments apply
Before conducting any assessment, determine your obligations:
Flowchart: Do you need a DPIA, a FRIA, or both?
Does your AI system process personal data?
├── YES → Does it meet DPIA triggers (Art. 35(3) criteria)?
│ ├── YES → DPIA is REQUIRED
│ └── NO → Consider Art. 29 WP nine criteria
│ ├── Meets 2+ criteria → DPIA is REQUIRED
│ └── Meets 0-1 criteria → DPIA is NOT required (but recommended)
│
└── NO → DPIA is NOT required
Is your AI system classified as high-risk (Annex III)?
├── YES → Are you a body governed by public law, a private entity
│         providing public services, or a deployer of credit-scoring
│         or life/health insurance AI (Annex III, points 5(b)-(c))?
│ ├── YES → FRIA is REQUIRED
│ └── NO → FRIA is NOT required
│
└── NO → FRIA is NOT required
If both a DPIA and FRIA are required, proceed with the combined methodology. If only one applies, conduct that assessment alone using the standard requirements.
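For teams that track obligations in an internal register, the flowchart reduces to two boolean tests. A minimal sketch follows; the function and parameter names are ours, and the inputs deliberately simplify legal tests that need case-by-case analysis.

```python
def dpia_required(processes_personal_data: bool,
                  meets_art_35_3_trigger: bool,
                  wp248_criteria_met: int) -> bool:
    """GDPR side: Art. 35(3) triggers, else the WP 248 two-or-more rule."""
    if not processes_personal_data:
        return False
    return meets_art_35_3_trigger or wp248_criteria_met >= 2

def fria_required(annex_iii_high_risk: bool,
                  public_body: bool,
                  provides_public_services: bool,
                  annex_iii_5b_or_5c_system: bool) -> bool:
    """AI Act side: Annex III classification plus an in-scope deployer."""
    if not annex_iii_high_risk:
        return False
    return public_body or provides_public_services or annex_iii_5b_or_5c_system

# Example: a bank's credit-scoring system processing applicant data.
assert dpia_required(True, True, 4) and fria_required(True, False, False, True)
```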
Step 2 — Conduct joint risk identification
Build a single factual foundation that feeds both assessments:
- System inventory: Document the AI system — provider, technology type, intended purpose, risk classification.
- Data mapping: Map all data flows — inputs, outputs, storage, retention, access controls, international transfers.
- Process mapping: Document the business process — triggers, outputs, human decisions relying on the system.
- Stakeholder mapping: Identify affected individuals and groups from both perspectives — data subjects (DPIA) and rights-holders (FRIA). Flag vulnerable populations.
- Risk brainstorming: Conduct a joint workshop covering data protection harms (unauthorised access, inaccuracy, loss of control) and fundamental rights harms (discrimination, dignity violations, denial of services, barriers for disabled persons).
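One way to keep this shared foundation in a form both streams can reference is a small structured record. The sketch below uses illustrative field names and is an assumption on our part, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class AffectedGroup:
    name: str                 # e.g. "job applicants"
    vulnerable: bool = False  # flag children, disabled persons, welfare recipients

@dataclass
class JointFactualBasis:
    """Documented once in Step 2; referenced by both the DPIA and FRIA streams."""
    system: dict              # provider, technology type, intended purpose, risk class
    data_flows: list          # inputs, outputs, storage, retention, transfers
    processes: list           # business triggers and the human decisions the system informs
    stakeholders: list        # AffectedGroup entries, from both perspectives
    raw_risks: list = field(default_factory=list)  # output of the joint risk workshop
```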
Step 3 — Separate the analysis streams
With the joint factual basis established, split the analysis into two parallel streams:
Stream A: DPIA analysis
- Evaluate legal basis under GDPR Article 6 (and Article 9 for special categories).
- Assess necessity and proportionality.
- For each risk, assess likelihood and severity (financial loss, discrimination, identity theft, loss of confidentiality, inability to exercise rights).
- Define mitigations: encryption, pseudonymisation, minimisation, retention limits, access controls, transparency notices, data subject rights procedures.
- Determine residual risk. If high, initiate prior consultation with the DPA (Article 36).
Stream B: FRIA analysis
- Map each risk to specific EU Charter rights for each affected group.
- Assess deployment context: period of use, frequency, scale, geographic scope.
- Evaluate discriminatory outcomes (Article 21 CFR) across protected characteristics.
- Assess impact on dignity (Article 1), access to justice (Article 47), social security (Article 34), and other relevant Charter provisions.
- Document human oversight measures and remediation plans (complaint mechanisms, redress, monitoring).
- Prepare the notification of results to the market surveillance authority (and the EU database registration, where the deployer is a public body).
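Because both streams draw on the same risk inventory, a single register entry can carry the fields each stream needs. A sketch, again with illustrative names; the 1-5 scoring scale is our assumption, not a legal requirement.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    description: str
    affected_group: str
    charter_articles: tuple       # FRIA stream: e.g. (21, 26) for non-discrimination, disability
    in_dpia_stream: bool          # does the risk also involve personal data?
    likelihood: int               # 1 (rare) .. 5 (almost certain)
    severity: int                 # 1 (minimal) .. 5 (severe)
    mitigations: list = field(default_factory=list)

    def inherent_score(self) -> int:
        return self.likelihood * self.severity

video_interview_bias = RiskEntry(
    description="Video-interview analysis disadvantages candidates with speech impairments",
    affected_group="candidates with disabilities",
    charter_articles=(21, 26),
    in_dpia_stream=True,          # special-category data may be inferred
    likelihood=3,
    severity=4,
    mitigations=["human review of all rejections", "bias audit before each release"],
)
```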
Step 4 — Document and notify appropriately
Produce a single combined assessment document with clearly delineated sections:
- Part 1: Shared factual basis — system description, data flows, process map, affected groups.
- Part 2: DPIA stream — the GDPR Article 35(7) analysis (necessity, proportionality, data protection risks, safeguards).
- Part 3: FRIA stream — the AI Act Article 27(1) analysis (deployment context, Charter rights risks, oversight, remediation).
- Part 4: Combined mitigation plan, residual risk determination, and sign-off.
Notification actions:
- DPIA: File internally. Consult DPA if residual risk is high (Article 36 GDPR).
- FRIA: Notify the market surveillance authority of the results (Article 27(3) AI Act); register the system's use in the EU database where the deployer is a public body (Article 49(4)).
For a complete compliance planning framework, see: EU AI Act Compliance Checklist 2026.
When you need only a DPIA, only a FRIA, or both
The following table illustrates common scenarios:
| Scenario | DPIA | FRIA |
|---|---|---|
| Private employer screening candidates with AI | Required | Not required, unless the employer provides public services |
| Bank deploying credit-scoring AI | Required | Required (Annex III, point 5(b)) |
| Insurer pricing life or health policies with AI | Required | Required (Annex III, point 5(c)) |
| Public authority assessing welfare eligibility with AI | Required | Required |
| Public body deploying Annex III AI that processes no personal data | Not required | Required |
For in-scope deployers of high-risk AI in Annex III domains, the most common scenario is that both assessments are required. The FRIA-only scenario arises primarily where AI systems do not process personal data at all — which is rare for systems that affect individuals.
Common mistakes to avoid
Based on early enforcement signals and supervisory authority guidance:
1. Assuming the DPIA covers fundamental rights — A DPIA addresses data protection only. Discrimination, dignity, and access-to-services harms must be assessed separately in a FRIA.
2. Conducting the FRIA after deployment — Article 27(1) requires the FRIA before the system is put into use. Retroactive assessments do not satisfy the obligation.
3. Treating assessments as one-time exercises — Both are living documents. System updates, new user groups, and context changes must trigger reassessment.
4. Failing to notify the FRIA — Unlike the DPIA, FRIA results must be submitted to the market surveillance authority, and public-body deployers must also register the system's use in the EU database.
5. Using generic templates — Both assessments must be specific to your system, context, and affected population. Supervisory authorities expect concrete, evidence-based analysis.
6. Ignoring vulnerable groups — Both the GDPR (Article 29 WP criteria) and the AI Act (Recital 96, Article 27(1)(c)) require assessing differential impacts on children, disabled persons, employees, and welfare recipients.
7. Siloed controller/deployer assessments — When the controller and deployer are different entities, their assessments must be coordinated. The deployer needs data processing details; the controller needs deployment context.
8. Overlooking cumulative penalties — Failure on both assessments means enforcement from two regulators under two penalty regimes simultaneously.
Frequently asked questions
Can I conduct the DPIA and FRIA as a single document?
Yes. Article 27(4) explicitly provides that where FRIA obligations are already met through the DPIA, the FRIA "shall complement" that DPIA, so the two can be produced as a single document. However, the combined document must satisfy all requirements of both frameworks — GDPR-specific analysis (legal basis, necessity, proportionality, data protection risks) and AI Act-specific analysis (all fundamental rights, deployment context, notification of results). A combined document saves effort but does not reduce substantive obligations.
Who is responsible if we use a third-party AI tool?
The DPIA falls on the data controller; the FRIA falls on the deployer. In most cases, the organisation deploying a third-party tool is both and bears both obligations. The AI provider has separate obligations (technical documentation, conformity assessment) but is not responsible for your DPIA or FRIA. You should request the technical information you need from the provider: Article 13 obliges providers to supply instructions for use, and Article 26(1) obliges you to operate the system in accordance with them.
What happens if the DPIA reveals high residual risk?
The controller must consult the DPA before commencing processing (GDPR Article 36). The DPA has up to eight weeks (extendable by six) to provide written advice, which may include restricting the processing. This prior consultation has no FRIA equivalent — though a FRIA indicating severe fundamental rights risks should prompt the deployer to reconsider deployment.
Does the AI Act require a FRIA for all high-risk AI systems?
No. Article 27 applies only to specific deployer categories: public bodies, private entities providing public services, and deployers of Annex III credit-scoring or life and health insurance systems. A private manufacturer deploying high-risk AI for internal quality control will generally not need a FRIA — though a DPIA may still be required if personal data is processed. See: Is My AI System High-Risk? Classification Guide.
How often must assessments be updated?
Neither is a one-time event. GDPR Article 35(11) requires DPIA review when processing risks change. The FRIA must be updated when deployment circumstances change — new system versions, new affected groups, or new evidence of fundamental rights impact. Best practice: integrate reassessment triggers into your AI governance framework (system updates, incidents, policy changes, annual review).
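Reassessment triggers are easiest to enforce when they are enumerated somewhere your governance tooling can check. A minimal sketch, with trigger names that are purely illustrative:

```python
REASSESSMENT_TRIGGERS = {
    "system_update":  "new model version or significant functional change",
    "new_user_group": "system applied to a population not previously assessed",
    "incident":       "complaint, detected bias, or breach involving the system",
    "context_change": "change in purpose, legal basis, or deployment setting",
    "annual_review":  "scheduled periodic review",
}

def needs_reassessment(observed_events: set) -> bool:
    """True if any observed event matches a registered trigger."""
    return bool(observed_events & set(REASSESSMENT_TRIGGERS))
```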
What is the deadline for the first FRIA?
The FRIA obligation takes effect on 2 August 2026, alongside the other Annex III high-risk obligations. From that date, the FRIA must be completed before any new high-risk deployment goes live; systems already in service may benefit from the transitional arrangements in Article 111 unless they are substantially modified. Given the complexity involved, starting well in advance is strongly advisable.
For a full timeline, see: EU AI Act Timeline: Key Dates and Deadlines.
Next steps: prepare your AI impact assessments
The dual DPIA/FRIA obligation is one of the most operationally demanding requirements of the EU's AI regulatory framework. Organisations that address it early — with a combined methodology, clear role assignments, and integrated governance — will find compliance significantly more manageable than those who treat each assessment as a separate last-minute exercise.
Key actions:
- Inventory your AI systems — Map the deployer and controller for each system. See: How to Build an AI Systems Inventory.
- Determine obligations — Use the flowchart and scenario table above to identify which assessments apply.
- Adopt a combined methodology — Joint risk identification, parallel analysis streams, unified documentation, appropriate notification.
- Engage providers — Request the instructions for use and technical information you need to assess the system (Article 13 AI Act).
- Set governance triggers — System updates, context changes, incidents, and annual review cycles should all initiate reassessment.
Ready to assess your AI systems? Start your free AI Act risk assessment to determine which of your systems require a DPIA, a FRIA, or both — and get a prioritised action plan for compliance.