
AI Act FRIA: Fundamental Rights Impact Assessment

A step-by-step guide to the AI Act FRIA under Article 27: who must conduct one, the mandatory fields, how it differs from a DPIA, and a practical template.

Legalithm Team · 20 min read · Topic: AI Act · Updated January 2026

EU AI Act FRIA: How to Conduct a Fundamental Rights Impact Assessment

TL;DR — Key facts about the FRIA requirement

  • Who must conduct one: Public bodies, private entities providing essential services (banking, insurance, energy, healthcare, transport), and education/vocational training institutions that deploy high-risk AI systems.
  • When: Before the high-risk AI system is put into service — it is a precondition for lawful deployment, not a post-deployment exercise.
  • What it covers: All fundamental rights under the EU Charter — not just data protection. This includes dignity, non-discrimination, freedom of expression, children's rights, disability rights, access to justice, and more.
  • How it differs from a DPIA: A DPIA under GDPR covers personal data risks only. A FRIA covers broader fundamental rights and applies even when no personal data is processed.
  • Can it be combined with a DPIA? Yes — Article 27(4) explicitly allows combining both into a single assessment.
  • Registration: FRIA results must be submitted to the market surveillance authority and registered in the EU database.
  • Living obligation: The FRIA must be updated when circumstances change — new system versions, new deployment contexts, or new evidence about fundamental rights impact.

What is a FRIA and why does it matter?

The Fundamental Rights Impact Assessment (FRIA) is one of the most significant — and least understood — obligations for deployers of high-risk AI systems under the EU AI Act. It requires deployers in specific categories to systematically assess how their use of a high-risk AI system may affect the fundamental rights of the people it impacts.

Unlike the Data Protection Impact Assessment (DPIA) under GDPR, the FRIA covers the full breadth of the EU Charter of Fundamental Rights: human dignity, non-discrimination, freedom of expression, access to effective remedy, children's rights, rights of persons with disabilities, and more. The FRIA exists because the EU legislator recognised that AI systems can harm people in ways that go far beyond data protection — through biased decisions, opaque processes, and systemic discrimination.

If you are a public body, essential service provider, or educational institution deploying high-risk AI, this guide shows you how to meet the requirement before the 2 August 2026 deadline.

Who must conduct a FRIA?

Article 27 requires a FRIA from deployers who fall into three categories:

Category 1: Bodies governed by public law

Government agencies, public hospitals, public universities, social welfare administrations, public employment services, municipal authorities, tax authorities, courts, immigration offices, and any entity performing functions under public law.

Example: A municipal government deploying an AI system to prioritise social housing applications. The system ranks applicants by urgency and need, directly affecting access to housing — a fundamental right under Article 34 of the EU Charter (social security and social assistance).

Category 2: Private entities providing essential services

This includes banking and credit institutions, insurance undertakings, energy suppliers, water services, transport operators, telecommunications providers, digital health service providers, and others operating in sectors designated as essential services under EU or national law.

Example: A retail bank deploying a third-party AI system for credit risk assessment. The system evaluates loan applications and produces risk scores that directly influence whether individuals can access credit — affecting economic participation, non-discrimination (Article 21 CFR), and potentially the right to property (Article 17 CFR).

Category 3: Education and vocational training institutions

Schools, universities, and training providers that deploy high-risk AI systems to evaluate students, direct learning processes, make admissions decisions, or allocate educational resources.

Example: A university using an AI system to screen Master's programme applications and rank candidates. The system affects access to education (Article 14 CFR), non-discrimination (Article 21 CFR), and potentially freedom to choose an occupation (Article 15 CFR).

If none of these categories apply to you, the FRIA is not legally mandatory — though conducting a voluntary fundamental rights assessment is considered good practice and can strengthen your overall compliance posture under the deployer obligations.

When must the FRIA be completed?

The FRIA must be completed before the high-risk AI system is put into service. This is a precondition, not a post-deployment exercise. Deploying first and assessing later is a direct violation of Article 27.

The assessment must be updated whenever:

  • The system's use or deployment context changes substantially
  • New information becomes available about the system's impact on fundamental rights
  • The system is significantly modified (new version, new training data, changed parameters)
  • Post-market monitoring reveals unexpected impacts
  • The affected population changes (new user groups, new geographic scope)

All mandatory fields of the FRIA

Article 27(1) specifies the minimum content that every FRIA must include — fields 1–6 below track its points (a) to (f), while fields 7–15 cover the supplementary elements a robust, operational FRIA should also document. The following covers each field with practical guidance; a structured sketch of a complete FRIA record follows the list.

1. Description of the deployer's processes in which the high-risk AI system will be used, in line with its intended purpose. Describe the specific operational workflow — not just what the AI system does in the abstract, but how it fits into your decision-making process.

2. Frequency and period for which the system is intended to be used, along with the specific geographic, temporal, and demographic scope. State whether use is continuous, periodic, or triggered by events. Specify which locations, time periods, and population groups are covered.

3. Categories of natural persons and groups likely to be affected by use of the system in the specific context. Identify both direct subjects (people the AI evaluates) and indirect subjects (people affected by decisions made using AI outputs).

4. Specific risks of harm likely to affect the identified categories, taking into account the information provided by the provider under Article 13. Use the provider's instructions for use as a baseline, but supplement with your own analysis of risks specific to your deployment context.

5. Description of human oversight measures implemented according to the provider's instructions for use. Specify who performs oversight, their qualifications, the tools available to them, and the circumstances under which they can override the system.

6. Measures to be taken in case the risks materialise, including arrangements for internal governance and complaint mechanisms. This includes escalation paths, remediation procedures, and how affected individuals can challenge AI-influenced decisions.

7. Fundamental rights potentially affected, referencing specific rights from the EU Charter of Fundamental Rights (see the detailed list below).

8. Likelihood and severity assessment for each identified risk — how probable it is that the risk materialises and how severe the impact would be on affected persons or groups.

9. Input from stakeholders — where relevant, the deployer should consider input from representatives of the persons or groups likely to be affected.

10. Monitoring procedures — how the deployer will monitor the system's impact on fundamental rights during operation, including metrics, frequency, and responsible parties.

11. Escalation paths — clear procedures for when monitoring reveals unexpected impacts, including who is notified, what actions are taken, and what thresholds trigger escalation.

12. Record-keeping protocols — how the FRIA documentation, monitoring data, and update history will be maintained and made available to authorities.

13. Review schedules — the frequency of periodic FRIA reviews, including the criteria that trigger an out-of-cycle review.

14. Accountability assignments — named roles or functions responsible for the FRIA, ongoing monitoring, and updates.

15. Integration with existing governance frameworks — how the FRIA connects to the deployer's existing risk management, compliance, and governance structures (including GDPR governance where applicable).
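
Deployers running several high-risk systems often find it easier to keep each FRIA as a structured, version-controlled record rather than a free-form document. The sketch below shows one hypothetical way to capture the fields above in Python; the schema, field names, and four-level rating scale are illustrative assumptions — the AI Act prescribes the substance of the assessment, not any particular format.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Hypothetical record structure for a single FRIA. Field names, the
# rating scale, and the schema itself are illustrative assumptions;
# Article 27 mandates the content, not a format.

@dataclass
class IdentifiedRisk:
    affected_group: str                 # e.g. "applicants over 65" (field 3)
    charter_right: str                  # e.g. "Article 21 CFR (non-discrimination)" (field 7)
    mechanism: str                      # causal chain from system output to harm (field 4)
    likelihood: str                     # "low" / "medium" / "high" / "critical" (field 8)
    severity: str                       # same scale (field 8)
    mitigations: List[str] = field(default_factory=list)   # field 6

@dataclass
class FriaRecord:
    system_name: str
    deployer: str
    process_description: str            # field 1: how the system fits your workflow
    frequency_and_scope: str            # field 2: temporal, geographic, demographic scope
    affected_groups: List[str]          # field 3
    risks: List[IdentifiedRisk]         # fields 4, 7, 8
    human_oversight: str                # field 5
    governance_and_complaints: str      # field 6
    stakeholder_input: List[str]        # field 9
    monitoring_plan: str                # field 10
    escalation_paths: str               # field 11
    record_keeping: str                 # field 12
    review_schedule: str                # field 13
    accountable_roles: List[str]        # field 14
    governance_integration: str         # field 15
    version: int = 1
    completed_on: date = field(default_factory=date.today)
```

Keeping the record in a structured form like this makes it straightforward to diff versions when the deployment context changes and to extract a summary for the market surveillance authority notification.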

Identifying affected fundamental rights from the EU Charter

The FRIA requires you to identify which specific Charter rights may be affected. This is the area where most deployers under-scope their assessment. The following rights are most commonly engaged by high-risk AI systems:

Charter Article | Right | Commonly affected when AI is used in...
Article 1 | Human dignity | Welfare decisions, criminal justice, migration
Article 7 | Private and family life | Surveillance, employee monitoring, social services
Article 8 | Protection of personal data | Any system processing personal data
Article 11 | Freedom of expression and information | Content moderation, media recommendations
Article 14 | Right to education | Student assessment, admissions, learning allocation
Article 15 | Freedom to choose an occupation | Recruitment, job matching, employment decisions
Article 16 | Freedom to conduct a business | Regulatory AI, licensing decisions
Article 17 | Right to property | Credit decisions, insurance, benefits allocation
Article 20 | Equality before the law | Any system making or supporting legal decisions
Article 21 | Non-discrimination | Virtually all high-risk AI deployments
Article 24 | Rights of the child | Systems that affect minors directly or indirectly
Article 25 | Rights of the elderly | Healthcare, social services, benefits
Article 26 | Rights of persons with disabilities | Accessibility, assessment tools, service allocation
Article 34 | Social security and social assistance | Welfare allocation, benefits eligibility
Article 37 | Environmental protection | Critical infrastructure, energy management
Article 41 | Right to good administration | Public sector decision-making
Article 47 | Right to an effective remedy | Any system whose outputs can be challenged

Do not limit your assessment to the rights you consider most obvious. Assessors expect a comprehensive mapping. A credit scoring system does not only affect the right to property — it may also affect non-discrimination (Article 21), equality before the law (Article 20), private life (Article 7 — if it uses personal financial data), and potentially the rights of the elderly and persons with disabilities (if it systematically disadvantages these groups).

FRIA vs DPIA: comprehensive comparison

Dimension | FRIA (AI Act, Article 27) | DPIA (GDPR, Article 35)
Legal basis | EU AI Act, Regulation (EU) 2024/1689 | GDPR, Regulation (EU) 2016/679
Scope of rights | All fundamental rights under the EU Charter | Personal data protection rights only
Trigger | Deploying a high-risk AI system (for specified entity types) | Processing likely to result in high risk to data protection rights
Personal data required? | No — applies even when no personal data is processed | Yes — only triggered when personal data is involved
Who must conduct it? | The deployer (public bodies, essential services, education) | The data controller
What is assessed? | The AI system's impact on affected persons' fundamental rights | The data processing operation's impact on data subjects' rights
Risk assessment | Likelihood and severity of harm to fundamental rights | Likelihood and severity of risk to data protection rights
Stakeholder consultation | Input from affected persons or representatives (where relevant) | Views of data subjects or their representatives (where appropriate)
Registration | Summary submitted to market surveillance authority + EU database | Consultation with DPA if residual risk is high
Enforcement | Market surveillance authorities + EU AI Office | Data protection authorities
Enforcement deadline | 2 August 2026 | In force since May 2018
Can they be combined? | Yes — Article 27(4) explicitly allows it | Yes — joint assessment is recommended
Update obligation | When circumstances change substantially | When nature, scope, or context of processing changes

When both apply: When a high-risk AI system processes personal data, both assessments are required. Article 27(4) explicitly allows deployers to combine them into a single document. The practical approach is a unified assessment with two clearly labelled sections — one addressing GDPR data protection risks, the other addressing the broader fundamental rights under the EU Charter. This avoids duplication while satisfying both legal requirements.

Step-by-step process: conducting a FRIA

Step 1: Determine whether a FRIA is required

Confirm that your organisation falls into one of the three triggering categories (public body, essential service provider, education institution) and that the AI system you are deploying is classified as high-risk under Article 6 and Annex III. If either condition is not met, the FRIA is not mandatory — though voluntary assessment is good practice. Use the free risk classification tool to confirm your system's classification.

Step 2: Scope and describe the AI system in your deployment context

Document what the system does, how it is used in your specific operational context, the decisions it supports or automates, and the geographic and demographic scope. Use the information provided by the provider in their instructions for use as a baseline, but go beyond it — the provider describes the system in general terms; you must describe it in your specific context.

Example: A public employment service deploying an AI system to match job seekers with vacancies must describe not just the system's matching algorithm, but how case workers use the matches (do they always follow them? can they override?), which job seekers are subject to the system (all registrants, or specific categories?), and what happens when a job seeker is not matched to any vacancy.

Step 3: Identify all affected persons and groups

Map everyone affected by the system's outputs — directly and indirectly. Pay particular attention to:

  • Vulnerable groups: children, elderly, persons with disabilities, persons in economic hardship, asylum seekers, persons with limited digital literacy
  • Groups at risk of disproportionate impact: ethnic minorities, women, persons with certain religious beliefs, LGBTQ+ individuals, persons from specific socioeconomic backgrounds
  • Persons with limited ability to contest: individuals who may not understand they are subject to AI, or who lack the resources to challenge AI-influenced decisions

Step 4: Map affected fundamental rights

For each affected group, identify which specific Charter rights may be engaged. Use the table above as a starting point. Be comprehensive — the AI Act expects deployers to consider the full breadth of the Charter, not just the most obvious rights. Document the causal chain: how does the system's output lead to an impact on a specific right for a specific group?

Example: For the public employment service AI, affected rights include: freedom to choose an occupation (Article 15 CFR — if the system restricts which jobs are shown to certain seekers), non-discrimination (Article 21 CFR — if the matching algorithm disadvantages certain groups), right to good administration (Article 41 CFR — if automated matching replaces individualised case work), and social security (Article 34 CFR — if non-matching leads to reduced benefits).
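To make the causal chain concrete, it helps to record each group–right pairing together with the mechanism that links the system's output to the impact. The entries below illustrate this for the employment-service example; the three-part structure and the wording are hypothetical, not a prescribed format.

```python
# Hypothetical group -> right -> mechanism entries for the employment-service
# example above. The structure is an assumption about how to document the
# causal chain; the Act does not prescribe a format.

rights_map = [
    {
        "group": "job seekers with employment gaps",
        "right": "Article 15 CFR (freedom to choose an occupation)",
        "mechanism": "the matching algorithm surfaces fewer vacancies for "
                     "profiles with gaps, narrowing occupational choice",
    },
    {
        "group": "job seekers from under-represented groups",
        "right": "Article 21 CFR (non-discrimination)",
        "mechanism": "historical placement data under-represents these groups, "
                     "which can depress their match scores",
    },
    {
        "group": "all registrants",
        "right": "Article 41 CFR (right to good administration)",
        "mechanism": "case workers accepting matches without review removes "
                     "individualised reasoning from the decision",
    },
]

for entry in rights_map:
    print(f"{entry['group']} -> {entry['right']}")
```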

Step 5: Assess likelihood and severity

For each identified risk, estimate:

  • Likelihood: How probable is it that the risk materialises? Consider the system's known limitations (from the provider's documentation), your deployment context, the volume of affected persons, and any historical evidence of similar harms.
  • Severity: If the risk materialises, how severe is the impact? Consider irreversibility (can the person recover?), the number of affected persons, the importance of the right at stake, and whether affected persons have effective remedy.

Use a structured scale (e.g., low/medium/high/critical for both dimensions) and document your reasoning. Avoid vague qualitative assessments without supporting evidence.
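Writing the combination rule down explicitly keeps ratings consistent across assessors. The snippet below is a minimal sketch assuming a four-level scale and a deliberately conservative rule (take the higher of the two dimensions, bumped up when both are at least high); both the scale and the rule are illustrative choices, not requirements of the Act.

```python
# Minimal sketch of a likelihood x severity rating matrix.
# The four-level scale and the combination rule are assumptions;
# adopt whatever structured scale fits your existing risk framework.

LEVELS = ["low", "medium", "high", "critical"]

def overall_rating(likelihood: str, severity: str) -> str:
    """Combine likelihood and severity into one overall rating.

    Conservative rule: take the higher of the two levels, and bump the
    result one level up when both dimensions are at least 'high'.
    """
    li, si = LEVELS.index(likelihood), LEVELS.index(severity)
    rating = max(li, si)
    if min(li, si) >= LEVELS.index("high"):
        rating = min(rating + 1, len(LEVELS) - 1)
    return LEVELS[rating]

# Example: a rare but irreversible harm still rates 'critical'.
print(overall_rating("low", "critical"))    # critical
print(overall_rating("high", "high"))       # critical
print(overall_rating("medium", "medium"))   # medium
```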

Step 6: Define mitigation measures

For each risk rated above low, document specific measures:

  • Technical measures: accuracy monitoring, bias testing, threshold calibration, input validation, confidence indicators
  • Organisational measures: human oversight procedures (who reviews, when, with what authority), staff training on system limitations and bias risks, escalation paths for edge cases
  • Procedural measures: complaint mechanisms accessible to affected persons, appeal processes with human decision-makers, periodic review of AI-influenced decisions
  • Communication measures: informing affected persons of the AI system's role, explaining how to challenge AI-influenced decisions, providing contact points for queries

Step 7: Establish monitoring, review, and registration

Monitoring:

  • Define metrics to track actual impact after deployment (e.g., decision outcomes disaggregated by demographic group, complaint volumes, override rates by human overseers) — a minimal sketch of two such metrics follows this list.
  • Set alert thresholds that trigger investigation.
  • Assign accountability for ongoing monitoring to a named role or function.
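
As a concrete illustration of the override-rate and disaggregation metrics mentioned above, the sketch below computes both from a toy decision log and raises an alert when either crosses a threshold. The record format, group labels, and threshold values are assumptions for illustration only.

```python
# Minimal sketch of two monitoring metrics: human override rate and
# decision outcomes disaggregated by group. Record format, group labels,
# and alert thresholds are illustrative assumptions.

from collections import defaultdict

decisions = [
    # each record: (group, ai_recommendation, final_decision)
    ("group_a", "reject", "approve"),   # human override
    ("group_a", "approve", "approve"),
    ("group_b", "reject", "reject"),
    ("group_b", "reject", "reject"),
]

OVERRIDE_ALERT = 0.30    # investigate if overseers override >30% of outputs
DISPARITY_ALERT = 0.20   # investigate if approval rates differ by >20 points

overrides = sum(1 for _, ai, final in decisions if ai != final)
override_rate = overrides / len(decisions)

approvals = defaultdict(lambda: [0, 0])   # group -> [approved, total]
for group, _, final in decisions:
    approvals[group][1] += 1
    if final == "approve":
        approvals[group][0] += 1

rates = {g: a / t for g, (a, t) in approvals.items()}
disparity = max(rates.values()) - min(rates.values())

if override_rate > OVERRIDE_ALERT or disparity > DISPARITY_ALERT:
    print("Alert: escalate for fundamental-rights review "
          f"(override {override_rate:.0%}, disparity {disparity:.0%})")
```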

Review:

  • Set a review cadence — at minimum annually, or when triggered by material changes, incidents, or new evidence.
  • Document who conducts the review, what they evaluate, and how findings are acted upon.

Registration:

  • Under Article 27(3), notify the relevant market surveillance authority of the FRIA results (using the template questionnaire referred to in Article 27(5)).
  • Register them in the EU database under Article 49.
  • Maintain the full FRIA documentation internally and make it available to authorities upon request.

Real-world scenarios

Scenario 1: Public hospital deploying diagnostic AI

A public hospital deploys an AI system that assists radiologists in prioritising imaging scans based on suspected severity. The FRIA must assess: risk of misclassification leading to delayed treatment (right to life and physical integrity, Articles 2 and 3 CFR), risk that the system performs differently for patients of different ages or ethnic backgrounds (non-discrimination, Article 21 CFR), risk that patients are unaware their scan priority was influenced by AI (right to good administration, Article 41 CFR), and risk that vulnerable patients (elderly, persons with disabilities) are systematically deprioritised if the system was trained on data that underrepresents them.

Scenario 2: Bank deploying credit scoring AI

A bank deploying a third-party AI credit scoring system must assess: risk of systematic bias against applicants from certain postal codes, age groups, or employment types (non-discrimination, Article 21 CFR), risk that rejected applicants cannot understand or challenge the decision (right to effective remedy, Article 47 CFR), and risk that the system's reliance on historical lending data encodes past discrimination. The bank must also coordinate the FRIA with a GDPR DPIA, since personal financial data is processed.

Scenario 3: University deploying admissions screening AI

A university using AI to screen and rank Master's programme applicants must assess: risk that the system disadvantages applicants from non-traditional educational backgrounds (right to education, Article 14 CFR), risk of gender or ethnic bias in ranking (non-discrimination, Article 21 CFR), risk that applicants with disabilities are disadvantaged by input data formats the system was not designed for (rights of persons with disabilities, Article 26 CFR), and risk that rejected applicants have no meaningful way to understand why they were not selected.

Coordinating FRIA with GDPR DPIA

When the high-risk AI system processes personal data — which is the case for most deployers — both a FRIA and a DPIA are required. Article 27(4) of the AI Act explicitly allows combining them. The recommended approach:

  1. Single document, two sections. Use a unified structure with a clearly labelled DPIA section (covering GDPR data protection risks) and a FRIA section (covering the broader Charter rights).
  2. Shared risk identification. Many risks overlap — bias in personal data, transparency about automated decisions, security of data. Identify these once and map them to both frameworks.
  3. Distinct scope recognition. The FRIA covers rights the DPIA does not (dignity, expression, children's rights, environmental protection). Do not assume the DPIA covers the FRIA.
  4. Coordinated consultation. If the DPIA requires consultation with the data protection authority (GDPR Article 36), and the FRIA requires notification to the market surveillance authority (AI Act Article 27(3)), coordinate both to avoid inconsistent representations.
  5. Unified monitoring. Design a single monitoring framework that tracks both data protection metrics and fundamental rights impact metrics.

For more on the relationship between these two frameworks, see the AI Act vs GDPR comparison guide.

Common mistakes

Mistake 1: Treating the FRIA as a data protection exercise

The FRIA is not a DPIA with a new name. It covers rights that GDPR does not touch: dignity, non-discrimination, freedom of expression, children's rights, environmental protection, access to justice. If your FRIA reads like a data protection assessment that only discusses privacy and data security, it is incomplete. Map the full Charter.

Mistake 2: Completing the FRIA after deployment

The AI Act is explicit: the FRIA must be completed before the system is put into service. Deploying first and assessing later is a violation of Article 27, regardless of how quickly you conduct the assessment after deployment. Build FRIA completion into your procurement and deployment workflow as a gate.

Mistake 3: Failing to involve stakeholders

Article 27 calls for input from representatives of affected persons where relevant. In public sector contexts — welfare, justice, immigration, education — this means consulting with civil society organisations, affected communities, patient representatives, student bodies, or their designated representatives. Stakeholder input is not optional decoration; it is a substantive requirement that assessors will check for.

Mistake 4: Treating the FRIA as a one-time exercise

The FRIA must be updated when circumstances change — new system versions, new use cases, new affected populations, new deployment contexts, or new evidence about the system's impact. A FRIA completed in 2026 and never updated is a compliance gap by 2027.

Mistake 5: Using generic risk descriptions

"There is a risk of discrimination" is not sufficient. The FRIA requires specific risk identification: which groups, which rights, through which mechanism, with what likelihood, and with what severity. Generic risk statements without supporting analysis will not satisfy a market surveillance authority inspection.

Mistake 6: Not coordinating with the provider

The FRIA is the deployer's responsibility, but the provider's documentation under Article 13 — including known limitations, accuracy metrics, and foreseeable risks — is essential input. Deployers who conduct FRIAs without reviewing the provider's instructions for use are working with incomplete information. Request this documentation and reference it explicitly in your assessment.

Practical FRIA template structure

A practical FRIA document could follow this structure:

  1. Cover page: System name, deployer name, date, version, author, approver
  2. Executive summary: One-page overview of the system, key risks, and overall assessment
  3. System description: Intended purpose, deployment context, operational workflow, geographic and demographic scope (Fields 1–2)
  4. Affected persons mapping: Direct and indirect subjects, vulnerable groups, estimated numbers (Field 3)
  5. Fundamental rights mapping: Table mapping each affected group to specific Charter rights with causal explanation (Field 7)
  6. Risk assessment: For each right-group combination: specific risk description, likelihood, severity, supporting evidence (Fields 4, 8)
  7. Human oversight measures: Who, how, with what authority, under what circumstances (Field 5)
  8. Mitigation measures: Technical, organisational, procedural, and communication measures for each identified risk (Field 6)
  9. Stakeholder input: Who was consulted, when, what input was received, how it was incorporated (Field 9)
  10. Monitoring plan: Metrics, frequency, accountability, alert thresholds (Field 10)
  11. Governance integration: How the FRIA connects to existing governance, escalation, and review processes (Fields 11–15)
  12. DPIA integration (if applicable): Combined data protection assessment section
  13. Appendices: Provider documentation referenced, stakeholder consultation records, supporting data

Next steps

  1. Determine whether your organisation falls into a FRIA-triggering category (public body, essential service, education).
  2. Build your AI systems inventory and identify the high-risk systems you deploy.
  3. Use the template structure above as your FRIA starting point.
  4. If the system also processes personal data, combine the FRIA with your DPIA.
  5. Register FRIA results in the EU database before deployment.
  6. Review the full EU AI Act compliance checklist to ensure the FRIA fits into your broader compliance programme.

Run the free AI Act assessment to classify your systems and identify applicable obligations, including FRIA requirements.

For the full legal text, see Article 27 in the complete AI Act guide.

Frequently asked questions

Is the FRIA mandatory for all deployers of high-risk AI systems?

No. The FRIA is mandatory only for deployers that are bodies governed by public law, private entities providing essential services (banking, insurance, energy, water, transport, digital health), or education/vocational training institutions. Other deployers of high-risk AI systems are not legally required to conduct a FRIA, though they still have other deployer obligations (human oversight, log retention, affected-person notification). Voluntary FRIAs are considered good practice and may become expected by market surveillance authorities in future guidance.

Can I use my existing DPIA instead of a FRIA?

No, but you can combine them. A DPIA covers only data protection risks under GDPR. A FRIA covers all fundamental rights under the EU Charter, many of which are not related to personal data (dignity, non-discrimination, freedom of expression, children's rights). Article 27(4) allows both assessments to be conducted as a single document, which is the recommended approach — but the FRIA section must address the Charter rights that the DPIA does not cover.

What happens if I deploy a high-risk AI system without completing the FRIA?

Deploying without a completed FRIA when one is required is a direct violation of Article 27. Penalties for non-compliance with deployer obligations under the AI Act can reach EUR 15 million or 3% of global annual turnover. Beyond fines, market surveillance authorities can order you to suspend or cease use of the AI system until the FRIA is completed. See the penalties and fines guide.

How detailed must the stakeholder consultation be?

Article 27 requires deployers to consider input from representatives of affected persons "where relevant." The threshold for relevance is not precisely defined, but in practice: if your AI system directly affects members of the public (welfare applicants, students, patients, job seekers), stakeholder consultation is clearly relevant. The consultation does not require formal public hearings — structured engagement with representative organisations, feedback mechanisms, or advisory panels can satisfy the requirement. Document who was consulted, what input was received, and how it influenced the assessment.

Does the FRIA need to be made public?

The full FRIA does not need to be published, but a summary must be submitted to the relevant market surveillance authority and registered in the EU database under Article 49. The EU database is publicly accessible, so the summary will be visible. The full internal FRIA must be maintained and provided to authorities upon request during inspections or investigations.

How does the FRIA relate to the conformity assessment?

The FRIA is a deployer obligation; conformity assessment is a provider obligation. They are legally distinct but practically connected. The provider's conformity assessment and technical documentation (Annex IV) provide essential input for the deployer's FRIA — particularly information about the system's known limitations, accuracy metrics, and foreseeable risks. If the provider has not completed conformity assessment, the deployer may lack the information needed to conduct a thorough FRIA.

Legalithm is an AI-assisted compliance workflow tool — not legal advice. Final compliance decisions should be reviewed by qualified legal counsel.

