Article

Navigating Through Human Rights in AI: Exploring the Interplay Between GDPR and Fundamental Rights Impact Assessment

by Anna Thomaidou 1,2 and Konstantinos Limniotis 1,3,*
1 School of Pure and Applied Sciences, Open University of Cyprus, Latsia, Nicosia 2220, Cyprus
2 Hellenic Authority for Communication Security and Privacy, Ierou Lohou 3, Marousi, 15124 Athens, Greece
3 Hellenic Data Protection Authority, Kifissias 1-3, 11523 Athens, Greece
* Author to whom correspondence should be addressed.
J. Cybersecur. Priv. 2025, 5(1), 7; https://doi.org/10.3390/jcp5010007
Submission received: 17 December 2024 / Revised: 30 January 2025 / Accepted: 7 February 2025 / Published: 11 February 2025
(This article belongs to the Section Privacy)

Abstract: The interplay between the EU AI Act and data protection law is a challenging issue. This paper explores the interplay between legal provisions stemming from the AI Act and those stemming from the GDPR, with the ultimate goal of developing an integrated framework that simultaneously implements a Fundamental Rights Impact Assessment (FRIA) and a Data Protection Impact Assessment (DPIA) within the context of Artificial Intelligence (AI) systems, particularly systems that utilize personal data. This approach is designed to simplify the evaluation processes for stakeholders managing risks to personal data protection, as well as to other fundamental rights, in AI systems, enhancing both the efficiency and the accuracy of these assessments and facilitating compliance with the relevant legal provisions. The methodology adopted involves developing a holistic model that can be applied not only to specific case studies but more broadly across various sectors.

1. Introduction

The field of Artificial Intelligence (AI), and especially Machine Learning (ML), has seen remarkable progress over the past decade, resulting in significant changes across various industries and aspects of society [1,2]. This progress stems from advancements in computing power, the availability of large amounts of data, and notable enhancements in algorithms and techniques. AI has expanded its original reach beyond computer science to impact sectors such as healthcare, finance, automotive, entertainment, and smart cities [3,4,5]. ML, a subset of AI that focuses on developing algorithms for machines to learn from data and make decisions or predictions [1], has played a crucial role in driving this growth, resulting in precise models capable of handling intricate tasks such as Natural Language Processing (NLP) [6], image recognition, and predictive analytics. Investments in these technologies are motivated by the benefits that AI and ML offer in streamlining operations, reducing costs, and introducing new products and services [3].
However, this rapid progress comes with important challenges [3]. As AI becomes more common in our lives, it handles personal information more frequently, raising concerns about individual privacy and personal data protection. The ethical considerations surrounding AI technology involve concerns, especially about Automated Decision-Making (ADM) and the potential misuse of AI in ways that may not align with public values and ethics [7]. ADM systems use algorithms to analyze data and make decisions with little or no human intervention [8]; such algorithms may be AI-based, and in such cases, the challenges regarding the possible effects on human rights become more difficult to address. Despite efforts to design fair AI systems, they can unintentionally worsen biases found in their training data or in the societal structures they operate within. While essential computer science principles such as abstraction and modular design are vital in creating ML systems, they often fall short in addressing the intricacies of real-world contexts [9]. More precisely, ML models rely on data that can introduce biases and discrimination, affecting areas such as job recruitment, loan approvals, and eligibility for social welfare benefits. Another significant burden is the complex nature of ML algorithms, especially of deep learning models, which may not allow for a full understanding of how exactly they operate and why they come to specific conclusions, thus leading to a lack of transparency.
Apart from the aforementioned aspects, incorporating AI into services and critical infrastructure introduces risks related to reliability and safety. Errors and malicious attacks on AI systems can have serious consequences in interconnected systems such as transportation, healthcare, and finance [10]. Tackling these challenges requires an approach that includes not only technological advancements in AI and ML algorithms, but also thoughtful consideration of ethical, legal, and societal implications.
Consequently, there is an increasing demand for oversight and regulation to ensure the responsible deployment of these algorithms. In the European Union, the Artificial Intelligence Act (AI Act), approved by the European Parliament on 13 March 2024, published in the Official Journal of the European Union on 12 July 2024 [11], and in force since 1 August 2024, is an initiative aimed at overseeing the development and usage of AI across its Member States. By establishing criteria and procedures for assessing AI systems, the European Union (EU) undertakes a leading role in the ethical governance of AI [12]. However, the effectiveness of these efforts will depend on their application and on the ability to strike a balance between fostering innovation and upholding rights.
One key aspect of the AI Act is the Fundamental Rights Impact Assessment (FRIA), which plays a crucial role in protecting individuals’ fundamental rights from being violated by high-risk AI systems. The main goal is to safeguard individuals from risks associated with AI, while promoting innovation and competitiveness in the AI industry. This process assesses the risks that AI systems pose to human rights such as privacy, non-discrimination, and freedom of speech.
At the same time, the well-known General Data Protection Regulation (GDPR) [13], being the EU framework for personal data protection, necessitates the completion of a DPIA (Data Protection Impact Assessment) for high-risk personal data processing activities, regardless of the technology used. A DPIA is a process aimed at helping organizations analyze, identify, and minimize the data protection risks associated with data processing. The GDPR requires a DPIA for processing activities that pose high risks to individuals’ rights and freedoms; these include activities such as individual profiling with significant effects.
Therefore, different legal instruments set several obligations when high-risk processing is to take place; hence, the interplay of these obligations is of utmost importance when they are simultaneously applicable to the same data processing operation. It should also be pointed out that both the GDPR and the AI Act reach beyond the European Union, since they are applicable to any organization providing services to individuals residing in the European Union, regardless of where that organization is established.

1.1. Research Objectives

The main objective of this research is to establish a coordinated strategy for implementing FRIA and DPIA simultaneously in a unified framework. The purpose of this approach is to streamline the evaluation process for stakeholders tasked with managing risks related to personal data in AI systems. By creating an integrated framework that merges FRIA and DPIA, this work aims to tackle the complexities and overlapping issues involved in assessing the impact of AI systems on data protection and fundamental human rights. Since the interplay between FRIA and DPIA still constitutes a challenging issue, the output of this work is expected to improve efficiency and efficacy in risk assessment, aiding organizations in recognizing and addressing risks more easily.
The unified framework proposed in this paper integrates FRIA with DPIA, aiming to facilitate adherence to EU regulations such as the GDPR and the AI Act, aligning with ethical standards in the region, as well as enhancing accountability and trust in AI by ensuring alignment with individual rights and societal values within changing regulatory frameworks. It is intended to be applicable not only to a specific case study, but also more broadly across various sectors. This holistic model is expected to be a versatile tool, adaptable to different industries and contexts where AI systems are employed. The flexibility of this framework is essential, especially considering the range of uses for AI systems. Spanning from healthcare and finance to public administration and beyond, each sector comes with its distinct features and regulatory demands, yet all stand to gain from a holistic approach to evaluating how AI impacts personal data protection and fundamental rights.

1.2. Research Questions

More precisely, this paper focuses on the following research questions:
Q1.
What are the similarities and differences in the scopes, objectives, and methodologies of DPIA and FRIA within the realm of AI?
Q2.
How can we assess and merge existing methodologies for DPIA and FRIA into a cohesive framework that ensures thorough risk evaluation?
Q3.
What advantages and obstacles might stakeholders in the EU encounter when embracing such a unified DPIA/FRIA framework?
Q4.
How does the proposed framework contribute to enhancing transparency, accountability, and public confidence in AI technologies, across diverse industries?

1.3. Structure of the Paper

The paper is structured as follows. In Section 1, the necessary background is set, discussing the main elements of the AI systems, emphasizing how they can affect human rights, as well as the legal provisions stemming from the AI Act and the GDPR. The previous relevant work is presented in Section 2, which describes existing methodologies and approaches to conduct FRIA or DPIA. Section 3, being the main part of the paper, describes the new unified DPIA–FRIA framework, and in Section 4, the holistic framework is theoretically applied in a real-world scenario. A discussion concerning this new framework is given in Section 5, whilst concluding remarks, as well as some thoughts for future steps, are presented in Section 6.

1.4. AI Techniques

AI refers to systems that display intelligent behavior by analyzing their environment and taking actions with a degree of autonomy to achieve specific goals. These AI-based systems can be purely software acting in the virtual world, or embedded in hardware devices functioning in the physical world [14]. The EU’s approach emphasizes that AI should not only perceive its environment and interpret data, but also learn and adapt its behavior by analyzing the effects of its actions to accomplish a complex goal [15].
The genesis of AI in the mid-20th century saw the rise of rule-based systems, also known as expert systems, which emulate decision-making through a set of predetermined rules. While effective in well-defined settings, they lacked the ability to learn from experience or adjust to new situations [16,17]. The limitations posed by rule-based systems led to the advent of ML within AI. ML algorithms empower systems to learn and enhance their performance through experience without explicit programming. This transition allowed AI systems to tackle intricate tasks and make decisions based on extensive datasets. ML encompasses approaches such as reasoning, neural networks, and evolutionary algorithms. Deep learning (DL)—a subset of ML—has further elevated the capabilities of AI. DL employs neural networks with many layers (hence the name “deep”) to model complex patterns within data. This method has played a significant role in advancements in fields such as voice recognition, enabling AI systems to achieve performance levels equal to or surpassing those of humans [18].
The transition from the rule-based systems of AI’s first decades to advanced ML and DL models was revolutionary. AI is expected to continue to progress and may gradually change the way people interact with machines, or even solve complex real-life problems in different fields, such as smart logistics [19]. Today, ML and DL are the driving forces of AI. While ML models refine their performance through experience, DL extends these capabilities further by leveraging brain-like networks for more accurate and refined interpretations, enabling learning models to detect subtle variations and complexities within vast datasets; this makes them particularly useful in tasks such as image classification, speech recognition, and predictive analytics [1]. In essence, ML encompasses an array of techniques and algorithms, while DL specifically focuses on data processing that mirrors the brain’s operations.
AI and ML techniques may be the basis for Automated Decision-Making (ADM) systems, though these systems do not necessarily require AI technology. The significance of ADM systems in today’s world is increasing, especially in important sectors such as healthcare [4], finance [20], public administration [21], e-commerce [22], human resources [23], logistics [19], document analysis, and legal research [24], as well as in the context of smart cities, where they are used in domains such as traffic management, public safety, and resource allocation [5].

1.5. Ethical Challenges: AI and Human Rights

AI ethics is firmly grounded in safeguarding rights as outlined in the legal framework of the European Union and international human rights regulations. The Ethics Guidelines for Trustworthy AI by the European Commission [25] focus on a “human-centric approach” to AI ethics, highlighting the importance of human dignity and ethical considerations across different societal sectors. The EU Treaties, along with the Charter of Fundamental Rights of the European Union (EU Charter) [26] and international human rights legal instruments, establish a set of rights that must be upheld by EU member states and institutions. Central to this approach is dignity, serving as the cornerstone of a human-centered perspective on AI. While the EU Charter enshrines binding rights within EU law, it acknowledges that not all scenarios or areas are covered by these rights. In situations outside the realm of EU law, international human rights laws such as the European Convention on Human Rights similarly play an important role.
Ethical considerations must drive the development, deployment, and utilization of AI to ensure they do not violate human rights or their underlying principles [25]. Key ethical principles include respect for human dignity, individual freedom, democracy, justice, rule of law, equality, non-discrimination, and solidarity. Additionally, AI must enhance public services without infringing on citizens’ rights.
The ethical dilemmas associated with AI systems are becoming increasingly worrisome concerning privacy issues, biases, and fairness considerations [27]. The primary focus is on developing these systems in a manner that prevents discrimination and ensures fair decisions. In addition to these concerns, transparency and explainability should also be ensured. AI systems often face criticism due to their opacity, prompting a call for more transparent AI solutions that allow for human comprehension and interpretation.
Data privacy and security stand as priorities in AI systems due to their reliance on various data sources. An important aspect to consider is accountability and liability in determining who should be held responsible in cases where an AI/ADM system (Automated Decision-Making System using AI technology) makes a harmful decision. This discussion revolves around establishing lines of accountability and applying legal principles such as liability to automated decisions.
Fjeld et al., in [28], conducted a review of 36 key documents on AI principles issued by a diverse range of entities, including governments, intergovernmental organizations, private companies, professional groups, advocacy organizations, and collaborative initiatives. This analysis highlighted eight major recurring themes across the documents: privacy (noted in 97% of the documents), accountability (97%), safety and security (81%), transparency and explainability (94%), fairness and non-discrimination (100%), human control over technology (69%), professional responsibility (78%), and the promotion of human values (69%).

1.6. Legal Framework

This section includes an overview of the current regulatory landscape relevant to AI and ADM systems as well as a discussion of key regulations and an exploration of the challenges and opportunities presented by the current regulatory environment.

1.6.1. General Data Protection Regulation (GDPR)

The GDPR came into effect on 25 May 2018, aiming to safeguard personal data [13]. The GDPR sets rules for handling data and provides individuals with more control over their personal information, granting them specific rights. Entities worldwide must comply with this regulation if they handle the data of individuals in the EU. The GDPR has, in effect, become a “standard” for data protection legislation.
According to the GDPR (see Art. 4(1)) [13], the term personal data refers to any information related to a natural person, provided that this person (the so-called “data subject”) can be identified, directly or indirectly. Moreover, the GDPR defines personal data processing as any operation that is performed on personal data, such as “the collection, recording, structuring, storage, adaptation or alteration, retrieval, use, disclosure by transmission, dissemination, combination and erasure” (see Art. 4(2) in the GDPR) [13]. The main entity that determines the purposes and means of the processing is the so-called “data controller”, whereas the entity that processes personal data on behalf of the controller is the “data processor” (see Art. 4(7) and 4(8), respectively) [13].
According to the GDPR, specific principles need to be in place when personal data are processed, yielding, in turn, specific obligations for data controllers. More precisely, according to Article 5, par. 1 of the GDPR [13], personal data shall be processed “fairly and in a transparent manner”, the purposes of the processing shall be “specified, explicit and legitimate” (purpose limitation principle), and the personal data shall be “adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed” (data minimization principle). Other basic principles are also defined in the same article of the GDPR [13], including the security of personal data.
An important requirement for personal data processing that should always be in place is the so-called data protection by design principle (also known as privacy by design—see Art. 25, par. 1) [13]. In simple words, personal data protection should be considered from the very beginning of the design of the data processing. It is clear that this provision implies that when an ADM system is to be used, the data controller should examine from the very beginning whether this system is indeed necessary for achieving the intended purpose and, if so, what risks to rights and freedoms it entails and whether these risks can be effectively alleviated by appropriate safeguards.
The GDPR also defines several rights of individuals that can be exercised at any time, and the respective data controller is obligated to respond. An important right for data subjects related to ADM systems is the one defined in Art. 22, par. 1 of the GDPR [13]: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”. Hence, human intervention in a decision process is essential, while the whole process should be transparent for the individuals (transparency is an individual right that applies horizontally to any type of data processing).
An important obligation for data controllers, as determined in the GDPR, is the so-called Data Protection Impact Assessment (DPIA), pursuant to Art. 35 of the GDPR [13]. In simple words, a DPIA aims to find out, in a systematic way that can be demonstrated, whether all the data protection risks stemming from an intended personal data processing operation can be addressed by appropriate mitigation measures; therefore, conducting a DPIA, which needs to be performed before the processing starts, clearly facilitates compliance with the data protection by design principle. A DPIA is obligatory for data processing that yields high risks; to this end, as stated in Art. 35, par. 3 of the GDPR [13], a DPIA is required, amongst other cases, when there is “a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person”. Therefore, if the main goal of an AI system for decision-making is to derive conclusions with significant effects for individuals, it necessitates, in terms of legal obligations, the conduct of a DPIA. More generally, the European Data Protection Board (EDPB) provides guidelines to determine when a DPIA is mandatory [29], emphasizing the assessment of emerging technologies and significant data processing that could impact individual privacy.
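To illustrate how such a criteria-based screening could be operationalized as an internal pre-check, the following minimal Python sketch counts how many high-risk criteria apply to a planned processing operation and applies the common “two or more criteria” rule of thumb from the EDPB guidelines [29]. The criteria labels, the function, and the example inputs are illustrative assumptions, not an official tool, and such a check cannot replace a proper legal analysis.

```python
# Illustrative, non-authoritative sketch inspired by the EDPB criteria for
# deciding whether a DPIA is likely to be required (hypothetical labels).
EDPB_CRITERIA = {
    "evaluation_or_scoring",
    "automated_decision_with_legal_or_similar_effect",
    "systematic_monitoring",
    "sensitive_or_highly_personal_data",
    "large_scale_processing",
    "matching_or_combining_datasets",
    "vulnerable_data_subjects",
    "innovative_use_of_technology",
    "prevents_exercise_of_right_or_service",
}

def dpia_likely_required(applicable: set[str]) -> bool:
    """Rule of thumb: two or more applicable criteria suggest a DPIA is needed."""
    unknown = applicable - EDPB_CRITERIA
    if unknown:
        raise ValueError(f"Unknown criteria: {unknown}")
    return len(applicable) >= 2

# Example: an AI-based credit-scoring system used for loan decisions
print(dpia_likely_required({
    "evaluation_or_scoring",
    "automated_decision_with_legal_or_similar_effect",
    "innovative_use_of_technology",
}))  # True -> a DPIA should be conducted before processing starts
```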

1.6.2. AI Act

The processes for developing the EU AI Act started in April 2021, when the European Commission introduced its proposal for an AI regulatory framework. In December 2022, the Council of the EU approved the opening of negotiations with the European Parliament, leading to an agreement in December 2023 to ensure that AI systems in the EU meet standards for safety, transparency, traceability, non-discrimination, and sustainability, while fostering innovation. The European Parliament voted in favor of the Act on 13 March 2024 [15]; the text was published in the Official Journal of the EU on 12 July 2024 [11] and came into force on 1 August 2024. While the AI Act predominantly relies on national authorities for enforcement, similar to the GDPR, it also incorporates stronger centralized oversight through the EU Commission and the newly established AI Office. Söderlund and Larsson [30] warn that this hybrid approach may lead to challenges such as uneven enforcement across member states, echoing issues seen with the GDPR, but suggest that the enhanced central coordination could mitigate some of these potential problems.
The AI Act introduces a risk-based categorization system for AI, ranging from “unacceptable risk” to “high risk” and “limited risk”, each with specific regulations. It includes general-purpose and generative AI systems that encompass technologies such as ChatGPT [11], requiring adherence to transparency standards and high-impact evaluations. Covering a wide array of AI applications, the AI Act ensures safe and ethical AI development, upholding EU values, protecting rights, and building user trust. It mandates AI technologies to prioritize human needs, safety, and legal standards, as per the EU Charter of Fundamental Rights [26]. The regulation applies to both EU-based and external AI Providers whose outputs are used within the EU.
As per the Act’s key definitions (Art. 3), an AI system is “a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. A Provider is the entity that “develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market”, whilst a Deployer is “the entity that uses an AI system”.
Under the AI Act (Article 5), certain AI applications are considered to pose unacceptable risks of violating human rights and, thus, are prohibited. These include the following:
  • Deceptive AI Practices: AI systems exploiting vulnerabilities in particular groups (e.g., age, physical, or mental disability) to substantially influence a person’s behavior potentially resulting in physical or psychological harm to that individual or others;
  • Government Social Scoring: Use of AI systems by governments to assess the individual’s reliability over time based on their social status or personal traits, leading to negative outcomes for specific individuals or groups in social settings unrelated to the original data gathering context;
  • Real-time Biometric Identification Systems for Law Enforcement: The use of AI systems to identify individuals remotely through biometric data in public spaces is generally prohibited, except for certain cases (such as searching for crime victims, preventing a particular and imminent terrorist threat or identifying and prosecuting a criminal offender or suspect).
The AI Act classifies AI systems based on the risks they pose to safety, rights, and freedoms (Article 6). It categorizes AI systems into four primary groups:
  • Unacceptable Risk: AI systems that threaten safety, well-being, and rights are prohibited in the EU. This includes AI exploiting vulnerabilities in specific groups (e.g., age and disabilities) to alter behavior harmfully and government social scoring systems.
  • High-Risk AI Systems (HRAIS): These systems, used in sectors such as transportation, education, employment, public services, law enforcement, and justice, must meet strict regulatory requirements for safety and transparency before their market introduction.
  • Limited Risk: AI systems with limited risk must meet transparency requirements, e.g., chatbots must disclose their AI nature to users.
  • Minimal Risk: These systems, such as AI-powered video games or spam filters, do not require additional legal obligations under the AI Act, as existing laws address their risks adequately.
In principle, the AI Act lays out rules for AI systems, particularly high-risk ones, to guarantee their safe and ethical deployment. More precisely, the key requirements include risk management covering the whole system’s lifespan (Art. 9), data governance and management so as to ensure high-quality, unbiased datasets for training, validating, and testing AI systems (Art. 10), technical documentation (Art. 11), record-keeping so as to ensure automatic event recording (logs) for traceability assurance (Art. 12), transparency and the provision of information to users (Art. 13), human supervision to prevent adverse consequences from automated decision-making (Art. 14), reliability, accuracy, and cybersecurity (Art. 15), conformity assessment so as to ensure that systems meet legislative requirements before market deployment (Art. 43), as well as post-deployment continuous monitoring (Art. 72).
A scenario-based, proportional methodology is proposed in [31] to improve the European Union’s AI Act, which currently categorizes AI risks in broad, static terms. Novelli et al. [31] argue that the AI Act lacks a detailed and dynamic risk assessment framework, leading to potential misestimation of risks and disproportionate regulatory measures. Drawing from the Intergovernmental Panel on Climate Change (IPCC) risk framework, they suggest a more nuanced approach that evaluates risks in specific real-world scenarios, considering factors such as hazard, exposure, vulnerability, and response. Additionally, they propose a proportionality test to balance competing values, ensuring that risk mitigation measures are appropriately aligned with the actual risks posed by AI systems. This method can be applied to improve the AI Act’s implementation, allowing for the reclassification of high-risk systems and enhancing internal risk management practices for AI Deployers.

The Notion of the Fundamental Rights Impact Assessment (FRIA)

The Fundamental Rights Impact Assessment (FRIA) goes beyond the DPIA, which primarily focuses on privacy and data protection issues, having a more extensive role in evaluating how AI-based projects, policies, or technologies affect fundamental human rights such as non-discrimination, privacy, freedom of expression, and the right to a fair trial. The importance of conducting an FRIA lies in acknowledging that AI systems can significantly impact individuals’ rights, underscoring the need for an evaluation to recognize, assess, and address any potential negative consequences.
This assessment focuses on examining the design, functionality, data usage, and deployment context of an AI system, with the aim of understanding its effects on human rights. The FRIA process involves consulting with stakeholders, analyzing the AI system’s impacts, and considering the societal and ethical implications, requiring simultaneously technical, legal, and ethical expertise. This assessment shall cover the system’s intended purpose and geographic/temporal scope, the affected individuals or groups, the compliance with European and national fundamental rights laws, the potential impacts on fundamental rights, the risks to marginalized or vulnerable groups, as well as environmental impacts and mitigation plans for identified harms. A governance system, including human oversight and complaint mechanisms, must also be outlined.
According to the AI Act (Article 27) [11], Deployers are required to conduct an FRIA before deploying a high-risk AI system. More precisely, such obligation occurs for “deployers that are bodies governed by public law, or are private entities providing public services, and deployers of high-risk AI systems referred to in points 5 (b) and (c) of Annex III”, whilst these points of Annex III of the AI Act correspond to “AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud” and “AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance”. The FRIA should be accompanied by a detailed plan outlining measures or tools to mitigate identified risks to fundamental rights. Deployers should engage with relevant stakeholders and representatives of groups likely to be affected by the AI system to gather necessary information for the impact assessment. Additionally, Article 27 mandates informing the national supervisory authority and other pertinent stakeholders about the FRIA process, whilst other transparency obligations are also in place. It should be pointed out that when a Deployer is also required to perform a DPIA under GDPR (i.e., when the Deployer constitutes, for the said data process, the data controller), the latter should be conducted alongside the FRIA.
A key element of the FRIA is the identification and implementation of measures to mitigate any risks to fundamental rights. This might include modifying the AI system, establishing oversight mechanisms, or creating procedures for accountability and redress. The outcomes of the FRIA, including identified risks and mitigation measures, are documented for transparency and accountability. As part of the compliance process for Deployers of high-risk AI systems under the EU AI Act, the FRIA ensures alignment with EU human rights and data protection standards. It should be stressed that the FRIA is not a static assessment, but an ongoing process that needs revisiting and updating as the AI system or its context of use evolves.
An FRIA must be performed before the initial deployment of the high-risk AI system. Deployers may rely on previously conducted FRIAs or existing assessments by the Provider in similar cases. If any relevant factors change, Deployers must update the assessment. Moreover, Deployers are required to monitor high-risk AI systems in line with usage guidelines and inform Providers as necessary. If they believe the AI system poses a risk or if they identify a serious incident, they must promptly halt its use and notify the Provider, distributor, and relevant market surveillance authority (Article 26).
Finally, Article 27 of the AI Act aims to improve how the AI Act overall, and the FRIA in particular, coordinate with the responsibilities associated with the DPIA. This highlights the significance of combining, and potentially merging, various assessment types within AI contexts. High-risk AI system Deployers are required to utilize the information provided by Providers as per Article 13 of the AI Act to fulfill their DPIA obligations. In addition, an integration of efforts is proposed, whereby previous assessments under data protection laws are complemented by the additional impact assessment focused on Fundamental Rights (FRs). However, as has already been stated, the interplay between FRIA and DPIA still constitutes a challenging issue.

1.6.3. A Comparison Between DPIA and FRIA

Mantelero [32] emphasizes FRIA’s critical role in balancing innovation with the protection of fundamental rights. The FRIA builds upon existing frameworks, such as the DPIA established by the GDPR, but it expands its scope significantly. While the DPIA primarily focuses on risks to data protection and privacy, the FRIA encompasses a broader range of fundamental rights, such as equality, freedom of expression, and non-discrimination. The FRIA shares the DPIA’s ex-ante, rights-based approach to risk assessment and its iterative nature, ensuring risks are managed throughout the AI system’s lifecycle. However, important differences are highlighted. The FRIA does not adopt the DPIA’s reliance on data protection-specific categories and methodologies, which can obscure the broader implications for fundamental rights. Instead, the FRIA calls for a more transparent and comprehensive evaluation of rights beyond data protection. Mantelero [32] critiques the limited attention DPIAs often give to rights other than privacy and stresses that the FRIA should address this gap by systematically evaluating all fundamental rights affected by AI systems.
An FRIA can be applied to AI systems whether or not personal data are involved, and takes a multidisciplinary approach, integrating societal, ethical, and technical considerations. In contrast, a DPIA primarily targets data controllers and processors, centering on compliance with GDPR requirements. The European Union Agency for Fundamental Rights [33] calls for better harmonization between the two, as organizations often focus on the DPIA but neglect the FRIA’s wider scope, which is essential for addressing the societal and ethical challenges posed by AI systems. This alignment would ensure comprehensive protection of both data protection rights and the full spectrum of fundamental rights.
Table 1 below provides a comparative overview of the key differences between DPIA and FRIA.
Both frameworks require detailed documentation of key elements such as the description of processing, purpose, legitimate interests, and risks. However, FRIA goes beyond privacy concerns to include considerations such as discrimination, bias, and human dignity. Below, Table 2 provides an overview of the minimum information requirements for both assessments, illustrating their similarities and differences.

2. Materials and Methods

2.1. Previous Work on DPIA

As stated above, the DPIA plays a significant role in the GDPR, also in the context of AI-based data processing, by proactively tackling and addressing biases in AI systems. Ivanova [34] emphasizes the importance of integrating DPIAs into the AI development process to ensure compliance with rights and non-discrimination principles. This not only aligns with the broader goals of data protection and privacy under the GDPR, but also establishes a structured framework for implementing ethical AI practices. More precisely, Ivanova [34] introduces a methodological approach for conducting DPIAs focusing on evaluating discrimination risks through three key steps: assessing processing necessity and proportionality, identifying risks to non-discrimination rights, and implementing measures to mitigate these risks. The research presents a risk assessment approach that categorizes risks based on likelihood (low, medium, and high) and their potential impact on data subjects’ rights, proposing a strategy that connects the GDPR with EU equality laws, which could potentially result in creating a specific regulatory framework for ensuring fairness and accountability in algorithms.
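The likelihood/impact categorization described above can be expressed as a simple risk matrix. The sketch below is a hypothetical illustration of such a scheme; the numeric mapping and thresholds are our own assumptions for demonstration purposes and do not reproduce the actual method proposed in [34].

```python
# Hypothetical risk-matrix sketch: combine likelihood and impact ratings
# (low / medium / high) into an overall risk level, as a DPIA worksheet might.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def overall_risk(likelihood: str, impact: str) -> str:
    """Map a likelihood/impact pair to an overall risk level (illustrative)."""
    score = LEVELS[likelihood] * LEVELS[impact]  # ranges from 1 to 9
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# Example: a discrimination risk judged 'medium' likelihood with 'high' impact
print(overall_risk("medium", "high"))  # -> "high": mitigation measures required
```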
Furthermore, Lazcoz and de Hert [35] explore how automated and algorithmic systems are governed according to the GDPR and the AI Act, highlighting the need for human involvement and oversight to avoid evading responsibility. This work also raises concerns about whether relying solely on human intervention is sufficient as a governance mechanism for ADM systems, stating that human intervention should be seen not only as an individual right, but also as a procedural right, which is integral to fostering accountability within the GDPR framework. It is suggested that DPIAs are essential in ensuring meaningful human involvement at all stages of ADM system development, and an evidence-based perspective of involvement in the GDPR is proposed, highlighting the importance of meaningful human intervention integrated into the regulatory framework to effectively oversee ADM systems.
Kaminski and Malgieri [36] further explore how DPIAs play a role in connecting individual rights with systemic governance within the GDPR. The study emphasizes viewing DPIAs as Algorithmic Impact Assessments (AIAs), which involve the extensive evaluation of personal aspects concerning individuals through automated processes. However, on the other hand, the study also highlights the limitations of the DPIA approach under the GDPR, particularly regarding the absence of mandatory public disclosure, stating that this could potentially weaken AIA’s effectiveness as a tool for ensuring accountability. The authors urge for collaborations to develop an AIA model that can provide multi-layered explanations, stressing the importance of a more holistic approach to tackle the complexities of algorithmic decision-making and its implications on individual rights and systemic governance. As stated in [36], data controllers should view DPIA as more than a box-ticking exercise for compliance; they should see it as an opportunity to boost transparency and accountability by providing explanations of their decision-making processes involving algorithms. Kaminski and Malgieri [36] suggest that future interpretations and implementations of the GDPR should place greater emphasis on the systemic governance aspects of algorithmic accountability.
While DPIAs help improve understanding of data processing activities, promote transparency, and build trust with data subjects, they also present challenges. They can be resource-intensive and, at times, require subjective judgment in risk assessment. There are gaps in the DPIA related to ADM systems, including a lack of guidelines on evaluating the complex risks associated with ADM, insufficient focus on understanding the long-term and indirect effects on individuals’ rights, as well as challenges in engaging stakeholders and ensuring public transparency [36,37]. Moreover, there is often a need for more robust mechanisms to ensure accountability and take corrective actions when biases or harms are detected. These gaps underscore the importance of comprehensive frameworks that specifically address the challenges posed by ADM in safeguarding the rights of data subjects.
For the specific case of AI systems, implementing DPIAs presents additional difficulties due to the nature of these technologies. One significant obstacle is the inherent complexity of AI systems, often described as “black boxes”, which renders risk assessment challenging because their internal processes are not fully transparent. Furthermore, AI’s reliance on large datasets, possibly containing sensitive personal information, raises significant privacy concerns. Moreover, the dynamic nature of AI means that an assessment conducted at one point in time may become outdated due to continuous learning and adaptation that can alter the functioning of the system. The general guidelines provided by the GDPR for DPIAs do not offer the required precision to effectively address the unique characteristics of AI systems. This gap presents a challenge in balancing the drive for technological progress and the imperative need for privacy and data protection [38].
A policy brief published by the Brussels Laboratory for Data Protection and Privacy Impact Assessments (d.pia.lab) [39] provides an in-depth analysis and recommendations for the EU to enhance the requirement for DPIA. The brief highlights the benefits of impact assessments, including informed decision-making, protection of societal concerns, anticipatory thinking, compliance assistance, and enhancement of public trust. Despite their benefits, impact assessments face criticism for adding bureaucracy and complexity and for being perceived as a formality rather than a meaningful analysis. The authors propose best practices for conducting impact assessments, emphasizing systematic processes, analysis of consequences, necessity based on the nature and scope of initiatives, transparency, and public participation. They suggest broadening the DPIA requirement in the GDPR to cover more types of data processing activities, encouraging the development of methodologies for conducting DPIAs effectively, and recommending setting up “reference centers” on DPIA at data protection authorities to support and guide the process.

DPIA Methodologies

In this context, various jurisdictions and sectors have developed their own DPIA frameworks. Such frameworks include the UK Information Commissioner’s Office (ICO) DPIA Guidance and Template [40], the International Association of Privacy Professionals (IAPP) DPIA Tool [41], and the French Data Protection Authority (CNIL) DPIA Tool [42]. These tools provide comprehensive guidelines, templates, and methodologies to help organizations systematically approach the assessment process, highlight potential risks, and implement measures to protect individual privacy rights effectively.
  • UK Information Commissioner’s Office (ICO) DPIA Guidance and Template [40]: The ICO provides comprehensive guidance and a template for conducting DPIAs. The inclusion of a template simplifies documentation, but may not fully accommodate the intricacies of diverse processing activities, possibly necessitating further resources for complex scenarios.
  • International Association of Privacy Professionals (IAPP) DPIA Tool [41]: The IAPP offers resources and tools for privacy professionals, including a DPIA tool designed to streamline the process of conducting impact assessments. This tool typically includes checklists and questionnaires to help identify and assess privacy risks associated with data processing activities. With its practical checklists and a supportive privacy professional community, it offers a flexible approach to DPIA. However, access to some IAPP resources is not open.
  • French Data Protection Authority (CNIL) DPIA Tool [42]: The CNIL provides an open-source DPIA tool and methodology to guide organizations through the assessment process. This tool is designed to help identify processing operations that require a DPIA, evaluate the risks to the rights and freedoms of natural persons, and determine measures to mitigate those risks (it should be pointed out, though, that emphasis is given to security risks). The CNIL’s approach to DPIAs includes detailed guidance on when a DPIA is required, how to carry it out, and what elements it should contain. The CNIL also provides templates and supporting documents to facilitate the DPIA process, making it more accessible and manageable for organizations of all sizes.
All these frameworks are designed to be broadly applicable; therefore, they do not specifically address the unique challenges and risks associated with AI systems.

2.2. Previous Work on FRIA

In [43], a Fundamental Rights and Artificial Intelligence Impact Assessment (FRAIIA) is introduced, utilizing a dual approach of an open-ended survey and a quantitative matrix to evaluate AI systems’ impact on FRs. The process is structured into four phases, designed to gather contextual information and conduct a detailed assessment of the AI system’s potential effects on FRs. This methodology culminates in a FRAIIA Score, a numeric indicator that dynamically quantifies the risk AI systems may pose to FRs, aiding in decision-making about the system’s development and deployment based on its compliance with legal and ethical standards [43]. Initiating the FRAIIA at the design phase of an AI system is highly recommended due to its multiple benefits. Early analysis aids in developing trustworthy AI systems by allowing adjustments to the algorithm until it reaches acceptable risk levels. This proactive approach not only helps in implementing effective technical measures to prevent rights infringement, but also in avoiding economic losses by evaluating the feasibility of deployment early on.
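As a rough illustration of how such a per-right quantification might look in practice, the sketch below rates the risk to each fundamental right on a numeric scale and aggregates the ratings into a single indicator. The rights listed, the 0–4 scale, and the averaging rule are assumptions made purely for illustration and do not reproduce the actual FRAIIA matrix or scoring formula from [43].

```python
# Illustrative sketch: rate the risk to each fundamental right (0 = none,
# 4 = severe) and aggregate the ratings into a single indicator.
RIGHTS = ["privacy", "non_discrimination", "freedom_of_expression", "fair_trial"]

def aggregate_score(ratings: dict[str, int]) -> float:
    """Average the per-right ratings; hypothetical aggregation rule."""
    missing = [r for r in RIGHTS if r not in ratings]
    if missing:
        raise ValueError(f"Missing ratings for: {missing}")
    return sum(ratings[r] for r in RIGHTS) / len(RIGHTS)

ratings = {"privacy": 3, "non_discrimination": 2,
           "freedom_of_expression": 1, "fair_trial": 0}
score = aggregate_score(ratings)  # -> 1.5
print("acceptable" if score < 2 else "redesign needed before deployment")
```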
Furthermore, Mantelero [32] outlines a robust framework for the FRIA, emphasizing key building blocks to ensure its effectiveness. These include planning and scoping to identify system characteristics, context, and potential rights impacts, followed by data collection and risk analysis using evidence-based methods to assess and estimate risks. A risk management phase implements and tests preventive measures, aligned with a by-design approach to integrate rights considerations early in development. FRIA is presented as an iterative process, requiring updates to address contextual and technological changes, with a focus on transparency, accountability, and stakeholder engagement. It relies on expert input and considers participation and inclusion, particularly for vulnerable groups. Harmonized with related assessments such as the DPIA, FRIA functions as a preventive and ex-ante tool to mitigate risks before harm materializes, ensuring a dynamic and rights-centered approach to AI risk management.

FRIA Methodologies

Recent years have seen significant efforts initiated by various public institutions to develop methodologies tailored for the assessment of AI systems’ impact on fundamental rights. Notably, the Dutch Government introduced its FRIA, the Fundamental Rights and Algorithms Impact Assessment (FRAIA), a structured tool aimed at early identification and management of risks associated with algorithmic decision-making [44]. This initiative emphasizes an interdisciplinary approach to ensure balanced decision-making and FRs compliance. It introduces a methodology to ensure that the use of algorithms by government organizations respects and protects human rights by facilitating an interdisciplinary dialog during the development or application of algorithmic systems. The assessment covers various stages, from the reasoning behind using an algorithm, through data input and algorithm throughput, to implementation and the consideration of FRs. It involves a multidisciplinary approach to identify and mitigate potential risks associated with algorithm use, ensuring decisions are transparent, accountable, and inclusive of societal values. The FRAIA template stands as the most detailed tool for assessing the impact of AI and ADM systems on FRs. Its approach and structure offer significant insights that make it a valuable resource for compliance and evaluation efforts [45].
Similarly, the Danish Institute for Human Rights has offered guidance on human rights impact assessments (HRIA) for digital activities [46], advocating for human rights considerations in digital transformation through a comprehensive methodology. It aims to provide a practical guide for businesses and other stakeholders in the digital ecosystem to evaluate potential human rights impacts, promoting transparency, accountability, and ethical practices in the digital domain. The guidance includes methodologies for planning and scoping digital initiatives, data collection, impact evaluation, and devising remediation strategies to address any identified negative impacts on human rights. The guidance applies to a wide range of digital activities, including but not limited to platforms, search engines, social media, AI, cloud computing, and digital infrastructure. The HRIA phases can be summarized as follows: (a) planning and scoping, (b) data collection and context analysis, (c) analyzing impacts, (d) impact prevention, mitigation, and remediation, and (e) reporting and evaluation.
Beyond Europe, the Canadian Government has mandated an AIA tool [47] for public entities, focusing on quantitatively measuring AI systems’ impacts against a set of acceptability thresholds. Key points from this tool include:
  • The AIA tool aims to guide federal institutions in evaluating the impact of using an automated decision system. It helps identify potential risks to privacy, human rights, and transparency, and proposes measures to mitigate these risks before these systems are deployed.
  • The AIA consists of a questionnaire that assesses the design, use, and deployment of an algorithmic system. Based on the responses, it generates a risk score that indicates the level of impact the system may have on individuals or society. This score helps determine the level of oversight and measures needed to mitigate potential risks (a simplified, purely illustrative score-to-level mapping is sketched after this list).
  • The AIA tool and the broader framework for the responsible use of AI in government are subject to ongoing review and improvement. This iterative approach allows the government to adapt to new challenges and developments in AI technology, ensuring that its use remains aligned with public values and legal obligations.
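As mentioned in the list above, the AIA derives an impact level from the questionnaire responses. The sketch below is a simplified, purely illustrative version of such a score-to-level mapping; the questions, weights, and thresholds are invented for demonstration and do not reproduce the actual scoring rules of the Canadian AIA tool [47].

```python
# Illustrative sketch: sum weighted questionnaire answers and map the result
# to one of four impact tiers. Weights and thresholds are hypothetical.
def raw_score(answers: dict[str, int], weights: dict[str, int]) -> int:
    """Each answer is a small integer (e.g., 0-3); unknown questions weigh 1."""
    return sum(weights.get(q, 1) * value for q, value in answers.items())

def impact_level(score: int, max_score: int) -> str:
    """Map the normalized score to an illustrative impact tier."""
    ratio = score / max_score
    if ratio < 0.25:
        return "Level I (little to no impact)"
    if ratio < 0.50:
        return "Level II (moderate impact)"
    if ratio < 0.75:
        return "Level III (high impact)"
    return "Level IV (very high impact)"

answers = {"affects_rights": 3, "reversibility": 2, "vulnerable_groups": 1}
weights = {"affects_rights": 2, "reversibility": 1, "vulnerable_groups": 2}
score = raw_score(answers, weights)  # 2*3 + 1*2 + 2*1 = 10
print(impact_level(score, max_score=3 * sum(weights.values())))  # Level III
```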

2.3. The Proposed DPIA—FRIA Framework

In this paper, we developed a new framework for integrating the DPIA and FRIA methodologies, designed to comprehensively evaluate AI systems’ impact on fundamental rights. To this end, the DPIA tool from the CNIL [42], the FRAIA tool used by the Dutch Government [44], and the AIA from Canada [47] have been used as the basis, aiming to provide a holistic view of the ethical and regulatory dimensions of AI systems. The development process involved an examination of existing frameworks, where questions are appropriately grouped to address various assessment areas comprehensively.
More precisely, our methodology focused on developing an integrated framework questionnaire, structured within a spreadsheet (Excel-type). This spreadsheet consists of two primary sheets, integrating the DPIA framework from the French CNIL and a hybrid set of questions from the Dutch FRIA and Canadian AIA tools, respectively. To create a cohesive and comprehensive questionnaire, the sets of questions from the DPIA, FRIA, and AIA tools were meticulously examined, merged, and reformulated where necessary. Duplicate questions were identified and removed to streamline the assessment process. Subsequently, the questions were organized into new groups, categorized under main themes to cover a wide spectrum of considerations, ranging from the technical aspects of AI systems to their social and ethical implications. The analysis of the DPIA and FRIA questions revealed (as expected) many interconnected questions, emphasizing the intricate link between data protection and broader fundamental rights (such interconnections are explicitly pointed out in the framework, to facilitate the proper conduct of the FRIA and DPIA). Each question of the framework is carefully cataloged with critical pieces of information, such as assessment category, focus area, question content, and respondent definition (i.e., Provider and/or Deployer), to ensure a cohesive assessment process.
By combining the DPIA and FRIA frameworks into one cohesive questionnaire tool, we allow for an in-depth analysis that recognizes the nature of AI systems and their societal implications, while facilitating compliance with legal obligations. Moreover, considering that the DPIA and FRIA typically constitute legal obligations for Deployers, whilst essential information on the AI system is mainly maintained by the Providers, we identify, for each question of the framework, the main entity (i.e., the Provider and/or the Deployer) responsible for answering it. By these means, we identified how responses to some questions related to the FRIA directly provide useful information for responding to specific DPIA questions (and vice versa).
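To make the structure of the combined questionnaire more concrete, the sketch below models a single framework entry as a small record, including the cross-references between DPIA and FRIA questions discussed above. The field names, question identifiers, and the example entry are illustrative assumptions about how the spreadsheet could be represented programmatically, not an export of the actual tool.

```python
# Illustrative data model for one entry of the combined DPIA-FRIA questionnaire.
from dataclasses import dataclass, field

@dataclass
class FrameworkQuestion:
    qid: str                   # e.g., "DPIA-4" or "FRIA-Q34" (hypothetical IDs)
    assessment: str            # "DPIA" or "FRIA"
    category: str              # assessment category, e.g., "General Framework"
    focus_area: str            # focus area within the category
    text: str                  # the question itself
    respondent: set            # {"Deployer"}, {"Provider"}, or both
    feeds_into: list = field(default_factory=list)     # questions it informs (→)
    receives_from: list = field(default_factory=list)  # questions it draws on (←)

q4 = FrameworkQuestion(
    qid="DPIA-4",
    assessment="DPIA",
    category="General Framework",
    focus_area="Data, Procedures, and Support Elements",
    text="What types of personal data are processed?",
    respondent={"Deployer"},
    receives_from=["FRIA-Q34"],  # the FRIA answer supplies input here
)
print(q4.qid, "receives input from", q4.receives_from)
```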

3. Results

3.1. Components of the New DPIA—FRIA Framework

The first two components of the integrated framework are dedicated to the DPIA and the FRIA, respectively. The DPIA component encompasses 71 carefully crafted questions tailored to assess data protection implications, whereas the FRIA component presents 146 questions designed to scrutinize the broader societal impacts of AI systems on fundamental rights. The third component of the framework outlines the intersections between the DPIA and the FRIA, identifying correlations between questions across the two assessment methodologies. This analysis sheds light on the interconnected nature of data protection and fundamental rights considerations within the context of AI systems, fostering a holistic understanding of their ethical and regulatory implications.
Next, we present a description of the framework.

3.1.1. The DPIA Part of the Framework

The DPIA part is structured into different sections with questions and tasks for assessing and ensuring compliance with the GDPR. Four Assessment Categories are identified, comprising 71 questions (Figure 1). Further, each Assessment Category consists of Focus Areas. All questions in the DPIA should be answered by the Deployer (D).
Figure 1. Adapted/enriched based on [42].
The questions per category are provided below, indicating, in addition, whether the answer is connected with an FRIA question, i.e., whether it provides output (→) or receives input (←).

Assessment Category 1: General Framework

The general section aims to describe the corresponding processing activities, clarifies the responsibilities of data controllers and processors, and lists applicable standards and certifications relevant to the data processing. Additionally, it describes the personal data that are processed and the life cycle of the data, and identifies the elements that support data processing, such as systems and applications (Figure 2).
Figure 2. Adapted/enriched based on [42].
  • Overview
1. What is the processing under consideration? (D)
2. What are the responsibilities linked to data processing? Describe the responsibilities of the data controller and processor. (D), →FRIA: Q10, Q13
3. Are there standards applicable to the processing? List all relevant standards and references governing the processing that are mandatory for compliance. (D)
  • Data, Procedures, and Support Elements
4. What types of personal data are processed? (D), ←FRIA: Q34
5. What is the operational lifecycle for data and processes? (D)
6. What are the supporting elements for the data? (D)

Assessment Category 2: Core Principles

In the core principles section, we review compliance with Article 5 of the GDPR, assessing whether data processing is transparent, lawful, necessary, and meets data minimization requirements, as well as ensuring that personal data are kept accurate. We also outline the legal basis for processing, how data subjects are informed of the processing, and the ways they can exercise their rights (Figure 3).
Figure 3. Adapted/enriched based on [42].
  • Proportionality and Necessity
7.
Are the goals of the processing clear, explicit, and lawful? (D)
8.
What legal foundation justifies the processing? (D), FRIA: Q123
9.
Do you apply the principle of data minimization? (D), ←FRIA: Q35
10.
Are the data adequate and up-to-date? (D), ← FRIA: Q18
11.
What is the duration of data storage? (D)
  • Legal controls
12.
By what means are data subjects notified about the processing? (D), ← FRIA: Q106–Q108
13.
If applicable, how do you obtain the data subjects’ consent? (D)
14.
In what ways can data subjects exercise their rights to access and data portability? (D), FRIA:Q124.
15.
By what means can data subjects request correction or erasure of their personal data? (D), FRIA:Q124.
16.
By what means can data subjects exercise their rights to restriction and objection? (D), →FRIA: Q124.
17.
Are the responsibilities of the processing personnel clearly specified and contractually governed? (D), ← FRIA: Q52.
18.
When transferring data outside the EU, are the data protection measures sufficient? (D)

Assessment Category 3: Preventative and Mitigating Measures

The preventative and mitigation measures section examines measures (existing or planned) covering organizational control responsibilities, including policies for data protection, risk management, personal data breach management, measures related to the personnel, and third-party data access. Physical and logical security controls involve protecting data integrity and accessibility through various security protocols, risk avoidance strategies, and data management techniques (Figure 4).
Adapted/enriched based on [42].
  • Organizational control
19.
Is there a designated individual ensuring compliance with privacy regulations? Is there a dedicated oversight body (or equivalent) offering guidance and monitoring actions related to data protection? (D)
20.
Have you implemented a data protection policy? (D), ←FRIA: Q28
21.
Have you implemented a policy for risk controlling? (D), ←FRIA: Q28
22.
Have you implemented a policy for integrating privacy protection in all new projects? (D), ←FRIA: Q28
23.
Is there an operational entity capable of identifying and addressing incidents that could impact data subjects’ privacy and civil liberties? (D), ←FRIA: Q28
24.
Do you have a policy in place that outlines the procedures for educating new employees about data protection, and the measures taken to secure data when someone with data access leaves the company? (D), ←FRIA:Q28,Q84
25.
Is there a policy and procedures in place to minimize the risk to data subjects’ privacy and rights, when third parties have legitimate access to their personal data? (D), ←FRIA: Q28
26.
Does your organization have a policy and procedures in place to ensure the management and control of the personal data protection that it holds? (D), ←FRIA:Q28,Q115
  • Physical security control
27.
Have measures been put in place to minimize the likelihood and impact of risks affecting assets that handle personal data? (D), ←FRIA: Q28
28.
Have protective measures been established on workstations and servers to defend against malicious software when accessing untrusted networks? (D), ←FRIA: Q28
29.
Have safeguards been put in place on workstations, to mitigate the risk of software exploitation? (D), ←FRIA: Q28
30.
Have you implemented a backup policy? (D), ←FRIA: Q28
31.
Have you implemented ANSSI’s Recommendations for securing websites? (D)
32.
Have you implemented policies for the physical maintenance of hardware? (D), ←FRIA: Q28
33.
How do you ensure and document that your subcontractors provide sufficient guarantees in terms of data protection, particularly regarding data encryption, secure data transmission, and incident management, as required for a comprehensive DPIA? (D), ←FRIA: Q28, Q52
34.
How do you ensure network security? (D), ←FRIA: Q28, Q51
35.
Have you implemented policies for physical security? (D), ←FRIA: Q28
36.
How do you monitor network activity? (D), ←FRIA: Q28
37.
Which controls have you implemented for the physical security of servers and workstations? (D), ←FRIA: Q28
38.
How do you assess and document environmental risk factors, such as susceptibility to natural disasters and proximity to hazardous materials, in the chosen implantation area for a project to ensure safety and compliance with regulatory requirements? (D), ←FRIA: Q28
39.
Have you implemented processes for protection from non-human risk sources? (D), ←FRIA: Q28
  • Logical Security Control
40.
Have adequate measures been established to maintain the confidentiality of stored data along with a structured process for handling encryption keys? (D), ←FRIA: Q28
41.
Are there anonymization measures in place? If so, what methods are used, and what is their intended purpose? (D), ←FRIA: Q28, Q33, Q53
42.
Is data partitioning carried out or planned? (D), ←FRIA: Q28
43.
Have you implemented authentication means, which ones and which rules are applicable to passwords? (D), ←FRIA: Q31
44.
Have you implemented policies that define traceability and log management? (D), ←FRIA: Q28, Q32
45.
What are the archive management procedures under your responsibility, including delivery, storage, and access? (D), ←FRIA: Q28, Q30
46.
If paper documents containing data are used in the processing, what procedures are followed for printing, storage, disposal, and exchange? (D), ←FRIA: Q28
47.
Which methods have you implemented for personal data minimization? (D), ←FRIA: Q35, Q36

Assessment Category 4: Risks

Finally, the risks section aims to evaluate potential negative impacts on data subjects from threats like unauthorized access, data modification, or loss. It focuses on the severity and likelihood of these risks and outlines controls for addressing them; in our framework, we pay attention to the challenges posed by AI systems (Figure 5).
Adapted/enriched based on [42].
The same six questions are applied consistently across the four focus areas. The questions for each focus area are as follows:
Q48–53: Illegitimate access to data
Q54–59: Undesirable modification of data
Q60–65: Data loss
Q66–71: AI systems
In the case of ‘AI systems’ (Q66–71), the responses receive input from specific FRIA questions:
66.
What would be the primary impact on data subjects if the risk materialized? (D), ← FRIA:Q8, Q69–Q82, Q122–Q128
67.
What are the primary threats that could cause this risk to materialize? (D), ← FRIA Q9–Q16, Q18–Q37, Q44, Q48–Q50, Q52, Q64
68.
What are the risk sources? (D), ← FRIA Q9–Q37, Q44, Q48–Q53, Q56, Q59–Q68, Q83–Q146
69.
Which of the implemented or proposed controls help mitigate the risk? (D), ← FRIA Q9–Q37, Q44, Q48–Q53, Q59–Q68, Q83–Q121, Q129–Q146
70.
How is the level of risk determined, considering both its potential consequences and the effectiveness of existing mitigation measures? (D)
71.
How is the probability of the risk assessed, considering potential threats, risk origins, and mitigation measures? (D)
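Questions 70 and 71 ask how the level and the probability of each risk are determined. As a purely illustrative aid, the sketch below assumes a simple four-level severity and likelihood scale, combined pessimistically and lowered by the estimated effect of implemented controls; the scales and the combination rule are illustrative assumptions rather than something prescribed by the framework or by [42].

```python
# Illustrative sketch for Q70-Q71: the four-level scales and the
# max-based combination rule are assumptions, not part of the framework.
SCALE = {1: "negligible", 2: "limited", 3: "significant", 4: "maximum"}

def risk_level(severity: int, likelihood: int, mitigation_effect: int = 0) -> str:
    """Combine severity and likelihood (1-4) into a qualitative risk level,
    optionally reduced by the estimated effect of implemented controls."""
    if not (1 <= severity <= 4 and 1 <= likelihood <= 4):
        raise ValueError("severity and likelihood must be between 1 and 4")
    raw = max(severity, likelihood)              # pessimistic combination
    residual = max(1, raw - mitigation_effect)   # controls lower the level
    return SCALE[residual]

# Example: significant severity, limited likelihood, one effective control in place.
print(risk_level(severity=3, likelihood=2, mitigation_effect=1))  # "limited"
```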

3.1.2. The FRIA Part of the Framework

The FRIA part of the framework, toward providing a holistic approach to the impact of AI systems on fundamental rights, consists of 13 assessment categories, 31 focus areas and 146 questions. Each question indicates its respondent: Deployer (D), Provider (P), or Provider/Deployer (P/D). The questions per category are provided below, indicating, in addition, whether the answer is connected with a DPIA question, i.e., whether it provides output (→) or receives input (←).

Assessment Category 1: Purpose Assessment

The purpose assessment section is related to clarifying the motivation behind the idea of using an AI system, defining the problem it aims to solve and its objectives (Figure 6).
Adapted/enriched based on [44,47].
  • Rationale and problem statement
1.
Describe your plan for implementing the AI system. What specific issue is it designed to address? What is the underlying motivation or trigger for its use? (D)
  • Objective
2.
What goal is the AI system expected to accomplish? What is its main purpose, and what additional goals does it aim to fulfill? (D)
3.
What requirements does the Deployer have, and how will the system effectively satisfy them? (P/D)
4.
How effectively will the system meet the Deployer’s needs? (P)
5.
Have alternative non-automated processes been considered? (D)
6.
What impact would result from not implementing the system? (D)

Assessment Category 2: Public Values and Legal Basis

The public values and legal basis section seeks to identify the public values (e.g., equality, personal autonomy, solidarity, security, etc.) that are potentially affected by the use of the AI system, as well as to determine the legal basis for the overall process related to the use of the AI system (Figure 7).
Adapted/enriched based on [44,47].
  • Public values
7.
What societal principles justify the implementation of the system? If multiple values drive its adoption, can they be prioritized? (D)
8.
Which societal values could be negatively affected by the implementation of the system? (D), DPIA Q66
  • Legal basis
9.
On what legal grounds is the system’s implementation based, and what legitimizes the decisions it facilitates? (D), DPIA: Q67 or Q68 or Q69, depending on the answer.

Assessment Category 3: Stakeholders Engagement and Responsibilities

The stakeholders’ engagement and responsibilities section aims to establish accountability by setting and clarifying the roles and responsibilities among all parties involved in the system’s lifecycle. It also identifies stakeholders involved in consultation and clarifies ownership and management agreements if the AI system is externally developed.
Adapted/enriched based on [44,47].
  • Involved Parties and Roles:
10.
Which organizations and individuals contribute to the system’s creation, deployment, operation, and maintenance? Have clear responsibilities been assigned? (P/D), ← DPIA: Q2, DPIA: Q67 or Q68 or Q69, depending on the answer.
11.
Have the duties related to the system’s development and implementation been clearly defined? What measures ensure that these responsibilities remain transparent after deployment and during its operational phase? (P/D), DPIA: Q67 or Q68 or Q69, depending on the answer.
12.
Who is ultimately responsible for the AI system? (P/D),  DPIA: Q67 or Q68 or Q69, depending on the answer.
13.
Which internal and external stakeholders will be engaged during consultations? (P/D), ← DPIA: Q2, DPIA: Q67 or Q68 or Q69 depending on the answer.
14.
If the system was created by a third party, have definitive arrangements been made concerning its ownership and governance? What do these arrangements specify? (D),  DPIA: Q67 or Q68 or Q69, depending on the answer.

Assessment Category 4: Data Sources and Quality

The data sources and quality section focuses on the specifics of data selection and preparation for the AI system, including the sources and types of data, as well as the collection methods. It emphasizes the importance of ensuring and documenting data quality and reliability, managing data ownership and handling, addressing biases, and verifying the fairness of data representation through gender-based analysis (Figure 8). Many questions from this section relate to information that may only be available from the Provider of the system.
Adapted/enriched based on [44,47].
  • Data sources and quality
15.
What categories of data will be utilized as input for the system, and what are its sources? Which is the method of collection? (D), DPIA: Q67 or Q68 or Q69 depending on the answer.
16.
Does the data possess the necessary accuracy and trustworthiness for its intended application? (P/D), DPIA: Q67 or Q68 or Q69 depending on the answer.
17.
Are you planning to establish a process for recording how data quality issues were addressed during the design phase? Will this information be made publicly accessible? (P), DPIA: Q68 or Q69 depending on the answer.
18.
Will you establish a documented procedure to mitigate the risk of outdated or inaccurate data influencing automated decisions? Do you plan to make this information publicly accessible? (P/D), DPIA: Q10 and Q67 or Q68 or Q69 depending on the answer
19.
Who controls the data? (D), DPIA: Q67 or Q68 or Q69 depending on the answer.
20.
Will the system integrate data from various sources? (D), DPIA: Q67 or Q68 or Q69 depending on the answer.
21.
Who was responsible for gathering the data used to train the system? (P),  DPIA: Q67 or Q68 or Q69 depending on the answer.
22.
Who was responsible for gathering the input data utilized by the system? (D),  DPIA: Q67 or Q68 or Q69 depending on the answer.
23.
Does the system rely on analyzing unstructured data to produce recommendations or decisions? If so, what specific types of unstructured data are involved? (D),  DPIA: Q67 or Q68 or Q69 depending on the answer.
  • Bias/assumptions in the data
24.
What underlying presumptions and biases are present in the data? What measures are in place to detect, counteract, or reduce their impact on the system’s outcomes? (P),  DPIA: Q67 or Q68 or Q69 depending on the answer.
25.
If the system relies on training data, does it appropriately represent the environment in which it will be deployed? (P),  DPIA: Q67 or Q68 or Q69 depending on the answer.
26.
Will there be formal procedures in place to assess datasets for potential biases and unintended consequences? (P), DPIA: Q67 or Q68 or Q69 depending on the answer.
27.
Will you undertake a Gender Based Analysis Plus of the data? Will you be making this information publicly available? (P), DPIA: Q67 or Q68 or Q69 depending on the answer.

Assessment Category 5: System Specifications

The system specifications section deals with a detailed assessment of the AI system’s specifications, focusing primarily on the security and privacy mechanisms concerning the handling, access, and protection of data. It also explores the system’s functionalities, including automated data analysis, risk assessment, content generation, process optimization, and decision-making capabilities. It aims to establish the alignment of the system with existing standards, as well as with organizational responsibilities (Figure 9). Again, many questions from this section relate to information that may only be available from the Provider of the system.
Adapted/enriched based on [44,47].
  • Security and privacy
28.
Is the data within the system adequately protected? Consider security measures separately for both input and output data. (P/D), DPIA: Q20–Q30, Q32–Q42, Q44–Q46, Q67, or Q68, or Q69, depending on the answer.
29.
Is access to the system data supervised? (P/D), DPIA: Q67 or Q68 or Q69 depending on the answer.
30.
Does the system adhere to applicable regulations regarding data archiving? (P/D), DPIA: Q45, Q67 or Q68 or Q69 depending on the answer.
31.
Have proper controls been established to regulate access for authorized users or groups within the system? Will a procedure be implemented to authorize, monitor, and withdraw access privileges? (P/D), DPIA: Q43, Q67 or Q68 or Q69 depending on the answer.
32.
Is a logging system in place to track data access and usage, ensuring threat detection, prevention, and investigation within the organization? (P/D), DPIA: Q44, Q67, or Q68, or Q69, depending on the answer.
33.
Have adequate safeguards been implemented to protect the identity of system data, such as through anonymization or pseudonymization of personal information? (P/D), DPIA: Q41, Q67, or Q68, or Q69, depending on the answer.
34.
Will the Automated Decision System rely on personal data for its input? (D),  DPIA:Q4, Q67 or Q68 or Q69 depending on the answer.
35.
Have you confirmed that personal data are used solely for purposes directly linked to the delivery of the program or service? (D), DPIA: Q9, Q47, Q67, or Q68, or Q69, depending on the answer.
36.
Does the decision-making process utilize individuals’ personal data in a way that directly impacts them? (D), DPIA: Q47, Q67 or Q68, or Q69, depending on the answer.
37.
Have you ensured that the system’s use of personal data aligns with: (a) the existing Personal Information Banks (PIBs) and Privacy Impact Assessments (PIAs) for your programs, or (b) any planned or implemented updates to PIBs or PIAs that reflect new uses and processing methods? (P/D), DPIA: Q67 or Q68, or Q69, depending on the answer.
  • About The System
38.
Does the system support image and object recognition? (P)
39.
Does the system support text and speech analysis? (P)
40.
Does the system support risk assessment? (P)
41.
Does the system support content generation? (P)
42.
Does the system support process optimization and workflow automation? (P)
43.
Is the system planned to be fully or partially automated? Please explain how the system contributes to the decision-making process. (D)
44.
Will the system be responsible for making decisions or evaluations that require subjective judgment or discretion? (D), DPIA: Q67 or Q68, or Q69, depending on the answer.
45.
Can you explain the standards used to assess citizens’ data and the procedures implemented to process it? (D)
46.
Can you outline the system’s output and any essential details required to understand it in relation to the administrative decision-making process? (D)
47.
Will the system carry out an evaluation or task that would typically require human involvement? (D)
48.
Is the system utilized by a department other than the one responsible for its development? (P/D), DPIA: Q67 or Q68 or Q69, depending on the answer.
49.
If your system processes or generates personal information, will you perform or have you already performed a Privacy Impact Assessment, or revised a previous one? (P/D), DPIA: Q67 or Q68 or Q69, depending on the answer.
50.
Will you integrate security and privacy considerations into the design and development of your systems from the initial project phase? (P/D), DPIA: Q67 or Q68 or Q69, depending on the answer.
51.
Will the information be used within a standalone system with no connections to the Internet, intranet, or any other external systems? (P/D), DPIA: Q34, Q68, or Q69, depending on the answer.
52.
If personal information is being shared, have appropriate safeguards and agreements been implemented? (P/D), DPIA: Q17, Q33, Q67, or Q68, or Q69, depending on the answer.
53.
Will the system remove personal identifiers at any point during its lifecycle? (P/D), DPIA: Q41, Q68, or Q69, depending on the answer.

Assessment Category 6: Algorithm Specifications

The algorithm specifications section focuses on understanding the type and characteristics of the algorithm being used, including its learning capabilities, decision-making process, and how its design and expected performance align with the desired purpose. It also seeks to ensure that the algorithm’s outputs are accurate, reliably tested, and free from biases, with a strong emphasis on the transparency and clarity of its operations to both technical and non-technical stakeholders (Figure 10). Most of the questions in this section are addressed to the Provider of the system, who can supply the relevant information.
Adapted/enriched based on [44,47].
  • Algorithm type:
54.
What type of algorithm will be used (self-learning or non-self-learning)? (P)
55.
Why was this type of algorithm chosen? (P)
56.
Will the algorithm have any of the following traits: a) being considered a proprietary or confidential process, or b) being complex or challenging to interpret or clarify? (P), DPIA: Q68
57.
What makes this algorithm the most appropriate choice for meeting the goals outlined in question 2? (D)
58.
What other options exist, and why are they less suitable or effective? (D)
  • Algorithm accuracy:
59.
How accurate is the algorithm, and what evaluation criteria are used to determine its accuracy? (P), DPIA: Q68 or Q69, depending on the answer.
60.
Does the algorithm’s accuracy meet the standards required for its intended application? (D), DPIA: Q68 or Q69, depending on the answer.
61.
How is the algorithm tested? (D/P), DPIA: Q68 or Q69, depending on the answer.
62.
What steps can be implemented to prevent the replication or intensification of biases (e.g., different sampling strategy, feature modification, etc.)? (P), DPIA: Q68 or Q69, depending on the answer.
63.
What underlying assumptions were made in choosing and weighing the indicators? Are these assumptions valid? Why or why not? (P), DPIA: Q68 or Q69, depending on the answer.
64.
What is the error rate of the algorithm? For example, how many false positives or false negatives occur, or what is the R-squared value? (D/P), DPIA: Q67, Q68, or Q69, depending on the answer.
  • Transparency and explainability:
65.
Is it clear what the algorithm does, how it does this, and on what basis (what data) it does this? (D/P), DPIA: Q68 or Q69, depending on the answer.
66.
Which individuals or groups, both internal and external, will be informed about how the algorithm operates, and what methods will be used to achieve this transparency? (D/P), DPIA: Q68 or Q69, depending on the answer.
67.
For which target groups must the algorithm be explainable? (D/P), DPIA: Q68 or Q69, depending on the answer.
68.
Can the algorithm’s operation be described clearly enough for the target groups mentioned in Question 67? (D/P), DPIA: Q68 or Q69, depending on the answer.

Assessment Category 7: Effects of the AI System

The effects of the AI system section evaluates the potential social impact, ethical considerations, risks, and long-term effects of deploying it, including its alignment with societal values, potential for discrimination, implications for vulnerable groups, decision stakes, impact on employees, accessibility, need for new policies, reversibility, and severity of consequences.
Adapted/enriched based on [44,47].
  • Impact Assessment
69.
What impact will the AI system have on citizens, and how will the “human measure” factor into decision-making? (D), DPIA: Q66
70.
What risks might arise, such as stigmatization, discrimination, or other harmful outcomes, and what measures will be taken to address or reduce them? (D), DPIA: Q66
71.
How will the anticipated outcomes help resolve the original issue that led to the AI’s creation and deployment, and how will they support achieving the intended goals? (D), DPIA: Q66
72.
In what ways do the anticipated effects align with the underlying values, and how will the potential for undermining those values be managed? (D), DPIA: Q66
73.
Is the project in a field that attracts significant public attention, such as privacy issues or is it frequently challenged through legal actions? (D), DPIA: Q66
74.
Are citizens in this line of business particularly vulnerable? (D), DPIA: Q66
75.
Are the stakes of the decisions very high? (D), DPIA: Q66
76.
Will this project significantly affect staff levels or the nature of their responsibilities? (D), DPIA: Q66
77.
Will the implementation of the system introduce new obstacles or intensify existing challenges for individuals with disabilities? (D), DPIA: Q66
78.
Does the project require obtaining new policy authority? (D), DPIA: Q66
79.
Are the consequences of the decision reversible? (D), DPIA: Q66
80.
How long will the impact from the decision last? (D), DPIA: Q66

Assessment Category 8: Procedural Fairness and Governance

The procedural fairness and governance section focuses on the procedural fairness and governance of the system, examining the decision-making procedures, human oversight, user feedback mechanisms, audit and documentation trails, change management, as well as the system’s transparency and explainability (Figure 11). In essence, this section aims to assess how decisions are made and recorded and the system’s ability to justify its outputs.
Adapted/enriched based on [44,47].
  • Decision process and human role
81.
What is done with the AI system’s results, and what decisions rely on these outcomes? (D), DPIA: Q66
82.
Does the decision pertain to any of the categories below: health related services, economic interests, social assistance, access and mobility, licensing and issuance of permits, and employment? (D), DPIA: Q66
83.
What is the human role in decisions made using the system’s output, and how can individuals make responsible choices based on that output? (D),  DPIA: Q68 or Q69, depending on the answer.
84.
Are there enough trained personnel available to oversee, evaluate, and refine the AI system as required, both now and in the future? (D), DPIA: Q24, Q68, or Q69, depending on the answer.
  • Feedback mechanism
85.
Will a feedback collection mechanism be implemented for system users? (D), DPIA: Q68 or Q69, depending on the answer.
  • Appeal process
86.
Will a formal process be in place for citizens to appeal and challenge decisions made by the system? (D), DPIA: Q68 or Q69, depending on the answer.
  • Audit Trails and Documentation:
87.
Does the audit trail inform the authority or delegated authority as specified by the applicable legislation? (D), DPIA: Q68 or Q69, depending on the answer.
88.
Will the system generate an audit trail documenting every recommendation or decision it produces? (D), DPIA: Q68 or Q69, depending on the answer.
89.
Will the audit trail clearly indicate all critical decision points? (D), DPIA: Q68 or Q69 depending on the answer.
90.
What procedures will be in place to ensure that the audit trail remains accurate and tamper-proof? (D), DPIA: Q68 or Q69, depending on the answer.
91.
Could the audit trail produced by the system support the generation of a decision notification? (D), DPIA: Q68 or Q69, depending on the answer.
92.
Will the audit trail specify the exact system version responsible for each supported decision? (D), DPIA: Q68 or Q69, depending on the answer.
93.
Will the audit trail identify the authorized individual responsible for the decision? (D), DPIA: Q68 or Q69, depending on the answer.
  • Accountability
94.
What processes have been established for making decisions using the AI system? (D), DPIA: Q68 or Q69, depending on the answer.
95.
How are the different stakeholders incorporated into the decision-making process? (D), DPIA: Q68 or Q69, depending on the answer.
96.
What measures are in place to ensure that decision-making procedures adhere to principles of good governance, good administration and legal protection? (D), DPIA: Q68 or Q69, depending on the answer.
  • Human oversight process
97.
Will the system include an option for human intervention to override its decisions? (D), DPIA: Q68 or Q69 depending on the answer.
98.
Will there be a procedure to track instances where overrides occur? (D), DPIA: Q68 or Q69 depending on the answer.
  • System logic and change management
99.
Will all critical decision points in the automated logic be tied to relevant laws, policies, or procedures? (D), DPIA: Q68 or Q69 depending on the answer.
100.
Will you keep a detailed record of every change made to the model and the system? (D), DPIA: Q68 or Q69 depending on the answer.
101.
Will the audit trail contain change control processes to document adjustments to how the system functions or performs? (D), DPIA: Q68 or Q69 depending on the answer.
  • Transparency and explainability
102.
Will the system have the capability to provide explanations for its decisions or recommendations upon request? (D), DPIA: Q68 or Q69 depending on the answer.

Assessment Category 9: Contextual Factors

The contextual factors section assesses how factors such as time, geographic area, and changes in context affect the applicability of the system, highlighting that the underlying assumptions may no longer hold if the context changes.
Adapted/enriched based on [44,47].
  • Context:
103.
When will the AI system be operational, and for how long will it remain in use? (D), DPIA Q68 or Q69 depending on the answer.
104.
In what specific locations or circumstances will the AI system be applied? Does it target a particular geographic region, group of people, or cases? (D), DPIA Q68 or Q69 depending on the answer.
105.
Will the algorithm remain functional if the context shifts or if it is deployed in a different environment than initially intended? (P), DPIA Q68 or Q69 depending on the answer.

Assessment Category 10: Communication Strategy

The communication strategy section focuses on evaluating the communication strategy regarding an AI system’s deployment, specifically looking at the level of transparency, the methods that will be used to inform stakeholders about its use, and the clarity and accuracy of any visual representations of the system’s outputs to ensure they are understandable to diverse audiences.
Adapted/enriched based on [44,47].
  • Information dissemination
106.
To what extent can the operation of the AI system be made transparent, given its objectives and deployment context? (D), DPIA: Q12, Q68, or Q69, depending on the answer.
107.
What approach will you take to communicate about the AI system’s use? (D), DPIA: Q12, Q68, or Q69, depending on the answer.
108.
Will the system’s output be displayed in a visual format? If so, does the visualization accurately reflect the system’s results and provide clarity for different user groups? (D), DPIA: Q12, Q68, or Q69, depending on the answer.

Assessment Category 11: Evaluation, Auditing and Safeguarding

The evaluation, auditing and safeguarding section encompasses a comprehensive evaluation of the mechanisms and strategies in place to ensure that the AI system is responsibly deployed and maintained. It scrutinizes the tools and processes for continuous auditing and the adequacy of the organization’s resources to oversee the system’s performance. Additionally, it addresses how the system’s integrity, fairness, and effectiveness are safeguarded, the capacity to align algorithmic decisions with their intended objectives despite contextual changes, and the transparency in communicating audit frequency and outcomes to stakeholders.
Adapted/enriched based on [44,47].
  • Accountability and review
109.
Have suitable tools been made available for the evaluation, audit, and protection of the AI system? (D), DPIA: Q68 or Q69 depending on the answer.
110.
Are there sufficient mechanisms to ensure the system’s operation and outcomes remain accountable and transparent? (D), DPIA: Q68 or Q69, depending on the answer.
111.
What options exist for auditors and regulators to enforce formal actions related to the government’s use of the AI system? (D), DPIA: Q68 or Q69, depending on the answer.
112.
How frequently and at what intervals should the AI system’s use be reviewed? Does the organization have the appropriate personnel in place to carry out these evaluations? (D), DPIA: Q68 or Q69, depending on the answer.
113.
What processes can be implemented to ensure the system remains relevant and effective in the future? (D), DPIA: Q68 or Q69, depending on the answer.
114.
How can validation tools be implemented to confirm that decisions and actions continue to align with the system’s purpose and objectives, even as the application context evolves? (D), DPIA: Q68 or Q69, depending on the answer.
115.
What safeguards are in place to maintain the system’s integrity, fairness, and effectiveness throughout its lifecycle? (D), DPIA: Q68 or Q69, depending on the answer.
116.
Have the human capital and infrastructure requirements specified in Q72 and Q73 been fulfilled? (D), DPIA: Q68 or Q69, depending on the answer.
117.
For self-learning algorithms, have monitoring processes and systems been established (for example, concerning data drift, concept drift and accuracy)? (D), DPIA: Q68 or Q69, depending on the answer.
118.
Are there adequate means to modify the system or change how it is used if it no longer serves its original purpose or goals? (D), DPIA: Q68 or Q69, depending on the answer.
119.
Is there an external auditing and oversight mechanism in place? (D), DPIA: Q68 or Q69, depending on the answer.
120.
Is sufficient information provided about the system to enable external supervision? (D), DPIA: Q68 or Q69, depending on the answer.
121.
Is the audit frequency and practice clearly communicated? (D), DPIA: Q68 or Q69, depending on the answer.

Assessment Category 12: Core Human Rights

The core human rights section forms the means for assessing an AI system’s alignment with fundamental rights, evaluating its legal compliance, the extent of the impact on rights, and the balance between its objectives and the potential infringement of rights. The corresponding questions are designed to determine whether the system affects fundamental rights and to guide a structured conversation on whether such impacts can be prevented or mitigated, and under what circumstances any residual interference with these rights might be deemed acceptable.
The FRAIA tool used by the Dutch government outlines, in its Annex [44], a range of rights based on the Ethics Guidelines for Trustworthy AI by the European Commission [25] and the EU Charter [26]. These include rights tied to individual freedoms, equality, and procedural safeguards (e.g., bans on body searches, the right to leave the country, and the right to government-funded legal aid). This results in a comprehensive set of rights that must be evaluated for every high-risk AI system.
  • Fundamental rights
122.
Does the system impact any human rights? (D),  DPIA: Q66
  • Applicable laws and standards:
123.
Are there particular legal regulations or standards that address violations of human rights? (D), ←DPIA: Q8, DPIA: Q66
  • Defining seriousness:
124.
To what extent does the system impact human rights? (D), ←DPIA: Q14–Q16, DPIA: Q66
  • Objective
125.
Which objectives are pursued by using the system? (D), DPIA: Q66
  • Efficacy
126.
Is the system that is to be used a suitable means to realize the set objectives? (D), DPIA: Q66
  • Necessity and subsidiarity
127.
Is using this specific system necessary to achieve this objective, and are there no other or mitigating measures available to do so? (D), DPIA: Q66
  • Balancing interests/proportionality
128.
Does the use of the system result in a reasonable balance between the objectives pursued and the fundamental rights that will be infringed? (D), DPIA: Q66

Assessment Category 13: Preventative and Mitigating Measures

Finally, the preventative and mitigating measures section outlines various preventative and mitigating strategies to address potential issues in the development and use of the system, particularly concerning their impact on fundamental rights (Figure 12). The corresponding stakeholders for addressing this subsection could be either the Deployer or the Provider.
Adapted/enriched based on [44,47].
  • During the development and implementation of the AI system
129.
Have actions been taken to reduce bias risks through instance class adjustment, instance filtering, or instance weighting? (P), DPIA: Q68 or Q69, depending on the answer.
130.
Have you employed tools like “gender tagging” to highlight how protected personal characteristics influence the system’s operation? (P), DPIA: Q68 or Q69, depending on the answer.
131.
Is the developed system “fairness-aware”? (P), DPIA: Q68 or Q69, depending on the answer.
132.
Have “in-processing” or “post-processing” methods of “bias mitigation” been implemented? (P), DPIA: Q68 or Q69, depending on the answer.
133.
Have mechanisms for “ethics by design”, “equality by design”, and “security by design” been implemented? (P), DPIA: Q68 or Q69, depending on the answer.
134.
Have “Testing and validation” been performed? (P), DPIA: Q68 or Q69, depending on the answer.
135.
Have methods to enhance the system’s openness and interpretability, such as a “crystal box” or “multi-stage” approach, been implemented? (P), DPIA: Q68 or Q69, depending on the answer.
  • Administrative measures
136.
Did the initial development involve a trial phase using a limited dataset and restricted access, before gradually expanding upon successful results? (P), DPIA: Q68 or Q69, depending on the answer.
137.
Have requirements been set for regular reporting and auditing to periodically review the system’s effects and its impact on human rights? (P/D), DPIA: Q68 or Q69, depending on the answer.
138.
Have you adopted a ban or moratorium on specific uses of the system? (P/D), DPIA: Q68 or Q69, depending on the answer.
139.
Have you formulated upper limits for the use of specific AI systems? (P/D), DPIA: Q68 or Q69, depending on the answer.
140.
Have you established “codes of conduct”, “professional guidelines”, or “ethical standards” for stakeholders who will operate the system? (P/D), DPIA: Q68 or Q69, depending on the answer.
141.
Have you established a formal ethical commitment for the system’s Providers and Deployers? (P/D), DPIA: Q68 or Q69, depending on the answer.
142.
Do you provide education/training on data-ethics awareness? (P/D), DPIA: Q68 or Q69, depending on the answer.
143.
Are systems normalized, accredited and certified according to standards for credible performance? (P), DPIA: Q68 or Q69, depending on the answer.
144.
Are checklists available for decision-making on the basis of the system’s output? (D), DPIA: Q68 or Q69 depending on the answer.
145.
How do you ensure that affected citizens have opportunities to engage and participate? (D), DPIA: Q68 or Q69, depending on the answer.
146.
Have you created a strategy to phase out the system if it is determined to be no longer appropriate? (D), DPIA: Q68 or Q69, depending on the answer.

3.2. Synergies Between DPIA and FRIA

In the process of developing the unified DPIA–FRIA framework, it became evident that several interconnected questions feed into each other, i.e., the answer to a question in the FRIA section may directly determine the answer to a question in the DPIA framework (and vice versa). This observation highlights the importance of a holistic approach in assessing the impact of AI systems. It is crucial to note, though, that this correlation is not absolute; the Deployer (typically being the data controller under the GDPR’s provisions) needs to consider such correlations and further enhance/enrich the output they provide, according to the specific context of each application.
Generally, correlations between DPIA and FRIA can be identified as follows:
  • Stakeholder responsibilities: The responsibilities outlined in the DPIA concerning data processing activities contribute to the corresponding FRIA questions that address the allocation of responsibility and accountability throughout the system’s design, development, operational use, and ongoing maintenance.
  • Personal data processing: Understanding whether and which personal information the AI system utilizes provides essential details for the DPIA, enabling a clear definition of the scope and nature of personal data processing. Therefore, the relevant FRIA questions supply valuable information to the DPIA question concerning the processing of personal data.
  • Legal basis: The DPIA questions that identify the lawful basis for data processing are related to the corresponding FRIA questions regarding specific legislation and standards.
  • Data security and privacy: The questions concerning data security and privacy in the FRIA can offer valuable insights into the DPIA regarding the preventative and mitigating measures needed to safeguard personal data. For instance, the implementation of encryption, anonymization, traceability, and logical access control mitigates risks in both the DPIA and FRIA contexts.
  • Contracts/agreements: The DPIA question regarding contracts, which seeks to understand how subcontractors are contractually bound and managed in terms of data protection practices, is relevant to the corresponding FRIA question concerning the existence of agreements or arrangements with appropriate safeguards when personal data are being processed.
  • Network security: Network security questions within the FRIA part offer vital insights into the relevant queries in the DPIA part, particularly concerning network security practices and their effectiveness.
  • Data anonymization and minimization: The answers to the corresponding FRIA questions may provide essential input to the DPIA framework for identifying, categorizing, and addressing risks, threats, and mitigation measures.
  • Logical access control: The FRIA questions regarding who has access to the system and how access rights are granted, monitored, and revoked provide input to the relevant DPIA questions for understanding the potential risks and vulnerabilities associated with unauthorized access or misuse of personal data.
  • Effects of the AI system: The FRIA questions pertaining to the roadmap for protecting fundamental rights, mitigation measures, data quality, stakeholder responsibilities, security, privacy, and system accuracy significantly affect the responses to specific DPIA questions concerning the potential threats, sources of risk, and the necessary measures to be adopted.
From the aforementioned analysis, it becomes evident that there is a close relationship between FRIA and DPIA that should be utilized for a holistic approach that covers both these instruments. The following table (Table 3) provides specific information on which questions of the DPIA framework are related to which questions of the FRIA framework and how (→ means that it provides input).
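The correlations summarized in Table 3 also lend themselves to automation. As a minimal sketch, the mapping could be stored as a directed dictionary and queried to determine which DPIA answers should be revisited when a FRIA answer changes; the dictionary below reproduces only a few of the links listed in Section 3.1, and the function name is an illustrative assumption rather than part of the framework.

```python
# Illustrative excerpt of the FRIA → DPIA cross-reference map
# (only three of the links listed in Section 3.1 are shown).
fria_to_dpia = {
    "FRIA-Q34": ["DPIA-Q4"],               # personal data as system input
    "FRIA-Q35": ["DPIA-Q9", "DPIA-Q47"],   # purpose-bound use feeds data minimization
    "FRIA-Q52": ["DPIA-Q17", "DPIA-Q33"],  # safeguards in data-sharing agreements
}

def dpia_questions_to_revisit(changed_fria_answers):
    """Given FRIA questions whose answers changed, list the DPIA questions to re-check."""
    to_revisit = set()
    for fria_q in changed_fria_answers:
        to_revisit.update(fria_to_dpia.get(fria_q, []))
    return sorted(to_revisit)

print(dpia_questions_to_revisit(["FRIA-Q34", "FRIA-Q52"]))
# ['DPIA-Q17', 'DPIA-Q33', 'DPIA-Q4']
```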

4. Case Study

4.1. The Dutch Childcare Benefits Scandal

To contextualize the theoretical model and evaluate the framework’s operational effectiveness, a real-world case study will be examined: the Dutch Childcare Benefits Scandal. This incident has been selected due to its profound implications for FRs and its exemplary demonstration of the complexities inherent in ADM systems. The case serves as an ideal test for the integrated DPIA and FRIA framework, offering rich insights into its capability to detect, analyze, and propose mitigations for the potential harms posed by ADM systems.
The Dutch Childcare Benefits Scandal, also known as the “Toeslagenaffaire” [48], was a significant event that revealed serious issues within the Dutch Tax and Customs Administration, particularly around the use of algorithms that led to widespread institutional racism. The scandal had far-reaching consequences, leading to the resignation of the Dutch Government in 2021. It was characterized by the Tax and Customs Administration’s use of algorithms to create risk profiles for childcare benefits applicants, where “foreign-sounding names” and “dual nationality” were unjustly used as indicators of potential fraud. This practice resulted in thousands of families, reaching approximately 35,000, particularly from racialized low- and middle-income groups, being falsely accused of fraud and forced into financial hardship, including debt, poverty, job loss, and even homelessness. The situation was exacerbated by a lack of transparency and accountability in the decision-making process, with those affected having no recourse to challenge the decisions made against them.
Key aspects of the scandal include:
  • Racial profiling and discrimination: The risk-scoring algorithm employed by the Dutch Tax Administration inherently discriminated against people based on nationality and ethnicity, marking non-Dutch nationals and those with dual citizenship for harsher scrutiny. This led to significant financial and personal distress for affected families, many of whom belonged to minority groups.
  • Consequences for families: The scandal had devastating impacts on the lives of thousands of families. Many experienced debt, unemployment, evictions, mental health issues, and even family breakdowns due to the false fraud accusations and the subsequent financial burdens placed upon them.
  • Political and institutional response: The Dutch Government’s admission that institutional racism within the Dutch Tax and Customs Administration was the root cause of the scandal marked a pivotal moment in acknowledging the serious issues within the ADM systems used. It highlighted the need for systemic change to prevent such injustices from occurring in the future.
  • Calls for reform: Organizations like Amnesty International have described the scandal as a wake-up call, urging an immediate ban on the use of discriminatory nationality and ethnicity data in risk-scoring for law enforcement and public service delivery. They advocate for a framework that ensures ADM systems are transparent, accountable, and safeguard human rights.
The report “Xenophobic Machines: Discrimination Through Unregulated Use of Algorithms in the Dutch Childcare Benefits Scandal” [48] by Amnesty International meticulously examines the flawed application of algorithmic decision-making by the Dutch tax authorities, which precipitated a significant discrimination issue and led to the Dutch Childcare Benefits Scandal. The report discusses the growing deployment of algorithmic systems by governments for automating tasks, with a notable focus on fraud prevention. It particularly critiques the Dutch Childcare Benefits scheme, where the Dutch tax authorities implemented an algorithmic self-learning system, the “risk classification model”, in 2013 to detect fraud. This system used various factors, including nationality, to assign risk scores to applications, unfairly flagging non-Dutch applicants for further scrutiny and accusations of fraud. Furthermore, the risk-detection algorithm scrutinized the social security documents submitted by parents for childcare allowances for any minor errors, leading to the automatic cessation of allowances and the demand for repayment of previously received aid, without applying the principle of proportionality. In addition, the responsible civil servants were given no information as to why the system had given the application a high-risk score, i.e., the ADM system was a black box. This process, particularly the emphasis on nationality as a significant factor for risk assessment, has raised concerns regarding discrimination and bias in the automated decision-making process.
When assigned high-risk scores by the risk classification model, parents and caregivers were often required by government officials to submit additional documentation to validate their claims for benefits. However, attempts to query which aspects of their applications were flagged as inaccurate or to understand the nature of any missing documentation frequently led to a lack of response; the tax authorities systematically abstained from providing the necessary clarifications. Consequently, for an extended period, individuals implicated found it virtually impossible to contest their alleged misrepresentations. Moreover, acquiring detailed insights into the operation and foundation of the risk classification model proved to be a challenge not only for the affected families, but also for journalists, policymakers, regulatory entities, and members of civil society.
It should also be noted that the Dutch Data Protection Authority fined the Dutch tax authorities EUR 2.75 million for unlawfully processing personal data, particularly dual nationality, and highlighted several GDPR violations due to the improper use of personal data over many years.
The critical issues identified with the deployment and use of this ADM system can be summarized as follows:
  • Inefficiency in achieving the objectives of the deployment and use.
  • Discrimination–racial profiling, inadequate data source, and quality validation.
  • Black box systems, self-learning algorithms in the public sector, and lack of human supervision in the decision-making process.
  • Lack of transparency and explainability.
  • Lack of FRIA.
  • Lack of consultations with stakeholders and accountability.
  • Lack of effective remedies and redress.
  • Lack of review and auditing.
  • GDPR violations.

4.2. Theoretical Application of the Integrated Framework

The integrated DPIA–FRIA framework presented in Section 3 would be a crucial step in identifying and mitigating potential FR issues in a timely and structured manner prior to the deployment of the ADM system involved in the Dutch childcare benefits scandal. In providing a general justification for the integrated DPIA and FRIA framework without exhaustively analyzing each question, it is important to emphasize that this approach strategically evaluates ADM systems to prevent issues like those seen in the Dutch childcare scandal. By considering legal compliance, stakeholder values, data integrity, and FRs holistically, the framework identifies potential risks and implements safeguards across all stages of system development and deployment.
The integrated framework emphasizes the alignment of AI/ADM systems with their intended objectives and legal bases, ensuring efficiency through stakeholder consultations and rigorous impact assessments. FRIA Q1, Q9, and Q118 identify these issues.
Through rigorous examination of data sources and quality with assessments of built-in biases and assumptions, the framework aims to prevent discrimination, including racial profiling. It includes measures to rectify data biases and ensure that algorithms are as fair and objective as possible. FRIA Q18, Q21, Q24, and Q26 identify these issues.
Addressing the opaqueness of AI/ADM systems and algorithm specifications, the framework mandates clear documentation of algorithms and decision-making processes, enhancing transparency and explainability. This ensures that stakeholders understand how decisions are made, enabling better oversight and trust in the system. FRIA Q54, Q65, Q68, Q88, Q96, and Q102 identify these issues.
An FRIA component within the integrated framework assesses potential FR impact, engaging with a broad range of stakeholders to identify and mitigate risks. FRIA Q124 and Q128 identify these issues.
The framework outlines mechanisms for human oversight, clear lines of accountability, and effective remedies, ensuring that decisions are reviewable, and citizens can make official complaints and seek redress. Regular audits and reviews guarantee ongoing compliance and responsiveness to emerging issues. FRIA Q10, Q13, Q44, Q83, Q86, Q97, Q109, and Q115 identify these issues.
Security and privacy are foundational, with the framework demanding strict controls and assessments to safeguard personal data. The integrated DPIA–FRIA framework would have identified GDPR violations, pinpointing areas where data processing did not comply with the regulation’s strict principles, i.e., proportionality and necessity, data minimization, legal control, rights of data subjects, etc. FRIA Q12, Q13, Q47, Q33, and Q35 identify these issues.
Impact assessments further illustrate the AI/ADM system’s effects on public values and fundamental rights, guiding the development of preventative and mitigating measures. FRIA Q8, Q133, Q135, Q145, and Q146 identify these issues.
This integrated DPIA and FRIA approach, with its emphasis on legal compliance, stakeholder engagement, data integrity, transparency, and human rights, offers a robust methodology for identifying and resolving the complex issues highlighted by the Dutch childcare benefits scandal. Through continuous monitoring, review, and adaptation, such a framework can prevent similar failures in future ADM deployments, ensuring that systems serve their intended purposes without compromising individual rights or public trust. It is obvious that if the integrated DPIA–FRIA framework had been applied before the use of the ADM system by the Dutch Tax and Customs Administration, it would have identified and mitigated the critical issues, or potentially, the ADM system use would have been banned due to the identified risks.
It is important to note that the Dutch House of Representatives, on 8 April 2022, endorsed the mandatory use of the Human Rights and Algorithms Impact Assessment (IAMA). This decision was part of an effort to ensure responsible and informed decision-making in the use of algorithms, aiming to prevent violations of human rights and address issues highlighted by the ‘toeslagenaffaire’ [49].

5. Discussion

Our work focused explicitly on existing DPIA and FRIA frameworks, with the ultimate goal of providing a unified framework covering both these tools; revisiting our research questions stated in Section 1.2, the following conclusions are derived:
Q1.
The DPIA (in the AI domain) and the FRIA both aim to proactively identify and mitigate risks to human rights. The DPIA is focused on data protection and privacy, leveraging a structured, regulatory-driven methodology, whereas the FRIA broadens the scope to encompass a range of fundamental rights beyond privacy, adopting a more principle-based approach. On the other hand, the DPIA may be obligatory for processing operations that do not involve AI systems.
Q2.
Merging DPIA and FRIA methodologies into a unified framework involves harmonizing their structured and principle-based approaches to ensure comprehensive risk evaluation. Such an integrated framework needs to consider their complementary strengths as well as their strong interconnections, facilitating a thorough assessment that encompasses both data protection as well as other fundamental rights. In any case, key elements include a detailed analysis of the AI system’s impact on fundamental rights, a clear documentation process, and mechanisms for continuous monitoring and updating in response to new insights or system changes.
Q3.
Stakeholders in the EU, by adopting such a framework, would benefit from several advantages, including a comprehensive approach to risk assessment that aligns simultaneously with more than one regulatory requirement and ethical standards, potentially enhancing public trust in AI systems. The integrated framework addresses compliance with the GDPR and the AI Act by providing a comprehensive tool for evaluating both data protection and fundamental rights impacts, thereby promoting ethical AI system utilization. This approach ensures that AI systems are developed and deployed in a manner consistent with EU regulatory frameworks and ethical considerations. However, obstacles such as the complexity of integrating methodologies, resource constraints, and the need for specialized knowledge to apply the framework effectively could arise; as it became evident, the role of the Provider of the system is also crucial in conducting a proper assessment.
Q4.
By fostering a comprehensive evaluation of transparency, accountability, and impacts on fundamental rights, including data protection and privacy, the proposed framework aims to contribute significantly to enhancing public confidence in AI technologies. Therefore, it may encourage the development of AI systems that are not only compliant with legal standards, but also aligned with ethical principles, thereby supporting the responsible use of AI across diverse industries. Making the output of the FRIA/DPIA publicly available would increase overall transparency, as well as users’ trust, while at the same time obligating the relevant stakeholders to actually implement what they have committed to.

6. Conclusions

This paper focused on a well-known challenging problem arising from the recent AI Act in Europe, namely the interplay between the AI Act and the GDPR, with emphasis on the relationship between the FRIA and the DPIA (which are both accountability tools provisioned in these legal instruments). To this end, based on existing tools which support either DPIA or FRIA conduction, we provide a new unified framework that aims to facilitate stakeholders in conducting both DPIA and FRIA simultaneously and effectively. In the process, we identified several specific interconnections between these tools, as well as the need to involve the AI system’s Provider in the process when the Provider is the only party holding crucial information on the system.
Moreover, toward illustrating the applicability and effectiveness of the proposed framework, we examined how it could be used in a realistic scenario. As shown in Section 4, critical failures related to the design of the AI system could be identified before the deployment of the system, thus allowing the proper corrective measures to proceed so as to ensure that fundamental rights will not be violated.
This coherent framework reveals the strong relationships between FRIA and DPIA. Therefore, our recommendation is that the relevant stakeholders should put effort into promoting such a unified approach, to facilitate the proper conduct of both DPIA and FRIA; the role of AI system Providers is also important, independently of whether they are data controllers or not. Additionally, from our point of view, this aspect illustrates that national legislators should designate data protection authorities, which are already competent for checking DPIAs according to the GDPR, as the competent Market Surveillance Authorities for the AI Act in cases where personal data are being processed, i.e., we support the relevant European Data Protection Board’s statement [50].
Of course, this framework can form the basis for further development and improvement, possibly towards a software tool with appropriate automation (and we set this as a possible future research step). In any case, we hope that this work will be helpful in the public discussion concerning interplays between the AI Act and the GDPR, facilitating the further development of powerful and practical accountability tools.

Author Contributions

Conceptualization, K.L.; methodology, A.T. and K.L.; validation, A.T. and K.L.; investigation, A.T.; implementation, A.T.; writing—original draft preparation, A.T. and K.L.; writing—review and editing, A.T. and K.L.; supervision, K.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data is contained within the article.

Acknowledgments

The authors would like to thank the anonymous reviewers for their very useful comments and suggestions, which helped to greatly improve the paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Aggarwal, K.; Mijwil, M.M.; Al-Mistarehi, A.-H.; Alomari, S.; Gök, M.; Alaabdin, A.M.Z.; Abdulrhman, S.H. Has the Future Started? The Current Growth of Artificial Intelligence, Machine Learning, and Deep Learning. Iraqi J. Comput. Sci. Math. 2022, 3, 115–123. [Google Scholar] [CrossRef]
  2. Chugh, R. Impact of Artificial Intelligence on the Global Economy: Analysis of Effects and Consequences. Int. J. Soc. Sci. Econ. Res. 2023, 8, 1377–1385. [Google Scholar] [CrossRef]
  3. Bezboruah, T.; Bora, A. Artificial Intelligence: The Technology, Challenges and Applications. Trans. Eng. Comput. Sci. 2020, 8, 44–51. [Google Scholar] [CrossRef]
  4. Poalelungi, D.G.; Musat, C.L.; Fulga, A.; Neagu, M.; Neagu, A.I.; Piraianu, A.I.; Fulga, I. Advancing Patient Care: How Artificial Intelligence Is Transforming Healthcare. J. Pers. Med. 2023, 13, 1214. [Google Scholar] [CrossRef] [PubMed]
  5. Zamponi, M.E.; Barbierato, E. The Dual Role of Artificial Intelligence in Developing Smart Cities. Smart Cities 2022, 5, 728–755. [Google Scholar] [CrossRef]
  6. Khurana, D.; Koli, A.; Khatter, K.; Singh, S. Natural Language Processing: State of the Art, Current Trends and Challenges. Multimed. Tools Appl. 2023, 82, 3713–3744. [Google Scholar] [CrossRef] [PubMed]
  7. Etzioni, A.; Etzioni, O. Incorporating Ethics into Artificial Intelligence. J. Ethics 2017, 21, 403–418. [Google Scholar] [CrossRef]
  8. Mökander, J.; Morley, J.; Taddeo, M.; Floridi, L. Ethics-Based Auditing of Automated Decision-Making Systems: Nature, Scope, and Limitations. Sci. Eng. Ethics 2021, 27, 44. [Google Scholar] [CrossRef]
  9. Selbst, A.D. An Institutional View of Algorithmic Impact Assessments. Harv. J. Law Technol. 2021, 35, 117–191. [Google Scholar]
  10. Papernot, N.; McDaniel, P.; Wu, X.; Jha, S.; Swami, A. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks. In Proceedings of the 2016 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 23–25 May 2016; pp. 582–597. [Google Scholar]
  11. European Union. Regulation—EU—2024/1689—EN—EUR-Lex. Available online: https://eur-lex.europa.eu/eli/reg/2024/1689/oj (accessed on 16 July 2024).
  12. Heikkilä, M. Five Things You Need to Know About the EU’s New AI Act. Available online: https://www.technologyreview.com/2023/12/11/1084942/five-things-you-need-to-know-about-the-eus-new-ai-act/ (accessed on 27 March 2024).
  13. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA Relevance); 2016; Volume 119. Available online: https://eur-lex.europa.eu/eli/reg/2016/679/oj/eng (accessed on 17 December 2024).
  14. Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions Artificial Intelligence for Europe; 2018. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52018DC0237 (accessed on 17 December 2024).
  15. European Parliament Legislative Resolution of 13 March 2024 on the Proposal for a Regulation of the European Parliament and of the Council on Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (COM(2021)0206—C9-0146/2021—2021/0106(COD)). Available online: https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html (accessed on 17 December 2024).
  16. Brock, D.C. Learning from Artificial Intelligence’s Previous Awakenings: The History of Expert Systems. AI Mag. 2018, 39, 3–15. [Google Scholar] [CrossRef]
  17. Grosan, C.; Abraham, A. Rule-Based Expert Systems. In Intelligent Systems: A Modern Approach; Grosan, C., Abraham, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 149–185. ISBN 978-3-642-21004-4. [Google Scholar]
  18. Buiten, M.C. Towards Intelligent Regulation of Artificial Intelligence. Eur. J. Risk Regul. 2019, 10, 41–59. [Google Scholar] [CrossRef]
  19. Woschank, M.; Rauch, E.; Zsifkovits, H. A Review of Further Directions for Artificial Intelligence, Machine Learning, and Deep Learning in Smart Logistics. Sustainability 2020, 12, 3760. [Google Scholar] [CrossRef]
  20. Cao, L. AI in Finance: Challenges, Techniques and Opportunities. arXiv 2021, arXiv:2107.09051. [Google Scholar] [CrossRef]
  21. Tan, E.; Jean, M.P.; Simonofski, A.; Tombal, T.; Kleizen, B.; Sabbe, M.; Bechoux, L.; Willem, P. Artificial Intelligence and Algorithmic Decisions in Fraud Detection: An Interpretive Structural Model. Data Policy 2023, 5, e25. [Google Scholar] [CrossRef]
  22. Heins, C. Artificial Intelligence in Retail—A Systematic Literature Review. Foresight 2023, 25, 264–286. [Google Scholar] [CrossRef]
  23. Lacroux, A.; Martin-Lacroux, C. Should I Trust the Artificial Intelligence to Recruit? Recruiters’ Perceptions and Behavior When Faced With Algorithm-Based Recommendation Systems During Resume Screening. Front. Psychol. 2022, 13, 895997. [Google Scholar] [CrossRef] [PubMed]
  24. Brooks, C.; Gherhes, C.; Vorley, T. Artificial Intelligence in the Legal Sector: Pressures and Challenges of Transformation. Camb. J. Reg. Econ. Soc. 2020, 13, 135–152. [Google Scholar] [CrossRef]
  25. Ethics Guidelines for Trustworthy AI|Shaping Europe’s Digital Future. Available online: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (accessed on 23 November 2023).
  26. European Union. Charter of Fundamental Rights of the European Union. 2010. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:12012P/TXT (accessed on 17 December 2024).
  27. Gaud, D. Ethical Considerations for the Use of AI Language Model. Int. J. Res. Appl. Sci. Eng. Technol. 2023, 11, 6–14. [Google Scholar] [CrossRef]
  28. Fjeld, J.; Achten, N.; Hilligoss, H.; Nagy, A.; Srikumar, M. Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI. SSRN Electron. J. 2020, 2020-1. [Google Scholar] [CrossRef]
  29. Article 29 Data Protection Working Party. Guidelines on Data Protection Impact Assessment (DPIA) (wp248rev.01). Available online: https://ec.europa.eu/newsroom/article29/items/611236 (accessed on 26 January 2025).
  30. Söderlund, K.; Larsson, S. Enforcement Design Patterns in EU Law: An Analysis of the AI Act. Digit. Soc. 2024, 3, 41. [Google Scholar] [CrossRef]
  31. Novelli, C.; Casolari, F.; Rotolo, A.; Taddeo, M.; Floridi, L. Taking AI Risks Seriously: A New Assessment Model for the AI Act. AI Soc. 2023, 39, 2493–2497. [Google Scholar] [CrossRef]
  32. Mantelero, A. The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, Legal Obligations and Key Elements for a Model Template. Comput. Law Secur. Rev. 2024, 54, 106020. [Google Scholar] [CrossRef]
  33. Getting the Future Right—Artificial Intelligence and Fundamental Rights|European Union Agency for Fundamental Rights. Available online: https://fra.europa.eu/en/publication/2020/artificial-intelligence-and-fundamental-rights (accessed on 29 January 2025).
  34. Ivanova, Y. The Data Protection Impact Assessment as a Tool to Enforce Non-Discriminatory AI. In Privacy Technologies and Policy, Proceedings of the 8th Annual Privacy Forum, APF 2020, Lisbon, Portugal, 22–23 October 2020; Lecture Notes in Computer Science (LNCS, Volume 12121); Springer: Cham, Switzerland, 2020; pp. 3–24. [Google Scholar] [CrossRef]
  35. Lazcoz, G.; de Hert, P. Humans in the GDPR and AIA Governance of Automated and Algorithmic Systems. Essential Pre-Requisites against Abdicating Responsibilities. Comput. Law Secur. Rev. 2023, 50, 105833. [Google Scholar] [CrossRef]
  36. Kaminski, M.E.; Malgieri, G. Algorithmic Impact Assessments under the GDPR: Producing Multi-Layered Explanations. Int. Data Priv. Law 2021, 11, 125–144. [Google Scholar] [CrossRef]
  37. Kazim, E.; Koshiyama, A. The Interrelation between Data and AI Ethics in the Context of Impact Assessments. AI Ethics 2021, 1, 219–225. [Google Scholar] [CrossRef]
  38. Mitrou, L. Data Protection, Artificial Intelligence and Cognitive Services: Is the General Data Protection Regulation (GDPR) ‘Artificial Intelligence-Proof’? SSRN Electron. J. 2018. [Google Scholar] [CrossRef]
  39. Kloza, D.; Van Dijk, N.; Casiraghi, S.; Maymir, S.V.; Roda, S.; Tanas, A.; Konstantinou, I. Towards a Method for Data Protection Impact Assessment: Making Sense of GDPR Requirements. d.pia.lab Policy Brief 2019, 1, 1–8. [Google Scholar]
  40. Data Protection Impact Assessments (DPIAs). Available online: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/accountability-and-governance/data-protection-impact-assessments-dpias/ (accessed on 29 February 2024).
  41. Data Protection and Privacy Impact Assessments. Available online: https://iapp.org/resources/topics/privacy-impact-assessment-2/ (accessed on 30 January 2025).
  42. The Open Source PIA Software Helps to Carry Out Data Protection Impact Assessment. Available online: https://www.cnil.fr/en/open-source-pia-software-helps-carry-out-data-protection-impact-assessment (accessed on 29 February 2024).
  43. Bertaina, S.; Biganzoli, I.; Desiante, R.; Fontanella, D.; Inverardi, N.; Penco, I.G.; Cosentini, A.C. Fundamental Rights and Artificial Intelligence Impact Assessment: A New Quantitative Methodology in the Upcoming Era of AI Act. Comput. Law Secur. Rev. 2025, 56, 106101. [Google Scholar] [CrossRef]
  44. Gerards, J.; Schäfer, M.T.; Muis, I.; Vankan, A. Impact Assessment Fundamental Rights and Algorithms—Report—Government.Nl. Available online: https://www.government.nl/documents/reports/2022/03/31/impact-assessment-fundamental-rights-and-algorithms (accessed on 30 January 2025).
  45. Waem, H.; Dauzier, J.; Demircan, M. Fundamental Rights Impact Assessments Under the EU AI Act: Who, What and How? Available online: https://www.technologyslegaledge.com/2024/03/fundamental-rights-impact-assessments-under-the-eu-ai-act-who-what-and-how/ (accessed on 31 March 2024).
  46. Human Rights Impact Assessment of Digital Activities|the Danish Institute for Human Rights. Available online: https://www.humanrights.dk/publications/human-rights-impact-assessment-digital-activities (accessed on 5 March 2024).
  47. Treasury Board of Canada Secretariat Algorithmic Impact Assessment Tool. Available online: https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html (accessed on 5 March 2024).
  48. Xenophobic Machines: Discrimination Through Unregulated Use of Algorithms in the Dutch Childcare Benefits Scandal. Available online: https://www.amnesty.org/en/documents/eur35/4686/2021/en/ (accessed on 22 March 2024).
  49. Dutch House of Representatives Endorses Mandatory Use of Human Rights and Algorithms Impact Assessment—News—Utrecht University. Available online: https://www.uu.nl/en/news/dutch-house-of-representatives-endorses-mandatory-use-of-human-rights-and-algorithms-impact (accessed on 26 January 2025).
  50. European Data Protection Board. Statement 3/2024 on Data Protection Authorities’ Role in the Artificial Intelligence Act Framework. Available online: https://www.edpb.europa.eu/our-work-tools/our-documents/statements/statement-32024-data-protection-authorities-role-artificial_en (accessed on 29 January 2025).
Figure 1. DPIA categories.
Figure 2. General framework.
Figure 3. Core principles.
Figure 4. Preventative and mitigating measures (DPIA).
Figure 5. Risks.
Figure 6. Purpose assessment.
Figure 7. Public values and legal basis.
Figure 8. Data sources and quality.
Figure 9. System specifications.
Figure 10. Algorithm specifications.
Figure 11. Procedural fairness and governance.
Figure 12. Preventative and mitigating measures (FRIA).
Table 1. Key differences between DPIA and FRIA.
Legal basis. DPIA: the GDPR. FRIA: the AI Act.
When? DPIA: for any processing of personal data entailing high risks for individuals (regardless of whether an AI system is used). FRIA: for high-risk AI systems, as defined in the AI Act.
Who? DPIA: data controllers. FRIA: AI Deployers (who are typically expected to be data controllers); the role of AI Providers (providing feedback) is also important if they are entities other than the AI Deployers.
Scope. DPIA: focuses mainly on the rights to privacy and personal data protection (and other rights possibly affected by the misuse of personal data). FRIA: encompasses all fundamental rights, such as equality, non-discrimination, freedom of speech, and human dignity.
People affected. DPIA: data subjects (i.e., the individuals whose data are processed in the context of the said processing), e.g., customers and employees. FRIA: individuals whose data are used for training purposes, individuals who use the system, and individuals about whom the AI system takes decisions; more generally, there is a broader societal impact, especially on vulnerable or marginalized groups.
Engagement of authorities. DPIA: consultation with the data protection authority if high risks remain after the mitigation measures induced by the DPIA. FRIA: communication of the FRIA output to the market surveillance authority, regardless of whether high risks remain after the mitigation measures.
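To illustrate how the distinctions in Table 1 could be operationalized, the following minimal Python sketch checks which of the two assessments are triggered for a given system. It is purely illustrative: the class fields and their names are our own simplifications, not terms taken from the GDPR or the AI Act.

```python
from dataclasses import dataclass


@dataclass
class SystemContext:
    """Simplified context of a system under evaluation (hypothetical fields)."""
    processes_personal_data: bool   # the system processes personal data
    high_risk_to_individuals: bool  # "high risk" in the sense of GDPR Art. 35
    high_risk_ai_system: bool       # high-risk AI system as defined in the AI Act


def required_assessments(ctx: SystemContext) -> set[str]:
    """Return the assessments triggered, following the 'When?' row of Table 1."""
    required: set[str] = set()
    # DPIA: any high-risk processing of personal data, with or without AI
    if ctx.processes_personal_data and ctx.high_risk_to_individuals:
        required.add("DPIA")
    # FRIA: deployment of a high-risk AI system under the AI Act
    if ctx.high_risk_ai_system:
        required.add("FRIA")
    return required


# An AI-based system processing personal data with high risks triggers both
print(required_assessments(SystemContext(True, True, True)))  # {'DPIA', 'FRIA'}
```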
Table 2. DPIA and FRIA minimum information requirements.
Purpose. DPIA: the purpose of the said data processing, as well as the relevant legal basis according to Art. 6 of the GDPR. FRIA: the purposes for which the Deployer uses the AI system.
Description of processing/system. DPIA: detailed description of the personal data being processed, including data flows, recipients, etc. FRIA: description of the AI system’s operation, context, and intended societal or operational goals; specific information related to the design of the AI system is also needed.
Necessity and proportionality. DPIA: assess whether the processing is necessary and proportionate to achieve the intended purpose. FRIA: evaluate whether the use of an AI system is necessary and whether its benefits outweigh the risks to human rights.
Risks identified. DPIA: risks to privacy and data protection (e.g., breaches, profiling, unauthorized access, and collection of personal data excessive to the stated purposes). FRIA: risks to fundamental rights stemming from the use of the AI system (e.g., discrimination, bias, and threats to human dignity and freedom of expression); societal risks are also considered.
Mitigation measures. DPIA: technical and organizational measures (e.g., encryption, pseudonymization, and access controls). FRIA: ethical, legal, and technical safeguards, including safeguards related to the internals of the AI system (e.g., bias detection, fairness audits, redress mechanisms, and explainability).
Time period and frequency. DPIA: the expected duration of the processing. FRIA: a description of the intended timeframe and usage frequency of the high-risk AI system.
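A unified assessment could store the minimum information of Table 2 in a single record, so that shared elements (purpose, necessity and proportionality, mitigation measures) are captured once and reused by both instruments. The sketch below is a hypothetical data structure of our own design, not a template prescribed by either regulation.

```python
from dataclasses import dataclass, field


@dataclass
class UnifiedAssessmentRecord:
    """Hypothetical record covering the minimum information of Table 2."""
    # Purpose: purpose of processing and GDPR Art. 6 legal basis (DPIA);
    # purposes for which the Deployer uses the AI system (FRIA)
    purpose: str
    legal_basis: str
    # Description: personal data, data flows and recipients (DPIA);
    # AI system operation, context, goals and design information (FRIA)
    processing_description: str
    system_description: str
    # Shared necessity and proportionality assessment
    necessity_and_proportionality: str
    # Time period and frequency of processing / of AI system use
    time_period_and_frequency: str
    # Risks and safeguards, kept per instrument but reviewed together
    data_protection_risks: list[str] = field(default_factory=list)
    fundamental_rights_risks: list[str] = field(default_factory=list)
    mitigation_measures: list[str] = field(default_factory=list)
```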
Table 3. Synergies between DPIA and FRIA (each entry lists the connection between questionnaire items, with the corresponding theme in parentheses).
DPIA: Q2 → FRIA: Q10, Q13 (stakeholder responsibilities)
FRIA: Q34 → DPIA: Q4 (personal data processing)
DPIA: Q8 → FRIA: Q123 (legal basis)
FRIA: Q35 → DPIA: Q9 and FRIA: Q35–Q36 → DPIA: Q47 (data minimization principle)
FRIA: Q18 → DPIA: Q10 (data quality)
FRIA: Q106–Q108 → DPIA: Q12 (communication)
DPIA: Q14–Q16 → FRIA: Q124 (fundamental rights)
FRIA: Q52 → DPIA: Q17 and FRIA: Q52 → DPIA: Q33 (contracts and agreements)
FRIA: Q84 → DPIA: Q24 (personnel management)
FRIA: Q115 → DPIA: Q26 (accountability and review)
FRIA: Q28 → DPIA: Q20–Q30, Q32–Q42, Q44–Q46, and Q67, Q68, or Q69 depending on the answer (data security)
FRIA: Q51 → DPIA: Q34 (network security)
FRIA: Q33, Q53 → DPIA: Q41 (data anonymization)
FRIA: Q31 → DPIA: Q43 (authentication methods)
FRIA: Q32 → DPIA: Q44 (logging)
FRIA: Q30 → DPIA: Q45 (archiving)
FRIA: Q8, Q69–Q82, Q122–Q128 → DPIA: Q66 (AI system implications)
FRIA: Q9–Q16, Q18–Q37, Q44, Q48–Q50, Q52, Q64 → DPIA: Q67 (AI system threats)
FRIA: Q9–Q37, Q44, Q48–Q53, Q56, Q59–Q68, Q83–Q146 → DPIA: Q68 (AI system sources of risk)
FRIA: Q9–Q37, Q44, Q48–Q53, Q59–Q68, Q83–Q121, Q129–Q146 → DPIA: Q69 (AI system measures)
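As a step towards the automation mentioned in the conclusions, the synergies of Table 3 could be encoded as a machine-readable mapping, so that answering a question in one questionnaire prompts a review of the linked questions in the other. The excerpt below is a minimal Python sketch covering only a few rows of the table; the question identifiers follow the questionnaires used in this work, and the encoding itself is merely one possible design.

```python
# Excerpt of Table 3 encoded as (instrument, question) -> linked questions;
# a complete tool would include every row of the table.
SYNERGIES: dict[tuple[str, str], list[tuple[str, str]]] = {
    ("DPIA", "Q2"):  [("FRIA", "Q10"), ("FRIA", "Q13")],  # stakeholder responsibilities
    ("FRIA", "Q34"): [("DPIA", "Q4")],                    # personal data processing
    ("DPIA", "Q8"):  [("FRIA", "Q123")],                  # legal basis
    ("FRIA", "Q18"): [("DPIA", "Q10")],                   # data quality
    ("FRIA", "Q33"): [("DPIA", "Q41")],                   # data anonymization
    ("FRIA", "Q32"): [("DPIA", "Q44")],                   # logging
}


def linked_questions(instrument: str, question: str) -> list[tuple[str, str]]:
    """Questions in the other questionnaire to revisit once this one is answered."""
    return SYNERGIES.get((instrument, question), [])


# Answering DPIA Q2 prompts a review of FRIA Q10 and Q13
print(linked_questions("DPIA", "Q2"))  # [('FRIA', 'Q10'), ('FRIA', 'Q13')]
```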
