Systematic Review

Privacy in Flux: A 35-Year Systematic Review of Legal Evolution, Effectiveness, and Global Challenges (U.S./E.U. Focus with International Comparisons)

Beacom College of Computer and Cyber Sciences, Dakota State University, Madison, SD 57042, USA
* Author to whom correspondence should be addressed.
J. Cybersecur. Priv. 2025, 5(4), 103; https://doi.org/10.3390/jcp5040103
Submission received: 26 August 2025 / Revised: 7 October 2025 / Accepted: 19 November 2025 / Published: 22 November 2025
(This article belongs to the Special Issue Data Protection and Privacy)

Abstract

Privacy harms have expanded alongside rapid technological change, challenging the adequacy of existing regulatory frameworks. This systematic review (1990–2025) maps documented privacy harms to specific legal mechanisms and observed enforcement outcomes across jurisdictions, using PRISMA-guided methods and ROBIS risk-of-bias assessment. We synthesize evidence on major regimes (e.g., GDPR, COPPA, CCPA, HIPAA, GLBA) and conduct comparative legal analysis across the U.S., E.U., and underexplored regions in Asia, Latin America, and Africa. Key findings indicate increased recognition of data subject rights, persistent gaps in cross-border data governance, and emerging risks from AI/ML/LLMs, IoT, and blockchain, including data breaches, algorithmic discrimination, and surveillance. While regulations have advanced, enforcement variability and fragmented standards limit effectiveness. We propose strategies for harmonization and risk-based, technology-neutral safeguards. While focusing on the U.S. sectoral and E.U. comprehensive models, we include targeted comparisons with Canada (PIPEDA), Australia (Privacy Act/APPs), Japan (APPI), India (DPDPA), Africa (POPIA/NDPR/Kenya DPA), and ASEAN interoperability instruments. This review presents an evidence-based framework for understanding the interplay between evolving harms, emerging technologies, and legal protections, and identifies priorities for strengthening global privacy governance.

1. Introduction

Privacy is a fundamental right, but its legal, cultural, and technological evolution over the past 35 years has been fragmented and uneven. While prior scholarship has chronicled privacy laws, harms, and technological drivers separately, few studies systematically link specific harms to corresponding regulatory responses across jurisdictions. This review fills that gap by mapping how emerging harms, such as algorithmic discrimination, cyberattacks, and surveillance, have shaped legislative trajectories in the U.S., E.U., and beyond, and by assessing the extent to which existing frameworks mitigate these risks. It offers a comparative, evidence-based evaluation that highlights effectiveness, gaps, and future challenges in privacy governance.
The rapid advancement of digital technologies over the past three and a half decades has profoundly reshaped the landscape of personal data privacy, presenting both unprecedented opportunities and significant challenges. From the commercialization of the Internet in the 1990s to the rise of big data analytics, Artificial Intelligence (AI)/Machine Learning (ML)/Large Language Models (LLMs), and ubiquitous computing, the collection and processing of personal data have expanded exponentially; see the cited analysis of Internet history, discussions of privacy in the digital age, and the critique of surveillance capitalism [1,2,3]. This transformation has been further accelerated by the proliferation of devices based on the Internet of Things (IoT), wearable technologies, and Generative Artificial Intelligence (GAI), which create intricate ecosystems where data flows seamlessly between devices, platforms, organizations, and jurisdictions [4,5].
As a result, the potential for privacy violations has grown substantially, encompassing threats such as identity theft, unauthorized data access, data breaches, algorithmic discrimination, targeted surveillance, and manipulative targeting. Notable incidents, such as the 2017 Equifax breach and the 2018 Cambridge Analytica scandal, have vividly demonstrated the dangers of inadequate data protection and misuse of personal information, raising public awareness and encouraging governments around the world to enact robust privacy regulations [6,7]. These events underscore the broader risks of such harms, which can undermine individual autonomy, lead to exploitation, and exacerbate social inequalities, as detailed in key sources on data security and regulatory frameworks [5,8].
Consequently, understanding the evolving nature of privacy laws is crucial for monitoring regulatory responses, mitigating risks, and ensuring accountability in data protection. The concept of privacy has itself evolved from Warren and Brandeis’s foundational formulation as “the right to be let alone” to more expansive interpretations that now include informational privacy, decisional privacy, and contextual integrity, adapting to societal shifts and technological progress [9,10]. Although thorough, this study recognizes several limitations. Its reliance on English-language sources may result in missing important viewpoints from non-English-speaking regions. Furthermore, the rapid pace of technological and regulatory change could mean that some recent developments are not fully represented in the literature. Evaluating the effectiveness of privacy laws is complicated by underreported harms, and depending solely on published materials may overlook informal regulatory practices.
Privacy harms—including identity theft, unauthorized data access, data breaches, targeted discrimination, and surveillance—pose significant risks to individual autonomy and can lead to discrimination and exploitation [1,2,5,11]. Therefore, understanding the evolving landscape of privacy laws is essential to monitor how regulatory bodies respond to new challenges, mitigate privacy risks, and ensure organizational accountability for protecting user data.
This literature review examines developments in privacy harms and regulations, focusing on key patterns, legal responses, and challenges that have shaped the field. By analyzing major privacy laws such as the General Data Protection Regulation (GDPR), Children’s Online Privacy Protection Act (COPPA), California Consumer Privacy Act (CCPA), Health Insurance Portability and Accountability Act (HIPAA), and Gramm–Leach–Bliley Act (GLBA), this review provides a comprehensive understanding of how privacy frameworks have adapted to address the complexities of contemporary privacy issues.
This research addresses several key questions:
  • How have the five harms, namely breaches, algorithmic discrimination, surveillance, manipulative targeting, and dignitary harms, changed since 2010 by sector and under different technologies (AI/LLMs, IoT, blockchain)?
  • Which privacy principles are most cited in enforcement for each harm?
  • Since 2018, how have breach notifications, sanction patterns, and the use of Data Protection Impact Assessments (DPIAs), Records of Processing Activities (RoPA), Data Subject Request (DSR) portals, and transfer impact assessments (TIAs) changed under GDPR vs. major U.S. laws?
  • Where do AI/Automated decision-making (ADM), IoT, and blockchain conflict with GDPR duties, and which controls mitigate them?
  • Which mechanisms best close cross-border and algorithmic accountability gaps?
By addressing these questions, this review contributes to the ongoing discourse on privacy protection in the digital age, offering insights to policymakers, organizations, and researchers seeking to navigate the complex landscape of privacy regulation.
The remainder of this paper is organized as follows: Section 2 describes the methodological framework adopted for this study, including the research design and analytical techniques. Section 3 investigates the historical development and transformation of privacy harms, together with their contemporary implications. Section 4 provides an overview of the pivotal privacy legislation and assesses its regulatory impact. Section 5 analyzes the foundational principles of privacy and the conceptual frameworks that inform institutional privacy practices. Section 6 critically examines the technological barriers and vulnerabilities that affect privacy. Section 7 presents the empirical findings and discusses their relevance to the current privacy discourse. Section 8 details limitations. Finally, Section 9 offers concluding remarks and proposes avenues for future research.

2. Methodology

This section details the methods used in this study: the research design, data collection, and analytical techniques, including the search of the academic literature. This study is a systematic review and comparative legal analysis conducted in accordance with the PRISMA 2020 guidelines (Appendix C). The methodology was predefined to enhance transparency, with eligibility criteria, search strategy, and synthesis approach documented prior to data collection.
The comprehensive search across Scopus, IEEE Xplore, ACM Digital Library, JSTOR, and ScienceDirect yielded a total of 32,362 records before deduplication. After removing duplicates in EndNote, 31,817 unique records remained. Of these, 550 records were screened at the title and abstract level by two independent reviewers, resulting in 222 full-text articles assessed for eligibility. Ultimately, 23 full-text articles were excluded (reasons: 12 lacked empirical or policy-relevant data, 7 were purely theoretical commentaries, 3 were duplicates, and 1 was non-English), leaving 99 studies/reports included in the narrative synthesis. The included studies encompassed a diverse range of evidence types, with the following breakdown: 28 legal analyses, 11 policy documents, 27 empirical studies, 9 review articles, and 24 technical papers. This corpus enabled thematic grouping by harm classification, such as data breaches, algorithmic discrimination, and comparative legal analysis across jurisdictions, while accounting for heterogeneity in study designs. The next subsection elaborates on the specific research design implemented.

2.1. Research Design

We conducted a systematic review and comparative legal analysis, adhering to PRISMA 2020 guidelines. Our methodology was predefined, covering study selection, information retrieval, data extraction, bias assessment, and synthesis. The review examined privacy harms and regulations from 1990 to 2025 across various jurisdictions. Our objectives were to characterize the evolution of privacy harms, analyze relevant laws and principles, identify regulatory trends, and evaluate the effectiveness of legal responses, particularly concerning emerging technologies. Due to diverse study designs, we employed a narrative synthesis, combining predefined categories with emergent themes. We also compared legal approaches across jurisdictions and timeframes. Further details on data collection are in Section 2.2.
A comprehensive search was performed across Scopus, IEEE Xplore, ACM Digital Library, JSTOR, and ScienceDirect, covering the period from January 1990 to June 2025. Search strings combined controlled vocabulary and free-text terms related to “privacy harms,” “digital harms,” “misinformation,” “cybercrime,” and “legal enforcement,” using Boolean operators (AND, OR). Studies published in English were included.
We included studies that (i) analyzed legal or regulatory responses to online or privacy harms, (ii) identified enforcement mechanisms, or (iii) evaluated the impact of regulation. Exclusion criteria were (a) purely theoretical or opinion-based commentaries, (b) studies without empirical or policy-relevant data, and (c) duplications. Grey literature, including government reports and regulatory guidance, was considered if it provided substantial data or legal detail.
All records retrieved were imported into EndNote, and duplicates were removed. Two reviewers independently screened article titles and abstracts, with subsequent full-text evaluation for inclusion. Disagreements between reviewers were resolved by consensus. The selection process is summarized in the PRISMA flow diagram (Figure A1).

2.2. Data Collection

This section focuses on the data collection process. Data collection involved a comprehensive search of academic databases, including IEEE Xplore, ACM Digital Library, ScienceDirect, JSTOR, and Google Scholar, using the following search terms; a brief sketch of how these terms were composed into Boolean queries follows the list.
  • “Privacy harm*” OR “privacy violation*”
  • “Privacy law*” OR “data protection law*”
  • “GDPR” OR “GLBA” OR “CCPA” OR “COPPA” OR “HIPAA”
  • “Privacy principle*” OR “data protection principle*”
  • “Privacy AND technology”
  • “Privacy AND artificial intelligence”
  • “Privacy AND blockchain”
  • “Privacy AND Internet of Things”
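To make the query construction reproducible, the concept groups above can be composed programmatically. The following Python sketch is illustrative only: the `or_block` helper and the exact grouping are our own simplification, and real submissions must adapt the truncation wildcards (*) and field tags to each database’s syntax.

```python
def or_block(terms):
    """Join synonym variants with OR inside parentheses."""
    return "(" + " OR ".join(terms) + ")"

# Concept groups taken from the bullet list above; asterisk wildcards are
# kept where a database supports truncation.
harms      = ['"privacy harm*"', '"privacy violation*"']
laws       = ['"privacy law*"', '"data protection law*"',
              '"GDPR"', '"GLBA"', '"CCPA"', '"COPPA"', '"HIPAA"']
principles = ['"privacy principle*"', '"data protection principle*"']
technology = ['technology', '"artificial intelligence"',
              'blockchain', '"Internet of Things"']

# Variants within a concept are ORed for breadth; "privacy" is ANDed with
# each technology variant for precision, mirroring the paired queries above.
queries = [or_block(harms), or_block(laws), or_block(principles)]
queries += ['"privacy" AND ' + term for term in technology]

for q in queries:
    print(q)
```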
The inclusion criteria for this review were carefully defined to ensure the relevance and quality of the sources. Only materials published between 1990 and 2025 were considered, reflecting the modern evolution of privacy concerns and technological advancements. The review was limited to sources written in English to maintain consistency and accessibility. Publications in peer-reviewed journals, conference proceedings, and books were prioritized to ensure the credibility and academic rigor of the information. The selected materials focused specifically on topics related to privacy harms, privacy regulations, or technological challenges, aligning with the scope of the review.
Search terms were selected to balance breadth and precision, prioritizing foundational concepts such as “Privacy AND artificial intelligence” to encompass ML/LLMs over granular variants like “machine learning,” avoiding redundancy and capturing interdisciplinary overlaps (for example, AI-driven bias and black-box opacity, addressed in Section 3.2.1 and Section 6.1). This approach yielded comprehensive coverage of 86 studies, with post hoc validation ensuring key ML-related challenges (e.g., algorithmic bias from training data, self-learning risks) were represented via broader AI queries, enhancing the review’s accuracy without exhaustive term proliferation.
The primary focus begins with examining legal and regulatory documents, which serve as foundational sources for understanding privacy laws and their enforcement. For example, key texts such as the GDPR, CCPA, COPPA, HIPAA, and GLBA were thoroughly reviewed to understand their core provisions, while regulatory guidance documents provided insight into how these laws are interpreted and applied in practice. Building on this foundation, court decisions related to privacy violations were analyzed to highlight important legal precedents, and policy documents from regulatory bodies were explored to reveal evolving frameworks and compliance expectations, thereby illustrating the dynamic nature of privacy regulation.
In transitioning to real-world applications, significant case studies of privacy violations were identified and scrutinized to demonstrate the practical implications of these laws and the consequences of noncompliance. These cases drew from regulatory enforcement actions, which showcased penalties and corrective measures, as well as court proceedings that revealed key legal arguments and outcomes in disputes. Furthermore, media reports on major data breaches and privacy scandals were incorporated to underscore tangible harms, such as impacts on individuals and organizations, thus bridging theoretical legal analysis with everyday consequences.
In the analytical phase, the collected data were subjected to rigorous examination using several techniques, starting with a chronological analysis that mapped the evolution of privacy harms and regulations over time. This approach highlighted how technological advances have altered privacy threats and prompted adaptive regulatory responses, setting the stage for deeper comparative insights. Following this timeline, a comparative legal analysis was conducted across jurisdictions to identify similarities and differences in the main privacy laws, focusing on shared principles such as data minimization and transparency, while noting variations in the enforcement, scope, and definitions of personal data; this not only evaluated the strengths and limitations of these frameworks, but also tracked trends in response to emerging technologies.
To further enrich the analysis, a thematic approach was employed to uncover recurring themes within the literature, such as the prevalent types of privacy harms, including intrusion, loss of control, and emotional or financial consequences, as well as core principles like transparency and user rights. This thematic exploration also delved into regulatory strategies and technological challenges, such as AI and big data, while assessing enforcement mechanisms to gauge their effectiveness in promoting compliance. Finally, building on these themes, a detailed case study analysis was performed on selected incidents to evaluate their wider implications, including the extent of harms like unauthorized access and reputational damage, the adequacy of regulatory responses and penalties, and potential influences on future privacy regulations, thereby tying together the overall narrative of privacy’s complexities and ongoing adaptations.
To fully grasp the research methodology previously outlined, it is also essential to examine related work on defining privacy harms, the evolution of privacy regulation, the technological challenges to privacy, and the significant research gaps. This critical analysis will be explored in Section 3.
The exploration of privacy harms and regulations has attracted considerable scholarly interest across disciplines such as law, computer science, information systems, and ethics, reflecting the multifaceted nature of privacy [12]. In law, researchers investigate legal frameworks and precedents; in computer science, they examine technical vulnerabilities and data security mechanisms; in information systems, they focus on organizational practices and policy implementation; and in ethics, they address moral implications and social impacts [13]. Collectively, scholars in these disciplines have produced a wealth of studies that shed light on the evolving challenges of data protection [14].
To quantify the breadth of this scholarly effort, the review incorporates a systematic collection of relevant articles spanning the last 35 years, drawing from databases such as JSTOR, Google Scholar, and specialized journals in privacy and data protection [15]. By tallying the number of publications per topic, such as the surge in articles on algorithmic bias after 2010 or the steady increase in GDPR-related studies, this analysis demonstrates the growing intensity of research activity, underscoring how academic focus has changed in response to real-world events such as major data breaches [16,17]. Ultimately, this examination serves as the foundation for the present study, revealing how past research informs our understanding of privacy dynamics and identifies areas where additional empirical evidence is needed to address ongoing and future challenges [18,19,20].

2.3. Quality and Bias Assessment

Risk of bias was assessed using the ROBIS tool across relevance, identification/selection, data collection/analysis, and synthesis/interpretation; two reviewers rated independently with discrepancies resolved by discussion.
Due to significant heterogeneity of outcomes, a narrative synthesis approach was adopted. Studies were thematically grouped by harm classification (e.g., misinformation, surveillance, cybercrime, privacy violations) and corresponding legal responses across jurisdictions, with particular attention to comparative enforcement strength.

2.4. Compliance with PRISMA Guidelines and Registration

This systematic review was conducted in accordance with the PRISMA 2020 guidelines. A completed PRISMA 2020 checklist is provided in Appendix C, and the corresponding PRISMA 2020 flow diagram and structured risk-of-bias assessment are included in Appendix D and Appendix E. Evidence-type tallies are non-mutually-exclusive categorizations across the included corpus and therefore can exceed the total N of included studies. This review was not prospectively registered in any registry due to scope alignment and timing constraints. However, the methodology was predefined, including the databases searched, eligibility criteria, Boolean queries, and synthesis approach, all of which are transparently documented in this manuscript (Section 2). Future iterations of this review will be prospectively registered to further strengthen reproducibility and transparency. With the methodology established, Section 3 applies it to trace the historical foundations and evolution of privacy harms.

3. Origin and Evolution of Privacy Harms

This section traces the historical foundations of privacy harms, examines their evolution in the digital age, and surveys the legal and cultural responses to these changes.

3.1. Historical Foundations

Understanding contemporary privacy harms requires situating them in their historical and legal context. Table 1 presents a timeline of pivotal developments from conceptual origins to modern technological inflection points.

3.1.1. Warren and Brandeis’ Conceptualization

As introduced earlier, the 1890 conceptualization of privacy as ‘the right to be let alone’ provided the historical foundation for addressing harms through tort law, grounding it in personal dignity and autonomy amid emerging media like photography [9,21]. Transitioning from this theoretical base, the legal development of privacy harms through tort law provided early mechanisms for redress, allowing individuals to seek compensation for violations. The emergence of LLMs after 2018 introduced model-level privacy risks such as memorization and inversion, intensifying tensions between data minimization, transparency, and explainability duties.

3.1.2. Legal Development Through Tort Law

Prosser’s 1960 synthesis formalized four privacy torts: intrusion upon seclusion, public disclosure of private facts, false light, and appropriation of name or likeness, all of which continue to shape U.S. jurisprudence [22]. Though durable, this framework does not fully capture digital-era violations, as critics note [2,23]. The rise of digital technologies has exposed these limits and redirected attention to how contemporary innovations reshape privacy harms, which the next subsection examines.

3.2. Evolution of Privacy Harms in the Digital Age

The digital revolution has fundamentally transformed the nature and scope of privacy harms, introducing new forms of violation that were unimaginable in the pre-digital era.

3.2.1. Technological Drivers of Evolution

Several technological shifts have reshaped privacy harms. The commercial Internet expanded data collection, storage, and sharing while weakening control over information flows in distributed networks [1]. Big data analytics then enabled large-scale aggregation and profiling (“dataveillance”), intensifying monitoring risks [24]. Social media blurred public–private boundaries by making once-ephemeral interactions persistent and widely visible. Mobile and IoT ecosystems generate continuous streams of sensitive data (e.g., location, activity, biometrics), often through opaque, always-on sensing, multiplying risks beyond earlier paradigms. In recent years, AI/ML systems have inferred sensitive attributes from seemingly innocuous data, challenging informed consent and traditional notice-and-choice models [25,26]. Building on these drivers, the next subsection details the main categories of digital-era privacy harms.

3.2.2. Emerging Categories of Privacy Harms

Data breaches, algorithmic discrimination, pervasive surveillance, manipulative targeting, and dignitary harms, as outlined in the Introduction, collectively illustrate the expanding vulnerabilities in the digital ecosystem. These necessitate technical safeguards, accountability mechanisms, and enforceable rights to mitigate risks to autonomy and equity.
These harms highlight growing vulnerability in the digital ecosystem and the need for effective mitigation, including technical safeguards, accountability mechanisms, and enforceable rights. This necessity has driven the development of both legal frameworks and cultural shifts, which aim to address these challenges and protect individuals’ personal information in an increasingly interconnected world.

3.3. Legal and Cultural Responses

Rising privacy risks have catalyzed mutually reinforcing legal and cultural responses. Societal concern over data exploitation, alongside rapid technological change, has driven new and strengthened protections as policymakers, advocates, and communities converge on accountability, transparency, and user rights. Table A1 traces the growth of privacy practices and legal frameworks from 1970 to 2025, showing relatively slow development through 1990 followed by accelerating adoption as harms and awareness intensified.

3.3.1. Legal Frameworks

Legal frameworks that address privacy harms have emerged in diverse historical and cultural contexts, reflecting different sets of priorities and traditions. One of the earliest modern frameworks involved Fair Information Practices (FIPs), pioneered in the 1970s and 1980s to define core principles such as notice (clear communication of data practices), choice (consent/opt-out options), access (rights to review/correct data), and security (safeguards against unauthorized access) [27]. These foundational concepts, which encourage transparency and individual control over personal information, influenced subsequent laws worldwide.
Instead of comprehensive privacy frameworks, the U.S. has historically adopted a sector-specific approach to privacy regulation, targeting particular industries or types of data, and addressing unique privacy concerns within those contexts. Notable examples include the HIPAA of 1996, which establishes strict standards for protecting sensitive health information by mandating specific privacy and security measures for healthcare providers, insurers, and related entities, and the GLBA of 1999, which regulates financial institutions and requires them to disclose their information-sharing practices and implement safeguards to protect sensitive financial data. This sectoral approach has resulted in a fragmented regulatory landscape, with varying levels of protection depending on the industry or the nature of the data involved [11].
In contrast to the U.S. sectoral approach, the European Union (E.U.) has adopted comprehensive privacy frameworks that intentionally apply across multiple sectors. The E.U.’s approach began with the Data Protection Directive of 1995, which established uniform standards for data protection between member states. Its successor, the GDPR of 2016, expanded on these standards by introducing robust rights for individuals, including the right to be forgotten, which allows individuals to request deletion of personal data under certain conditions; data portability, which allows individuals to transfer their data from one service provider to another; and enhanced consent requirements, which demand clear and affirmative consent for data processing activities [28]. The GDPR has become a global benchmark, influencing privacy legislation worldwide and setting a high standard for data protection practices [9].
In the absence of comprehensive federal privacy legislation in the U.S., individual states have begun implementing their own state-level initiatives on privacy laws. The CCPA of 2018 is a leading example, granting California residents significant privacy rights [29]. These include the right to know what personal data businesses collect and how they are used, the right to request the deletion of personal data held by businesses, and the right to opt out of the sale of personal data to third parties. The CCPA has prompted other states to enact similar legislation and has spurred discussions about the need for a unified federal privacy law. Furthermore, its provisions have shaped the practices of international companies operating in the U.S., pushing them to align with stricter privacy standards [30].

3.3.2. Cultural Shifts

Public awareness and attitudes toward privacy have changed dramatically in recent years, influenced by high-profile data breaches and revelations about government surveillance. As a result, 71% of Americans now report being concerned about how companies use their data [31]. Associated with this surge in privacy consciousness is the growing recognition of privacy as a fundamental human right, emphasizing its centrality to individual autonomy and democratic participation [32]. Currently, the concept of “privacy by design,” prominently advocated in [33], proposes that privacy assurance should be built into the core of organizational processes and technological systems rather than merely treated as a regulatory afterthought. This evolving outlook on privacy underscores the need for robust protections and carefully considered design practices that respect and uphold individuals’ rights [33].
As cultural attitudes toward privacy have evolved, emphasizing its role as a fundamental human right and advocating for proactive approaches such as “privacy by design,” these shifts have significantly influenced the development of legal frameworks. The next section explores how key privacy laws and regulations have emerged in response to these cultural changes and the challenges posed by advancing technologies.

4. Key Privacy Laws and Regulations

Following the evolution of privacy harms, this section examines key privacy laws and regulations that have shaped the global privacy landscape. The evolution of privacy laws over the past three and a half decades reflects the growing recognition of privacy as a fundamental right and the need to address emerging technological challenges.
Table 2 offers a concise overview of key legal frameworks that shape global data protection and privacy standards, highlighting their core focuses, originating regions, broader influences, and interconnections to demonstrate how these laws have evolved and inspired one another across jurisdictions. From foundational principles such as the Fair Information Practice Principles (FIPPs) and Data Protection Directives (DPD), to comprehensive regulations like the GDPR and Personal Information Protection Law (PIPL), this illustrates the progression of data rights and security measures in response to technological advancements and societal needs.
Privacy laws generally follow one of two primary frameworks: the sectoral approach or the comprehensive approach. Therefore, we will analyze privacy laws in the United States and the European Union separately, as they exemplify these contrasting models. The United States adheres to a sectoral model, implementing privacy regulations on an industry-by-industry basis. In contrast, the European Union and much of the rest of the world have embraced a more unified and comprehensive legal framework for privacy protection.

4.1. U.S. Privacy Laws

The United States has adopted a sectoral approach to privacy regulation, with laws explicitly targeting certain industries and types of data. One of the key pieces of legislation is HIPAA, enacted in 1996, which introduced privacy and security rules to protect health information, focusing on the principles of confidentiality, integrity, and data availability [34,35]. Another important law is the COPPA, established in 1998, which protects the privacy of children under 13 years of age by requiring parental consent for data collection [36].
The GLBA, passed in 1999, requires financial institutions to disclose their data sharing practices and implement safeguards to protect sensitive information [34]. More recently, the CCPA, enacted in 2018, grants Californians the right to access, delete, and opt out of the sale of their data, setting a significant precedent for state-level privacy laws [30]. Additionally, nineteen U.S. states have enacted consumer privacy laws similar to the GDPR and CCPA, while comprehensive federal privacy legislation has been proposed but not yet enacted.
The article “Privacy Purgatory” advocates the adoption of a federal data privacy law to address the inadequacies of the current fragmented regulatory framework [37]. It highlights the challenges posed by inconsistent state privacy laws, the risks to individual privacy, and the burdens on businesses. The proposed American Data Privacy Protection Act (ADPPA) is presented as a viable solution that offers clear protections to consumers and a unified compliance framework for businesses. The article also explores constitutional considerations and emphasizes the urgency of federal action to protect privacy in the digital age [37]. The comprehensive privacy framework adopted by the E.U. almost a decade ago, which the ADPPA resembles, is discussed next.

4.2. E.U. Privacy Laws

In contrast to the U.S. sectoral approach to privacy laws, the E.U. has established comprehensive privacy frameworks that serve as global benchmarks for data protection. The Data Protection Directive, enacted in 1995, was a significant step in harmonizing data protection laws across E.U. member states, emphasizing key principles such as data quality and purpose limitation [38]. The GDPR (adopted in 2016, applicable from 2018) unified E.U. data protection, introducing key individual rights such as erasure (Art. 17), portability (Art. 20), and enhanced consent (Art. 7) [28,39]. These build on the 1995 Data Protection Directive, establishing a global benchmark for comprehensive regulation, with detailed enforcement outcomes outlined in Section 7.
The GDPR has inspired privacy legislation worldwide, including Brazil’s General Data Protection Law (LGPD) (2020), China’s PIPL (2021), and most recently, Brunei’s Personal Data Protection Order (PDPO) (2025). These laws reflect the global trend towards comprehensive data protection frameworks [39,40]. Despite significant progress, privacy laws face challenges in addressing cross-border data transfers, enforcement, and emerging technologies. The Court of Justice of the European Union (CJEU) in Schrems II (Case C-311/18) invalidated the E.U.-U.S. Privacy Shield, citing inadequate protections against U.S. surveillance. Following the Schrems II judgment, transfers primarily rely on Standard Contractual Clauses (SCCs), accompanied by documented data transfer impact assessments (DTIAs), and, where necessary, supplementary measures. In 2023, the E.U.–U.S. Data Privacy Framework (DPF) was adopted to facilitate transatlantic transfers for certified U.S. organizations, though many controllers still depend on SCCs and transfer assessments due to business scope and risk considerations. Divergences in enforcement and guidance among E.U. supervisory authorities persist, increasing compliance complexity for multinational controllers and processors [41].
The European Union’s comprehensive privacy laws, particularly the GDPR, have not only set a global standard for data protection but have also influenced the development of privacy legislation worldwide. Building on these legal foundations, the next section examines the underlying privacy principles and frameworks that guide organizations in implementing effective data protection practices.

4.3. International Snapshot: Canada (PIPEDA), Australia (APPs), Japan (APPI), India (DPDPA), Africa (POPIA/NDPR/Kenya DPA), and ASEAN Instruments

To orient to global benchmarks, the following snapshot summarizes key international jurisdictions and instruments, highlighting their scope, legal bases, cross-border transfer mechanisms, enforcement posture, and individual rights to facilitate comparisons with the GLBA, HIPAA, and the GDPR.
  • Canada: Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA) sets a federal, consent-driven baseline for private-sector handling of personal information and articulates fair information principles; recent reform efforts (e.g., CPPA proposals) seek stronger enforcement and enhanced rights, whereas GDPR relies on multiple legal bases and more prescriptive transfer tools [42].
  • Australia: Australia’s Privacy Act is an omnibus framework anchored by the Australian Privacy Principles (APPs); reforms have increased penalties and expanded OAIC powers, pushing the regime closer to GDPR’s rights and enforcement posture while remaining principles-based and somewhat less prescriptive on cross-border mechanisms [43].
  • Japan: Japan’s Act on the Protection of Personal Information (APPI) provides a comprehensive, consent-oriented framework with cross-border rules requiring notice/consent or adequacy/equivalent safeguards; periodic amendments have expanded rights, breach notification, and penalties, positioning APPI between sectoral U.S. laws and the GDPR’s breadth of rights and transfer tools [44].
  • India: India’s Digital Personal Data Protection Act (DPDPA) establishes an omnibus, principles-based regime centered on consent and specified legitimate uses, creates a central Data Protection Board, and adopts a government-led approach to cross-border transfers; it converges toward GDPR on scope and accountability but currently enumerates fewer explicit rights [45].
  • Africa: Representative African laws such as South Africa’s Protection of Personal Information Act (POPIA), Nigeria’s Nigeria Data Protection Regulation (NDPR), and Kenya’s Data Protection Act (DPA) provide omnibus protections beyond sectoral U.S. laws, with rights like access, correction, and deletion; however, enforcement capacity and guidance remain heterogeneous across authorities compared with the GDPR’s more standardized approach [46,47,48].
  • ASEAN: ASEAN’s Model Contractual Clauses and interoperability initiatives facilitate cross-border flows across heterogeneous national laws, conceptually similar to GDPR SCCs but without a unified supervisory regime, leading to variation in rights and enforcement across the coalition [49].
This section traces the shift from foundational FIPPs to comprehensive, rights-based regimes, contrasting the U.S. sectoral model with the E.U.’s unified approach. Persistent challenges include cross-border transfers, uneven enforcement, and rapid technological change. Next, we transition from legal requirements to the principles that operationalize them by examining FIPPs, accountability, and risk-based controls, which translate statutes into practical governance, design, and assurance across jurisdictions.

5. Privacy Principles and Frameworks

This section examines core privacy principles, how major frameworks implement them, and the strengths and limitations of those frameworks. These principles underpin privacy laws and guide organizations in protecting personal data.

5.1. Core Privacy Principles

Widely recognized principles include transparency, data minimization, purpose limitation, security of processing (security safeguards), and accountability. Together, they provide a foundation for effective privacy management and help organizations handle personal data responsibly while fostering trust.

5.1.1. Transparency

Transparency requires clear, accessible communication about data collection, use, and sharing, including privacy policies, purposes, categories of data, recipients, and individual rights [8,50]. Organizations should use plain language, notify individuals of material changes and breaches, and enable informed choices.

5.1.2. Data Minimization

Collect and process only data necessary for specific, legitimate purposes, reducing misuse risks and breach impact while lowering storage and processing costs [8]. For example, an e-commerce site should request only payment and shipping details for transactions.

5.1.3. Purpose Limitation

Use personal data solely for explicit, lawful, and specific purposes stated at collection, preventing unauthorized secondary uses and function creep [50]. If a user provides an email for a newsletter, it should not be repurposed for unrelated advertising without additional consent.

5.1.4. Security of Processing (Security Safeguards)

Security of processing requires appropriate technical and organizational measures to preserve confidentiality, integrity, and availability. In OECD/FIPPs, ‘security safeguards’ is the foundational principle that corresponds to GDPR’s ‘security of processing’ (Art. 5(1)(f), Art. 32), which we refer to as the security principle [50,51]. Frameworks such as the National Institute of Standards and Technology (NIST) Privacy Framework (PF) and International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) 27701 (leveraging ISO/IEC 27001/27002) instantiate these controls; practical implementations include end-to-end encryption and least-privilege access, with evidence such as key inventories, access reviews, SIEM logs, and incident registers [52,53]. These safeguards also support other principles (e.g., minimization, purpose limitation) and corresponding legal obligations summarized in Table 3 and cross-referenced to outcomes in Table A5. They protect personal data against unauthorized access and breaches and should be complemented by audits, vulnerability assessments, and incident response plans [54].
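As a concrete illustration of the security-of-processing principle, the short Python sketch below applies authenticated encryption to a personal-data record using the widely used `cryptography` package. It is a minimal educational example under our own assumptions, not a compliance recipe: key management, rotation, and the audit evidence named above (key inventories, access reviews, logs) are assumed to be handled elsewhere.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Generate a 256-bit key; in practice it would come from a managed KMS
# with rotation and access logging (the audit evidence described above).
key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

record = b'{"name": "A. Subject", "email": "a@example.org"}'
nonce = os.urandom(12)          # 96-bit nonce, unique per encryption
context = b"customer-db:v1"     # bound as associated data, not encrypted

ciphertext = aead.encrypt(nonce, record, context)
assert aead.decrypt(nonce, ciphertext, context) == record  # round-trip check
```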

5.1.5. Accountability

Organizations must demonstrate compliance through policies, roles, risk assessments, records of processing, and measurable outcomes [52]. This often includes appointing a DPO or equivalent, performing DPIAs, and maintaining evidence of implemented controls.
These principles reinforce each other. For example, transparency supports accountability, while minimization and purpose limitation reduce breach impact, which is further mitigated by robust security. The next section shows how frameworks apply these principles.

5.2. Privacy Frameworks

This section aligns with two tables: Table 4, a crosswalk of privacy harms, legal responses, mechanisms, and outcomes; and Table 3, a mapping of privacy principles to verifiable controls.
NIST’s PF and ISO/IEC 27701 translate principles into actionable controls [52,53]. The NIST PF uses a risk-based structure organized around the functions Identify-P, Govern-P, Control-P, Communicate-P, and Protect-P to identify risks, implement controls, and demonstrate results [52]. ISO/IEC 27701 extends ISO/IEC 27001 to establish a Privacy Information Management System aligned with GDPR and supported by ISO/IEC 27001:2022 controls, with auditable requirements for roles, notices, purpose specification, minimization, and security [53,55]. Table 3 shows example mappings: transparency to NIST PF Communicate-P and ISO/IEC 27701 privacy notices (evidence: layered policies, DSR logs); minimization to NIST PF Control-P and ISO/IEC 27701 data limitation (evidence: retention schedules, purge scripts, RoPA); security to NIST PF Protect-P and ISO/IEC 27001 controls (evidence: validated encryption, PAM logs).
These controls support the legal obligations summarized in Table 4 as described below.
  • Data breaches: Articles 32 and 33–34 of GDPR, the HIPAA Security Rule, and CCPA/CPRA breach provisions are implemented through safeguards and notification processes, leading to increased notifications and significant GDPR fines.
  • Algorithmic discrimination: GDPR Articles 5, 9, and 22 and CPRA principles are operationalized through DPIAs, special category data protections, and restrictions on automated decisions; the E.U. AI Act introduces expanded audit/oversight, while the U.S. remains fragmented.
  • Surveillance/mass monitoring: The E.U. standards for proportionality and necessity, along with sectoral oversight in the U.S., are supported by governance and assessment, even though constraints and chilling effects continue to exist.
  • Manipulative targeting: GDPR consent and purpose limitation, CPRA limits on secondary use, and COPPA protections translate into consent management, opt-outs, and fairness controls, reflected in major enforcement.
  • Dignitary harms: GDPR erasure rights (Art. 17), national civil/criminal remedies, and platform takedowns depend on documented governance and response controls, with uneven remedies across jurisdictions.
For example, data minimization is evidenced by documented retention schedules with automated purge scripts, RoPA purpose tags, and deletion logs; ‘security of processing’ is evidenced by validated encryption at rest/in transit, privileged access management (PAM) access logs, and incident registers.
OECD privacy guidelines provide the foundational principles—transparency, purpose limitation, data minimization, security safeguards, and accountability—that NIST PF and ISO/IEC 27701 operationalize [50]. Table 4 links harms to legal responses and outcomes, while Table 3 shows how principles map to verifiable controls and evidence. Despite better alignment between law and practice, gaps remain in areas such as consistent algorithmic accountability, cross-border enforcement, and effective remedies—limitations examined next.

5.3. Privacy Limitations

While these frameworks provide valuable guidance, challenges include complexity, implementation costs, and jurisdictional inconsistencies. ISO/IEC 27701 is effective for GDPR alignment but can be difficult for smaller organizations to implement, especially because it typically builds on ISO/IEC 27001, often requiring a multi-year effort [54]. The NIST Privacy Framework does not require ISO/IEC 27001 but relies on NIST security controls (e.g., SP 800-171, SP 800-53) to ensure appropriate safeguards. Rapid technological change adds new risks and demands continual adaptation. The following section explores how big data and AI test the boundaries of current protections.

6. Technological Challenges to Privacy

With the understanding of the privacy frameworks available to guide the privacy management effort, technological advances have introduced new privacy risks, necessitating the adaptation of privacy laws and principles. One significant challenge arises from big data and analytics. The ability to collect and analyze vast amounts of data raises concerns about data minimization, consent, and potential misuse. For example, predictive analytics can lead to algorithmic discrimination, undermining fairness and accountability [56].
AI systems also present complex challenges related to privacy. These systems process personal data in intricate ways, creating issues surrounding transparency, accountability, and bias. The GDPR includes a “right to explanation” aimed at addressing these concerns, yet its effectiveness remains a topic of debate [26].
The IoT further complicates the privacy landscape. IoT devices generate continuous streams of personal data, often without clear consent mechanisms. This situation creates significant risks related to data security and unauthorized access [25].
Lastly, blockchain technology poses unique challenges to privacy laws. Its inherent immutability conflicts with regulations such as the GDPR, which stipulate the right to erasure. To address these challenges, innovative solutions such as off-chain storage and zero-knowledge proofs are being explored [57,58].
The unique challenges posed by emerging technologies, such as blockchain, underscore the ongoing tension between technological innovation and the adaptability of privacy laws.

6.1. Privacy-Enhancing Technologies (PETs) for AI/ML, IoT, and Blockchain: Costs and Limitations

Privacy-enhancing technologies (PETs) offer complementary protections but impose context-dependent trade-offs in utility, performance, trust, and governance. Differential privacy (DP) provides rigorous statistical guarantees for aggregated outputs by bounding each contribution’s influence via the privacy budget ε. However, utility degrades as ε tightens, composition across multiple queries must be carefully accounted for, and DP is better suited to analytics over populations than to individual-level decisions or model personalization without careful design [59,60,61].
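To make the ε trade-off concrete, the sketch below implements the classic Laplace mechanism for a counting query with sensitivity 1. It is a toy with illustrative parameters of our own choosing, not a production DP system, and it ignores budget composition across repeated queries.

```python
import numpy as np

rng = np.random.default_rng(7)

def laplace_count(true_count, epsilon, sensitivity=1.0):
    """Release a noisy count satisfying epsilon-DP for the given sensitivity."""
    scale = sensitivity / epsilon          # smaller epsilon -> more noise
    return true_count + rng.laplace(loc=0.0, scale=scale)

true_count = 1234  # e.g., number of records matching some predicate
for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps:>4}: noisy count = {laplace_count(true_count, eps):.1f}")
```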
Homomorphic encryption (HE) enables computation over ciphertexts and strong data confidentiality, yet remains constrained by substantial computational and memory overheads, limiting practicality to selected analytics or batched workloads rather than low-latency, high-throughput inference typical in IoT and real-time machine learning settings [62,63,64].
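The additive flavor of HE can be illustrated with textbook Paillier, sketched below with deliberately tiny toy primes of our own choosing; real deployments use vetted libraries and far larger parameters. The point is the workload shape: ciphertexts are multiplied to add the underlying plaintexts, which suits batched aggregation better than low-latency inference.

```python
import math, random

p, q = 1789, 1867                    # toy primes; never use sizes like this
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse of L(g^lam mod n^2)

def encrypt(m):
    r = random.randrange(1, n)       # assumes gcd(r, n) == 1, near-certain here
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

values = [50_000, 62_000, 58_000]    # each message must be < n
cts = [encrypt(m) for m in values]

# Homomorphic addition: multiplying ciphertexts adds the plaintexts.
c_sum = math.prod(cts) % n2
assert decrypt(c_sum) == sum(values) % n
print("decrypted sum:", decrypt(c_sum))
```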
Trusted execution environments (TEEs), such as Intel SGX and ARM TrustZone, reduce exposure by isolating code and data in hardware-protected enclaves, but residual risks include side-channel leakage, attestation and key management complexity, and reliance on vendor supply chains and microcode integrity; they are often effective for third-party analytics under strict enclave governance and auditing [65,66,67].
Federated learning (FL) keeps raw data local and aggregates model updates centrally, but privacy hinges on secure aggregation, clipping, and DP at the update or record level to mitigate gradient leakage and model inversion/membership inference risks; without noise and robust aggregation, updates can leak sensitive features [68,69].
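A minimal sketch of the clipping-plus-noise mitigation described above follows, with illustrative parameters of our own; calibrating `noise_std` to a formal (ε, δ) guarantee via the Gaussian mechanism, and the cryptographic secure-aggregation layer itself, are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip(update, max_norm):
    """Scale the update down if its L2 norm exceeds max_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, max_norm / (norm + 1e-12))

def private_aggregate(updates, max_norm=1.0, noise_std=0.1):
    """Clip each client's update, add Gaussian noise, then average."""
    clipped = [clip(u, max_norm) for u in updates]
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(0.0, noise_std, clipped[0].shape)
    return noisy_sum / len(updates)

# Three simulated client updates for a 4-parameter model.
client_updates = [rng.normal(0, 1, 4) for _ in range(3)]
print("aggregated update:", private_aggregate(client_updates))
```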
Zero-knowledge proofs (ZKPs) enable verification of statements (e.g., policy compliance, credential possession) without revealing underlying data, supporting selective disclosure for identity and blockchain use cases, but they carry developer complexity, circuit design burdens, and nontrivial performance costs that can limit throughput and user experience on constrained platforms [70,71].
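The core idea of proving knowledge without disclosure can be seen in a toy interactive Schnorr proof, sketched below over a deliberately tiny group of our own choosing. Production ZKP systems use vetted libraries, large groups or circuits, and the Fiat-Shamir transform for non-interactive proofs.

```python
import secrets

# Toy group: g generates a subgroup of prime order q in Z_p*.
p, q, g = 23, 11, 2                # 2^11 = 2048 = 1 (mod 23), so ord(g) = q

x = secrets.randbelow(q - 1) + 1   # prover's secret
y = pow(g, x, p)                   # public key

# One round of the protocol.
r = secrets.randbelow(q - 1) + 1   # prover's random commitment exponent
t = pow(g, r, p)                   # commitment sent to verifier
c = secrets.randbelow(q)           # verifier's random challenge
s = (r + c * x) % q                # prover's response

# Verifier checks g^s == t * y^c (mod p); the transcript (t, c, s) can be
# simulated without knowing x, which is the zero-knowledge property.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof verified without revealing x")
```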
Finally, pseudonymization and tokenization reduce exposure of direct identifiers and facilitate internal analytics under role-based access controls, yet they do not, on their own, prevent linkage attacks or re-identification via quasi-identifiers; robust governance, minimization, and periodic re-risking are required, especially in high-dimensional datasets typical of AI/ML and IoT telemetry [72,73].
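To show both the utility and the limits discussed above, the sketch below derives stable tokens with a keyed HMAC; the record fields and key handling are hypothetical. Note that the quasi-identifiers left in the record are exactly what linkage attacks exploit, so tokenization alone is not anonymization.

```python
import hmac, hashlib, secrets

SECRET_KEY = secrets.token_bytes(32)   # in practice: stored in a KMS, rotated

def pseudonymize(identifier):
    """Deterministic keyed token; same input + key -> same token."""
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"email": "a.subject@example.org", "zip": "57042", "age": 34}
record["email"] = pseudonymize(record["email"])
print(record)   # email replaced by a token; quasi-identifiers remain
```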

6.2. Cyberattacks and Privacy

Ransomware employing ‘double extortion’ (data exfiltration plus encryption) and third-party/vendor compromises have amplified privacy harms by coupling availability failures with large-scale confidentiality breaches [74]. Sectoral impacts vary: healthcare and financial sectors remain frequent targets, and legal responses hinge on breach notification and security-of-processing obligations such as GDPR Arts. 32–34, the HIPAA Breach Notification Rule, and CCPA/CPRA regulations [75]. Controls increasingly emphasize secure-by-default configurations, third-/fourth-party continuous monitoring, immutable backups, key governance, and attack-path reduction, consistent with mitigation recommendations synthesized in peer-reviewed surveys of ransomware defenses [76]. With these technological constraints in view, Section 7 synthesizes the review’s findings, highlighting the effectiveness of existing privacy regulations and the challenges that persist in enforcement and compliance.

7. Findings

The findings of this review underscore the intricate relationship between technological advances, privacy harms, and the regulatory frameworks designed to address them. This interplay reveals both the progress made in strengthening privacy protections and the persistent challenges that demand further attention.
Appendix G synthesizes the harm categories, governing legal responses, enforceable mechanisms, and observed outcomes identified in this review, and it anchors the thematic subsections that follow. As defined in Section 3.3, the FIPs supplied foundational concepts, encouraging transparency and individual control over personal information, that influenced subsequent laws worldwide [27].
One of the most significant insights is the differential effectiveness of privacy law enforcement, as shown in the Comparative Enforcement Metrics in Appendix H. As introduced in Section 4.2, GDPR rights such as erasure and portability have driven accountability: more than €3.0 billion in fines are tied to a single violation category (e.g., insufficient legal basis for processing) within a cumulative €6.72 billion total (2018–Sep 2025) [77]. By contrast, CCPA/CPRA fines total about $2.75 million (2020–2025) and HIPAA $144 million (2003–Oct 2024).
The metrics also show uneven enforcement across sectors and jurisdictions. Under GDPR, technology/online platforms bear the highest average penalties, while HIPAA concentrates higher averages on healthcare providers relative to health plans and insurers, indicating regulator focus where control and risk reside. Resolution timelines vary meaningfully (roughly 3–6 months under GDPR, 4–8 months under CCPA/CPRA, and 6–12 months under HIPAA), which can blunt deterrence. Compliance rates remain low (about 28% for GDPR-affected organizations and 11% for CCPA/CPRA), signaling that substantial resourcing and governance investments are still required to lift baseline compliance.
Regulations such as the GDPR and the CCPA have introduced robust mechanisms to safeguard personal data, empowering individuals with rights such as data access, portability, and erasure [51,78]. These laws have also compelled organizations to adopt stricter data handling practices, which foster greater accountability. However, enforcement and compliance remain critical obstacles. Many organizations struggle to fully implement these regulations due to their complexity, high costs, and the need for specialized expertise [39,79]. In addition, regulatory bodies often face resource constraints that limit their ability to monitor compliance and effectively impose penalties [80]. While regulations like GDPR have improved data protection, these gains are complicated by emerging privacy risks from technologies such as AI, which amplify issues of enforcement and compliance as outlined next.
Emerging privacy risks further complicate the regulatory landscape. Technologies such as AI, the IoT, and blockchain have introduced unprecedented challenges to data protection. AI systems, for instance, can infer sensitive information from seemingly innocuous data, raising concerns about algorithmic discrimination and the erosion of informed consent [26,56]. Similarly, IoT devices generate continuous streams of sensitive data, such as location and biometric information, often without users’ explicit awareness [25,81]. Blockchain technology, with its immutable nature, conflicts with privacy laws such as GDPR, which enshrine the right to erasure [57]. These technological advances highlight the need for proactive and adaptive regulatory approaches that can address the unique risks posed by emerging innovations.
Another critical finding is the importance of global harmonization of privacy standards. The GDPR has set a global benchmark for data protection, inspiring similar legislation in other jurisdictions, such as Brazil’s LGPD and China’s PIPL [39,40]. However, significant differences in legal standards across countries create challenges for cross-border data transfers. For example, the Schrems II decision by the Court of Justice of the European Union invalidated the European Union-U.S. Privacy Shield, citing inadequate protections for E.U. citizens’ data under U.S. law [41]. This decision underscores the need for robust mechanisms to ensure data protection in international contexts and highlights the complexities of achieving global interoperability in privacy regulations.
Finally, the findings emphasize the need for future research and innovation to address these challenges. Collaborative efforts between policymakers, technologists, and privacy advocates are essential to develop solutions that balance technological progress with robust privacy protections. For instance, advances in privacy-preserving technologies, such as differential privacy, homomorphic encryption, and zero-knowledge proofs, offer promising avenues for mitigating privacy risks while enabling data-driven innovation [59,82]. Furthermore, research should focus on creating scalable and cost-effective compliance tools to support organizations, particularly small and medium enterprises, in meeting regulatory requirements [79].
A key dimension often overlooked in privacy scholarship is the comparative strength of enforcement mechanisms. Since the GDPR became applicable, E.U. Data Protection Authorities (DPAs) have issued numerous high-value fines, with top categories including inadequate legal basis, security failures, and transparency violations; several national authorities have also increased investigative throughput. U.S. enforcement remains fragmented across the FTC and state Attorneys General (AGs), with comparatively lower monetary penalties but increasing injunctive relief and conduct remedies. Emerging regimes such as Brazil's LGPD and China's PIPL show rising activity but heterogeneous capacity and guidance. These differences shape global compliance incentives and the relative prioritization of rights management, security controls, and cross-border transfer safeguards.
Empirical evidence underscores the uneven enforcement landscape. Under the GDPR, fines totaled over €5.88 billion from 2018 to mid-2025 across more than 1500 cases, concentrated on high-profile technology firms such as Meta, which was fined €1.2 billion in 2023 for unlawful E.U.-U.S. data transfers [77]. In contrast, U.S. efforts reveal stark limitations. CCPA/CPRA enforcement has yielded approximately $87 million in monetary outcomes since 2020, including Zoom's $85 million settlement (2021) for security lapses enabling “Zoombombing” and Sephora's $1.2 million settlement (2022) for failing to honor opt-outs [83]. The FTC, by contrast, has pursued over 500 actions since 2000, with approximately $10–12 billion in penalties from 2018 to 2025 (averaging 20–30 cases annually), exemplified by the $5 billion Meta settlement in 2019 for deceptive privacy controls [84]. Collectively, these regimes demonstrate resource constraints and limited deterrence, with annual state-level privacy fines under $100 million despite widespread violations, highlighting the need for enhanced mechanisms to bridge gaps in privacy harm mitigation.
In conclusion, while significant strides have been made in strengthening privacy protections, the rapid pace of technological change and the global nature of data flows demand continuous adaptation of privacy laws and frameworks. By addressing enforcement challenges, proactively managing emerging risks, and fostering international collaboration, privacy protections can evolve to meet these demands. Section 8 outlines the limitations of this review to contextualize these findings.

8. Limitations

While this review offers a comprehensive synthesis, it is worth noting a few limitations to guide interpretation and future updates. An important point is that the source material is predominantly in English, with most of the content originating from the United States and the European Union. As a result, perspectives from regions such as ASEAN countries and parts of Africa may not be fully reflected. To improve geographic and cultural representation, future iterations will aim to incorporate more publications from India, African nations, and Southeast Asia, particularly to capture region-specific insights in the context of machine learning and artificial intelligence. In addition, while established methodological frameworks such as PRISMA and ROBIS were used to structure the review process, reliance on predefined Boolean search terms may have limited the range of sources identified. These queries were developed with care and precision, but may not encompass all relevant work. Another point to note is the inclusion of bibliometric indicators such as hI, hc, and hA. These metrics, reported in Appendix B, were used to provide a general sense of academic engagement but were not used to assess the legal implications discussed in the main analysis. Lastly, it is important to recognize the pace at which technologies like large language models and quantum computing are evolving. The findings presented here reflect the state of knowledge at a particular time, and regular updates are expected to ensure continued relevance.

9. Conclusions and Future Work

Drawing from the preceding sections, this conclusion highlights how legal evolution, technological change, and observed enforcement patterns guide future directions. Over the past three and a half decades, privacy laws and principles have undergone significant evolution, shaped by rapid technological advancements and shifting societal expectations. This review highlights the critical need for continuous adaptation of privacy frameworks to address the ever-expanding landscape of privacy risks.
As technologies such as Artificial Intelligence (AI), Internet of Things (IoT), and blockchain (Distributed Ledger Technology (DLT)) redefine the ways personal data is collected, processed, and shared, privacy protections must evolve to safeguard individual rights while enabling innovation. By fostering a culture of accountability, transparency, and user empowerment, privacy frameworks can strike a delicate balance between technological progress and the protection of fundamental human rights in the digital age.
Despite significant progress, challenges remain. Enforcement and compliance with privacy laws, such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), continue to be hampered by resource constraints, jurisdictional differences, and the complexity of regulatory requirements. Emerging technologies further complicate the privacy landscape, introducing risks that existing frameworks are not equipped to address. These challenges underscore the need for innovative solutions, global collaboration, and a forward-looking approach to privacy governance.
Looking ahead, several critical areas require attention to ensure that privacy protections remain robust and effective. Ethical AI governance is essential, as AI systems increasingly influence decision-making in areas such as hiring, lending, and law enforcement. Governance frameworks must address issues such as algorithmic bias, explainability, and ethical use of AI in sensitive contexts to foster public trust and mitigate the risks of discrimination and misuse.
Similarly, the challenges of cross-border data transfers demand urgent attention. In a globalized digital economy, data frequently flows across national boundaries, creating complex legal and regulatory challenges. Mechanisms to harmonize privacy laws across jurisdictions, such as international agreements or interoperable standards, are necessary to facilitate secure and lawful data transfers.
The privacy implications of emerging technologies, such as blockchain (DLT), quantum computing, and IoT, also require thorough investigation. For instance, blockchain’s immutability conflicts with GDPR’s right to erasure, while IoT devices generate continuous streams of sensitive data, often without explicit user consent. Research should explore how these technologies impact privacy and develop solutions, such as privacy-preserving cryptographic techniques, to address these challenges.
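One widely discussed design pattern for the blockchain–erasure tension, sketched below under our own illustrative names, keeps personal data off-chain and anchors only a salted hash on the immutable ledger; destroying the off-chain record and its salt leaves the on-chain commitment computationally unlinkable, approximating erasure without rewriting the chain. Whether this satisfies GDPR Art. 17 remains contested [57], so the sketch is a conceptual aid, not legal guidance.

```python
import hashlib
import os

off_chain_store: dict[str, tuple[bytes, bytes]] = {}  # record_id -> (salt, data)
ledger: list[str] = []  # append-only stand-in for an immutable chain

def anchor(record_id: str, personal_data: bytes) -> str:
    """Store data off-chain; put only a salted commitment on-chain."""
    salt = os.urandom(16)
    off_chain_store[record_id] = (salt, personal_data)
    commitment = hashlib.sha256(salt + personal_data).hexdigest()
    ledger.append(commitment)  # the chain itself is never rewritten
    return commitment

def erase(record_id: str) -> None:
    """'Erasure': destroying the data and the random salt orphans the hash,
    so the surviving on-chain value cannot be brute-forced back to a person."""
    off_chain_store.pop(record_id, None)

c = anchor("user-42", b"alice@example.com")
erase("user-42")
# The commitment c remains on the ledger but can no longer be tied to Alice.
```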
Additionally, user-centric privacy models must be prioritized to empower individuals with greater control over their personal data. Privacy-Enhancing Technologies (PETs)—such as tools for data anonymization, differential privacy, and user-friendly consent management systems—can help rebuild trust in digital ecosystems and ensure that privacy remains a fundamental right in the face of technological change.
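As a concrete illustration of the anonymization tooling mentioned above, the snippet below checks k-anonymity over a set of quasi-identifiers: a release is k-anonymous only if every combination of quasi-identifier values is shared by at least k records [72,73]. This is a minimal sketch with invented field names; production tools add generalization, ℓ-diversity, and re-identification risk scoring.

```python
from collections import Counter

def is_k_anonymous(records: list[dict], quasi_ids: list[str], k: int) -> bool:
    """True if every quasi-identifier combination occurs at least k times."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())

# Hypothetical generalized records (zip codes truncated, ages banded).
people = [
    {"zip": "570**", "age_band": "40-49", "dx": "flu"},
    {"zip": "570**", "age_band": "40-49", "dx": "asthma"},
    {"zip": "571**", "age_band": "30-39", "dx": "flu"},
]
print(is_k_anonymous(people, ["zip", "age_band"], k=2))  # False: one group of size 1
```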
Finally, scalable compliance tools are essential to help Small and Medium-sized Enterprises (SMEs) meet complex regulatory requirements. Developing cost-effective tools, such as automated compliance monitoring systems and Privacy Impact Assessment (PIA) frameworks, can simplify adherence to privacy laws and reduce the burden on smaller organizations.
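A minimal sketch of the kind of automated compliance monitoring we have in mind for SMEs follows: each record carries a purpose tag, each purpose has a retention period, and a scheduled job flags anything held past its limit for purge and audit logging. The schema and field names are hypothetical; real PIA/compliance suites layer on data mapping, DSAR routing, and evidence capture.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention schedule, keyed by processing purpose.
RETENTION = {"marketing": timedelta(days=365), "billing": timedelta(days=7 * 365)}

def overdue_records(records: list[dict], now: datetime) -> list[str]:
    """Return IDs of records held past their purpose's retention period."""
    flagged = []
    for r in records:
        limit = RETENTION.get(r["purpose"])
        if limit and now - r["collected_at"] > limit:
            flagged.append(r["id"])
    return flagged

now = datetime.now(timezone.utc)
data = [
    {"id": "r1", "purpose": "marketing", "collected_at": now - timedelta(days=400)},
    {"id": "r2", "purpose": "billing", "collected_at": now - timedelta(days=100)},
]
print(overdue_records(data, now))  # ['r1'] -> candidate for purge and audit log
```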
Against this backdrop, our systematic review shows that while GDPR/CCPA-era regimes broadened rights and accountability, effectiveness remains uneven due to three recurrent gaps: fragmentation and exemptions that leave surveillance and cross-context tracking under-regulated; limited translation of compliance processes (e.g., DPIAs and notices) into measurable reductions in concrete harms; and enforcement asymmetries across jurisdictions and sectors.
To fill these gaps, we propose risk-tiered obligations for high-impact contexts (AI, ad-tech, Automated Decision-Making (ADM)); stronger interoperability of enforcement, including cooperation on cross-border data transfers and ad-tech practices via Standard Contractual Clauses (SCCs) and the E.U.-U.S. Data Privacy Framework (DPF); and the adoption of outcome-oriented metrics that link legal controls to observable reductions in breaches, discrimination, manipulation, and dignitary harms.
For example, under the E.U.’s GDPR, mechanisms such as purpose limitation, DPIAs (Art. 35), Data Subject Access Requests (DSARs) and contestation rights (Arts. 15, 21), and safeguards for ADM (Art. 22) can be moderately effective against algorithmic discrimination in hiring when paired with sectoral anti-discrimination enforcement; effectiveness diminishes where opacity and vendor fragmentation impede contestability. Current evidence remains mixed, with few causal measures of harm reduction, but regulator actions and guidance are increasing.
We propose a practical model that links harms → principles → verifiable controls → laws → enforcement → outcomes. For each harm category, organizations should (i) identify applicable principles (e.g., minimization, fairness), (ii) implement verifiable controls mapped to NIST/ISO with evidence artifacts, (iii) align with jurisdiction-specific legal requirements (articles/sections), and (iv) monitor external enforcement patterns to calibrate risk and investment. This model can be operationalized as a control matrix, as shown in Appendix G, to drive DPIAs and audit readiness.
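To show how this chain could be operationalized, the sketch below encodes one row of such a control matrix as a plain data structure; every name in it is illustrative, and a real deployment would populate the rows from a DPIA register and external enforcement trackers rather than hard-coded values.

```python
from dataclasses import dataclass

@dataclass
class ControlMatrixRow:
    harm: str
    principles: list[str]          # applicable privacy principles
    controls: list[str]            # verifiable controls with evidence artifacts
    legal_bases: list[str]         # jurisdiction-specific articles/sections
    enforcement_signals: list[str] # external patterns to watch
    outcome_metrics: list[str]     # observable harm-reduction measures

row = ControlMatrixRow(
    harm="Algorithmic discrimination in hiring",
    principles=["fairness", "transparency", "minimization"],
    controls=["DPIA on file", "fairness test report", "human review log"],
    legal_bases=["GDPR Arts. 22, 35", "U.S. sectoral anti-discrimination law"],
    enforcement_signals=["DPA guidance on ADM", "FTC algorithmic actions"],
    outcome_metrics=["adverse-impact ratio", "contested-decision rate"],
)
print(row.harm, "->", row.controls)
```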

Author Contributions

K.P. prepared the literature review and the original draft of the manuscript. J.K. carefully reviewed the work, made the necessary edits, and provided the final modifications. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study did not involve any human participants, animal subjects, or the use of sensitive data. Therefore, ethical approval from an institutional review board was not required.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article, as this is a review of previously published work.

Acknowledgments

During the preparation of this manuscript/study, the authors used three AI tools (OpenAI GPT-4o, Microsoft Copilot with GPT-4, and Grok 4) during the initial research process. Additionally, they used two Grammarly tools (the desktop version of Grammarly for Mac v1.128.0.0 and Chrome Extension version 14.1239.0 for the Chrome browser) for proofreading the manuscript. The authors have reviewed and edited the content and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest. The sponsors had no role in the design of the study; in the collection, analysis, or interpretation of the data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A. Legal Frameworks Growth Since 1970

Table A1. Legal Frameworks Signed and Growth Since 1970.

| Framework | Focus/Description | Year | Region |
| --- | --- | --- | --- |
| FIPPs | Fair Information Practice Principles | 1970 | U.S. |
| FERPA | Privacy of education records | 1974 | U.S. |
| Privacy Act | Privacy Act | 1988 | Australia |
| DPD | E.U. Data Protection Directive | 1995 | E.U. |
| HIPAA | Health information privacy | 1996 | U.S. |
| COPPA | Children's online privacy | 1998 | U.S. |
| GLBA | Financial data privacy | 1999 | U.S. |
| PIPEDA | Personal Information Protection and Electronic Documents Act | 2000 | Canada |
| GDPR | E.U. General Data Protection Regulation | 2016 | E.U. |
| CCPA | California Consumer Privacy Act | 2018 | California |
| DPA | Kenya Data Protection Act | 2019 | Kenya |
| NDPR | Nigeria Data Protection Regulation | 2019 | Nigeria |
| POPIA | Protection of Personal Information Act | 2020 | South Africa |
| LGPD | Brazil's consumer privacy law | 2020 | Brazil |
| APPI | Act on the Protection of Personal Information | 2020 | Japan |
| Virginia | Virginia state consumer privacy law | 2021 | Virginia |
| Colorado | Colorado state consumer privacy law | 2021 | Colorado |
| PIPL | China's Personal Information Protection Law | 2021 | China |
| PDPL | Personal Data Protection Law | 2021 | UAE |
| Utah | Utah state consumer privacy law | 2022 | Utah |
| Connecticut | Connecticut state consumer privacy law | 2022 | Connecticut |
| PDPA | Personal Data Protection Act | 2022 | Thailand |
| PDP | Personal Data Protection Law | 2022 | Indonesia |
| DPDPA | Digital Personal Data Protection Act | 2023 | India |
| Iowa | Iowa state consumer privacy law | 2023 | Iowa |
| Indiana | Indiana state consumer privacy law | 2023 | Indiana |
| Tennessee | Tennessee state consumer privacy law | 2023 | Tennessee |
| Montana | Montana state consumer privacy law | 2023 | Montana |
| Texas | Texas state consumer privacy law | 2023 | Texas |
| Oregon | Oregon state consumer privacy law | 2023 | Oregon |
| Delaware | Delaware state consumer privacy law | 2023 | Delaware |
| PDPL | Personal Data Protection Law | 2023 | Saudi Arabia |
| New Jersey | New Jersey state consumer privacy law | 2024 | New Jersey |
| New Hampshire | New Hampshire state consumer privacy law | 2024 | New Hampshire |
| Kentucky | Kentucky state consumer privacy law | 2024 | Kentucky |
| Nebraska | Nebraska state consumer privacy law | 2024 | Nebraska |
| Maryland | Maryland state consumer privacy law | 2024 | Maryland |
| Minnesota | Minnesota state consumer privacy law | 2024 | Minnesota |
| Rhode Island | Rhode Island state consumer privacy law | 2024 | Rhode Island |
| Brunei | Personal Data Protection Order | 2025 | Brunei |

Appendix B. Publish or Perish h-Index Table

Metrics (hI_annual, hc_index, hA) indicate scholarly attention, not rigor or legal effectiveness; they were used descriptively and did not influence causal claims. They are sensitive to language/database coverage, as well as field-specific citation norms. Table A2 serves as an illustration of Google Scholar keyword search results alongside these metrics to facilitate a comprehensive analysis of research dynamics between privacy laws, privacy harms, and privacy technologies over the last three and a half decades.
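For readers unfamiliar with these bibliometrics, the base quantity is the h-index; with citation counts sorted in descending order, it is the largest h such that h papers each have at least h citations, as below. The variants reported here adjust it: the contemporary hc_index down-weights older citations, and hI_annual normalizes for career length [13,14]; the exact weighting constants follow those cited sources.

```latex
% h-index over citation counts sorted in descending order
h = \max \{\, k : c_k \ge k \,\}, \qquad c_1 \ge c_2 \ge \cdots \ge c_n
```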
Table A2. Privacy research effort and impact metrics (hI_annual, hc_index, and hA) as comparative indicators of sustained scholarly output and impact across jurisdictions.

| Keyword Queries | hI_annual | hc_index | hA |
| --- | --- | --- | --- |
| “GDPR” OR “GLBA” OR “CCPA” OR “COPPA” OR “HIPAA” | 1.71 | 68 | 29 |
| Privacy OR Data Privacy | 1.38 | 62 | 30 |
| “Privacy law*” OR “data protection law*” | 1.8 | 43 | 22 |
| “Privacy AND technology” | 1.31 | 34 | 17 |
| “Privacy harm*” OR “privacy violation*” | 1.29 | 50 | 23 |
| “Privacy AND artificial intelligence” | 2.29 | 47 | 25 |
| “Privacy principle*” OR “data protection principle*” | 0.61 | 20 | 10 |
| “Privacy AND blockchain” | 2.86 | 44 | 24 |

Appendix C. PRISMA 2020 Compliance Statement and Flow Diagram

PRISMA 2020 Checklist

Table A3. PRISMA 2020 Checklist completed for this systematic review.

| Section | Item | Checklist Item | How Addressed in This Review |
| --- | --- | --- | --- |
| Title | 1 | Identify as a systematic review | Title explicitly states systematic review of privacy harms, laws, and technology |
| Abstract | 2 | Structured summary | Abstract includes objectives, methods, results, and limitations |
| Introduction | 3 | Rationale | Explains need for synthesis of privacy laws and technological developments |
| Introduction | 4 | Objectives | Explicit research questions provided |
| Methods | 5 | Eligibility criteria | Peer-reviewed, 1990–2025, English, legal/technical focus |
| Methods | 6 | Information sources | IEEE Xplore, ACM, JSTOR, ScienceDirect, Google Scholar |
| Methods | 7 | Search strategy | Boolean queries provided |
| Methods | 8 | Selection process | Screening and exclusion criteria described |
| Methods | 9 | Data collection process | Extracted laws, harms, principles, and technology issues |
| Methods | 10 | Data items | Legal frameworks, privacy harms, technical risks |
| Methods | 11 | Risk of bias assessment | ROBIS used; results summarized in Appendix E and Appendix F |
| Methods | 12 | Effect measures | Not applicable (qualitative synthesis only) |
| Methods | 13 | Synthesis methods | Narrative, thematic, chronological, comparative |
| Results | 14 | Study selection | Numbers reported; PRISMA flow diagram included |
| Results | 15 | Study characteristics | Year, focus, jurisdiction, technology domain |
| Results | 16 | Risk of bias in studies | Limitations qualitatively noted |
| Results | 17 | Results of individual studies | Summarized by themes |
| Results | 18 | Synthesis results | Themes: harms, laws, principles, technology |
| Discussion | 19 | Summary of evidence | Linked to GDPR, AI, IoT, blockchain |
| Discussion | 20 | Limitations | English-only, reporting gaps, database coverage |
| Discussion | 21 | Conclusions | Future research directions highlighted |
| Other | 22 | Registration | Not registered |
| Other | 23 | Protocol | None |
| Other | 24 | Support | No funding reported |
| Other | 25 | Competing interests | None declared |
| Other | 26 | Availability of data/code | Not applicable |

Appendix D. PRISMA 2020 Flow Diagram

Figure A1. PRISMA 2020 flow diagram, showing the flow of information through the review.

Appendix E. Risk of Bias Assessment

Appendix E.1. Assessment Domains

Risk of bias was assessed across five domains adapted from the ROBIS tool:
  • Study eligibility criteria;
  • Identification and selection of studies;
  • Data collection and study appraisal;
  • Synthesis and findings;
  • Overall risk of bias.

Appendix E.2. ROBIS Summary Table

As summarized in Appendix F, a structured ROBIS assessment indicated that legal and policy studies were subject to moderate jurisdictional bias, empirical studies often lacked standardized metrics, and review articles showed the highest risk due to narrative selection strategies. In contrast, technical studies typically exhibited lower risk, although reproducibility limitations were noted. In general, the evidence base was classified as medium risk of bias.
Table A4. ROBIS Risk of Bias Assessment by Study Type.

| Study Type | Eligibility Criteria | Study Selection | Data Collection/Appraisal | Synthesis/Findings | Overall Risk |
| --- | --- | --- | --- | --- | --- |
| Legal/Policy (N = 28) | Low | Medium (regional bias: E.U./U.S. focus) | Medium | Medium | Medium |
| Policy (N = 11) | Low | Low | Medium | Medium | Medium |
| Empirical (N = 27) | Low | Low | Medium (limited follow-up, no standardized metrics) | Medium | Medium |
| Review Articles (N = 9) | Low | Medium (narrative search bias) | Medium–High | Medium–High | Medium–High |
| Technical (N = 24) | Low | Low | Low–Medium (reproducibility concerns) | Low–Medium | Low–Medium |

Appendix E.3. Narrative Summary

In all included studies, the risk of bias was generally low in terms of eligibility criteria and study selection. However, several studies demonstrated moderate to high risk in outcome reporting and synthesis domains, largely due to insufficient transparency in reporting primary endpoints and incomplete data availability. Overall, the body of evidence is considered to have low-to-moderate risk of bias.

Appendix F. Structured Risk of Bias Assessment

To strengthen internal validity, we assessed included studies using ROBIS domains (relevance, identification, appraisal, synthesis, and bias in interpretation):
  • Legal/Policy Studies: Medium risk. Potential jurisdictional and publication bias (focused on E.U. and U.S., less from Asia/Africa).
  • Empirical Studies: Medium risk. Some lacked longitudinal follow-up or standardized privacy metrics.
  • Technical Papers: Low to medium risk. Strong peer-reviewed venues (IEEE/ACM), but reproducibility limitations are noted.
  • Review Articles: Medium to high risk. Several narrative reviews lacked systematic search strategies.
In general, the evidence base includes valuable information but is prone to regional, linguistic, and methodological bias. The results should therefore be interpreted with caution.

Appendix G. Crosswalk: Harms → Legal Response → Mechanisms → Observed Outcomes

Table A5. Crosswalk: Harms → Legal Response → Mechanisms → Observed Outcomes.

| Harm Category | Representative Scenarios | Legal Response (Selected) | Mechanisms (Obligations, Controls, Enforcement) | Observed Outcomes/Evidence |
| --- | --- | --- | --- | --- |
| Unauthorized access and exfiltration | Ransomware/data breaches of PII/PHI; credential stuffing; supply-chain compromise | GDPR Arts. 5, 32–34; Network and Information Systems Directive 2 (NIS2); HIPAA Security/Privacy Rules; CCPA/CPRA; FTC Act §5; state breach-notification laws | Security by design; risk assessments; encryption at rest/in transit; MFA; least privilege; incident response; 72-h notification (GDPR); HIPAA breach assessment; AG/DPA and FTC enforcement, civil penalties | Notifications increased; fines and consent decrees for poor security; persistent breach volume; enforcement drives MFA/Zero Trust adoption; recurring supply-chain risk despite controls |
| Unlawful processing and purpose creep | Secondary use without consent; model repurposing; shadow AI analytics | GDPR Arts. 5(1)(b) purpose limitation, 6 lawfulness; CPRA purpose limitation; ePrivacy; LGPD | DPIAs; records of processing; consent/legitimate interest balancing tests; internal use restrictions; DPA guidance; audit logs | Reduced overt secondary uses; gray-area analytics persist; DPIAs improve documentation but unevenly constrain practices; enforcement nudges clearer notices |
| Excessive data collection (minimization failure) | Telemetry over-collection in apps/IoT; intrusive SDKs; dark patterns | GDPR Art. 5(1)(c) data minimization; CPRA data minimization; Digital Services Act (DSA)/Digital Markets Act (DMA) fairness constraints | Data mapping; necessity tests; configurable SDKs; privacy-by-default; granular opt-outs; dark-pattern enforcement | SDK footprints reduced in regulated markets; measurable but incomplete minimization; dark-pattern crackdowns improve consent quality |
| Profiling, inferences, automated decisions | Credit/insurance scoring; targeted ads; risk scores; LLM-based triage | GDPR Arts. 21–22 (rights to object/ADP), 13–15 transparency; CPRA sensitive inferences; Fair Credit Reporting Act (FCRA); Equal Credit Opportunity Act (ECOA)/Reg B | Impact assessments; human-in-the-loop; explanation/transparency; opt-out of sale/sharing; fairness testing; model risk management | More disclosures and opt-outs; uneven explainability; ADP limits rarely invoked at scale; fairness audits emerging but not standardized |
| Re-identification and linkage | Joining pseudonymized datasets; location/mobility deanonymization | GDPR Recitals 26, 28; ISO/IEC 20889; CPRA “deidentified” standards | Robust de-identification; k-anonymity/ℓ-diversity/t-closeness; DP for releases; contractual controls; release-and-re-risk cycles | Re-identification remains feasible in high-dimensional data; DP adoption growing for analytics; contractual limits help but are brittle |
| Inadequate security governance | Third-party/vendor risk; key mismanagement; insecure defaults | GDPR Art. 28 processors; SOC 2, ISO 27001; NIS2 supply-chain; HIPAA Business Associate Agreements (BAAs) | Vendor due diligence; DPAs/BAAs; shared responsibility matrices; continuous monitoring; key rotation; secure defaults | Better contractual allocation of risk; residual exposure via fourth parties; improvements where continuous monitoring is enforced |
| Children's privacy harms | Tracking, profiling, manipulative design | GDPR Art. 8; COPPA | Age-appropriate design codes; profiling limitations; heightened DPIAs; default privacy | Tracking reduction on child-directed services; age-assurance frictions; ongoing debates on proportionality and free expression |
| Workplace surveillance | Productivity tracking; biometric timekeeping | GDPR (legitimate interest, labor law); state biometrics laws | Notice; impact assessment; biometric consent; retention/destruction schedules; access controls | Significant Biometric Information Privacy Act (BIPA) litigation drives compliance; monitoring persists with more transparency and retention limits |
| Sensitive data misuse | Health, location, biometrics, financial | GDPR Art. 9; CPRA “sensitive”; HIPAA; GLBA; state health data laws | Explicit consent or statutory basis; segregation; extra safeguards; PETs for analytics (DP, FL) | Better segregation and consent flows; data brokers under scrutiny; enforcement actions for sensitive categories increase |
| Cross-border transfer risks | Inadequate safeguards; government access | GDPR Ch. V; SCCs + TIAs; adequacy; DPF; sectoral rules | Transfer impact assessments; encryption/keys in EEA; split-processing; contractual safeguards | Post-Schrems II hardening of safeguards; DPF adoption by U.S. firms; residual legal uncertainty persists |
| IoT privacy/safety coupling | Sensor overreach; insecure firmware; bystander privacy | GDPR; NIS2; product safety regulations; state IoT security laws | Secure boot/updates; local processing; data minimization; PETs at edge; labels | Gradual improvement via security baselines and labeling; long tail of insecure devices remains |
| Blockchain transparency vs. privacy | On-chain PII/inferences; deanonymization | GDPR applicability; Anti-Money Laundering (AML)/Know Your Customer (KYC) rules | Off-chain PII; pseudonymous wallets; ZK-proofs for selective disclosure; mixer restrictions | Increased use of ZK for compliance proofs; chain analytics can still re-identify patterns; tension with AML persists |
| AI model leakage and memorization | Training data extraction; prompt leakage | GDPR data subject rights; IP/trade secrets; platform policies | DP-SGD; RLHF/red teaming; content filtering; retrieval with access controls; audit trails | DP-SGD improves privacy at utility cost; memorization reduced but not eliminated; governance programs emerging |

Appendix H. Comparative Enforcement Metrics

Table A6. Comparative Enforcement Metrics for Key Privacy Regulations (2018–2025).

| Metric | GDPR (E.U.) | CCPA/CPRA (CA, USA) | HIPAA (USA) | Notes/Sources |
| --- | --- | --- | --- | --- |
| Total Fines | €6.72B (2018–Sep 2025) | $2.75M (2020–2025) | $144M (2003–Oct 2024) | Cumulative penalties from 2200+ (GDPR) [77,85], 4 (CCPA) [86,87], and 152 (HIPAA) actions [88]. |
| Top Penalty by Sector | Tech/Online: €1.2B | Health/Online Health Services: $1.55M | Health Plans/Insurers: $16M | Highest single fine shown; GDPR: [89,90]; CCPA: [87]; HIPAA: [87]. |
| Compliance Rate | 28% | 11% | Not audited/surveyed | Based on audits/surveys; low rates indicate enforcement gaps. GDPR: [91]; CCPA: [92,93]; HIPAA: [88]. |
| Avg. Response Time | 3–6 months | 4–8 months | 6–12 months | From violation report to resolution; delays highlight resource challenges. GDPR: [77,94]; CCPA: [93]; HIPAA: [95,96]. |

References

  1. Abbate, J. Inventing the Internet; MIT Press: Cambridge, MA, USA, 2000. [Google Scholar]
  2. Citron, D.K.; Solove, D.J. Privacy harms. BUL Rev. 2022, 102, 793. [Google Scholar] [CrossRef]
  3. Zuboff, S. Surveillance capitalism and the challenge of collective action. New Labor Forum 2019, 28, 10–29. [Google Scholar]
  4. Floridi, L.; Cowls, J. A unified framework of five principles for AI in society. In Machine Learning and the City: Applications in Architecture and Urban Design; Wiley: Hoboken, NJ, USA, 2022; pp. 535–545. [Google Scholar]
  5. Schneier, B. Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World; WW Norton & Company: New York, NY, USA, 2015. [Google Scholar]
  6. Kenny, C. The Equifax data breach and the resulting legal recourse. Brook. J. Corp. Fin. Com. L. 2018, 13, 215. [Google Scholar]
  7. Andrews, E.L. The Science Behind Cambridge Analytica: Does Psychological Profiling Work. Geliş Tarihi. 2018. Available online: https://www.gsb.stanford.edu/insights/science-behind-cambridge-analytica-does-psychological-profiling-work (accessed on 27 May 2025).
  8. Solove, D.J. The Digital Person: Technology and Privacy in the Information Age; NYU Press: New York, NY, USA, 2004; Volume 1. [Google Scholar]
  9. Warren, S.D.; Brandeis, L.D. The right to privacy. Harv. L. Rev. 1890, 4, 193. [Google Scholar] [CrossRef]
  10. Nissenbaum, H. Privacy in context: Technology, policy, and the integrity of social life. In Privacy in Context; Stanford University Press: Redwood City, CA, USA, 2009. [Google Scholar]
  11. Solove, D.J.; Hartzog, W. The FTC and the new common law of privacy. Colum. L. Rev. 2014, 114, 583. [Google Scholar] [CrossRef]
  12. Harzing, A.W.K.; Van der Wal, R. Google Scholar as a new source for citation analysis. Ethics Sci. Environ. Politics 2008, 8, 61–73. [Google Scholar] [CrossRef]
  13. Sidiropoulos, A.; Katsaros, D.; Manolopoulos, Y. Generalized Hirsch h-index for disclosing latent facts in citation networks. Scientometrics 2007, 72, 253–280. [Google Scholar] [CrossRef]
  14. Harzing, A.W.; Alakangas, S.; Adams, D. hIa: An individual annual h-index to accommodate disciplinary and career length differences. Scientometrics 2014, 99, 811–821. [Google Scholar] [CrossRef]
  15. Wagner, I. Privacy policies across the ages: Content of privacy policies 1996–2021. ACM Trans. Priv. Secur. 2023, 26, 1–32. [Google Scholar] [CrossRef]
  16. Kordzadeh, N.; Ghasemaghaei, M. Algorithmic bias: Review, synthesis, and future research directions. Eur. J. Inf. Syst. 2022, 31, 388–409. [Google Scholar] [CrossRef]
  17. Zaguir, N.A.; de Magalhães, G.H.; de Mesquita Spinola, M. Challenges and enablers for GDPR compliance: Systematic literature review and future research directions. IEEE Access 2024, 12, 81608–81630. [Google Scholar] [CrossRef]
  18. Chen, S.; Gu, C.; Wei, J.; Lv, M. Research on the influence mechanism of privacy invasion experiences with privacy protection intentions in social media contexts: Regulatory focus as the moderator. Front. Psychol. 2023, 13, 1031592. [Google Scholar] [CrossRef]
  19. Barth, S.; De Jong, M.D. The privacy paradox–Investigating discrepancies between expressed privacy concerns and actual online behavior–A systematic literature review. Telemat. Inform. 2017, 34, 1038–1058. [Google Scholar] [CrossRef]
  20. Aridor, G.; Che, Y.K.; Salz, T. The effect of privacy regulation on the data industry: Empirical evidence from GDPR. RAND J. Econ. 2023, 54, 695–730. [Google Scholar] [CrossRef]
  21. Richards, N.M.; Solove, D.J. Privacy’s Other Path: Recovering the law of confidentiality. Geo. LJ 2007, 96, 123. [Google Scholar]
  22. Prosser, W.L. Privacy. Calif. L. Rev. 1960, 48, 383–423. Reprinted in Pre-Nineteen Sixty Developments in the Bill of Rights Area; Routledge: Oxford, UK, 2014; pp. 355–395. [Google Scholar]
  23. Solove, D.J. A taxonomy of privacy. U. Pa. L. Rev. 2005, 154, 477. [Google Scholar] [CrossRef]
  24. Boyd, D.; Crawford, K. Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Inf. Commun. Soc. 2012, 15, 662–679. [Google Scholar] [CrossRef]
  25. Ziegeldorf, J.H.; Morchon, O.G.; Wehrle, K. Privacy in the Internet of Things: Threats and challenges. Secur. Commun. Netw. 2014, 7, 2728–2742. [Google Scholar] [CrossRef]
  26. Cath, C.; Wachter, S.; Mittelstadt, B.; Taddeo, M.; Floridi, L. Artificial intelligence and the ‘good society’: The US, EU, and UK approach. Sci. Eng. Ethics 2018, 24, 505–528. [Google Scholar]
  27. Gellman, R. Fair Information Practices: A Basic History-Version 2.30. SSRN Electron. J. 2025. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5348107 (accessed on 28 May 2025). [CrossRef]
  28. European Parliament and Council of the European Union. Regulation (EU) 2016/679 of the European Parliament and of the Council. Regulation (EU) 2016, 679, 2016. [Google Scholar]
  29. Bonta, R. California Consumer Privacy Act (CCPA). 2022. Available online: https://oag.ca.gov/privacy/ccpa (accessed on 25 May 2025).
  30. Chander, A.; Kaminski, M.E.; McGeveran, W. Catalyzing privacy law. Minn. L. Rev. 2020, 105, 1733. [Google Scholar] [CrossRef]
  31. McClain, C.; Faverio, M.; Anderson, M.; Park, E. How Americans view data privacy. Pew Res. Cent. 2023, 18, 2023. [Google Scholar]
  32. Floridi, L. On human dignity as a foundation for the right to privacy. Philos. Technol. 2016, 29, 307–312. [Google Scholar] [CrossRef]
  33. Cavoukian, A. Privacy by design: The 7 foundational principles. Inf. Priv. Comm. Ont. Can. 2009, 5, 12. [Google Scholar]
  34. DeYoung, H.; Garg, D.; Jia, L.; Kaynar, D.; Datta, A. Experiences in the logical specification of the HIPAA and GLBA privacy laws. In Proceedings of the 9th Annual ACM Workshop on Privacy in the Electronic Society, Chicago, IL, USA, 4–8 October 2010; pp. 73–82. [Google Scholar]
  35. Office for Civil Rights, HHS. Standards for privacy of individually identifiable health information. Final rule. Fed. Regist. 2002, 67, 53181–53273. [Google Scholar]
  36. Anderson, H. The Guardian of The Digital Era: Assessing the Impact and Challenges of the Children’s Online Privacy Protection Act. Law Econ. 2024, 3, 6–10. [Google Scholar] [CrossRef]
  37. Taetzsch, E.S. Privacy Purgatory: Why the United States Needs a Comprehensive Federal Data Privacy Law. J. Legis. 2024, 50, 121. [Google Scholar]
  38. Bygrave, L.A. European Data Protection: Determining Applicable Law Pursuant to European Data Protection Legislation. Comput. Law Secur. Rev. 2000, 16, 252–257. [Google Scholar] [CrossRef]
  39. Greenleaf, G. 172 Countries with Data Privacy Laws-Year by Year 1973–2025. Priv. Laws Bus. Int. Rep. 2025, 16–17. Available online: https://ssrn.com/abstract=5189972 (accessed on 28 May 2025).
  40. Creemers, R. China’s emerging data protection framework. J. Cybersecur. 2022, 8, tyac011. [Google Scholar] [CrossRef]
  41. Baskett, N. The Court of Justice of the European Union ruling in Data Protection Commissioner v Facebook Ireland and Maximillian Schrems. Judgment in Case C-311/18. J. Data Prot. Priv. 2020, 3, 460–462. [Google Scholar] [CrossRef]
  42. Parliament of Canada. Personal Information Protection and Electronic Documents Act (S.C. 2000, c. 5). 2000. Available online: https://laws-lois.justice.gc.ca/eng/acts/P-8.6/FullText.html (accessed on 28 May 2025).
  43. Australian Government. Privacy Act 1988 (Cth). 1988. Available online: https://www.oaic.gov.au/privacy/privacy-legislation/the-privacy-act (accessed on 28 May 2025).
  44. Personal Information Protection Commission (PPC), Japan. Act on the Protection of Personal Information (APPI); English Materials and Official Translations Provided by the PPC; Amendments Effective 2022; Government of Japan: Tokyo, Japan, 2022.
  45. Ministry of Electronics and Information Technology. The Digital Personal Data Protection Act, 2023; Act No. 22 of 2023; Government of India: New Delhi, India, 2023.
  46. Republic of South Africa. Protection of Personal Information Act (POPIA); Act No. 4 of 2013; Commenced 1 July 2020; Government Gazette: Pretoria, South Africa, 2013.
  47. Nigeria Data Protection Bureau (NDPB). Nigeria Data Protection Regulation (NDPR); Issued January 2019; National Information Technology Development Agency (NITDA): Abuja, Nigeria, 2019.
  48. Republic of Kenya. Data Protection Act; Act No. 24 of 2019; Kenya Gazette Supplement: Nairobi, Kenya, 2019. [Google Scholar]
  49. Chin, Y.C.; Zhao, J. Governing cross-border data flows: International trade agreements and their limits. Laws 2022, 11, 63. [Google Scholar] [CrossRef]
  50. Kirby, M. The history, achievement and future of the 1980 OECD guidelines on privacy. Int. Data Priv. Law 2011, 1, 6–14. [Google Scholar] [CrossRef]
  51. Voigt, P.; Von dem Bussche, A. The EU General Data Protection Regulation (GDPR): A Practical Guide, 1st ed.; Springer International Publishing: Cham, Switzerland, 2017. [Google Scholar]
  52. Boeckl, K.R.; Lefkovitz, N.B. NIST Privacy Framework: A Tool for Improving Privacy Through Enterprise Risk Management, Version 1.0. 2020. Available online: https://www.nist.gov/publications/nist-privacy-framework-tool-improving-privacy-through-enterprise-risk-management (accessed on 27 May 2025).
  53. ISO/IEC 27701:2019; Information Technology—Security Techniques—Extension to ISO/IEC 27001 and ISO/IEC 27002 for Privacy Information Management—Requirements and Guidelines. Technical Report; International Organization for Standardization and International Electrotechnical Commission: Geneva, Switzerland, 2019.
  54. Lachaud, E. ISO/IEC 27701 standard: Threats and opportunities for GDPR certification. Eur. Data Prot. L. Rev. 2020, 6, 194. [Google Scholar] [CrossRef]
  55. ISO/IEC 27001:2022; Information Technology—Security Techniques—Information Security Management Systems—Requirements. Technical Report; International Organization for Standardization and International Electrotechnical Commission: Geneva, Switzerland, 2022.
  56. Barocas, S.; Selbst, A.D. Big data’s disparate impact. Calif. L. Rev. 2016, 104, 671. [Google Scholar] [CrossRef]
  57. Finck, M. Blockchains and data protection in the European Union. Eur. Data Prot. L. Rev. 2018, 4, 17. [Google Scholar] [CrossRef]
  58. Zyskind, G.; Nathan, O. Decentralizing privacy: Using blockchain to protect personal data. In Proceedings of the 2015 IEEE Security and Privacy Workshops, San Jose, CA, USA, 21–22 May 2015; pp. 180–184. [Google Scholar]
  59. Dwork, C. Differential privacy: A survey of results. In Proceedings of the International Conference on Theory and Applications of Models of Computation, Xi’an, China, 25–29 April 2008; Springer: Berlin/Heidelberg, Germany, 2008; pp. 1–19. [Google Scholar]
  60. Dwork, C.; Roth, A. The Algorithmic Foundations of Differential Privacy; Foundations and Trends in Theoretical Computer Science, Now Publishers: Norwell, MA, USA, 2014; Volume 9, pp. 211–407. [Google Scholar] [CrossRef]
  61. Erlingsson, Ú.; Pihur, V.; Korolova, A. RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response. In Proceedings of the ACM CCS 2014, Scottsdale, AZ, USA, 3–7 November 2014; pp. 1054–1067. [Google Scholar] [CrossRef]
  62. Gentry, C. Fully Homomorphic Encryption Using Ideal Lattices. In Proceedings of the STOC 2009, Washington, DC, USA, 31 May–2 June 2009; pp. 169–178. [Google Scholar] [CrossRef]
  63. Halevi, S.; Shoup, V. Algorithms in HElib. In Proceedings of the CRYPTO 2014, Santa Barbara, CA, USA, 17–21 August 2014; Springer: Berlin/Heidelberg, Germany, 2014; Volume 8616. [Google Scholar] [CrossRef]
  64. Cheon, J.H.; Kim, A.; Kim, M.; Song, Y. Homomorphic Encryption for Arithmetic of Approximate Numbers. In Proceedings of the ASIACRYPT 2017, Hong Kong, China, 3–7 December 2017; Springer: Berlin/Heidelberg, Germany, 2017; Volume 10624, pp. 409–437. Available online: https://link.springer.com/chapter/10.1007/978-3-319-70694-8_15 (accessed on 7 October 2025).
  65. Costan, V.; Devadas, S. Intel SGX Explained. IACR Cryptol. EPrint Arch. 2016, 2016, 86. [Google Scholar]
  66. Van Bulck, J.; Minkin, M.; Weisse, O.; Genkin, D.; Kasikci, B.; Piessens, F.; Silberstein, M.; Fogh, A.; Yarom, Y.; Strackx, R. Foreshadow: Extracting the Keys to the Intel SGX Kingdom with Transient Out-of-Order Execution. In Proceedings of the USENIX Security Symposium, Baltimore, MD, USA, 15–17 August 2018; pp. 991–1008. [Google Scholar]
  67. Schwarz, M.; Weiser, S.; Gruss, D.; Giner, L.; Maurice, C.; Mangard, S. Malware Guard Extension: Using SGX to Conceal Cache Attacks. In Proceedings of the DIMVA 2017, Bonn, Germany, 6–7 July 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 3–24. [Google Scholar] [CrossRef]
  68. McMahan, H.B.; Moore, E.; Ramage, D.; Hampson, S.; Aguera y Arcas, B. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Proceedings of the AISTATS 2017, Ft. Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282. [Google Scholar]
  69. Nasr, M.; Shokri, R.; Houmansadr, A. Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks Against Centralized and Federated Learning. In Proceedings of the IEEE Symposium on Security and Privacy (S&P), San Francisco, CA, USA, 20–22 May 2019; pp. 739–753. [Google Scholar] [CrossRef]
  70. Ben-Sasson, E.; Bentov, I.; Horesh, Y.; Riabzev, M. Scalable, Transparent, and Post-Quantum Secure Computational Integrity. IACR Cryptol. EPrint Arch. 2018, 2018, 046. [Google Scholar]
  71. Bünz, B.; Bootle, J.; Boneh, D.; Poelstra, A.; Wuille, P.; Maxwell, G. Bulletproofs: Short Proofs for Confidential Transactions and More. In Proceedings of the IEEE Symposium on Security and Privacy (S&P), San Francisco, CA, USA, 21–23 May 2018; pp. 315–334. [Google Scholar] [CrossRef]
  72. Narayanan, A.; Shmatikov, V. Robust De-anonymization of Large Sparse Datasets. In Proceedings of the IEEE Symposium on Security and Privacy (S&P), Oakland, CA, USA, 18–21 May 2008; pp. 111–125. [Google Scholar] [CrossRef]
  73. ISO/IEC 20889:2018; Privacy Enhancing Data De-Identification Terminology and Classification of Techniques. ISO: Geneva, Switzerland, 2018.
  74. Meurs, T.; Cartwright, E.; Cartwright, A.; Junger, M.; Abhishta, A. Deception in double extortion ransomware attacks: An analysis of profitability and credibility. Comput. Secur. 2024, 138, 103670. [Google Scholar] [CrossRef]
  75. Jiang, J.X.; Ross, J.S.; Bai, G. Ransomware Attacks and Data Breaches in US Health Care Systems. JAMA Netw. Open 2025, 8, e2510180. [Google Scholar] [CrossRef]
  76. Reshmi, T. Information security breaches due to ransomware attacks—A systematic literature review. Int. J. Inf. Manag. Data Insights 2021, 1, 100013. [Google Scholar] [CrossRef]
  77. McKean, R.; Magee, J.; de Souza, R. GDPR Fines and Data Breach Survey: January 2025. 2025. Available online: https://www.dlapiper.com/en/insights/publications/2025/01/dla-piper-gdpr-fines-and-data-breach-survey-january-2025 (accessed on 5 October 2025).
  78. Calo, R. The boundaries of privacy harm. Ind. LJ 2011, 86, 1131. [Google Scholar]
  79. Rubinstein, I.S. Regulating privacy by design. Berkeley Tech. LJ 2011, 26, 1409. [Google Scholar]
  80. Bamberger, K.A.; Mulligan, D.K. Privacy on the Books and on the Ground. Stan. L. Rev. 2010, 63, 247. [Google Scholar]
  81. Perera, C.; Ranjan, R.; Wang, L.; Khan, S.U.; Zomaya, A.Y. Big data privacy in the internet of things era. IT Prof. 2015, 17, 32–39. [Google Scholar] [CrossRef]
  82. Goldwasser, S.; Micali, S.; Rackoff, C. The knowledge complexity of interactive proof-systems. In Providing Sound Foundations for Cryptography: On the Work of Shafi Goldwasser and Silvio Micali; ACM Digital Library: New York, NY, USA, 2019; pp. 203–225. [Google Scholar]
  83. California Department of Justice, Office of the Attorney General (OAG). California Privacy Protection: Privacy Enforcement Actions. 2024. Lists CPRA/CCPA Actions, Including over 100 Investigations Since 2020, with Fines Exceeding $10 Million (e.g., $1.2 Million Against Sephora in 2022). Highlights Under-Enforcement, with Only a Handful of Public Settlements Amid Thousands of Potential Violations. Available online: https://oag.ca.gov/privacy/privacy-enforcement-actions (accessed on 5 October 2025).
  84. Federal Trade Commission (FTC). Protecting Consumer Privacy and Security: Privacy and Security Enforcement Actions. 2024. Available online: https://www.ftc.gov/news-events/topics/protecting-consumer-privacy-security/privacy-security-enforcement (accessed on 26 May 2025).
  85. Enforcement Tracker. Enforcement Tracker Insights. 2024. Available online: https://www.enforcementtracker.com/?insights (accessed on 6 October 2024).
  86. California Department of Justice, Office of the Attorney General. Attorney General Bonta Announces Settlement with Sephora as Part of Ongoing Enforcement of California Consumer Privacy Act. Press Release. Available online: https://oag.ca.gov/news/press-releases/attorney-general-bonta-announces-settlement-sephora-part-ongoing-enforcement (accessed on 29 May 2025).
  87. California Department of Justice, Office of the Attorney General. Attorney General Bonta Announces Largest CCPA Settlement to Date, Secures $1.55 Million from Healthline.com. Press Release. Available online: https://oag.ca.gov/news/press-releases/attorney-general-bonta-announces-largest-ccpa-settlement-date-secures-155 (accessed on 29 May 2025).
  88. U.S. Department of Health and Human Services. Enforcement Highlights. 2023. Available online: https://www.hhs.gov/hipaa/for-professionals/compliance-enforcement/data/enforcement-highlights/2023-november/index.html (accessed on 6 October 2025).
  89. Statista. Largest Fines Issued for GDPR Violations (as of Feb 2025). Available online: https://www.statista.com/statistics/1133337/largest-fines-issued-gdpr/ (accessed on 7 October 2025).
  90. Data Privacy Manager. 20 Biggest GDPR Fines so Far [2025]. Includes Case Write-ups for Meta €1.2B, Amazon €746M, TikTok €345M, H&M €35.3M, LinkedIn €310M, Uber €290M. 2025. Available online: https://dataprivacymanager.net/5-biggest-gdpr-fines-so-far-2020/ (accessed on 7 October 2025).
  91. Capgemini Research Institute. Championing Data Protection and Privacy report: Firms Failed to Meet Their Own Expectations on GDPR Compliance. Reports 28% of Organizations Achieved GDPR Compliance. 2019. Available online: https://www.capgemini.com/news/press-releases/championing-data-protection-and-privacy-report/ (accessed on 7 October 2025).
  92. Business Wire. 90% of Companies Unprepared for CCPA Compliance, 95% Unprepared for GDPR, Says New Research by CYTRIO. Summarizes CYTRIO Study; Finds Only 11% Fully Meet CCPA Requirements. 2022. Available online: https://www.businesswire.com/news/home/20220426005252/en/90-of-Companies-Unprepared-for-CCPA-Compliance-95-Unprepared-for-GDPR-Says-New-Research-by-CYTRIO (accessed on 7 October 2025).
  93. von Hoffman, C. Only 11% of US Businesses Fully Comply with CCPA Privacy Law. Reports CYTRIO Findings on CCPA Compliance Rates. 2022. Available online: https://martech.org/only-11-of-us-businesses-fully-comply-with-ccpa-privacy-law/ (accessed on 7 October 2025).
  94. CMS Law. GDPR Enforcement Tracker Report 2024/2025: Numbers and Figures. CMS International Law Firm Publication. 2025. Available online: https://cms.law/en/int/publication/gdpr-enforcement-tracker-report/numbers-and-figures (accessed on 7 October 2025).
  95. U.S. Department of Health and Human Services (HHS), Office for Civil Rights (OCR). Enforcement Highlights—Current. HHS.gov. 2024. Available online: https://www.hhs.gov/hipaa/for-professionals/compliance-enforcement/data/enforcement-highlights/index.html (accessed on 8 October 2025).
  96. HIPAA Journal. HIPAA Violation Cases—Updated 2024. HIPAA Journal Website. 2024. Available online: https://www.hipaajournal.com/hipaa-violation-cases/ (accessed on 7 October 2025).
Table 1. Key Concepts and Events in Privacy.

| Key Events | Focus/Description | Year | Influence Connection |
| --- | --- | --- | --- |
| Right to be Let Alone | Warren and Brandeis's conceptualization | 1890 | Concept of privacy |
| Privacy Torts | Prosser's influence | 1960 | Privacy shaped by tort law |
| Internet Commercialization | Commercial opening of the Internet | 1990 | Rise of the Internet |
| Big Data and Social Media | Rise of big data analytics and social platforms | 2010 | Proliferation of data |
| Equifax Data Breach | Private data breach | 2017 | First major data breach |
| Cambridge Analytica Scandal | Privacy abuse | 2018 | Selling of personal data |
| AI/LLMs and IoT Inflection | Massive telemetry data | 2023 | Powerful inference/synthetic data risks |
Table 2. Privacy Laws Timeline.

| Legal Framework | Focus/Description | Region | Influence Connection |
| --- | --- | --- | --- |
| FIPPs | Notice, choice, access, security | Global | Basis for later laws |
| DPD | Uniform data protection standards | E.U. | Precursor to GDPR |
| HIPAA | Protects health information | U.S. | Sectoral approach |
| COPPA | Safeguards children's online privacy | U.S. | Sectoral approach |
| GLBA | Regulates financial data sharing | U.S. | Sectoral approach |
| GDPR | Rights to erasure, portability, consent | E.U. | Inspires LGPD, PIPL |
| CCPA | Rights to access, delete, opt out | California | State-level initiative |
| LGPD | Comprehensive data protection | Brazil | Influenced by GDPR |
| PIPL | Comprehensive data protection | China | Influenced by GDPR |
Table 3. Mapping Privacy Principles to Verifiable Controls.

| Principle | NIST PF Function/Category | ISO 27701/ISO 27001 Clause Example | Example Control/Evidence |
| --- | --- | --- | --- |
| Transparency | Communicate-P; Control-P | ISO 27701 7.3 privacy notices | Layered privacy policy; DSR portal logs |
| Data Minimization | Manage-P; data processing inventory | ISO 27701 7.2.5 data limitation; ISO 27001 A.5.12 | Retention schedules; automated purge scripts; RoPA |
| Purpose Limitation | Manage-P purpose tagging | ISO 27701 6.4.5 purpose specification | Purpose tags in RoPA; DLP/usage rules |
| Security of Processing | Protect-P; NIST SP 800-53 overlays | ISO 27001 A.10/A.12 encryption, access control | Validated encryption at rest; PAM logs |
| Accountability | Govern-P; Assess-P | ISO 27701 5.3 roles; 7.5 DPO | DPIA register; quarterly governance minutes |
Table 4. Crosswalk of Privacy Harms, Legal Responses, Mechanisms, and Observed Outcomes.

| Privacy Harm | Legal Response(s) | Mechanisms/Provisions | Observed Outcomes/Notes |
| --- | --- | --- | --- |
| Data Breaches | GDPR (Arts. 32, 33–34); HIPAA Security Rule; CCPA/CPRA breach provisions | Security safeguards; breach notification requirements; penalties for non-reporting | Increased breach notifications; high fines under GDPR; limited private right of action in U.S. |
| Algorithmic Discrimination | GDPR (Arts. 5, 9, 22); CPRA purpose/minimization; FTC Act unfairness | DPIAs for profiling; special-category data protections; automated decision-making restrictions | Limited enforcement; E.U. AI Act audit requirements emerging; U.S. fragmented |
| Surveillance/Mass Monitoring | E.U. CFR Art. 8; U.S. FTC oversight (sectoral) | Proportionality/necessity standards; unfairness doctrine | Weakly constrained in U.S.; stronger E.U. oversight; chilling effects persist |
| Manipulative Targeting/Behavioral Control | GDPR (consent, Arts. 6–7); CPRA limits on secondary use; COPPA (minors) | Consent standards; opt-outs; parental consent; fairness principles | Major enforcement (e.g., CNIL vs. Google, Meta) |
| Dignitary Harms (e.g., non-consensual intimate sharing) | GDPR (Art. 17 “Right to Erasure”); national civil/criminal penalties; torts | Erasure rights; tort damages; takedown obligations | Patchy enforcement; inequities in remedies across jurisdictions |
