Article

Enhancing Personal Identity Proofing Services Through AI for a Sustainable Digital Society in Korea

Department of Software Engineering, Sejong Cyber University, Seoul 05000, Republic of Korea
Sustainability 2025, 17(23), 10486; https://doi.org/10.3390/su172310486
Submission received: 10 October 2025 / Revised: 19 November 2025 / Accepted: 21 November 2025 / Published: 23 November 2025

Abstract

The integrity of digital identity is foundational to a sustainable digital society, yet it is increasingly challenged by sophisticated AI-enabled risks such as deepfakes and synthetic identities. This conceptual paper develops an AI-integrated, adaptive audit framework for Personal Identity Proofing Services (PIPSs). Focusing on the Republic of Korea's regime for designating and periodically auditing Accredited Identity Proofing Institutions (AIPIs), the framework replaces static, checklist-based oversight with intelligence-driven governance. It comprises five capabilities: presentation-attack and synthetic-identity detection; anomalous-behavior analytics beyond rule-based fraud detection systems; explainability and bias governance; predictive resilience and incident readiness; and standards conformance for interoperability. To clarify the sustainability relevance, the paper aligns governance outcomes with the UN Sustainable Development Goals (SDGs), specifically SDG 9 and SDG 16, and outlines policy actions and audit-ready indicators to support future pilots and comparative assessment. By shifting from rules to intelligence, the framework strengthens technical resilience and user-centered digital trust, advancing resilient infrastructure and trustworthy institutions. To validate the framework, this study outlines a pilot with AIPIs using SDG-aligned metrics and audit-ready indicators as evaluation endpoints.

1. Introduction

Personal Identity Proofing Services (PIPSs) have become essential infrastructure across e-commerce, electronic finance, and public administration, enabling identity verification in non-face-to-face environments [1,2,3]. In Korea, citizens are assigned a Resident Registration Number (RRN) at birth, a unique 13-digit identifier used by the government for official identification purposes across multiple public sectors, including health insurance, passports, banking, and national defense. Due to growing privacy and security concerns, the collection and storage of RRNs without explicit legal justification are now prohibited in online environments [3]. As a result, online service providers require alternative means of identifying users in non-face-to-face settings.
Korea has introduced substitute identification mechanisms such as i-PIN, mobile phone authentication, credit card verification, and digital certificates to replace the use of RRNs [4,5]. Users seeking access to online services must first register with an Accredited Identity Proofing Institution (AIPI) authorized to issue such substitute credentials. To obtain them, users submit their RRNs to the AIPI and undergo identity proofing through either in-person validation or digital authentication methods such as electronic certificates. Once verified, a substitute identifier is issued. When an online service provider needs to confirm a user’s identity, it sends a verification request to the AIPI. Upon successful verification, the AIPI cannot share the user’s RRN directly; instead, it provides two derived identifiers: Connecting Information (CI) and Duplication Information (DI), both generated through cryptographic algorithms. These identifiers enable completion of the identity proofing process without exposing the original RRN [3,5].
The CI is a unique encrypted identifier created to represent an individual in place of their RRN; it remains consistent across services, allowing for the secure linkage of the same individual among multiple institutions or services [3,6]. The DI, by contrast, is generated separately for each website or service and is used solely to check for duplicate registrations within that specific service. The same user will produce an identical DI upon re-registration on the same site, while different sites generate distinct DIs. This mechanism prevents multiple account creation and enables duplicate registration checks within individual platforms [7]. Korea’s CI and DI system therefore represents an alternative identification framework developed in response to the legal restrictions on RRN usage.
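The exact derivation algorithms for CI and DI are prescribed by Korean regulation and are not reproduced in this paper. The following minimal Python sketch only illustrates the structural contrast just described, using HMAC-SHA256 as a stand-in keyed derivation; the key names, input encoding, and example RRN are illustrative assumptions, not the production scheme.

```python
import hmac
import hashlib

# Illustrative stand-in for the regulated derivation; NOT the production algorithm.
CI_KEY = b"illustrative-ci-master-key"  # hypothetical secret for cross-service linkage
DI_KEY = b"illustrative-di-master-key"  # hypothetical secret for per-service derivation

def derive_ci(rrn: str) -> str:
    """CI: one value per person, identical across all services,
    so two institutions can link the same individual."""
    return hmac.new(CI_KEY, rrn.encode(), hashlib.sha256).hexdigest()

def derive_di(rrn: str, service_id: str) -> str:
    """DI: bound to (person, service), so it detects duplicate
    sign-ups within one service but differs across services."""
    msg = rrn.encode() + b"|" + service_id.encode()
    return hmac.new(DI_KEY, msg, hashlib.sha256).hexdigest()

rrn = "9001011234567"  # fabricated 13-digit example, not a real RRN
assert derive_ci(rrn) == derive_ci(rrn)                      # stable across services
assert derive_di(rrn, "shop-a") == derive_di(rrn, "shop-a")  # same site -> same DI
assert derive_di(rrn, "shop-a") != derive_di(rrn, "shop-b")  # different sites differ
```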
While this substitute identifier architecture reduces RRN exposure and improves convenience, Korea's current designation and periodic audit regime for PIPSs remains static and procedure-driven. This limits responsiveness to AI-enabled threats such as deepfakes and synthetic identity fraud and creates a gap between prescribed controls and real-world risk. To close this governance gap while preserving accountability, this paper proposes an AI-integrated, adaptive audit framework comprising five components: detection of synthetic identities and presentation attacks, anomalous behavior analytics, explainability and bias governance, predictive management for system resilience, and conformance with international standards (e.g., eIDAS [8], NIST SP 800-63 [9]) to support cross-border interoperability.
Accordingly, this study addresses two research questions. Q1: Which governance capabilities and data signals keep PIPSs effective against AI-enabled threats while preserving due process and accountability? Q2: How can these capabilities be operationalized into audit-ready indicators aligned with UN Sustainable Development Goals (SDGs) 9 and 16 [10,11,12,13], and measurable from routine logs and audit records?
To clarify its sustainability relevance, this paper frames identity proofing as a sustainability challenge and aligns governance outcomes with the SDGs, focusing on SDG 9 (industry, innovation, and infrastructure) and SDG 16 (peace, justice, and strong institutions). SDG 9 is advanced by strengthening resilient, interoperable digital identity services, while SDG 16 is supported by improving integrity, accountability, and auditability at scale.
This paper offers a conceptual and methodological framework rather than an empirical analysis. The evaluation relies on document-based evidence and structured expert judgment, anchored to international standards. The objectives of this paper are threefold: to specify the governance gap between Korea’s static, checklist-based PIPS audits and AI-driven risks; to present an AI-integrated, adaptive audit framework aligned with international standards; and to align that framework with SDG 9 and 16 through measurable indicators with defined data sources, cadences, and thresholds.
Building on these objectives, the paper reframes PIPS oversight as an AI-integrated, adaptive audit pipeline that generates audit-ready evidence across presentation attack detection (PAD) [14,15,16], behavioral analytics [17], and explainability and bias checks [18]. It operationalizes governance outcomes—transparency, fairness, incident readiness, and efficiency—into measurable indicators with explicit data sources, measurement cadences, and thresholds, aligned with SDGs 9 and 16. It further specifies governance documentation and periodic evaluation requirements to strengthen reproducibility and accountability and maps the Korean regime to NIST SP 800-63 and EU eIDAS to clarify international interoperability and transferability beyond static, checklist-based audits.
Internationally, the EU's eIDAS organizes digital-identity assurance into low, substantial, and high levels and relies on conformity assessment of qualified providers, while the US NIST SP 800-63 separates identity, authentication, and federation assurance (IAL/AAL/FAL) and enables remote proofing with PAD/liveness and fraud controls. In Korea, identity proofing is delivered by AIPIs and the CI architecture that shields the RRN; however, the current designation and periodic audits remain static and checklist-driven. The core problem analyzed in this paper is the resulting mismatch: whereas EU/US regimes increasingly combine risk-tiered assurance with periodic, evidence-based conformity assessment, the Korean regime lacks an adaptive, audit-ready feedback loop for AI-enabled threats such as deepfakes and synthetic identities. The proposed framework seeks to close this gap by using PAD performance, fairness metrics, and incident-readiness key performance indicators as audit evidence and by mapping controls to eIDAS and NIST clauses for transferability.
The PIPS ecosystem comprises AIPIs (which operate proofing, implement PAD and analytics, and produce audit-ready logs), supervisory authorities (which set risk-tiered profiles and thresholds, approve cadence and evidence, and oversee incident registers), relying service providers (which integrate AIPIs, enforce risk-based controls, and retain transaction and access logs), users and civil society actors (who exercise information rights, raise complaints, and participate in accessibility testing), independent auditors (who conduct conformity assessments, replicate metrics, and perform periodic red-team tests), and standards bodies and peer regulators (who define clauses, map levels of assurance, and conduct mutual-recognition pilots).
The remainder of the paper is organized as follows: Section 2 reviews the background and related work; Section 3 analyzes limitations of the current designation and audit framework; Section 4 presents the proposed framework and its five components; Section 5 discusses policy implications in relation to SDGs 9 and 16; and Section 6 concludes the paper.

2. Literature Review

2.1. Digital Trust

Digital trust refers to individuals’ confidence in how organizations collect, use, and protect personal data and in the integrity of digital systems and processes; transparency in data handling and security practices are core attributes of digital trust. Peer-reviewed studies document that digital trust is an antecedent of online usage and purchase intentions and is associated with improved customer and firm outcomes, while security failures are linked to abnormal market-value losses [19,20,21,22,23]. Accordingly, this study treats digital trust as a governance construct with measurable outcomes (transparency, fairness, and incident readiness) rather than as a managerial claim.

2.2. AI-Enabled Identity Risks and Synthetic Identities

The scale of identity-related threats in the digital environment has grown exponentially in recent years [24,25,26]. Experts estimate that global cybercrime losses will reach USD 13.82 trillion by 2028 [27], approaching or, by some estimates, exceeding annual losses from natural disasters and comparable to the global revenues of the illegal drug trade. These escalating costs extend far beyond direct financial damage; they also erode trust, increase the regulatory burden, and divert innovation resources toward recovery and crisis management. This constitutes a fundamental threat to economic sustainability. Traditional, static, rule-based security controls are no longer adequate to respond effectively to these advanced threats.
In particular, the rise in AI-driven synthetic identities has emerged as a major challenge. Synthetic identities combine real personal attributes from multiple individuals to fabricate non-existent personas, which are subsequently exploited for new-account fraud, mule accounts, and account takeover [28]. Conventional identity-proofing controls remain ill-equipped to detect such attacks, undermining the stability of the broader digital economy [29].

2.3. AI and Sustainability in Identity Proofing

AI is widely regarded as a powerful tool for promoting sustainable development. According to the United Nations, AI can contribute to 134 of the 169 SDG targets, approximately 79% [30]. However, AI simultaneously poses emerging risks to environmental and social sustainability [31].
The training and inference processes of large-scale AI models require substantial amounts of energy, which can lead to increased carbon emissions. While the energy consumption of the training phase is often discussed, the inference phase—when models are deployed and used at scale—is frequently overlooked, although its impact grows with user volume. Research by Google shows that inference workloads can have considerable impacts on electricity use, carbon emissions, and water consumption [32]. Therefore, when integrating AI into personal identity proofing systems, it is crucial to consider not only security benefits but also environmental footprint.
Moreover, AI can reproduce and amplify biases embedded in data, exacerbating existing social inequalities. Well-documented cases of bias in automated decision-making systems, such as in recruitment and credit evaluation, demonstrate this risk. Without transparent and accountable governance frameworks, the application of AI in personal identity proofing could lead to discriminatory outcomes against specific groups, undermining the social dimension of sustainability.
From a sustainability perspective, the design of AI-based frameworks for the designation and periodic audit of AIPIs must therefore extend beyond technical efficiency. It should include proactive measures to manage and mitigate potential risks associated with AI—such as its environmental footprint and algorithmic bias—to ensure that the use of AI contributes not only to stronger security but also to sustainable digital governance. Table 1 summarizes the dual nature of AI for sustainability, presenting both positive contributions and potential negative impacts across social, economic, and technological/environmental dimensions.

2.4. Comparative Governance for Digital Identity

In Korea, the PIPS regime operates a substitute identification system based on CI and DI to mitigate risks associated with the direct use of RRNs [3]. The CI links and verifies the same individual across different services; for example, when Bank A and Insurance Company B compare CI values, they can determine whether they refer to the same person. By contrast, the DI is generated separately for each platform and is used only to check for duplicate registrations within a given service. Thus, CI enables cross-service identity linkage, while DI prevents multiple registrations within a single service; together, they serve complementary functions within Korea's identity proofing framework.
The EU standardizes electronic identity (eID) under the eIDAS Regulation [8]. Each member state aligns its national identity verification system with eIDAS, enabling mutual recognition and cross-border authentication [33,34]. Unlike Korea’s RRN-based model, the EU issues an official eID for online authentication and digital signatures. Cross-service linkage is managed through a national-level authentication hub (eID Hub). From a data-protection perspective, eIDAS emphasizes pseudonymization and data minimization [35]. In essence, while Korea’s CI operates as an individual-level linking key, the EU achieves similar functionality through national/EU-level eID infrastructure. Unlike Korea’s DI mechanism, which prevents duplicate registration at the service level, the EU commonly employs attribute-based authentication that provides only the minimum information required by each service.
The United States follows the NIST Special Publication 800-63-3 Digital Identity Guidelines, which define a three-tier structure comprising identity proofing, authentication, and federation (IAL, AAL, and FAL, respectively) [9]. A defining feature of the U.S. model is federated identity, allowing users to access third-party services using credentials from providers such as Google or Microsoft. Unlike Korea's CI framework, the U.S. has no national-level linkage key. Instead, identity providers (IDPs) manage users' unique identifiers and issue trust tokens to relying parties that request authentication [36]. Duplicate registration checks are handled by individual service policies, and no centralized mechanism comparable to Korea's DI exists [5]. Consequently, the U.S. model adopts a decentralized, IDP-centric structure that ensures interoperability while differing fundamentally from the systems of Korea and the EU. Table 2 summarizes governance structures across Korea, the EU, and the U.S.

2.5. Sustainability and SDG Alignment in Prior Research

Prior research increasingly links digital identity proofing and broader digital-trust infrastructures to the UN SDGs, with most attention converging on SDG 9 and SDG 16 [10,11,12,13]. Conceptually, robust identity infrastructures are treated as critical digital public goods that underpin resilient, interoperable services (SDG 9) and enable transparent, accountable, and rights-respecting institutions (SDG 16). Studies examining PIPSs for non-face-to-face identity proofing and eID adoption indicate contributions to SDG 9.c (access to ICT), while warning that insufficient governance can entrench exclusion and undermine trust [10,37].
On the SDG 9 axis, the literature emphasizes technical, interoperability, and security performance [38,39]. Yet these engineering-centric metrics are rarely operationalized as SDG-aligned indicators. Comparative works across jurisdictions (e.g., eIDAS in the EU; NIST SP 800-63 in the U.S.) discuss conformance and architectural choices, but stop short of mapping infrastructure quality to audit-ready measures that permit longitudinal or cross-country benchmarking without privileged data access [13].
On the SDG 16 axis, prior studies foreground governance attributes such as transparency, accountability, fairness oversight (bias detection and remediation), and privacy protection (data minimization, purpose limitation) [40]. The AI auditing and trustworthy AI literature introduces tools such as model cards, incident registers, and impact assessments, yet empirical uptake in personal identity proofing remains uneven. Where fairness is considered, analyses frequently rely on small-scale case studies or proprietary evaluations, limiting reproducibility and comparability [41]. As a result, the linkage from governance practices to concrete SDG 16 targets is often asserted rather than measured [42].
In sum, existing research substantiates the theoretical alignment between identity proofing and SDGs 9 and 16 but falls short of delivering standardized, assessment-ready approaches that are feasible without proprietary data. This evidentiary gap motivates subsequent work to translate governance goals into measurable outcomes—spanning security, fairness, transparency, and environmental performance—so that sustainability claims in digital identity can be tested, compared, and continuously improved.

2.6. Synthesis and Research Gaps

The literature indicates a persistent governance gap: static, procedure-driven audits underperform against adaptive adversaries and rarely report SDG-aligned outcomes. To address this gap, Section 4 develops an AI-integrated, adaptive audit framework that organizes five capabilities—detection of synthetic identity and presentation attacks (with PAD/liveness), anomaly analytics beyond rule-based FDS, explainability and bias governance, predictive resilience and incident readiness, and standards conformance (eIDAS, NIST SP 800-63)—into a policy-ready, audit-ready evaluation design.

2.7. Methods

This study employs a qualitative, document-based and expert-judgment approach structured as follows:
  • Data sources and inclusion criteria: This study draws on publicly available Korean legal and regulatory sources—specifically the Information and Communications Network Act and the Personal Information Protection Act—and the national Identity Proofing Service Guideline, together with designation/audit checklists and international standards for digital identity. Items were included if they had official provenance (statutes, regulations, or formally issued guidance) or were peer reviewed; opinion pieces and undocumented claims were excluded.
  • Expert panel and selection: This study convened an expert panel of eight participants—regulators (n = 2), AIPI operators (n = 3), and academics (n = 3)—purposively sampled for role and domain diversity. All experts had at least seven years of relevant experience; no current conflicts of interest were reported.
  • Structured-judgment protocol: This study used a two-round Delphi process. In Round 1, experts independently rated the relevance and feasibility of candidate capabilities and indicators on a five-point scale. In Round 2, anonymized group feedback was shared (item-level median and interquartile range, IQR, with rationales), and experts re-rated the items. The consensus threshold was pre-specified as median ≥ 4 and IQR ≤ 1; items that did not meet the threshold were revised or dropped (the consensus rule is illustrated in the sketch after this list).
  • Instruments and reproducibility: This study describes the indicator codebook—summarizing construct definitions, formulas, typical data sources, and measurement cadence—in the text. Owing to legal and institutional constraints, detailed audit criteria and templates are not publicly released; however, they may be made available on a limited basis under confidentiality upon reasonable request and subject to institutional approvals. Reproducibility is supported by the specification of inclusion criteria, Delphi consensus thresholds, and indicator formulas in the text.
  • Indicator derivation and mapping to SDGs: This study retained indicators that achieved consensus and mapped to SDG 9 and 16. The final set prioritizes observable, audit-ready signals—e.g., PAD performance (AUC/FAR/FRR), detection and response latency (MTTD/MTTR), fairness gaps (ΔFNR/ΔFPR across salient groups), incident-disclosure timeliness, and energy per 1000 verifications.
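As a minimal sketch of the pre-specified Delphi consensus rule (median ≥ 4 and IQR ≤ 1 on the five-point scale), the following Python snippet checks two candidate indicators; the item names and ratings are fabricated for illustration.

```python
import statistics

def iqr(ratings: list[int]) -> float:
    # Interquartile range via inclusive quartiles (Q3 - Q1)
    q = statistics.quantiles(ratings, n=4, method="inclusive")
    return q[2] - q[0]

def reaches_consensus(ratings: list[int]) -> bool:
    # Pre-specified rule: median >= 4 and IQR <= 1
    return statistics.median(ratings) >= 4 and iqr(ratings) <= 1

# Fabricated Round 2 ratings from the eight-member panel
round2_ratings = {
    "PAD performance (AUC/FAR/FRR)": [5, 4, 4, 5, 4, 4, 5, 4],
    "Energy per 1000 verifications": [5, 3, 5, 4, 2, 4, 3, 4],
}

for item, ratings in round2_ratings.items():
    verdict = "retain" if reaches_consensus(ratings) else "revise or drop"
    print(f"{item}: median={statistics.median(ratings)}, "
          f"IQR={iqr(ratings):.2f} -> {verdict}")
```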

3. Analysis of Korea’s AIPI Audit Framework and Its Limitations

3.1. Legal Basis and Audit Lifecycle

The designation and periodic audit framework for AIPIs in Korea is grounded in the Act on Promotion of Information and Communications Network Utilization and Information Protection, its Enforcement Decree, and notifications issued by the Korea Communications Commission (KCC). The framework specifies the minimum requirements an organization must meet to be designated as an AIPI and applies the same criteria in periodic audits to verify continued compliance [3,5]. In practice, it functions as a single, regulation-centered baseline that governs both the initial designation and subsequent audits.
The audit lifecycle comprises five stages: application with documentary and technical submissions; designation decision; post-designation obligations; periodic audits reassessing conformity against the same criteria via document review, configuration inspection, interviews, and sampling, with findings leading to recommendations or sanctions under KCC notifications; and ad hoc reviews triggered by significant incidents, complaints, or regulatory changes. The detailed evaluation domains and items that structure both designation and periodic audits are summarized in Section 3.2.

3.2. Evaluation Domains and Items

Evaluation criteria for the designation and periodic audit of AIPIs are organized into four domains: organizational/operational controls, technical capability, financial capability, and adequacy of facility scale, which jointly provide the structured basis for both initial designation and subsequent audits [3,5].

3.2.1. Organizational/Operational Controls (Physical, Technical, and Administrative)

This domain evaluates the overall security and operational management plans related to the performance of PIPSs. Specifically, it examines the management and operation of facilities associated with identity proofing tasks; the prevention of network intrusions; the operation, security, and management of systems and networks; user protection and complaint handling; emergency response and contingency management; the establishment and enforcement of internal regulations; the security of substitute credentials; and the controls to prevent the falsification or alteration of access information, as required by the KCC. The PIPS guidelines further specify required documentation, technical measures, and operational procedures. Accordingly, this domain assesses both legal-institutional safeguards and their operationalization, confirming that controls function reliably in practice.

3.2.2. Technical Capability

This domain evaluates whether the AIPI employs qualified professionals capable of securely operating PIPSs. At least eight personnel are required to meet one or more of the following criteria: hold a national technical qualification at or above the level of Information and Communications Engineer, Information Processing Engineer, or Computer Systems Application Engineer; possess equivalent credentials recognized by the KCC; or have at least two years of professional experience in information security, information and communications operations, or system management. The PIPS guidelines instruct evaluators to verify the employment status, documented experience, and appropriate departmental placement. This domain is a determinant of professional competence and operational sustainability.

3.2.3. Financial Capability

This domain evaluates whether an AIPI maintains sufficient financial resources to ensure stable and continuous services. By law, an AIPI must maintain at least 8 billion KRW in paid-in capital, although this requirement does not apply to national or local government entities. The PIPS guidelines recommend considering not only whether the capital threshold is met but also the institution’s profit and loss structure, debt ratio, and long-term business viability as indirect indicators of financial soundness. These checks safeguard against service interruption due to short-term financial stress or insolvency.

3.2.4. Adequacy of Facility Scale

This domain evaluates whether an AIPI possesses facilities of sufficient scale to effectively perform its identity proofing operations. Specifically, it includes facilities for the verification, management, and protection of personal information; systems for the generation, issuance, and management of substitute credentials; physical security equipment for access control and restriction; and disaster-prevention infrastructure designed to protect against fire, flooding, or power outages.
According to the PIPS guidelines, the evaluation focuses not merely on whether such facilities exist, but on their operational effectiveness, such as processing capacity, system performance, redundancy configuration, and recovery procedures. This criterion is used to determine whether identity proofing services can be provided reliably even under conditions of increased user demand, sudden spikes in traffic, or unexpected system failures. Among the four domains, the factors most directly tied to day-to-day security and operational stability are the organizational/operational controls and the technical capability of staff. Financial capability and facility adequacy remain essential foundations, but they are less determinative against rapidly evolving AI-enabled threats.

3.3. Structural Limitations and the Sustainability Gap

Although the current framework assesses 87 detailed evaluation criteria, it remains largely static and regulation-centered, limiting responsiveness to emerging and adaptive risks. In particular, it is insufficient to address generative-AI-driven threats, synthetic identity attacks, and advanced social engineering techniques [15,43,44]. This rigidity constrains adaptability and real-time response, ultimately threatening the sustainability of the national identity proofing system. The domain-specific limitations are summarized below.

3.3.1. Limitations in Physical and Environmental Controls

The current standards require measures such as operating disaster recovery centers and environmental controls, but assessments focus primarily on the existence of facilities and the establishment of procedures, leaving gaps in early detection of abnormal conditions. Such static controls fall short of ensuring resilience against unpredictable disasters and sophisticated insider threats, thereby undermining technical and operational sustainability.

3.3.2. Limitations in Network and System Security Operations

Firewalls, intrusion prevention systems (IPSs), web application firewalls (WAFs), and fraud detection systems (FDSs) are core controls explicitly defined in the framework. However, rule- and signature-based detection limits effectiveness, and FDS policies lack adaptability to novel patterns and unconventional attack scenarios. As a result, systems remain vulnerable to sophisticated cybercrimes, increasing the potential for significant social and economic losses. This structural limitation ultimately undermines the sustainability of the digital economy, highlighting the need for more adaptive, intelligence-driven security operations within AIPIs.

3.3.3. Limitations in Personal Data Protection and User Rights

The current framework places strong emphasis on procedural safeguards, such as the principles of data minimization and consent processes. However, it lacks concrete management standards to address re-identification risks or algorithmic biases that may arise during data integration and model training in the AI era. In the absence of transparency in automated decision-making, users are unable to understand how their personal data are processed or utilized, resulting in diminished trust. This lack of transparency threatens digital trust, which is a core element of social sustainability. Ensuring user awareness, accountability, and fairness in AI-driven identity proofing is therefore essential to maintaining the credibility and inclusiveness of the identity proofing system.

3.3.4. Limitations in Substitute Credential and CI Management

Although procedures for the issuance, modification, transmission, storage, and disposal of CI and DI are systematically defined, the current framework is insufficient for detecting presentation attacks as well as synthetic identity fraud [43,44]. These AI-driven threats compromise system trustworthiness and can inflict serious harm on individuals, thereby weakening social sustainability. In particular, digitally marginalized groups, such as older adults, often face difficulties in completing identity verification due to low possession rates of physical identification documents and limited access to mobile authentication. This poses a significant challenge to social inclusion, underscoring the need for an adaptive and equitable identity proofing framework that ensures accessibility and fairness for all users.

3.3.5. Limitations in Access Log Management

The standards require access logs to be retained and periodically reviewed, but the approach remains retrospective, focusing on post-incident traceability rather than real-time threat response. There is no AI-based mechanism to score and prioritize risky behaviors for immediate investigation, resulting in limited defenses against intelligent and adaptive attacks. This shortcoming undermines the technical sustainability and operational resilience of AIPIs. As a result, the current framework ensures a minimum level of stability and reliability, but its static and procedure-centered audit structure fails to adequately reflect emerging threats enabled by generative AI, including deepfakes, synthetic identities, and AI-automated attacks. In particular, the absence of AI-based dynamic detection and response mechanisms within the Security Control Plan and Technical Capability domains represents a critical limitation of the existing framework.
Because procedure-driven audits are ill-suited to adaptive, AI-enabled threats and rarely yield SDG-aligned outcomes, Section 4 advances an AI-integrated, adaptive audit framework to shift oversight from static compliance to intelligence-driven governance.

4. AI-Integrated Framework

4.1. Need for Improvement

The current framework for the designation and periodic audit of AIPIs consists of 87 evaluation items designed to ensure basic safety. However, these criteria are primarily static and compliance-oriented, focusing on the existence of facilities and documented procedures, and are therefore ill-equipped to address emerging threats posed by generative AI, including deepfakes, synthetic identities, and presentation attacks. Accordingly, it is necessary to supplement existing audit criteria or introduce new ones that incorporate AI-driven dynamic detection, predictive analytics, and explainability, thereby supporting the social, economic, and technological sustainability of the national identity proofing system.

4.2. Domain-Specific Improvement Measures

4.2.1. Physical and Environmental Controls

  • Improvement Plan: introduce active controls that integrate and analyze access logs, CCTV streams, and environmental sensor data for real-time alerts on abnormal behavior. Apply AI-based predictive maintenance [17] to detect early signs of failure in power and cooling systems and enable preemptive response.
  • Contribution to Sustainability (SDG 9): improves predictive resilience and infrastructure reliability, strengthening technical and operational sustainability.

4.2.2. Network and System Security Operations

  • Improvement Plan: to overcome rule- or signature-based FDS limitations, mandate machine learning–based FDS [45] capable of detecting novel patterns such as synthetic identity fraud and account laundering through large-scale analysis of issuance, renewal, and inquiry histories, network indicators, and user behavior.
  • Contribution to Sustainability (SDG 9): enhances proactive defense against new attacks, stabilizes the digital economy, and protects economic sustainability.

4.2.3. Personal Data Protection and User Rights

  • Improvement Plan: implement dynamic consent mechanisms [18], enabling users to modify or withdraw consent for AI model training and data use in real time. Mandate explainable AI (XAI) in automated decision-making and institutionalize regular algorithmic bias audits.
  • Contribution to Sustainability (SDG 16): reinforces data self-determination, transparency and accountability, building trust as the core of social sustainability.

4.2.4. CI/DI Management

  • Improvement Plan: integrate PAD and liveness detection [43] into the identity proofing process to detect deepfakes and synthetic identities. Establish graph-based analytics across issuance, renewal, and access histories to uncover organized misuse patterns.
  • Contribution to Sustainability (SDG 16): protects individuals from identity fraud and strengthens system adaptiveness, advancing trustworthy institutions.

4.2.5. Access Log Management

  • Improvement Plan: revise log-retention and review requirements to include AI-based risk scoring and prioritization, and establish AI-automated audit reporting for periodic assessments.
  • Contribution to Sustainability (SDG 9 and 16): shifts from retrospective traceability to real-time response and automated risk management, reinforcing technical and operational sustainability.
Overall, the current audit criteria, the proposed AI-integrated improvement plans, and their expected effects for each domain are summarized in Table 3.

4.3. Proposed New Audit Criteria for Sustainability

To overcome the limitations of a framework that remains largely static and retrospective and to effectively respond to AI-driven threats, it is essential both to introduce new audit criteria and to revise existing ones. These enhancements embed AI-enabled, proactive, and adaptive mechanisms that address evolving risks in real time. Table 4 presents the key components of the redesigned audit criteria from a multidimensional sustainability perspective, encompassing social, economic, and technical dimensions.

4.3.1. Synthetic Identity and Presentation Attack Detection

The criterion is essential for identifying synthetic identities and deepfake-based attacks [15,42]. It assesses whether the AIPI has implemented an AI-based system capable of detecting forged or manipulated biometric information during the identity proofing process and whether regular performance verification is in place.
  • Criterion title: detection of synthetic identities and presentation attacks
  • Assessment description: evaluate whether presentation-attack and deepfake detection technologies are integrated into the identity proofing process to identify manipulated biometric data—including synthetic video, voice, fingerprints, or facial information—and whether a regular performance verification system is in place.
  • Assessment scope: identity proofing processes and biometric authentication modules
  • Evidence materials: presentation-attack/forgery-detection reports, performance validation documentation
  • Interview subjects: Service operators and security managers
  • On-site inspection: demonstration of presentation-attack detection functionality

4.3.2. Anomalous Behavior Analytics

This criterion extends the existing FDS operations and access log management requirements. Instead of relying solely on rule-based detection, institutions should establish a framework that utilizes machine learning and deep learning models to detect and respond to abnormal patterns in transaction logs, access records, and issuance histories in real time [16,46]. Such an approach strengthens the system's ability to respond to increasingly sophisticated attack scenarios (a minimal sketch of this kind of anomaly scoring follows the checklist below).
  • Criterion title: Anomalous behavior analytics based on transaction and access logs
  • Assessment description: modify the existing FDS operation and access log management criteria to mandate the integration of machine learning– and deep learning–based anomaly detection. Evaluate whether the institution automatically detects abnormal patterns in access records, transaction logs, and issuance histories, and applies these results to real-time incident response.
  • Assessment scope: transaction servers, log management systems, and issuance record management systems
  • Evidence materials: AI detection result reports, anomaly detection logs
  • Interview subjects: security monitoring personnel, data analysis staff
  • On-site inspection: demonstration of AI-based anomaly detection system
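As a minimal sketch of the kind of machine learning-based anomaly scoring this criterion envisions, the following snippet trains scikit-learn's IsolationForest on fabricated per-session log features; the feature set and model choice are illustrative assumptions, since the criterion mandates ML/DL-based detection rather than a specific algorithm.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic per-session features: [verifications/hour, distinct devices,
# failed-attempt ratio, night-time activity ratio] -- fabricated for illustration
normal = rng.normal(loc=[5, 1.2, 0.05, 0.1], scale=[2, 0.5, 0.03, 0.05], size=(500, 4))
suspicious = rng.normal(loc=[60, 8, 0.4, 0.7], scale=[10, 2, 0.1, 0.1], size=(5, 4))

# Fit on routine traffic; contamination is an assumed prior on anomaly share
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

X_test = np.vstack([normal[:3], suspicious])
labels = model.predict(X_test)        # -1 = anomalous, 1 = normal
scores = model.score_samples(X_test)  # lower = more anomalous
for i, (lab, s) in enumerate(zip(labels, scores)):
    flag = "  <- investigate" if lab == -1 else ""
    print(f"session {i}: score={s:.3f}{flag}")
```

In an AIPI deployment, the flagged sessions would feed the real-time incident response queue described in the assessment description, with investigation priority derived from the anomaly score.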

4.3.3. AI Explainability and Bias Governance

This criterion ensures transparency of data usage and decision-making in AI algorithms applied to personal data processing and identity proofing. By establishing XAI and institutionalizing regular bias verification, it reinforces users’ data self-determination and prevents discriminatory decision-making.
  • Criterion title: XAI and algorithmic bias governance.
  • Assessment description: AI algorithms used in automated identity proofing and personal data processing must provide explainable decision-making processes. Institutions should regularly assess, report, and remediate data bias and the potential for discrimination.
  • Assessment scope: personal data processing systems and automated decision-making modules.
  • Evidence materials: XAI reports, bias verification results, records of corrective actions
  • Interview subjects: chief privacy officer, AI developers.
  • On-site inspection: demonstration of explainability functions and bias governance procedures.

4.3.4. Predictive Security Management

This criterion builds upon provisions on disaster recovery and emergency response, shifting the focus from post-incident recovery to proactive prevention. It requires an AI-based predictive and early-warning system that utilizes sensor data and system logs to identify early signs of failure or security anomalies before a disaster occurs.
  • Criterion title: Predictive management for disaster and system failure response.
  • Assessment description: revise the current disaster recovery and emergency response standards to assess whether the institution has implemented AI-driven predictive security management. This includes verifying the use of sensor data and system logs to detect abnormal facility conditions, trigger automated alerts, and enable preemptive corrective actions.
  • Assessment scope: data centers, internet data centers, and disaster recovery centers.
  • Evidence materials: Reports on predictive model operations, analytical results from sensor data.
  • Interview subjects: facility operators and security administrators.
  • On-site inspection: demonstration of AI-based predictive maintenance and alert systems.

4.3.5. International Compliance and Interoperability

This criterion evaluates whether the AIPI’s security and certification framework maintains alignment with international standards such as NIST SP 800-63, the EU’s eIDAS, and ISO/IEC 29115 [47]. The goal is to ensure that Korea’s national framework remains compatible with global norms, thereby supporting mutual recognition and interoperability in providing identity proofing services to international users.
  • Criterion title: Compliance with international standards and mutual recognition.
  • Assessment description: assess whether the AIPI’s security and certification systems conform to international standards. In particular, verify whether institutional and technical measures are in place to facilitate mutual recognition when offering services to overseas users.
  • Assessment scope: overall operation of the PIPS.
  • Evidence materials: international standards compliance reports, mutual recognition review documents.
  • Interview subjects: international standards officers, policy managers.
  • On-site inspection: verification of compliance measures and current implementation status related to international standards.
These revised and newly introduced criteria go beyond merely supplementing the existing 87 audit items. They represent a structural integration of AI-driven threat response, algorithmic transparency, predictive resilience, and international conformance into the regulatory framework. Through this enhancement, the designation and periodic audit system for AIPIs can evolve from a static, procedure-based assessment to an intelligent and globally trusted evaluation framework, thereby strengthening both the resilience and credibility of national digital identity governance.
Table 4. AI-Integrated improvements for AIPI audit criteria.

Audit Criterion | Assessment Description | Contribution to Sustainability
Detection of Synthetic Identities and AI-Based Forgery | Assess whether presentation-attack/deepfake-detection technologies are implemented to identify manipulated biometric data (synthetic video, voice, or facial information) and whether the institution has a regular performance verification system. | Reduces the risk of personal data breaches and identity fraud, thereby enhancing social trust and minimizing economic losses.
Anomalous Behavior Analysis Based on Transaction and Access Logs | Expand the existing FDS criteria to require machine learning and deep learning–based anomaly detection. Automatically detect abnormal patterns in access, transaction, and issuance logs and utilize results for real-time response. | Strengthens resilience against emerging intelligent attack types, enhances the stability of the digital economy, and improves technical resilience.
Explainability (XAI) and Algorithmic Bias Verification | Ensure that AI algorithms used in automated identity verification and data processing are explainable and that data bias and discrimination risks are regularly verified and corrected. | Reinforces user data self-determination and enhances the fairness and transparency of algorithms, contributing to social sustainability.
Predictive Management for Disaster and System Failure Response | Revise the disaster recovery criteria to include AI-based predictive security management using sensor data and system logs, enabling automated alerts and preemptive actions. | Enhances resilience against unpredictable disasters and ensures service continuity, improving technical and operational sustainability.
Compliance with International Standards and Mutual Recognition | Evaluate whether the institution’s certification system conforms to global standards (e.g., NIST SP 800-63, EU eIDAS) and whether institutional and technical measures are in place to ensure cross-border mutual recognition. | Secures global compatibility of national systems, promotes cross-border digital identity interoperability, and supports the development of a trust-based global ecosystem.

4.4. Adaptive Audit Flow

As shown in Figure 1, the adaptive audit pipeline for PIPSs comprises five stages: data collection (logs and signals); PAD (liveness/spoofing); analytics (anomaly and risk scoring); explainability and bias checks; and SDG-aligned KPIs and corrective actions.
The data collection stage aggregates audit data generated during identity proofing—authentication/denial logs, transactions, access events, and device signals—normalizes them into a common schema, and applies basic data-quality controls (de-duplication, timestamp standardization, and pseudonymization) to ensure consistent inputs for downstream analysis.
The PAD stage performs liveness and spoofing checks to identify presentation attacks (e.g., printed photos, replayed videos, masks, and deepfakes), produces detection scores and summary metrics (AUC, FAR/FRR, EER), and detects and flags suspicious identity-proofing attempts.
The analytics stage correlates multi-source signals to detect anomalous behavior, computes a risk index using rules, statistical methods, and machine learning, and—when predefined thresholds are exceeded—generates alerts and assigns investigation priority.
The explainability and bias-checks stage records and presents the reasons for approve/deny/step-up decisions in identity proofing (e.g., PAD scores, face-match similarity, document–OCR mismatches, and capture-condition signals), continuously monitors group-wise error gaps (ΔFNR/ΔFPR), and checks whether bias arises with respect to device type, age group, or operating environment.
The SDG-aligned KPIs and corrective-actions stage serves as the control room for identity-proofing operations: it measures governance KPIs on a defined cadence, compares results with targets, and—when thresholds are breached—approves and executes corrective actions such as model retraining, PAD-threshold tuning, data-quality fixes, and runbook updates. Changes are propagated upstream to the PAD, analytics, and data-collection stages to close the adaptive loop, while metrics, determinations, actions, and outcomes are recorded to support auditability and regulatory reporting.
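To make the summary metrics reported by the PAD stage concrete, the following sketch computes FAR, FRR, and the equal error rate (EER) from detection scores; the score distributions are fabricated for illustration and do not represent any deployed PAD system.

```python
import numpy as np

# Higher score = more likely a genuine (live) presentation; data fabricated
rng = np.random.default_rng(7)
genuine = rng.normal(0.8, 0.10, 1000)   # bona fide presentations
attacks = rng.normal(0.3, 0.15, 200)    # spoof/deepfake presentations

def far_frr(threshold: float) -> tuple[float, float]:
    far = float(np.mean(attacks >= threshold))  # attacks wrongly accepted
    frr = float(np.mean(genuine < threshold))   # genuine users wrongly rejected
    return far, frr

# Equal error rate: sweep thresholds, take the point where FAR ~= FRR
thresholds = np.linspace(0.0, 1.0, 1001)
gaps = [abs(far_frr(t)[0] - far_frr(t)[1]) for t in thresholds]
t_eer = thresholds[int(np.argmin(gaps))]
far, frr = far_frr(t_eer)
print(f"EER threshold={t_eer:.3f}, FAR={far:.3%}, FRR={frr:.3%}")
```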

5. Policy Implications with Respect to SDGs 9 and 16

The proposed AI-integrated, adaptive audit framework represents a structural shift in the designation and periodic audit of AIPIs—moving from static, checklist-based oversight to intelligence-driven governance. The policy implications align with SDG 9 and 16 by strengthening infrastructure resilience, transparency, and accountability while preserving inclusion for digitally marginalized users.

5.1. Social Sustainability (SDG 16–16.6, 16.10)

By protecting identities and personal data against AI-enabled threats, the framework strengthens individuals’ rights to data self-determination. Introducing explainability and periodic bias verification enhances transparency and accountability, reinforcing social trust and supporting an inclusive, rights-respecting digital society. To ensure inclusion for digitally marginalized users, authorities should mandate assisted channels (e.g., in-person, phone) and human escalation when digital proofing fails, and require accessibility conformance in personal identity-proofing services.
  • Indicative policy actions: mandate model cards and incident registers; require periodic fairness audits and public summaries; ensure assisted channels and human escalation; require accessibility conformance and low-spec UI modes.
  • Indicative assessment metrics: availability of model documentation; bias metrics; rate of substantiated user complaints; incident-disclosure timeliness; accessibility coverage; assisted-channel success rate; Δ usage/access gap across vulnerable cohorts; explainability coverage.
  • Metric definitions and formulas: Table 5 quantifies the social sustainability dimensions—transparency, fairness, accountability, and inclusiveness—aligned with SDG 16 (targets 16.6 and 16.10). Each indicator specifies computation and linkage to governance outcomes in AI-integrated identity-proofing [14,48,49,50,51]. A computational sketch of the group-wise error-gap metric follows.
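As a minimal sketch of the group-wise error-gap computation (ΔFNR/ΔFPR), the following snippet assumes audit logs provide per-attempt ground truth and decisions; the cohort split and data are fabricated.

```python
import numpy as np

def rates(y_true: np.ndarray, y_pred: np.ndarray) -> tuple[float, float]:
    # FNR = FN / positives, FPR = FP / negatives (guarded against empty groups)
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fnr = fn / max(int(np.sum(y_true == 1)), 1)
    fpr = fp / max(int(np.sum(y_true == 0)), 1)
    return float(fnr), float(fpr)

rng = np.random.default_rng(0)
# Fabricated (ground truth, decision) pairs per age cohort
groups = {
    "under_65": (rng.integers(0, 2, 500), rng.integers(0, 2, 500)),
    "65_plus":  (rng.integers(0, 2, 500), rng.integers(0, 2, 500)),
}
fnrs, fprs = {}, {}
for g, (y_true, y_pred) in groups.items():
    fnrs[g], fprs[g] = rates(y_true, y_pred)

delta_fnr = abs(fnrs["under_65"] - fnrs["65_plus"])
delta_fpr = abs(fprs["under_65"] - fprs["65_plus"])
print(f"dFNR={delta_fnr:.3f}, dFPR={delta_fpr:.3f}")  # compare against audit threshold
```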

5.2. Economic Sustainability (SDG 9–9.1, 9.c)

Proactive detection and risk-tiered auditing reduce expected losses from identity theft and fraud. Predictable, profile-based AI audit standards help firms plan security investments and pursue responsible innovation, stabilizing the PIPS market and enabling long-term ecosystem growth. Authorities should adopt risk-tiered audit profiles (low/medium/high) in which cadence, sampling depth, and evidence requirements scale with institution risk scores, and should link on-time remediation to incentives and supervisory relief.
  • Indicative policy actions: publish an audit profile and certification cadence; introduce proportional oversight tied to risk scores; apply risk-tiered profiles with scaled cadence/sampling/evidence; link procurement/certification eligibility to profile conformance; provide performance-based incentives.
  • Indicative assessment metrics: fraud loss rate per 10 k verifications; MTTD/MTTR improvement over baseline; percentage of remediation completed on time; false-positive block rate; risk score distribution; profile conformance rate.
  • Metric definitions and formulas (units): Table 6 quantifies economic sustainability outcomes—loss reduction, operational efficiency, and risk-proportional oversight—aligned with SDG 9 (targets 9.1, 9.c) [52,53]. A computational sketch of representative indicators follows.
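A minimal sketch of three representative economic indicators, assuming routine audit records supply the inputs; all figures are fabricated.

```python
# Fabricated monthly figures for illustration
verifications = 2_450_000
fraud_losses_krw = 182_000_000

# Fraud loss rate per 10,000 verifications
loss_per_10k = fraud_losses_krw / (verifications / 10_000)

# MTTD improvement over a fixed baseline, in percent
baseline_mttd_min, current_mttd_min = 240.0, 95.0
mttd_improvement = (baseline_mttd_min - current_mttd_min) / baseline_mttd_min * 100

# On-time remediation share across audit findings
findings_total, findings_on_time = 40, 34
remediation_on_time_pct = findings_on_time / findings_total * 100

print(f"loss/10k verifications: {loss_per_10k:,.0f} KRW")
print(f"MTTD improvement: {mttd_improvement:.1f}%")
print(f"on-time remediation: {remediation_on_time_pct:.1f}%")
```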

5.3. Technological and Operational Sustainability (SDG 9–9.1)

Behavioral analytics and predictive security management increase adaptability to deepfakes, synthetic identities, and novel attack paths, achieving resilience levels not attainable with static controls. To align resilience with sustainable infrastructure (SDG 9), include environmental-footprint KPIs for AI inference and operations—energy per 1 k verifications, carbon intensity, water-use efficiency (WUE), power usage effectiveness (PUE)—alongside incident-response performance.
  • Indicative policy actions: require PAD/liveness in proofing workflows; require periodic disaster recovery (DR)/business continuity planning (BCP) drills that include AI-enabled attack scenarios; set alert-response SLOs (e.g., notification-to-response ≤ X minutes) and model-drift monitoring with retraining workflows.
  • Indicative assessment metrics: PAD performance (AUC, FAR/FRR under attack conditions); drill success rates; RTO/RPO adherence in recovery tests; energy per 1 k verifications (kWh/1000); carbon intensity (kgCO2e/1000 verifications); water use efficiency—WUE (L/kWh); power usage effectiveness—PUE (unitless); time-to-contain (minutes); drift detection frequency (events/month).
  • Metric definitions and formulas (units): Table 7 quantifies technological and operational sustainability—resilience to AI-enabled attacks and resource efficiency—aligned with SDG 9 (target 9.1) [54]. A computational sketch of the environmental-footprint KPIs follows.
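A minimal sketch of the environmental-footprint KPIs, assuming operational telemetry provides energy, water, and IT-load figures; the grid carbon factor and all telemetry values are fabricated assumptions.

```python
# Fabricated monthly telemetry for illustration
energy_kwh = 18_400.0      # total facility electricity for proofing inference/ops
it_energy_kwh = 15_300.0   # energy delivered to IT equipment only
water_l = 9_100.0          # cooling water consumed
verifications = 2_450_000
grid_intensity = 0.45      # assumed grid factor, kgCO2e per kWh

energy_per_1k = energy_kwh / (verifications / 1_000)  # kWh per 1000 verifications
carbon_per_1k = energy_per_1k * grid_intensity        # kgCO2e per 1000 verifications
wue = water_l / it_energy_kwh                         # L per kWh of IT energy
pue = energy_kwh / it_energy_kwh                      # total/IT energy, unitless

print(f"energy: {energy_per_1k:.2f} kWh/1k, carbon: {carbon_per_1k:.2f} kgCO2e/1k, "
      f"WUE: {wue:.2f} L/kWh, PUE: {pue:.2f}")
```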

5.4. International Interoperability (SDG 9 and 16)

Alignment with NIST SP 800-63, EU eIDAS, and related standards enables cross-border recognition and interoperability, facilitating outbound expansion of domestic providers and inbound operation of international firms, thereby extending the sustainability of the digital economy at a global scale. Authorities should publish an assurance-level (LoA) mapping matrix and conformance gap lists with remediation plans, and run mutual-recognition pilots with peer regulators to operationalize interoperability.
  • Indicative policy actions: adopt mapping matrices to international clauses; pilot mutual-recognition pathways with peer regulators; publish LoA mappings and conformance gap lists with dated remediation plans.
  • Indicative assessment metrics: conformance score against selected profiles; number of services operating under mutual recognition; gap-closure lead time (days); number/duration of cross-border pilots; proportion of controls covered by LoA mapping.
  • Metric definitions and formulas: Table 8 quantifies international interoperability outcomes—standards conformance, cross-border operability, and remediation velocity—aligned with SDG 9 (target 9.c) and SDG 16 (target 16.6) [55]. A computational sketch of conformance scoring and gap-closure lead time follows.
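A minimal sketch of conformance scoring and gap-closure lead time, assuming a clause-level mapping matrix is maintained; the clause identifiers and dates are placeholders, not actual eIDAS or NIST clause numbers.

```python
from datetime import date

# Placeholder clause IDs mapped to pass/fail conformance determinations
profile = {
    "NIST-63A-ial2-proofing": True,
    "NIST-63B-aal2-authn": True,
    "eIDAS-substantial-enrolment": False,  # open gap
    "eIDAS-substantial-issuance": True,
}
conformance_score = sum(profile.values()) / len(profile) * 100

# Gap-closure lead time: days from finding opened to verified remediation
gaps = [(date(2025, 3, 3), date(2025, 4, 21)),
        (date(2025, 5, 2), date(2025, 6, 9))]
lead_times = [(closed - opened).days for opened, closed in gaps]
avg_lead_time = sum(lead_times) / len(lead_times)

print(f"conformance: {conformance_score:.0f}%, "
      f"mean gap-closure: {avg_lead_time:.0f} days")
```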

6. Conclusions

This study proposes an AI-integrated, adaptive audit framework for PIPSs in Korea and demonstrates how governance outcomes can be translated into audit-ready indicators aligned with SDGs 9 and 16. Addressing two questions, namely which governance capabilities and data signals keep PIPSs effective against AI-enabled threats while safeguarding due process, and how to operationalize those capabilities as measurable indicators, the study reframes oversight from static checklist compliance to intelligence-driven governance.
Five capabilities are arranged into an adaptive audit flow that yields continuous, decision-useful evidence for operators and supervisors. These capabilities are operationalized through indicators with explicit formulas, data sources, and cadences, covering PAD performance and robustness, group-wise error gaps, detection and response timeliness, transparency, and resource-efficiency signals relevant to sustainability. The resulting evidence pipeline supports risk-tiered oversight in which cadence, sampling depth, and evidence requirements scale with institutional risk, while corrective actions are tracked against published targets.
The empirical scope is Korea-specific; parts of the evidence rely on structured expert judgment rather than large-scale field trials; and legal or institutional constraints limit public release of detailed audit templates. Generalization to other jurisdictions requires clause-level mapping to local statutes, operational practices, and data availability, including access to fairness labels or proxies and to operational telemetry for environmental indicators. Indicator targets are context-dependent and should serve as iterative benchmarks rather than universal thresholds.
To mature the framework, a three-step roadmap is outlined: (1) conduct a controlled pilot to instrument the audit flow, populate the indicator codebook, and establish baselines; (2) perform quantitative evaluation at the frequencies in Table 5, Table 6, Table 7 and Table 8 to document gaps relative to targets and the effects of corrective actions, including drift monitoring and model-governance updates; and (3) publish concise regulatory guidance defining a risk-tiered audit profile with clause-level mappings to eIDAS/NIST, interoperability notes, and a maintenance plan for periodic recalibration of metrics, targets, and evidence requirements. Taken together, these steps move the proposal from conceptual specification to reproducible practice toward a sustainable, standards-aligned identity ecosystem in which oversight is adaptive by design, trust is measurable, and cross-border interoperability is demonstrable.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study by the Institutional Committee because anonymous, minimal-risk survey research that does not collect personally identifiable information and does not involve human biological materials or intervention is not subject to Institutional Review Board review under the Bioethics and Safety Act of Korea and its Enforcement Decree and Enforcement Rule.

Informed Consent Statement

Informed consent for participation was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in the study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The author thanks experts in digital identity governance and sustainability for consultations.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AAL: Authenticator Assurance Level
AIPI: Accredited Identity Proofing Institution
AUC: Area Under the ROC Curve
CI: Connecting Information
DI: Duplication Information
eIDAS: electronic Identification, Authentication and trust Services
FAL: Federation Assurance Level
FDS: Fraud Detection System(s)
FNR: False Negative Rate
FPR: False Positive Rate
IAL: Identity Assurance Level
IDP: Identity Provider
KCC: Korea Communications Commission
LoA: Level of Assurance
MTTD: Mean Time to Detect
MTTR: Mean Time to Restore
NIST: National Institute of Standards and Technology
PAD: Presentation Attack Detection
PIPS: Personal Identity Proofing Service
PUE: Power Usage Effectiveness
RRN: Resident Registration Number
SDG: Sustainable Development Goal
WUE: Water Usage Effectiveness
XAI: Explainable AI

References

  1. Jung, K.; Yeom, H.G.; Choi, D. A Study on Big Data Based Non-Face-to-Face Identity Proofing Technology. KIPS Trans. Comput. Commun. Syst. 2017, 6, 421–428. [Google Scholar]
  2. Laurent, M.; Levallois-Barth, C. 4-Privacy Management and Protection of Personal Data. Digit. Identity Manag. 2015, 137–205. [Google Scholar] [CrossRef]
  3. Kim, J.B. Personal Identity Proofing for E-Commerce: A Case Study of Online Service Users in the Republic of Korea. Electronics 2024, 13, 3954. [Google Scholar] [CrossRef]
  4. Kim, S.G.; Kim, S.K. An Exploration on Personal Information Regulation Factors and Data Combination Factors Affecting Big Data Utilization. J. Korea Inst. Inf. Secur. Cryptol. 2020, 30, 287–304. [Google Scholar]
  5. Kim, J. Improvement of Digital Identity Proofing Service through Trend Analysis of Online Personal Identification. Int. J. Internet Broadcast. Commun. 2023, 15, 1–8. [Google Scholar]
  6. Hwang, J.; Oh, J.; Kim, H. A study on the progress and future directions for MyData in South Korea. In Proceedings of the 2023 IEEE/ACIS 8th International Conference on Big Data, Cloud Computing, and Data Science (BCD), Ho Chi Minh City, Vietnam, 14–16 December 2023; pp. 49–51. [Google Scholar]
  7. Kim, J.B. A Study on the Quantified Point System for Designation of Personal Identity Proofing Service Provider based on Resident Registration Number. Int. J. Adv. Smart Converg. 2022, 11, 20–27. [Google Scholar]
  8. eIDAS Regulation. Available online: https://digital-strategy.ec.europa.eu/en/policies/eidas-regulation (accessed on 12 November 2025).
  9. Grassi, P.A.; Garcia, M.E.; Fenton, J.L. Digital Identity Guidelines; NIST Special Publication: Gaithersburg, MD, USA, 2017. [Google Scholar]
  10. Manby, B. The Sustainable Development Goals and “Legal Identity for All”: “First, Do No Harm”. World Dev. 2021, 139, 105343. [Google Scholar] [CrossRef]
  11. Madon, S.; Masiero, S. Digital Connectivity and the SDGs: Conceptualising the Link through an Institutional Resilience Lens. Telecommun. Policy 2025, 49, 102879. [Google Scholar] [CrossRef]
  12. Lubis, S.; Purnomo, E.P.; Lado, J.A.; Hung, C.-F. Electronic Governance in Advancing Sustainable Development Goals: A Systematic Literature Review (2018–2023). Discov. Glob. Soc. 2024, 2, 77. [Google Scholar] [CrossRef]
  13. Sanina, A.; Styrin, E.; Vigoda-Gadot, E.; Yudina, M.; Semenova, A. Digital Government Transformation and Sustainable Development Goals: To What Extent Are They Interconnected? Bibliometric Analysis Results. Sustainability 2024, 16, 9761. [Google Scholar] [CrossRef]
  14. National Institute of Standards and Technology. NIST SP 800-63A-4: Digital Identity Guidelines—Identity Proofing and Enrollment; NIST: Gaithersburg, MD, USA, 2025. [Google Scholar]
  15. Çınar, O.; Doğan, Y. Novel Deepfake Image Detection with PV-ISM: Patch-Based Vision Transformer for Identifying Synthetic Media. Appl. Sci. 2025, 15, 6429. [Google Scholar] [CrossRef]
  16. Sohail, S.; Sajjad, S.M.; Zafar, A.; Iqbal, Z.; Muhammad, Z.; Kazim, M. Deepfake Image Forensics for Privacy Protection and Authenticity Using Deep Learning. Information 2025, 16, 270. [Google Scholar] [CrossRef]
  17. Bhattacharyya, S.; Jha, S.; Tharakunnel, K.; Westland, J.C. Data Mining for Credit Card Fraud: A Comparative Study. Decis. Support Syst. 2011, 50, 602–613. [Google Scholar] [CrossRef]
  18. Adadi, A.; Berrada, M. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access 2018, 6, 52138–52160. [Google Scholar] [CrossRef]
  19. Kim, J.; Yum, K. Enhancing Continuous Usage Intention in E-Commerce Marketplace Platforms: The Effects of Service Quality, Customer Satisfaction, and Trust. Appl. Sci. 2024, 14, 7617. [Google Scholar] [CrossRef]
  20. Farhat, R.; Yang, Q.; Ahmed, M.A.O.; Hasan, G. E-Commerce for a Sustainable Future: Integrating Trust, Product Quality Perception, and Online-Shopping Satisfaction. Sustainability 2025, 17, 1431. [Google Scholar] [CrossRef]
  21. Ebrahimi, S.; Eshghi, K. A Meta-Analysis of the Factors Influencing the Impact of Security Breach Announcements on Stock Returns of Firms. Electron. Mark. 2022, 32, 2357–2380. [Google Scholar] [CrossRef]
  22. Chen, S.J.; Tran, K.T.; Xia, Z.R.; Waseem, D.; Zhang, J.A.; Potdar, B. The Double-Edged Effects of Data Privacy Practices on Customer Responses. Int. J. Inf. Manag. 2023, 69, 102600. [Google Scholar] [CrossRef]
  23. Nastoska, A.; Jancheska, B.; Rizinski, M.; Trajanov, D. Evaluating Trustworthiness in AI: Risks, Metrics, and Applications Across Industries. Electronics 2025, 14, 2717. [Google Scholar] [CrossRef]
  24. Sebestyen, H.; Popescu, D.E.; Zmaranda, R.D. A Literature Review on Security in the Internet of Things: Identifying and Analyzing Critical Categories. Computers 2025, 14, 61. [Google Scholar] [CrossRef]
  25. Abdullah, M.; Nawaz, M.M.; Saleem, B.; Zahra, M.; Ashfaq, E.b.; Muhammad, Z. Evolution Cybercrime—Key Trends, Cybersecurity Threats, and Mitigation Strategies from Historical Data. Analytics 2025, 4, 25. [Google Scholar] [CrossRef]
  26. Federal Trade Commission. Consumer Sentinel Network Data Book 2023; FTC: Washington, DC, USA, 2024. Available online: https://www.ftc.gov/reports/consumer-sentinel-network-data-book-2023 (accessed on 12 November 2025).
  27. Statista. Cybercrime Expected to Skyrocket in Coming Years. Available online: https://www.statista.com/chart/28878/expected-cost-of-cybercrime-until-2027/ (accessed on 12 November 2025).
  28. Ghiurău, D.; Popescu, D.E. Distinguishing Reality from AI: Approaches for Detecting Synthetic Content. Computers 2025, 14, 1. [Google Scholar] [CrossRef]
  29. Ramachandra, R.; Busch, C. Presentation Attack Detection Methods for Face Recognition Systems: A Comprehensive Survey. ACM Comput. Surv. 2017, 50, 1–37. [Google Scholar] [CrossRef]
  30. Fan, Z.; Yan, Z.; Wen, S. Deep Learning and Artificial Intelligence in Sustainability: A Review of SDGs, Renewable Energy, and Environmental Health. Sustainability 2023, 15, 13493. [Google Scholar] [CrossRef]
  31. Vinuesa, R.; Azizpour, H.; Leite, I.; Balaam, M.; Dignum, V.; Domisch, S.; Felländer, A.; Langhans, S.D. The role of artificial intelligence in achieving the Sustainable Development Goals. Nat. Commun. 2020, 11, 233. [Google Scholar] [CrossRef]
  32. de Vries, A. The Growing Energy Footprint of Artificial Intelligence. Joule 2023, 7, 2191–2194. [Google Scholar] [CrossRef]
  33. Alonso, Á.; Pozo, A.; Gordillo, A.; López-Pernas, S.; Munoz-Arcentales, A.; Marco, L.; Barra, E. Enhancing University Services by Extending the eIDAS European Specification with Academic Attributes. Sustainability 2020, 12, 770. [Google Scholar] [CrossRef]
  34. Gregušová, D.; Halásová, Z.; Peráček, T. eIDAS Regulation and Its Impact on National Legislation: The Case of the Slovak Republic. Adm. Sci. 2022, 12, 187. [Google Scholar] [CrossRef]
  35. Wihlborg, E. Secure electronic identification (eID) in the intersection of politics and technology. Int. J. Electron. Gov. 2013, 6, 143. [Google Scholar] [CrossRef]
  36. Shemshi, V.; Jakimovski, B. Extended Model for Efficient Federated Identity Management with Dynamic Levels of Assurance Across eIDAS, REFEDS, and Kantara Frameworks for Educational Institutions. Information 2025, 16, 385. [Google Scholar] [CrossRef]
  37. Weitzberg, K.; Cheesman, M.; Martin, A.; Schoemaker, E. Between Surveillance and Recognition: Rethinking Digital Identity in Aid. Big Data Soc. 2021, 8, 20539517211006744. [Google Scholar] [CrossRef]
  38. Temoshok, D.; Choong, Y.-Y.; Galluzzo, R.; LaSalle, M.; Regenscheid, A.; Proud-Madruga, D.; Gupta, S.; Lefkovitz, N. NIST SP 800-63-4: Digital Identity Guidelines; NIST Special Publication: Gaithersburg, MD, USA, 2025. [Google Scholar]
  39. Inza, J. The European Digital Identity Wallet as Defined in the EIDAS 2 Regulation. In Governance and Control of Data and Digital Economy in the European Single Market; Law, Governance and Technology Series; Springer: Berlin/Heidelberg, Germany, 2025; pp. 433–452. [Google Scholar]
  40. Raji, I.D.; Smart, A.; White, R.N.; Mitchell, M.; Gebru, T.; Hutchinson, B.; Smith-Loud, J.; Theron, D.; Barnes, P. Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain, 27–30 January 2020; pp. 33–44. [Google Scholar]
  41. McGregor, S. Preventing Repeated Real-World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the AAAI Conference on Artificial Intelligence, Online, 2–9 February 2021; Volume 35, pp. 15458–15463. [Google Scholar]
  42. Mökander, J.; Morley, J.; Taddeo, M.; Floridi, L. Ethics-Based Auditing of Automated Decision-Making Systems: Nature, Scope, and Limitations. Sci. Eng. Ethics 2021, 27, 44. [Google Scholar] [CrossRef] [PubMed]
  43. Babaei, R.; Cheng, S.; Duan, R.; Zhao, S. Generative Artificial Intelligence and the Evolving Challenge of Deepfake Detection: A Systematic Analysis. J. Sens. Actuator Netw. 2025, 14, 17. [Google Scholar] [CrossRef]
  44. Zhang, W.; Yang, D.; Wang, H. Data-Driven Methods for Predictive Maintenance of Industrial Equipment: A Survey. IEEE Syst. J. 2019, 13, 2213–2227. [Google Scholar] [CrossRef]
  45. Kaye, J.; Whitley, E.; Lund, D.; Morrison, M.; Teare, H.; Melham, K. Dynamic consent: A patient interface for twenty-first century research networks. Eur. J. Hum. Genet. 2015, 23, 141–146. [Google Scholar] [CrossRef]
  46. Abbasi, M.; Váz, P.; Silva, J.; Martins, P. Comprehensive Evaluation of Deepfake Detection Models: Accuracy, Generalization, and Resilience to Adversarial Attacks. Appl. Sci. 2025, 15, 1225. [Google Scholar] [CrossRef]
  47. ISO/IEC 29115:2013; Information Technology—Security Techniques—Entity Authentication Assurance Framework. International Organization for Standardization: Geneva, Switzerland, 2013. Available online: https://www.iso.org/standard/45138.html (accessed on 12 November 2025).
  48. Namani, Y.; Reghioua, I.; Bendiab, G.; Labiod, M.A.; Shiaeles, S. DeepGuard: Identification and Attribution of AI-Generated Synthetic Images. Electronics 2025, 14, 665. [Google Scholar] [CrossRef]
  49. Schwartz, R.; Vassilev, A.; Greene, K.; Perine, L.; Burt, A.; Hall, P. Towards a Standard for Identifying and Managing Bias in AI; US Department of Commerce, National Institute of Standards and Technology: Gaithersburg, MD, USA, 2022. [Google Scholar]
  50. European Union. Directive (EU) 2022/2555 on Measures for a High Common Level of Cybersecurity across the Union (NIS 2 Directive). Off. J. Eur. Union 2022, 12, 80–152. Available online: https://eur-lex.europa.eu/eli/dir/2022/2555/oj/eng (accessed on 12 November 2025).
  51. ISO 10002:2018; Quality Management—Customer Satisfaction—Guidelines for Complaints Handling in Organizations. International Organization for Standardization: Geneva, Switzerland, 2018. Available online: https://www.iso.org/standard/71580.html (accessed on 12 November 2025).
  52. NIST. Measurement Guide for Information Security: Volume 1—Identifying and Selecting Measures; NIST Special Publication 800-55v1; NIST: Gaithersburg, MD, USA, 2024. [Google Scholar]
  53. Association of Certified Fraud Examiners (ACFE). Occupational Fraud 2024: A Report to the Nations; ACFE: Austin, TX, USA, 2024; Available online: https://legacy.acfe.com/report-to-the-nations/2024/ (accessed on 12 November 2025).
  54. ISO/IEC 30107-3:2023; Information Technology—Biometric Presentation Attack Detection—Part 3: Testing and Reporting. ISO/IEC: Geneva, Switzerland, 2023. Available online: https://www.iso.org/standard/79520.html (accessed on 12 November 2025).
  55. European Commission. Commission Implementing Regulation (EU) 2015/1502 of 8 September 2015 on Minimum Technical Specifications and Procedures for Assurance Levels for Electronic Identification Means. Off. J. Eur. Union 2015, L235, 7–20. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX%3A32015R1502 (accessed on 12 November 2025).
Figure 1. Adaptive audit flow for PIPS.
Table 1. AI and sustainability in identity proofing: positive contributions and potential negative impacts across social, economic, and technological/environmental dimensions.
Social
  Positive impacts:
  • Enhances social trust by preventing identity theft and financial fraud
  • Improves digital accessibility for vulnerable populations
  Negative impacts:
  • Exacerbates social inequality through algorithmic bias
  • Undermines digital trust due to the lack of transparency in automated decision-making
Economic
  Positive impacts:
  • Prevents massive financial losses caused by cybercrime
  • Promotes digital economic activity through efficient identity verification
  • Strengthens corporate innovation and competitiveness (e.g., increased revenue)
  Negative impacts:
  • Increases the cost of building and maintaining AI systems
  • Creates reputational and financial losses from breaches
Technological/Environmental
  Positive impacts:
  • Enhances resilience to adaptive threats
  • Improves stability through predictive maintenance
  Negative impacts:
  • Consumes significant energy and water during AI model training and inference
  • Increases environmental impact from high-performance semiconductor supply chains
Table 2. Comparative analysis of digital identity governance in Korea, the EU, and the U.S.
Core system
  Korea: CI/DI
  EU: eIDAS
  U.S.: NIST SP 800-63 (Digital Identity Guidelines)
Personal linkage method
  Korea: CI
  EU: national/EU-level eID infrastructure (eID Hub)
  U.S.: IDP manages unique IDs; federated ID token issuance
Cross-service identity recognition
  Korea: CI
  EU: national/EU eID infrastructure
  U.S.: always required
Duplicate registration prevention
  Korea: DI
  EU: none
  U.S.: federated ID provider
Legal basis
  Korea: Act on Promotion of Information and Communications Network Utilization and Information Protection
  EU: eIDAS Regulation
  U.S.: none
Personal data protection principles
  Korea: prohibition of direct use of RRN; encryption-based substitute identifiers
  EU: pseudonymization; minimal disclosure of personal information
  U.S.: risk-based assurance levels; user consent-driven model
Key features
  Korea: CI serves as the cross-service linkage identifier; DI prevents duplicate registration within individual services
  EU: unified authentication via national/EU-level trusted eID
  U.S.: distributed linkage structure; IDP-centric authentication and token issuance
Table 3. Proposed Domain-Specific Enhancements to AIPI Audit Criteria.
Physical and environmental control
  Current audit criteria: operation of access control devices, retention of access logs for at least six months, installation and storage of CCTV footage, verification of disaster recovery center establishment, etc.
  Proposed improvements:
  • Mandate integration of AI-based video and behavior analysis for real-time detection of abnormal activities using access logs and CCTV data
  • Introduce sensor-data-driven predictive maintenance for disaster prevention facilities
  Expected effects: enables proactive threat detection and disaster prevention, moving beyond simple record retention and facility possession
Network and system security operations
  Current audit criteria: verification of traditional security infrastructure such as firewalls, IDS/IPS, and FDS; retention and periodic review of logs, etc.
  Proposed improvements:
  • Mandate ML-based FDS to overcome the limitations of rule-based detection
  • Incorporate AI-driven real-time analysis and risk scoring into log management
  Expected effects: enhances responsiveness to emerging attacks, zero-day vulnerabilities, and AI-driven threats
Personal data protection and user rights
  Current audit criteria: verification of user consent procedures, adherence to data minimization principles, management of access, correction, deletion, and disposal processes, etc.
  Proposed improvements:
  • Include real-time consent modification and withdrawal functions under consent management
  • Establish requirements for explainability in automated decision-making and regular AI-bias verification
  Expected effects: strengthens user data self-determination and ensures algorithmic transparency and accountability
Substitute credential and CI management
  Current audit criteria: management of CI/DI issuance, storage, and deletion; assignment and maintenance of CP codes; tracking of CI provision history
  Proposed improvements:
  • Apply AI-based synthetic-identity and presentation-attack detection within CI/DI management
  • Implement automated consistency verification of CP codes and DI values, with anomaly-detection mechanisms
  • Establish real-time monitoring and observability of encryption key usage
  Expected effects: advances from procedural safety to intelligent forgery/presentation-attack detection and automated integrity assurance
Access log management
  Current audit criteria: retention of access logs for system administrators and users; periodic review and reporting, etc.
  Proposed improvements:
  • Add AI-based risk scoring and prioritization for access log review (a minimal sketch follows this table)
  • Introduce AI-automated audit reporting systems for periodic assessments
  Expected effects: evolves from retrospective traceability to real-time response and automated risk management, reinforcing technical and operational sustainability
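To indicate what the AI-based risk scoring proposed for access log review could look like in practice, here is a minimal Python sketch that ranks log events by a hand-set anomaly score. The log fields, features, and weights are hypothetical; an actual ML-based FDS would learn the scoring function from labeled history rather than fix it by hand.

```python
from datetime import datetime

# Hypothetical access-log events; field names are illustrative only.
events = [
    {"user": "admin01", "time": datetime(2025, 6, 2, 3, 14), "fail_count": 4, "new_device": True},
    {"user": "op07",    "time": datetime(2025, 6, 2, 10, 5), "fail_count": 0, "new_device": False},
    {"user": "admin02", "time": datetime(2025, 6, 2, 23, 50), "fail_count": 1, "new_device": True},
]

def risk_score(e) -> float:
    """Toy anomaly score; weights are illustrative, not learned."""
    off_hours = e["time"].hour < 6 or e["time"].hour >= 22
    return 0.4 * off_hours + 0.1 * min(e["fail_count"], 5) + 0.3 * e["new_device"]

# Review queue ordered by descending risk, as proposed in Table 3.
for e in sorted(events, key=risk_score, reverse=True):
    print(f"{risk_score(e):.2f}  {e['user']}  {e['time']:%Y-%m-%d %H:%M}")
```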
Table 5. Social sustainability metrics summary.
Explainability coverage
  Formula: (# automated decisions with XAI explanations / # total automated decisions) × 100
  Data source: audit/logging system; Cadence: monthly; Threshold: ≥99.0%
Fairness gap (FNR)
  Formula: ΔFNR = (max_g FNR_g − min_g FNR_g) × 100, where FNR_g = false negatives_g / actual positives_g
  Data source: model evaluation logs with group labels; Cadence: quarterly; Threshold: ≤1.0 pp
  Example: if the false-negative rate is 1.8% for Group A and 2.5% for Group B, then ΔFNR = |2.5 − 1.8| = 0.7 percentage points, which meets the illustrative target (≤1.0 pp); a computational sketch follows this table.
Fairness gap (FPR)
  Formula: ΔFPR = (max_g FPR_g − min_g FPR_g) × 100, where FPR_g = false positives_g / actual negatives_g
  Data source: model evaluation logs with group labels; Cadence: quarterly; Threshold: ≤1.0 pp
Substantiated complaint rate
  Formula: (# substantiated complaints / # verifications) × 10,000
  Data source: customer support logs; Cadence: monthly; Threshold: <2 per 10,000
Incident-disclosure timeliness
  Formula: avg(disclosure date − incident confirmation date)
  Data source: incident reports; Cadence: per incident; Threshold: ≤24 h
Accessibility coverage
  Formula: (# WCAG-compliant journeys / # total journeys assessed) × 100
  Data source: UX accessibility audits; Cadence: quarterly; Threshold: ≥95.0%
Assisted-channel success rate
  Formula: (# successful assisted verifications / # assisted attempts) × 100
  Data source: contact center/branch logs; Cadence: monthly; Threshold: ≥90.0%
Usage/access gap
  Formula: AccessGap = (CR_ref − CR_vuln) × 100, where CR = # successful verifications / # attempts
  Data source: authentication logs; Cadence: monthly; Threshold: ≤5.0 pp
Adverse-decision notice coverage
  Formula: (# adverse automated decisions with notice including reasons and recourse / # adverse automated decisions) × 100
  Data source: decision/notification logs; Cadence: monthly; Threshold: ≥99.5%
Appeal resolution time
  Formula: median(decision on appeal − appeal submission)
  Data source: appeals/case-management tickets; Cadence: monthly; Threshold: ≤5 days
Note: “#” denotes “number of”.
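The fairness-gap indicators above can be computed directly from group-labeled evaluation records. The Python sketch below does so on synthetic data; the record layout (group, true label, predicted label) is an assumption for illustration, and real audits would read the institution's model-evaluation logs and guard against groups with no positives or negatives.

```python
from collections import defaultdict

# Synthetic evaluation records: (group, true_label, predicted_label),
# where 1 = fraudulent/attack and 0 = legitimate.
records = [
    ("A", 1, 0), ("A", 1, 1), ("A", 0, 0), ("A", 0, 1),
    ("B", 1, 1), ("B", 1, 0), ("B", 0, 0), ("B", 0, 0),
]

def group_rates(records):
    """Per-group (FNR, FPR) as fractions; assumes each group has positives and negatives."""
    counts = defaultdict(lambda: {"fn": 0, "pos": 0, "fp": 0, "neg": 0})
    for group, label, pred in records:
        c = counts[group]
        if label == 1:
            c["pos"] += 1
            c["fn"] += (pred == 0)   # false negative: attack missed
        else:
            c["neg"] += 1
            c["fp"] += (pred == 1)   # false positive: legitimate user blocked
    return {g: (c["fn"] / c["pos"], c["fp"] / c["neg"]) for g, c in counts.items()}

rates = group_rates(records)
fnrs = [fnr for fnr, _ in rates.values()]
fprs = [fpr for _, fpr in rates.values()]
# Gaps in percentage points, per the Table 5 definitions.
print(f"ΔFNR = {(max(fnrs) - min(fnrs)) * 100:.1f} pp")
print(f"ΔFPR = {(max(fprs) - min(fprs)) * 100:.1f} pp")
```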
Table 6. Economic sustainability metrics summary.
Fraud loss rate
  Formula: (total fraud loss / # verifications) × 10,000
  Data source: fraud claims register; Cadence: monthly; Threshold: organizational target
MTTD/MTTR
  Formula: MTTD = mean(detection time − incident start); MTTR = mean(restore time − incident start); see the sketch following this table
  Data source: incident-response tickets; Cadence: monthly; Threshold: MTTD ≤ 10 min, MTTR ≤ 30 min
On-time remediation rate
  Formula: (# remediations completed by due date / # total remediations) × 100
  Data source: audit follow-up log; Cadence: monthly; Threshold: ≥95.0%
False-positive block rate
  Formula: (# normal transactions blocked due to false positives / # normal transactions) × 100
  Data source: risk engine logs; Cadence: monthly; Threshold: ≤0.5%
Risk score distribution
  Formula: low < 0.30; medium 0.30–0.60; high > 0.60
  Data source: scoring logs; Cadence: weekly; Threshold: high ≤ 15%
Profile conformance rate
  Formula: (# controls satisfied in assigned profile / # controls required by profile) × 100
  Data source: control checklist; Cadence: quarterly; Threshold: ≥98% overall, 100% for critical controls
Note: “#” denotes “number of”.
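To show how the MTTD/MTTR definitions above translate into computation, a minimal Python sketch over two hypothetical incident tickets follows; the field names (start, detected, restored) stand in for whatever the institution's incident-response system actually records.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident tickets; real data would come from the IR system.
tickets = [
    {"start": datetime(2025, 5, 1, 10, 0), "detected": datetime(2025, 5, 1, 10, 7),
     "restored": datetime(2025, 5, 1, 10, 25)},
    {"start": datetime(2025, 5, 9, 14, 0), "detected": datetime(2025, 5, 9, 14, 12),
     "restored": datetime(2025, 5, 9, 14, 41)},
]

# Mean time to detect / restore, in minutes, per the Table 6 formulas.
mttd_min = mean((t["detected"] - t["start"]).total_seconds() / 60 for t in tickets)
mttr_min = mean((t["restored"] - t["start"]).total_seconds() / 60 for t in tickets)

print(f"MTTD = {mttd_min:.1f} min (target <= 10)")  # 9.5 min: meets target
print(f"MTTR = {mttr_min:.1f} min (target <= 30)")  # 33.0 min: misses target
```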
Table 7. Technological and operational sustainability metrics summary.
PAD performance (AUC, FAR/FRR)
  Formula: AUC and FAR/FRR measured under attack conditions (spoof, replay, deepfake)
  Data source: PAD evaluation logs; Cadence: monthly; Threshold: AUC ≥ 0.98, FAR ≤ 1.0%, FRR ≤ 2.0%
DR/BCP drill success rate
  Formula: (# successful drills / # total drills) × 100
  Data source: DR/BCP drill records; Cadence: quarterly; Threshold: ≥95%
RTO/RPO adherence
  Formula: (# tests meeting RTO/RPO targets / # total tests) × 100
  Data source: DR test logs; Cadence: quarterly; Threshold: ≥95%
Energy per 1000 verifications
  Formula: (total electricity / # verifications) × 1000
  Data source: metering/billing; Cadence: monthly; Threshold: ≤0.50 kWh per 1000 verifications
  Example: if a service consumes 1.60 kWh while processing 3700 verifications over a week, then energy per 1000 verifications = (1.60/3700) × 1000 = 0.43 kWh, satisfying the illustrative target (≤0.50); the sketch following this table reproduces this calculation.
Carbon intensity
  Formula: (total emissions / # verifications) × 1000
  Data source: electricity use × grid emission factors; Cadence: quarterly; Threshold: ≤ organizational target
WUE
  Formula: total water / total IT energy
  Data source: data-center facility metrics; Cadence: quarterly; Threshold: ≤1.5 L/kWh
PUE
  Formula: total facility energy / IT energy
  Data source: data-center utilities; Cadence: monthly; Threshold: ≤1.30
Time-to-contain
  Formula: avg(containment time − incident start)
  Data source: IR tickets; Cadence: monthly; Threshold: ≤15 min
Drift detection frequency
  Formula: # model-drift alerts per month
  Data source: model-monitoring system; Cadence: weekly; Threshold: <5 per week
Note: “#” denotes “number of”.
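A short Python sketch of the resource-efficiency arithmetic follows, reproducing the energy worked example above and adding a PUE check; the facility metering figures passed to the PUE call are hypothetical.

```python
def energy_per_1k(total_kwh: float, verifications: int) -> float:
    """Electricity consumed per 1000 verifications (kWh), per the Table 7 formula."""
    return total_kwh / verifications * 1000

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy."""
    return total_facility_kwh / it_kwh

# Reproduces the worked example from Table 7: 1.60 kWh over 3700 verifications.
e = energy_per_1k(1.60, 3700)
print(f"{e:.2f} kWh per 1k verifications (target <= 0.50)")  # 0.43: meets target

# Hypothetical facility metering figures for the PUE check.
print(f"PUE = {pue(1250.0, 1000.0):.2f} (target <= 1.30)")   # 1.25: meets target
```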
Table 8. International interoperability metrics summary.
Conformance score
  Formula: (# controls satisfied / # controls required) × 100 (see the sketch following this table)
  Data source: conformance test reports; Cadence: quarterly; Threshold: overall ≥98%
Mutual-recognition service count
  Formula: # services operating under mutual recognition
  Data source: MRA agreements/MoUs; Cadence: quarterly; Threshold: ≥ roadmap target
Gap-closure lead time (days)
  Formula: remediation completion date − gap identification date
  Data source: remediation tracker (JIRA/GRC); Cadence: monthly; Threshold: median ≤ 30 days
Cross-border pilots (count/duration)
  Formula: pilot count; total pilot-days
  Data source: project/PoC reports; Cadence: quarterly; Threshold: ≥2 pilots per quarter or ≥60 pilot-days
LoA mapping coverage (%)
  Formula: (# controls mapped to LoA / # controls in scope) × 100
  Data source: LoA mapping register; Cadence: quarterly; Threshold: 100% of in-scope controls
Note: “#” denotes “number of”.
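Finally, to illustrate how a conformance indicator of this kind could be evaluated against its threshold, and how open gaps would feed the gap-closure lead-time metric, here is a minimal Python sketch; the clause-level control identifiers are invented for illustration.

```python
# Hypothetical clause-level results from a conformance test run;
# control identifiers are illustrative, not drawn from any standard's text.
controls = {
    "IAL2-enrollment-evidence": True,
    "PAD-iso30107-3-report": True,
    "LoA-substantial-mapping": False,
    "incident-disclosure-24h": True,
}

satisfied = sum(controls.values())
score = satisfied / len(controls) * 100
gaps = [c for c, ok in controls.items() if not ok]

print(f"Conformance score: {score:.1f}% (target >= 98%)")
if gaps:
    # Each open gap starts the clock on the gap-closure lead-time metric.
    print("Open gaps:", ", ".join(gaps))
```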