Review

Universal Digital Identity Stakeholder Alignment: Toward Context-Layered RAG Architectures for Ecosystem-Aware AI

Department of Computer Science, University of Oxford, Oxford OX1 3QD, UK
* Author to whom correspondence should be addressed.
Submission received: 14 December 2025 / Revised: 3 January 2026 / Accepted: 6 January 2026 / Published: 14 January 2026

Abstract

A universal approach to managing a person’s digital identity may be the single most important advancement to the Internet since its inception, promising the seamless flow of information, averting cybercrime, eliminating login credentials, and restoring privacy and trust through greater control of one’s identity online. However, this advancement brings significant risks, especially regarding personal privacy. It demands the meticulous development of digital identity infrastructure that balances robust data security measures with ethical handling of sensitive information, thereby safeguarding against misuse and unauthorised access. Currently, a consolidated vision for digital identity implementation remains unresolved, and aligning the different stakeholders’ motives and expectations is a challenging task. This article reviews and analyses the perspectives and expectations of four key stakeholder groups—government, business, academia, and consumers—regarding a digital identity ecosystem, aiming to increase trust in an eventual design framework. Using an online survey stratified across these four groups, we identify areas of alignment and divergence in privacy, trust, usability, and governance expectations. We then encode these stakeholder expectations into a layered conceptual structure and illustrate its use as metadata for context-layered retrieval-augmented generation (RAG) in digital identity scenarios.

1. Introduction

The Internet is a useful tool that enriches life, enables access to information and increases productivity. However, its design is not perfect [1]. We see misuse of the Internet, in the form of cybercrime, costing consumers billions of dollars every year [2,3]. In addition, its design promotes development freedoms which often prevent ease of integration between online systems [4,5]. Collectively, these problems restrict the true potential of the knowledge economy and put users at risk, underscoring the growing need for robust and user-centric solutions to secure online interactions.

1.1. Digital Identity Landscape

A fundamental aspect of safe and efficient online interactions is the management of digital identities. However, the wide-ranging and often opaque nature of today’s digital onboarding processes means users may face additional risks that are not always evident at first glance. Those risks include abuse and nefarious use of cloud computing, insecure application programming interfaces, malicious insiders, shared technology vulnerabilities, data loss or leakage, and account, service, and traffic hijacking [6]. Furthermore, Solove highlights the complex societal and psychological effects of digital onboarding and data usage risks. He emphasises that increased data collection and usage impact not only information privacy, where personal data control is at risk, but also decision-making privacy, where individual autonomy in choices is involved. This growing influence on private information and personal decision-making highlights concerns about individual autonomy [7]. Additionally, the challenges in protecting privacy and autonomy are compounded by the continuously evolving nature of online services, making this a dynamic and complex issue with significant implications for society and individual psychology.
As more people’s activities move online, there is a growing urgency to address these shortcomings—with many seeing the concept of distributed digital identity access control as a panacea. A distributed digital identity is not stored or controlled by a single centralised entity, such as a government or corporation, but is instead distributed across a decentralised network of nodes or entities. This concept is thoroughly explored in foundational works by Goodell [8], Nakamoto [9], and Swan [10], who have made significant contributions to the understanding of blockchain and decentralised technologies in this context, which empower users with enhanced control over their personal and identity information.
While there are a number of well-documented benefits of digital identity—including convenience and efficiency, a reduction in fraud and identity theft, increased financial inclusion, improved government services, and easier cross-border transactions—there are also substantial arguments against it. These concerns include privacy issues, security risks, potential for surveillance, lack of control, and exclusion of members of society [11]. Lyon [12] delves into these ethical and societal challenges, offering a critical perspective on the implications of digital identity systems. Ultimately, the degree to which these concerns can be mitigated, and therefore the likelihood that digital identity will be well received, will depend on its implementation approach.

1.2. Stakeholder Involvement

Although the concept of digital identity has been recognised for over two decades, as evidenced by patents dating back more than 25 years [13] and the seminal work of Cameron [14], there has been a notable surge in engagement from various key digital identity stakeholder groups in recent times. This increase in activity prompts critical inquiries regarding the alignment of motivations and perspectives among these stakeholders. Furthermore, it raises the question of whether this increased involvement will culminate in the development of an effective implementation of digital identity systems that benefit society as a whole. Another concern is whether individuals will find themselves navigating and managing their digital identities through an increasingly complex and diverse array of technological solutions. In addition to its conceptual value, stakeholder-aligned analysis may also inform AI-supported retrieval and reasoning, helping large language models operate within semantically grounded boundaries.
Therefore, effective collaboration and communication among these stakeholders are essential for the successful development, implementation and governance of universal digital identity systems. Balancing the interests and concerns of all parties is crucial to achieving a secure, privacy-respecting and widely adopted system.

1.3. Research Gap

As artificial intelligence (AI) technologies are increasingly applied in digital identity systems [15], including areas such as biometric verification, behavioural analytics, document processing, and trust scoring [16], the importance of establishing clear and shared conceptual foundations becomes more evident. These systems may encode assumptions about roles, legitimacy, and authority that are not always aligned with stakeholder expectations. Without explicit attention to stakeholder alignment, the application of AI risks reinforcing disparities in governance, usability, and trust.
This concern is amplified by the increasing use of large language models supported by ontological or graph-based backends to reduce hallucination and improve semantic accuracy [17]. In such contexts, well-structured representations of digital identity ecosystems become essential—not only for technical integrity but for ethical and policy legitimacy. Without a coherent conceptual foundation, digital identity efforts risk misalignment with societal needs, technical incompatibility, and the unintentional embedding of conflicting assumptions through artificial intelligence systems.
In this context, the structured insights into stakeholder roles, responsibilities, and governance presented in this study serve as a useful input for AI systems that rely on retrieval and reasoning across complex ecosystems. RAG architectures, for example, can incorporate these stakeholder-aligned layers to constrain retrieval scope, ground outputs in domain-specific expectations, and enhance the interpretability and alignment of AI-generated responses in digital identity contexts [18].
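The constraining mechanism described above can be sketched in a few lines. The following Python fragment is a minimal illustration, not the study’s implementation; the corpus entries, layer names, and the `layered_retrieve` helper are all hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    text: str
    metadata: dict = field(default_factory=dict)

# A toy corpus tagged with hypothetical stakeholder-layer metadata.
CORPUS = [
    Document("Consent must precede any attribute disclosure.",
             {"layer": "values", "stakeholder": "consumer"}),
    Document("Relying parties verify claims without contacting the issuer.",
             {"layer": "roles", "stakeholder": "business"}),
    Document("Regulators audit identity providers for compliance.",
             {"layer": "responsibilities", "stakeholder": "government"}),
]

def layered_retrieve(query_terms, layer=None, stakeholder=None):
    """Keyword retrieval whose scope is constrained by layer metadata."""
    scored = []
    for doc in CORPUS:
        if layer is not None and doc.metadata.get("layer") != layer:
            continue  # excluded before any relevance scoring happens
        if stakeholder is not None and doc.metadata.get("stakeholder") != stakeholder:
            continue
        score = sum(term.lower() in doc.text.lower() for term in query_terms)
        if score:
            scored.append((score, doc))
    return [doc for _, doc in sorted(scored, key=lambda pair: -pair[0])]

# Only role-layer knowledge is eligible for this query.
hits = layered_retrieve(["verify", "claims"], layer="roles")
```

In a full RAG pipeline the filtered documents would then be passed to the language model as grounding context; frameworks such as LangChain expose equivalent metadata filters on their retrievers.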
Although prior research has significantly advanced our understanding of usability for widespread adoption [19,20,21,22,23], the paramount importance of privacy in protecting personal information [24,25,26,27,28], the role of trust in fostering confidence among users [29,30,31,32], the public trust of required biometric technology [33], and the overarching security of the technologies involved [34,35,36,37,38,39,40], there is limited empirical insight into how stakeholder perspectives align—or diverge—around the core values, roles, and governance structures required to support broadly acceptable identity infrastructure. This gap is especially notable given the fragmented nature of current digital identity initiatives, where efforts often proceed in isolation, leading to limited interoperability and misaligned design priorities [41].
As a preliminary investigation, this pilot study integrates insights from both digital identity technology and sociology to explore how universal digital identity ecosystems are conceived, developed, and adopted, thereby offering an initial foundation for future, more extensive research. The study is deemed a pilot because it constitutes an initial, exploratory phase of research with a limited scope and sample size, aiming to test key concepts, refine methods (e.g., survey instruments and interview protocols), and assess feasibility before scaling up to broader investigations [42]. Notably, it is the first study to investigate both stakeholder alignment and misalignment in constructing a universal digital identity system, offering a fresh viewpoint on how these systems are perceived and might be accepted in broader social contexts (for an expanded discussion of this research gap, see Section 1.4).
To comprehensively explore these dynamics, this study adopts a socio-technical lens rooted in socio-technical systems theory, paired with the Value-Sensitive Design (VSD) framework. This integrated approach acknowledges the complex interplay between social and technical components in digital identity ecosystems, emphasising the need to balance human and technological considerations in their design, implementation, and adoption. By combining socio-technical systems theory with VSD, the study seeks to offer a holistic understanding of stakeholder perspectives and the socio-technical challenges inherent in creating a universal digital identity system, ensuring that core human values are intentionally and systematically incorporated into the research and design processes.

1.4. Research Motivation

In this study, the term “universal digital identity ecosystem” refers not to a singular, globally deployed solution, but to a conceptual model encompassing the interactions of diverse stakeholders, design principles, and implementation frameworks across jurisdictions. While a fully universal approach may not be technically or politically feasible, clarifying foundational values and stakeholder expectations is vital for guiding context-aware, interoperable, and ethically grounded system design.
The assumption that a single universal digital identity is desirable, for example, remains largely untested, and numerous additional assumptions—such as perspectives on trust and usability—could significantly influence the research, design and implementation of such a system. Given these circumstances, this research aims to establish a foundational understanding by taking an initial step towards verifying these core assumptions. This objective is articulated through several key goals, as outlined below:
  • Understanding Stakeholder Perceptions: Existing literature on digital identity stakeholders is valuable, but often focuses on specific ecosystems or implementations, and involves stakeholders actively engaged with the technology—potentially creating an echo chamber effect. There is a need for broader perspectives from a wider range of stakeholder groups to critically evaluate the underlying assumptions about digital identity technologies and their general acceptance [43,44,45].
  • Testing Barriers to Adoption: Existing literature also outlines integration barriers. There is a need to analyse further the barriers that specifically relate to stakeholder acceptance and hinder the adoption of digital identity technologies among different stakeholder groups, including technical, cultural and regulatory challenges [46,47].
  • Policymaking and Regulatory Compliance: To provide empirical data and insights that help policymakers, technology developers and institutional leaders make informed decisions about the deployment and governance of digital identity systems, and to assess the impact of regulatory compliance on the acceptance of digital identity systems and how these regulations affect stakeholder trust and confidence in these systems [48,49,50].
  • Enhancing Privacy and Security Measures: Digital identity solutions are complex and stakeholder perceptions may differ across different system components. There is a motivation to evaluate stakeholder concerns related to digital identity privacy and security, and determine how these concerns influence the acceptance and use of digital identity technologies and their ecosystems [24,27,51].
  • Improving Technology Design: Given the variety of potential technology implementations for digital identity—such as biometrics, blockchain, smart devices and others [26,52,53]—it is crucial to inform and enhance the design of these systems by aligning them with user needs and values. Understanding stakeholder perspectives is an essential precursor to selecting a successful implementation model.
  • Testing User-Centric Assumptions: Existing literature tells us that identity has evolved into a user-centric paradigm. There is a need to validate stakeholder perspectives about whether this is truly desirable. This will help to ensure the systems are built and implemented with a suitable focus on user control and data management [54,55].
  • Advancing Academic Research: To contribute to the academic literature on technology acceptance by applying the specific context of digital identity, potentially setting a precedent for future research in similar technology acceptance studies.
  • Supporting Interdisciplinary Collaboration: To encourage interdisciplinary collaboration among technologists, sociologists, policy analysts and other stakeholders in exploring and addressing the complex issues surrounding digital identity systems.
These motivations highlight the value of investigating stakeholder perspectives as a basis for enhancing the effectiveness, inclusivity, and societal alignment of digital identity technologies.

1.5. Study Objectives and Methodology

Building on these motivations, this study focuses on understanding stakeholder attitudes towards key social factors such as privacy, trust, and usability in the implementation and acceptance of digital identity systems. Its overarching goal is to establish a foundational understanding of stakeholder alignment within a universal digital identity ecosystem, aiming to validate or challenge prevailing assumptions in these areas and improve the likelihood of successful implementation.
Scope—This study is positioned as a socio-technical pilot investigation of stakeholder alignment in a “universal digital identity ecosystem”, understood as a conceptual model spanning stakeholders, values, and governance expectations across jurisdictions rather than a single globally deployed solution. The study does not propose a new identity protocol or reference architecture, nor does it evaluate a specific national programme end-to-end; instead, it characterises how stakeholder groups prioritise core values and allocate roles and responsibilities, and considers how these expectations may inform governance and ecosystem design.
Contributions—This study makes three contributions to the wider digital identity environment: (1) It provides an empirically grounded comparison of stakeholder alignment and divergence across privacy, trust, usability, and governance expectations using survey data stratified across government, business, academia, and consumers. (2) It synthesises these findings into a stakeholder-aligned conceptual structure spanning values, stakeholders, roles, and responsibilities, and includes a placeholder artefacts layer (e.g., credentials, wallets, authenticators) to support downstream technical elaboration and the RAG proof-of-concept. The structure is intended to support clearer reasoning about ecosystem readiness and design trade-offs. (3) It demonstrates, via a lightweight proof-of-concept, how the resulting structure can act as metadata for context-layered retrieval-augmented generation (RAG), constraining AI outputs to ecosystem- and role-relevant knowledge boundaries.
Based on the existing literature and the goals of this study, the following overarching research question guides this investigation:
  • Research Question: How do stakeholder perspectives align or diverge regarding privacy, trust, and usability in the design of a universal digital identity ecosystem?
To explore this question in more detail, we examine the following sub-questions:
  • RQ1: How do stakeholder groups differ in their prioritisation of core values (e.g., privacy, trust, usability)?
  • RQ2: How do stakeholder groups understand their roles and responsibilities within the digital identity ecosystem?
  • RQ3: How do levels of trust in digital identity systems vary across stakeholder groups?
To explore these, the study employs a predominantly quantitative approach using an online survey stratified across government, business, academia, and consumers. Interpretation is supplemented through literature triangulation (e.g., published usability case evidence). The survey serves as the primary data collection tool, leveraging the Power versus Interest Grid [56] to categorise stakeholders into government, business, academia, and consumer groups. Recruitment utilised convenience sampling across social media platforms, online forums, and direct engagement with professionals and government programmes. In total, 243 responses were collected and analysed using statistical techniques to assess alignment and divergence in stakeholder perspectives. Details of the methods employed are provided in Section 5. Additionally, Section 5.3 presents a lightweight implementation of a layered RAG architecture using LangChain, demonstrating how the structured stakeholder insights developed through this study can inform AI-supported reasoning in digital identity contexts.
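As an illustration of the kind of alignment analysis such a survey supports, a chi-square test of independence between stakeholder group and top-priority value can be computed without external libraries. The counts below are hypothetical; only the overall total of 243 responses is borrowed from the study:

```python
# Hypothetical counts: rows = stakeholder groups, columns = the value
# each respondent ranked highest (privacy, trust, usability).
observed = {
    "government": [20, 25, 15],
    "business":   [18, 22, 20],
    "academia":   [30, 12, 8],
    "consumer":   [35, 20, 18],
}

def chi_square(table):
    """Pearson chi-square statistic and degrees of freedom for a contingency table."""
    rows = list(table.values())
    row_totals = [sum(row) for row in rows]
    col_totals = [sum(col) for col in zip(*rows)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(rows):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (obs - expected) ** 2 / expected
    dof = (len(rows) - 1) * (len(col_totals) - 1)
    return stat, dof

stat, dof = chi_square(observed)
# Compare stat with the 5% critical value for dof = 6 (about 12.59):
# exceeding it would indicate that group and value priority are associated.
```

In practice, a library routine such as scipy.stats.chi2_contingency would also return the p-value directly.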
By addressing these research questions, this study aims to provide actionable insights into the socio-technical challenges of digital identity systems and foster the development of frameworks that align stakeholder needs with technical feasibility.

1.6. Section Breakdown

In the following section, we analyse the evolution of digital identity, highlighting technological advancements and key driving factors. Section 5, ‘Methodology’, details the data mining techniques and analytical frameworks used in our questionnaire analysis. Section 6, ‘Results/Discussion’, presents our findings, focusing on the alignment or misalignment in digital identity patterns revealed by the data. Section 7, ‘Limitations’, discusses the constraints of our methodology and their potential impact on our findings. Finally, Section 8, ‘Conclusions’, synthesises our insights and reflects on their implications for stakeholder alignment and the feasibility of a unified universal digital identity approach.

2. Background

In this section, we delve into the evolution and background of digital identity, exploring the key factors that have driven significant paradigm shifts over time. We also consider various perspectives and concerns raised by stakeholders in the digital identity arena, recognising potential challenges that could arise in the pursuit of a cohesive unified approach to digital identity.

2.1. Persona

In the realm of identity verification, public key-based certificates have historically been the standard. However, the increasing concern for personal information privacy has led to a demand for a more detailed and precise method. Personas, accompanied by specific credentials, provide a framework where associated certificates support various distinct claims or attributes. This method allows for verification without disclosing extraneous personal information [57].
Clarke introduced the term ‘digital persona’ in 1992 and further explored it in subsequent works [58]. His research highlighted the concept of granularity, referring to the individual elements that collectively construct an individual’s identity. He identified that digital identities were often compiled through data surveillance, typically without the user’s awareness. This early recognition of the potential risks associated with unregulated data collection has been echoed in recent high-profile cases, such as the Facebook and Cambridge Analytica scandal, and has been a focus of the General Data Protection Regulation [59]. These events underscore the necessity of advancing towards more secure and private identity verification methods, such as those offered by personas [60]. Clarke’s contributions laid the groundwork for the contemporary understanding and implementation of digital identities—underlining the significance of personas in this context, and representing a significant step in the development of what is now referred to as digital identity.
Beyond the technical and practical considerations of digital identity, the concept quickly extends into a philosophical debate. While an exhaustive philosophical discussion is beyond the scope of this work, the parallels between Jenkins’ frameworks of similarity and difference in social identity and the global management of digital identities cannot be overlooked.
“Without social identity, there is no human world. Without frameworks of similarity and difference, people would be unable to relate to each other in a consistent and meaningful fashion.” Richard Jenkins [61]
Building upon Clarke’s work, it can be posited that an individual’s digital identity in society is essentially an amalgamation of perceptions formed by their online footprint. This digital footprint comprises all instances where a user has contributed personal information, engaged with online services, or presented digital credentials, such as a vaccine passport. Collectively, these actions contribute to a profile that can either directly or indirectly identify a person. In a similar vein, the digital identity of a device is characterised by a unique identifier coupled with its access patterns. These elements together facilitate the formation of a comprehensive profile of the online entity.

2.2. Digital Identity

While Clarke’s pioneering research laid the groundwork for understanding the complexities and potential risks associated with digital identities, the field continued to evolve with contributions from other experts. Among them was Kim Cameron, a Canadian computer scientist and Microsoft’s Chief Architect of Access, who further advanced this domain. Cameron, known for his formulation of the 7 Laws of Identity, offered a more defined perspective on digital identity, describing it as:
“a set of claims made by one digital subject (e.g., a user) about itself or about another digital subject”. Kim Cameron [14]
This definition highlights the nature of identity in the digital realm, where it is often perceived as a given or partially irrelevant, contrasting sharply with the earlier concerns raised by Clarke.
In their 2014 study, Whitley et al. built upon Cameron’s definition, drawing a crucial distinction between ‘identity’ (an individual’s self-perception) and ‘identification’ (how others recognise an individual) [62]. This delineation helps us to understand the two perspectives of Clarke and Cameron, and lays a crucial foundation for understanding modern digital identity, emphasising the difference between managing one’s digital identity credentials and the act of presenting them to a relying party.
The definitions put forward by Clarke and Cameron both have merit. For the purpose of this paper, we have adopted Cameron’s definition of a set of claims as our working definition for a digital identity because it relates more closely to the fundamental challenge of proof and the underlying technology required to build digital identity infrastructure.
Reflecting this perspective, contemporary approaches to digital identity have evolved beyond simple sign-in credentials. They now embrace a more detailed approach, enabling users to share specific identity elements as needed—such as vaccine status during a pandemic or age verification in a bar—without revealing their entire identity profile. In line with this development, Cameron reasserted the argument that “proof, not identity, is the key to wider adoption” [63].
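This claims-based, minimal-disclosure pattern can be sketched as follows. The profile contents and the `present_claims` and `prove_over` helpers are hypothetical stand-ins for credential protocols, not a real wallet API:

```python
from datetime import date

# A holder's full attribute set; relying parties should never see all of it.
FULL_PROFILE = {
    "name": "A. Sample",
    "date_of_birth": "1990-05-01",
    "vaccine_status": "complete",
    "nationality": "GB",
}

def present_claims(profile, requested):
    """Disclose only the claims a relying party explicitly asks for."""
    missing = set(requested) - set(profile)
    if missing:
        raise KeyError(f"holder cannot satisfy claims: {sorted(missing)}")
    return {key: profile[key] for key in requested}

def prove_over(profile, years, today=date(2026, 1, 14)):
    """Answer an age predicate without revealing the date of birth itself."""
    born = date.fromisoformat(profile["date_of_birth"])
    age = today.year - born.year - ((today.month, today.day) < (born.month, born.day))
    return age >= years

# A venue checks vaccine status only; a bar learns just a yes/no age answer.
health_presentation = present_claims(FULL_PROFILE, ["vaccine_status"])
is_adult = prove_over(FULL_PROFILE, 18)
```

Production systems achieve the same effect cryptographically, for example via selective-disclosure credentials or zero-knowledge age predicates.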

2.3. Distributed Networking

The vision asserted by Cameron, of a distributed network of proofs, represents a paradigm shift from traditional identity authentication methods. These conventional methods associate an entity with an identifier, aiming to validate if a user is the same individual who previously accessed a system. This approach has been a staple in most global authentication schemes. However, the rise of social media and advanced data-sharing practices is emphasising the need for digital identities that more closely resemble a person’s real identity. This shift aims to facilitate seamless access across multiple systems without the burden of managing separate usernames and passwords for each one.
Beyond the ambiguity of definition, a key issue with digital identities is the diverse nature and number of implemented access management frameworks, which deal with a user’s digital identity in very different ways. This lack of uniformity acts as a barrier to information sharing and exposes users to cyberstalking and data-mining risks, as their behaviours can be monitored.
In a traditional implementation of credential-based authentication, centralised identity involves an authority, such as a government or corporation, administering login credentials from an identity provider service for users of its systems. This model creates some “separation-of-concern” between the service provider and identity provider services. However, it has limited scalability because each domain must have separate credentials.
The federated identity model solves the scalability issues by liberating the identity provider service from the boundaries of a single domain and promotes cross-domain support for authentication through the sharing of credentials between multiple identity provider services. At a technical level, federation can be defined as the set of agreements, standards and technologies that enable service providers to establish a collective trust domain where multiple service providers can share identities [55]. A large number of technologies have been devoted to federated identity, including the Security Assertion Markup Language (SAML), OpenID and the OAuth authorisation framework [64]. At a functional level, federation allows a user to employ a single credential to access multiple services across multiple domains.
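The functional behaviour of federation—one credential honoured across independently operated services—can be simulated in a toy form. The HMAC over a shared key below merely stands in for the federation’s trust agreements; real deployments use SAML assertions or OAuth/OpenID Connect tokens:

```python
import hashlib
import hmac
import json

# Stand-in for the legal and technical agreements of the trust domain.
FEDERATION_KEY = b"shared-trust-domain-key"

def idp_issue(user_id):
    """Identity provider signs an assertion once for the whole federation."""
    payload = json.dumps({"sub": user_id}).encode()
    signature = hmac.new(FEDERATION_KEY, payload, hashlib.sha256).hexdigest()
    return payload, signature

def sp_accept(payload, signature):
    """Any service provider in the trust domain can verify the same token."""
    expected = hmac.new(FEDERATION_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# One credential, checked identically by any number of service providers.
token = idp_issue("alice")
```

The design point is that verification depends only on membership in the trust domain, not on which service provider performs the check.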
A limitation of the centralised and federated identity models is that users are not central to the model. Their exclusion from important steps in the transaction flow means they cannot take part in the protection of their own identities. Accordingly, the user-centric paradigm repositions users in the middle of the transaction, between identity providers and relying parties [55]. This affords users a greater level of control and the opportunity to determine what information is shared and with whom.
OpenID, initially celebrated as both a user-centric and federated identity management solution, encountered numerous usability issues that hindered its effectiveness in truly centring the user experience. Supported by platforms like Google, OpenID was designed to simplify logins across various services [65,66]; however, its usability challenges led to a reevaluation of its user-centric claim. In contrast, platforms such as Facebook and Twitter have primarily utilised OAuth [67,68] for similar purposes, focusing on streamlining the authentication process. The development of OpenID Connect, which layers OpenID and OAuth together, represents an evolution aimed at addressing OpenID’s limitations, an approach that Google has adopted. Although their global success varies, digital wallet solutions have also been widely adopted in certain regions [53]. Challenges such as consumer trust and technological infrastructure readiness play significant roles in their adoption rates, aside from any inherent technological complexity.

2.4. De-Perimeterised Security

This transition towards a distributed topology for digital identity aligns with the concept of de-perimeterisation—the disappearance of boundaries between systems and organisations, which are becoming connected and fragmented at the same time [69]. Pieters et al. argue that the primary challenge posed by de-perimeterisation lies in the reorganisation of security protocols. In the context of digital identity, this concept signifies a shift from the traditional perimeter-based security model towards a more flexible approach, which is better suited to the dynamic nature of contemporary digital interactions. As distributed digital identities emerge, facilitating continuous cross-network and cross-border interactions, the traditional method of securing a defined network boundary becomes increasingly obsolete.
Instead, this new approach advocates for identity-centric security measures, focusing on authentication and authorisation irrespective of the entity’s location. This change is a direct response to the intricacies of our interconnected digital landscape, where securing individual identities and their data takes precedence over protecting traditional network perimeters. Such a strategy becomes even more pertinent in the context of today’s widespread remote work, mobile device usage, and cloud computing. Together, Cameron’s vision and the principle of de-perimeterisation reflect a comprehensive rethinking of identity verification and security, adapting to the dynamic needs of the digital era.
In 2011, the Jericho Forum—an international group working to define and promote de-perimeterisation—released a white paper titled ‘Identity Commandments’, which defined the principles that must be observed when planning an identity ecosystem. They stated that, in the digital world, the identity of a user extends beyond just people to all five entity types—people, organisations, computing devices, code (including self-protecting data), and agents [70]. This classification is extended by the Global Identity Foundation in its work towards developing a global identity solution (Global Identity Foundation website: https://www.globalidentityfoundation.org/).

2.5. Self-Sovereign Identity

While user-centric identity is a proactive step for those users looking to retain control of their digital identity, a number of fundamental limitations still exist. The reliance on one or more identity providers means that the scalability of the federated network remains limited. Additionally, users who wish to participate in a user-centric federated network (e.g., Google or Facebook) must adhere to the rules of the providing company. Furthermore, service providers such as banks may be uncomfortable entrusting (for example) social media providers with security access policies [11].
Concerns over conflicting service provider security requirements and policies have driven the need for “self-sovereign identity” (SSI)—a decentralised approach that enables individuals to exert full control over how much of their true identity is exposed via their digital identity. Any participant in a self-sovereign network should adhere to a number of key principles. All data held against an identity should only be shared by user consent and any shared data should be the minimum necessary to conduct a transaction. Information on how data is accessed must be transparent to the owner. The network must also be transparent, both in terms of how it works and how it is managed [71].
Each identity should be managed by a single entity, which must have safeguards in place to ensure the identity remains safe regardless of the status of the managing company. Identities should be fully portable from one provider to another, and all related technologies should be open source, a requirement potentially complicated by the large number of patents being assembled by the major players in the market.
Overall, the goal of the network is for credentials to be portable across the widest possible range of participating systems. This means any network must support existing international identification systems and provide a framework for conflict management while maintaining user privacy [72]. Failure to do so will effectively introduce barriers to adoption.
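The consent and minimal-disclosure principles outlined above can be illustrated with a minimal sketch; the attribute names and the `disclose` helper are hypothetical, not drawn from any SSI standard:

```python
# Hypothetical attribute store for a single user; names and values are
# illustrative only.
IDENTITY = {
    "name": "Alice Example",
    "dob": "1990-05-01",
    "over_18": True,
}

def disclose(requested: set, consented: set) -> dict:
    """Release only attributes that are both requested by the relying party
    and explicitly consented to by the user (data minimisation)."""
    return {k: IDENTITY[k] for k in requested & consented if k in IDENTITY}

# A relying party asks for more than it needs; only the consented,
# necessary attribute is released.
shared = disclose({"name", "dob", "over_18"}, {"over_18"})
```

The intersection of the requested and consented sets enforces both principles at once: nothing leaves the wallet that the user did not approve, and nothing approved-but-unrequested is volunteered.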
It has been suggested that achieving this objective could be effectively supported by decentralised identity infrastructure, such as distributed ledger technology (DLT). DLT enables parties to utilise DLT-based identifiers to present claims, thus proving identity without the direct involvement of an identity provider during the presentation phase [73]. However, it is important to note that DLT is not the sole method for implementing such self-sovereign identity (SSI) solutions. Recent developments by the OpenID Foundation have introduced protocols like “OpenID for verifiable credential issuance and presentations”, which also support SSI by separating the process into two distinct phases: issuance and presentation. This separation ensures the identity provider is not involved in the presentation phase, thereby enhancing user autonomy and privacy [74,75].
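The separation of issuance and presentation can be sketched as follows. This is a deliberately simplified illustration using a symmetric key; real SSI deployments use asymmetric signatures, DIDs, and standardised credential formats:

```python
import hmac
import hashlib
import json

ISSUER_KEY = b"issuer-secret"  # stand-in for the issuer's signing key

def issue(claims: dict) -> dict:
    """Issuance phase: the issuer signs the claims and hands the
    credential to the holder."""
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def verify(credential: dict, issuer_key: bytes) -> bool:
    """Presentation phase: the verifier checks the signature locally,
    without contacting the issuer."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

held = issue({"over_18": True})   # issuer involved once, up front
ok = verify(held, ISSUER_KEY)     # later presentation: issuer is absent
```

The point of the two-phase split is visible in the call pattern: the issuer participates only in `issue`, so the holder can present the credential repeatedly without the issuer learning where or when it is used.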

2.6. Standardisation

The shift towards a more integrated digital identity system necessitates a high degree of standardisation [46]. Just as SMTP standardised email communication and HTTP provided a universal framework for the web, a universally accepted standard for digital identity is crucial. Standards would not only ensure interoperability and uniformity across various digital platforms and regions, but would also facilitate critical principles. These include widespread adoption, inclusiveness, adherence to established standards, robust security, and government recognition.
Toth and Anderson-Priddy asserted that Cameron’s vision of a proofing network of cryptographically verified claims was, at least partially, realised by the W3C’s Verifiable Claims Working Group (VCWG). The group launched a project to develop a standardised, machine-readable identity schema, promoting interoperability amongst global information systems and their access management systems (see Figure 1), as well as the Internet community at large [71].
W3C describes a verifiable credential as the digital equivalent of a physical credential, with the difference that digital technologies make verifiable credentials tamper-evident and therefore more trustworthy than their physical counterparts. What follows is a short list of likely information that credentials may carry, as defined by the W3C.
  • Information related to identifying the subject of the credential.
  • Information related to the issuing authority.
  • Information related to the type of credential.
  • Evidence related to how the credential was derived.
  • Information related to constraints on the credential.
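As an illustration, these categories map naturally onto the general shape of a credential in the W3C data model. The structure below is simplified and the field values are hypothetical:

```python
# Simplified, hypothetical credential loosely following the W3C
# Verifiable Credentials data model; all values are illustrative only.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    # Information related to the type of credential
    "type": ["VerifiableCredential", "UniversityDegreeCredential"],
    # Information related to the issuing authority
    "issuer": "https://example.edu/issuers/565049",
    # Information related to identifying the subject of the credential
    "credentialSubject": {
        "id": "did:example:ebfeb1f712ebc6f1c276e12ec21",
        "degree": {"type": "BachelorDegree"},
    },
    # Evidence related to how the credential was derived
    "evidence": [{"type": ["DocumentVerification"]}],
    # Information related to constraints on the credential
    "issuanceDate": "2024-01-01T00:00:00Z",
    "expirationDate": "2029-01-01T00:00:00Z",
}
```

Because the schema is machine-readable, a verifier can inspect each category (issuer, subject, evidence, constraints) programmatically rather than relying on visual inspection of a physical document.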
The need for universal digital identities is also underscored by initiatives like the W3C’s “Web of Things”, which envisions devices having structured digital twins, achievable only if each entity possesses a digital identity [76]. As this concept evolves, focus will shift towards resolving architectural challenges such as heterogeneity, scalability and usability. The development of a distributed digital identity infrastructure is essential to address these issues and facilitate widespread adoption.
These initiatives, however, represent just the start of the broader standardisation challenge, which can be appreciated when considering Hasselbring’s three integration dimensions: Autonomy, Distribution and Heterogeneity [4].
Under Autonomy, standardising user consent mechanisms and identity verification methods is crucial for independent yet cohesive operation across platforms. In the Distribution dimension, data encryption standards and cross-border data transfer protocols need uniformity to secure and facilitate identity portability globally. For Heterogeneity, consistent guidelines for handling and storing sensitive personal data, together with standardised processes for identity recovery and revocation, are essential to manage the diverse systems and technologies involved in digital identities.

2.7. National Digital Identity Programmes

While not without challenges, digital identities usable across domains and geographic divides offer clear benefits to society, and there is evidence that governments agree. The Digital Identity Working Group (DIWG), which involves eight member states (Australia, Canada, Finland, New Zealand, Israel, Singapore, the Netherlands, and the United Kingdom), and the European identity standard eIDAS, which involves more than 30 countries, are two such examples.
The eIDAS regulation in Europe, which stands for Electronic Identification, Authentication and Trust Services, provides a clear framework for electronic identification and trust services for electronic transactions [77,78]. Its widespread adoption across European Union member states exemplifies how a unified digital identity standard can coexist with varied national implementations. This level of interoperability facilitates secure and seamless cross-border electronic transactions, fostering trust and efficiency in the digital single market of the EU.
However, the idea of replacing passwords with government-issued electronic IDs has sparked considerable debate, especially among privacy advocates [79]. The fundamental challenge, therefore, lies in developing trusted intermediate digital identity infrastructure that supports a diverse set of identity approaches and corresponding digital identity-based transactions without tipping the scales towards invasive surveillance or eroding individual privacy.
The UK’s national identity card scheme, a costly initiative launched a decade ago and since abandoned, is still reflected in ongoing public distrust of national identity initiatives [80]. Rising social concerns regarding data privacy and security represent a threat to future digital identity initiatives that cannot be ignored.
A post-mortem report delved into the pressing need for improved digital identity solutions in the UK—highlighting the challenges faced across various industries, such as healthcare, banking and travel, where reliable remote customer identification remains a key issue [81]. Despite the emergence of innovative, sector-specific solutions, the UK lacks a cohesive digital identity infrastructure, leading to fragmented markets and rising identity fraud, which is projected to cost billions.
Legal, regulatory and operational hurdles, including data protection laws and anti-money-laundering legislation, further complicate the landscape. The report underscores the potential economic benefits of a robust digital identity system, which could significantly reduce fraud and operational costs, with the potential to add substantial value to the economy.
The report advocates for further research to understand the costs and models of potential schemes, drawing insights from international experiences where similar challenges have been successfully addressed. It suggests that learning from these global examples could guide the UK in overcoming its inertia and fragmented approach to digital identity—ultimately contributing to a more secure and efficient future.
The report provides a strong basis for a questionnaire on stakeholder alignment in national digital identity programmes due to its comprehensive coverage of legal, regulatory and operational challenges in digital identity. It concludes with key success factors (outlined in Table 1) for implementing national identity schemes, offering a focused framework for the questionnaire. This ensures the questionnaire is relevant and addresses key aspects of digital identity initiatives. Additionally, the report’s balance between the challenges and potential economic benefits of digital identities, including issues like privacy and security, makes it especially pertinent for understanding current stakeholder perspectives and concerns.

2.8. Towards Universal Digital Identity

Considering the benefits and challenges of digital identity ecosystems, the primary aim of a universal digital identity system is to streamline and enhance interactions with digital services for users in a way that is both standardised and supportive of diverse implementation approaches. To accomplish this, a collaborative effort is required among diverse stakeholders who must work together to establish a framework that is not only reliable and trustworthy, but also prioritises privacy and accessibility, making digital spaces more user-friendly and secure.

3. Literature Review

A universal digital identity ecosystem is inherently complex, involving numerous subsystems, technologies, regulations, policies, and stakeholders. This complexity arises from the need to integrate diverse technical components with societal needs and regulatory frameworks, ensuring security, privacy, and usability.
To identify and test key concerns central to constructing a universal digital identity ecosystem, we apply socio-technical systems theory to locate research gaps within the academic literature and to analyse the interplay between technological and social elements, highlighting areas where current research may be lacking.
In this section, we introduce and discuss socio-technical systems theory, establish our socio-technical dimensions, conduct a literature review for those dimensions and overlapping areas, and highlight where the research gap toward a universal digital identity ecosystem lies.

3.1. Applying a Socio-Technical Lens

Socio-technical systems (STS) theory is a cross-disciplinary approach to exploring the interconnectedness and mutual development of social and technical elements within an organisational or systemic environment. Seminal work by Trist discusses the importance of leveraging these perspectives to ensure systems are effective and aligned with the broader societal context in which they operate [82].
Alter asserts that describing and evaluating socio-technical systems along multiple dimensions of integration can provide crucial insights for improving outcomes for all stakeholders [83]. The concept of a universal digital identity embodies a complex socio-technical system that requires a nuanced understanding of its various dimensions and their interactions.
There are many ways to categorise socio-technical dimensions; in line with Trist’s original research, this study proposes the following critical dimensions underpinning digital identity research: (1) Digital Identity (Technology); (2) Context (Environment + Integration); and (3) Stakeholders (People + Organisational entities). Each dimension plays a vital role in shaping the development, implementation and adoption of digital identity systems, and their overlaps highlight the interplay between technical capabilities and social dynamics.
In this paper, the context dimension is treated not only as the technical deployment environment, but also as an institutional and legal setting (e.g., regulation, compliance expectations, and accountability structures) that both constrains and is reshaped by technical design choices. This enables explicit consideration of how technical mechanisms, social adoption dynamics, and legal requirements co-evolve in digital identity ecosystems. With this established, we discuss each of the individual dimensions.

3.2. Dimensions

The literature on digital identity ecosystems is extensive, with significant contributions addressing the core dimensions of digital identity, context and stakeholders—as shown in Figure 2.
Digital Identity—beyond the research cited earlier in this section, digital identity is well covered by work that frequently explores fundamental concepts and components, including authentication methods [68], multifactor authentication [84,85], attributes and credentials [20,86], emerging technology such as biometrics [87], blockchain [88,89] and artificial intelligence (AI) [90], as well as security considerations such as encryption [91,92].
These emerging technologies and others are frequently proposed as enabling components in digital identity ecosystems, but each introduces distinct socio-technical risks that can affect privacy, trust, and inclusion. In particular, AI is playing an increasingly functional role in digital identity systems, particularly in areas such as biometric verification, behavioural analytics, and risk-based authentication [16]. While these applications can increase efficiency and scalability, they also raise concerns regarding transparency, potential bias, and the extent to which individuals retain agency over identity-related decisions [93]. Biometric authentication can reduce reliance on passwords and support convenient access, but biometric systems can exhibit demographic performance differences and non-trivial failure modes, with implications for equitable access [94]. Blockchain-based and decentralised approaches (e.g., DID/VC ecosystems) can support integrity and auditability of credential-related records, but may also raise privacy and governance concerns if immutable logs or linkable metadata are not carefully constrained [95,96].
In this context, semantic structures such as ontologies and knowledge graphs are increasingly used to support large language models and other AI tools. These structures provide a formal representation of key concepts, relationships, and constraints, which can help reduce hallucinations and ensure that AI outputs remain grounded in domain-relevant logic [17]. This study contributes to this effort by offering a stakeholder-layered conceptual structure that can support semantically constrained AI responses, especially when operationalised using retrieval-augmented generation frameworks. For digital identity systems, this highlights the importance of developing well-structured and principled conceptual models, as these form the basis for both human governance and machine interpretation of identity-related information.
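As a minimal sketch of this idea (the corpus content, layer names, and `retrieve` helper are hypothetical), stakeholder layers can act as retrieval metadata that constrains which context a RAG pipeline may draw on:

```python
# Hypothetical corpus: each passage is tagged with the stakeholder layer
# it represents (e.g., government, business, academia, consumer).
CORPUS = [
    {"layer": "government",
     "text": "eIDAS requires accredited providers for cross-border identity services."},
    {"layer": "consumer",
     "text": "Users expect consent prompts before identity data is shared."},
    {"layer": "business",
     "text": "Banks require audited verification for remote customer onboarding."},
]

def retrieve(query: str, layer: str, k: int = 1) -> list:
    """Constrain retrieval to one stakeholder layer, then rank the
    remaining passages by naive term overlap with the query."""
    terms = set(query.lower().split())
    candidates = [d for d in CORPUS if d["layer"] == layer]
    candidates.sort(key=lambda d: len(terms & set(d["text"].lower().split())),
                    reverse=True)
    return candidates[:k]
```

Filtering on the layer tag before ranking ensures that generated answers are grounded in the perspective relevant to the query context; a production system would substitute embedding-based similarity for the term-overlap ranking shown here.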
These developments further emphasise that digital identity systems are not purely technical artefacts, but socio-technical constructs in which institutional assumptions, stakeholder values, and power asymmetries are embedded. As such, concerns around bias, opacity, and loss of user control are not isolated technical risks—they are directly linked to broader issues of trust, fairness, and legitimacy [15] that must be addressed through governance, design, and stakeholder alignment.
These considerations underscore the role of identity management systems not only as technical implementations but as sites where institutional assumptions, stakeholder roles, and societal values are encoded and operationalised.
Studies in these areas often emphasise identity management systems [97], examining how digital identities are created, managed and utilised across various platforms, and focusing strongly on security protocols [98] to prevent identity theft and fraud. However, comprehensive studies on integrating emerging technologies into digital identity systems remain scarce, with the notable exception of Palfrey and Gasser’s work on Interoperability and eInnovation [99]. Detailed analyses of interoperability between different digital identity platforms, and user-centric studies that focus on end-user experience and usability of digital identity technology, are otherwise lacking.
Context—this dimension is well covered by research that explores legal and regulatory frameworks governing digital identities, such as the GDPR [59] in Europe and various national data protection laws [7]. Technological context research delves into the infrastructure required for digital identity systems [8,63], including network security, cloud services [100], and digital certificates [101,102]. Socio-economic studies investigate how digital identity systems can bridge access gaps in different economic settings [103,104]. Nonetheless, there is a lack of comparative studies across jurisdictions (see Table 2) to understand the global landscape of digital identity regulations, in-depth research into the impact of cultural factors on the adoption and use of digital identities, and longitudinal studies that track the evolution of digital identity regulations and technologies over time.
To partially address this gap and to situate the present study’s policy-relevant questions (e.g., Q23–Q24) within real-world regulatory variation, we summarise several illustrative approaches. In the European Union, the updated eIDAS framework has been extended through the European Digital Identity Regulation, which introduces EU Digital Identity Wallets and strengthens cross-border interoperability expectations [105]. In the United Kingdom, the UK Digital Identity and Attributes Trust Framework specifies rules and standards against which providers can be independently certified [106]. New Zealand has enacted a statutory trust framework (Digital Identity Services Trust Framework Act 2023) supported by implementing regulations, establishing governance arrangements and requirements for trusted digital identity services [107,108]. Australia has legislated an economy-wide accreditation approach under the Digital ID Act 2024, building on prior trust framework work (TDIF) [109,110]. These regimes differ in scope and institutional design, but collectively illustrate common regulatory levers relevant to universal ecosystem discussions: accreditation/certification of participants, defined role obligations, oversight and auditability, and alignment with privacy and data protection norms.
Stakeholder—focused research identifies the roles and perspectives of various groups, including developers [111], governments, private sector entities [112], NGOs and end-users [113], often focusing on their needs and concerns, particularly regarding privacy [114], security [39], and trust [29,115]. The literature also extensively covers the role of governments [116] and regulatory bodies in shaping digital identity policies [117].
Despite a considerable body of work that discusses digital identity stakeholders, gaps remain in research on the differences between conventional societal groupings and distinct stakeholders of digital identity technology. Additionally, collaboration and conflicts between different stakeholder groups, analysis of the business models and economic incentives for private sector participation, and detailed case studies of successful stakeholder engagement and public–private partnerships in digital identity projects are all in need of further investigation.

Dimensional Overlap

At the intersections of these dimensions, there is even less research available.
Integration—Studies exploring the overlap between digital identity and context [112,118,119] investigate how technologies comply with legal and regulatory standards, and the technical requirements for ensuring secure and interoperable digital identities across different environments. However, there is a need for more case studies on country-specific implementations, research on the challenges of maintaining compliance in rapidly evolving technological contexts, and analyses of the many trade-offs—such as security and usability [19]—in various regulatory environments.
Solutions—Research at the intersection of digital identity and stakeholders addresses the usability and accessibility of digital identity systems, considering diverse user needs and adoption rates [35,120]. Nonetheless, more detailed user experience studies that identify specific pain points, research on the inclusivity of digital identity systems for marginalised populations, and analyses of how stakeholder feedback is incorporated into system design are needed.
Interests—Finally, studies at the intersection of context and stakeholders examine the impact of regulatory, cultural and socio-economic factors on stakeholder engagement and trust, emphasising transparency and accountability [121,122]. Despite this, comparative studies on cultural influences on stakeholder trust, research on the role of public awareness campaigns in fostering trust and adoption, and analyses of the socio-economic impacts of digital identity systems on different population segments are still lacking.

4. Research Framework

In the previous section, we explored background information on digital identity ecosystems and relevant academic literature, emphasising the domain’s inherent complexity. This complexity often renders generalised research frameworks for technology design and acceptance insufficient for addressing the unique challenges posed by digital identity systems. Fischer, for example, emphasises the distinct nature of socio-technical systems, noting that while technical systems are designed to support human needs and enhance capabilities, social systems are inherently dynamic and evolve. Designing digital identity solutions that remain both relevant and practical, therefore, requires a framework sensitive to these socio-technical dynamics [123].
Key areas of concern within a digital identity ecosystem include privacy, security, trust, and the delicate balance between user convenience and control. Layered onto these are the complexities of stakeholder diversity and the need for an adaptable ecosystem structure. Given these multi-dimensional challenges—spanning ethics, trust, stakeholder diversity, and usability—no single established research framework perfectly fits the study’s needs. Furthermore, traditional frameworks rarely consider how socio-technical structures can be used as substrates for AI systems that must reason across ethical, contextual, and institutional boundaries. In this study, such a socio-technical structure is explored not only for conceptual alignment but also as a practical scaffolding for AI-assisted reasoning, as demonstrated through a prototype layered RAG implementation.
While the Technology Acceptance Model (TAM) provides insights into user acceptance, it does not fully address how diverse stakeholder groups might hold divergent values or how trust influences overall system alignment [124]. Community-Based Participatory Research (CBPR) ensures community involvement, but its scalability issues limit broader comparative analyses needed for capturing multiple stakeholder perspectives [125]. Constructive Technology Assessment (CTA) is anticipatory but may not easily accommodate iterative trust assessments [126]. In contrast, Value-Sensitive Design (VSD) provides a multi-stage, iterative process that supports careful investigation of stakeholder values, clarifies roles and responsibilities, and can accommodate evolving trust dynamics over time.
VSD is a methodological approach that integrates human values into technology research and design. By emphasising stakeholder engagement, ethical considerations, and alignment with societal priorities such as privacy, trust, and usability, VSD is well-suited to address the study’s needs [127]. Its three stages—conceptual investigation, empirical investigation, and technical investigation—enable a comprehensive exploration of stakeholder values and the tensions between them. The iterative nature of VSD supports continuous refinement, ensuring that digital identity systems can evolve alongside changing socio-technical conditions [128].
Despite its strengths, VSD has known limitations, including challenges in identifying universal values, cultural biases, and its resource-intensive nature. These critiques, detailed by Davis and Nathan, highlight the importance of adaptive methodologies and flexible implementation [129]. Acknowledging these challenges from the outset allows for strategies to mitigate them, such as phased stakeholder engagement and culturally diverse advisory inputs.
By positioning VSD within an STS lens as the guiding research framework, this study acknowledges that digital identity solutions must consider both technical and social elements as intertwined. This perspective directly supports the research question—“How do stakeholder perspectives align or diverge regarding privacy, trust, and usability in the design of a universal digital identity ecosystem?”—and provides a methodological basis for examining the associated research questions. Specifically, RQ1 addresses how stakeholders prioritise core values such as privacy, trust, and usability differently; RQ2 explores whether stakeholder groups diverge in their understanding of roles and responsibilities; and RQ3 investigates how levels of stakeholder trust in digital identity infrastructure align, potentially impacting solution design.
Taken together, the STS dimensions operationalise the technological (identity mechanisms and infrastructure), social (stakeholder roles, trust, usability, and education), and legal/institutional (regulation, governance, and liability) aspects of digital identity ecosystems. The relationships between these aspects are reciprocal: legal constraints shape permissible data flows and architectural choices; technical affordances (e.g., decentralised credentials or automated risk scoring) alter governance options and accountability burdens; and social acceptance and trust condition whether legal and technical arrangements are perceived as legitimate and adoptable.
VSD provides a structured pathway for developing systems in light of these relationships. The conceptual investigation identifies stakeholders, values, and normative constraints; the empirical investigation examines how stakeholder groups prioritise values and allocate responsibilities; and the technical investigation (beyond the scope of this pilot) translates the resulting value requirements and observed tensions into concrete system features and governance controls. In this way, VSD is used not only as an interpretive lens, but as a disciplined mechanism for linking social expectations and legal obligations to technological design decisions.
In practice, this means using VSD’s conceptual phase to identify key values and anticipated stakeholder roles, thereby testing the foundational assumptions of RQ1 and RQ2. VSD’s empirical phase can employ both qualitative and quantitative methods; in this cycle, we implement a quantitative online survey stratified across stakeholder groups, and we supplement interpretation through literature triangulation (e.g., published usability case evidence) rather than additional primary qualitative data collection. Although this cycle does not implement a full technical investigation stage, we include a minimal prototype demonstration to illustrate how stakeholder-aligned layers can be operationalised for context-layered RAG.
By iterating between conceptual insights and empirical evidence, VSD accommodates the complexities inherent in the research questions, allowing for adjustments and refinements as new insights emerge. This iterative, value-oriented approach ensures that the eventual system design can adapt to cultural differences, ethical considerations, and stakeholder feedback. In doing so, VSD not only guides the study’s design and methodology but also positions its findings to inform the responsible development of future digital identity systems aligned with stakeholder values, roles, and trust conditions—ultimately addressing the research questions.
While this study implements only a single cycle of conceptual and empirical investigations due to practical constraints, the VSD framework readily supports additional iterations. Future research can build upon these initial findings, conducting further iterative cycles to continually refine and validate the alignment of stakeholder values with system design. In this way, the flexible and adaptive nature of VSD remains available for ongoing improvement, ensuring the evolution of digital identity solutions remains aligned with societal needs.

5. Methodology

In the previous sections, we explored the socio-technical context of digital identity solutions, introduced our overarching research questions, and established VSD as our guiding research framework. Building on this foundation, this section delves deeper into our methodology, detailing the specific approaches used to address those questions.
These methods are designed to provide a comprehensive understanding of how clearly a universal digital identity is defined, the degree of stakeholder alignment regarding roles and responsibilities, and the level of trust within and between stakeholder groups. Such insights are critical for identifying gaps, potential conflicts, and trust barriers that could impede effective deployment and widespread adoption of digital identity systems.
In pursuit of a universal digital identity approach, our aim was to capture and analyse a broad range of stakeholder perspectives, stratified across four identified stakeholder groups. The methods employed to achieve this goal are outlined below.
To thoroughly address our research questions, we utilised the VSD framework, focusing on the first two stages—conceptual investigation and empirical investigation. The third stage, technical investigation, which involves the development of specific technologies, lies beyond the scope of this study; however, a minimal proof-of-concept prototype is included (Section 5.3), albeit without formal evaluation.

5.1. Stage 1—Conceptual Investigation

The first stage of the VSD investigative process (see Table 3)—the conceptual investigation—is centred on identifying and understanding the values relevant to the digital identity context. This stage is essential in establishing a foundation for ethical and value-driven technology development. We start by identifying stakeholders within the digital identity ecosystem. Next, we outline the core values that will guide our research. Finally, we address how to manage potential value tensions that may arise.

5.1.1. Stakeholder Identification

Aligning with VSD principles, it is crucial to identify and engage with stakeholders who can provide diverse points of view and contribute to a comprehensive understanding of the values that should guide the development of digital identity systems. In this study, we strike a balance between time and resource constraints and the value of diverse stakeholder perspectives by conducting a stakeholder analysis and grouping stakeholders into a smaller number of categories.
Stakeholders and Planning
In the context of a digital identity ecosystem, a stakeholder can be broadly defined as any individual, group or organisation that can influence or is influenced by the development, implementation and outcomes of digital identity systems. This definition aligns with Freeman’s classic definition [130], in which he describes a stakeholder as “any group or individual who can affect or is affected by the achievement of the organization’s objectives”.
Expanding on this, many scholarly articles [131,132,133] advocate for a wider inclusion of parties who are affected by or can affect strategy, emphasising the importance of considering even nominally powerless groups. This broader view aligns with democratic and social justice principles, acknowledging the relevance of all parties affected by digital identity systems, regardless of their direct power or influence.
Conversely, Eden and Ackermann [56] offer a narrower perspective, limiting stakeholders to those with the power to invoke strategic change. However, given the varied impacts and broad reach of digital identity ecosystems, an inclusive approach to stakeholder identification and analysis, as suggested by Lewis [134] and others, seems more appropriate in ensuring a comprehensive and ethical consideration of all impacted parties.
Stakeholder Analysis Importance
Historically, disregarding stakeholder interests and insights has led to a multitude of adverse outcomes, as evidenced by a range of studies [135,136,137,138,139].
In the realm of digital identity, the lack of thorough stakeholder analysis risks creating disjointed ecosystems, analogous to the global inconsistencies in traffic norms (right- vs. left-side driving) and measurement systems (metric vs. imperial). Such disparities exemplify how differing standards and practices can lead to systems that do not operate cohesively.
In the context of digital identity, this lack of uniformity could result in significant challenges in cross-border interoperability, data standardisation and system integration, making it problematic to establish a seamless and efficient universal digital identity framework.
Different geographical regions may show reluctance in adopting interoperable digital identity infrastructure for reasons such as sovereignty concerns, diverse legal and regulatory environments, and varied cultural attitudes towards privacy and data security. Regions often prefer to have autonomous control over their digital identity systems to ensure alignment with local standards and laws. However, on the international front, there is a discernible shift towards interoperability. Initiatives like eIDAS in the European Union are leading this movement by facilitating secure and seamless electronic interactions across borders. Moreover, the increasing focus on digital onboarding is also catalysing unified approaches to online services. This trend towards digital onboarding emphasises the need for standardised, cross-border digital identity solutions, further underscoring the benefits of interoperability such as enhanced security, user convenience, and the potential for expanded global collaboration and economic development.
Stakeholder Groups
By incorporating and analysing data from different stakeholder groups, the study ensures a comprehensive understanding of the socio-technical environment. This multi-perspective approach helps to identify varying needs and expectations, which is a core aspect of VSD. The use of quantitative methods to analyse these perspectives ensures that the insights are robust and can guide the development process effectively.
Based on Eden and Ackermann’s research [56], we split universal digital identity ecosystem stakeholders into four quadrants using the “Power versus Interest Grid”. This grid is a strategic tool that categorises stakeholders based on two key dimensions: their level of interest in the digital identity ecosystem and their power to influence it. In this context, interest is interpreted in the Eden and Ackermann sense as the degree of sustained engagement in shaping the initiative (e.g., ongoing participation in governance and decision-making), rather than the extent to which a stakeholder is affected by digital identity outcomes. Stakeholders can be divided into the following quadrants:
Government (High Power, High Interest—Players): Government bodies often have significant power and a high level of interest in the digital identity ecosystem. They are key players because they not only regulate and set policies, but also have a vested interest in the security, privacy and efficiency of these systems for public services. Their decisions and regulations can have a profound impact on how digital identity systems are developed and used.
Business (High Power, Low Interest—Context Setters): Businesses, particularly those not directly in the digital identity sector, might have high power due to their economic influence, but a relatively lower direct interest in the day-to-day operations of digital identity systems. They are context setters as they can influence the ecosystem through their market choices and requirements—yet their primary focus might be on how these systems impact their operations, customer relations and compliance requirements.
Academia (Low Power, High Interest—Subjects): Academic institutions and researchers often have a strong interest in the digital identity ecosystem, focusing on areas like technological innovation, policy implications and societal impact. However, their direct power to influence the practical development and implementation of these systems is comparatively limited. They are critical for providing research, insights and innovation, but usually do not have direct control over industry practices or policies.
Consumer (Low Power, Contingent Interest—Crowd): Consumers are the end-users of digital identity systems and typically have limited individual power to directly shape ecosystem architecture or policy. However, their interest should be treated as contingent rather than uniformly low: engagement is often episodic and can increase sharply when identity arrangements affect privacy, access to essential services, surveillance concerns, or day-to-day usability. Consistent with this interpretation, the survey results show that consumer respondents rate consumer involvement in vision-setting and in standards/rules/regulation-setting above neutral (see the Vision and Design alignment tables), indicating non-trivial interest when framed as governance participation rather than technical delivery.
This quadrant division provides a practical engagement heuristic, while recognising that consumer interest may be latent and becomes salient under specific conditions; accordingly, effective governance should include explicit mechanisms for consumer consultation and feedback.
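The quadrant assignment above can be expressed as a simple lookup, which is useful if stakeholder category is later carried as analytic metadata. The following Python sketch is illustrative only; the dictionary shapes and function name are our own, not part of the study instrument.

```python
# Illustrative only: the Power versus Interest Grid as a lookup.
QUADRANTS = {
    ("high", "high"): "Players",
    ("high", "low"): "Context Setters",
    ("low", "high"): "Subjects",
    ("low", "low"): "Crowd",
}

# Consumer interest is treated as contingent in the text; "low" here is
# the grid's default placement only.
STAKEHOLDERS = {
    "Government": ("high", "high"),
    "Business": ("high", "low"),
    "Academia": ("low", "high"),
    "Consumer": ("low", "low"),
}

def quadrant(group: str) -> str:
    """Return the grid quadrant label for a stakeholder group."""
    power, interest = STAKEHOLDERS[group]
    return QUADRANTS[(power, interest)]
```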

5.1.2. Value Establishment

Having identified the key stakeholders, we now turn to establishing the core values that will guide our research. These values are derived from both the literature review and the digital identity programme success factors identified earlier in this study.
Literature Review
To add context to the discussion and support the perspectives attained from the questionnaire, supporting research was also conducted in the form of a literature review of research related to digital identity ecosystems and a review of government press releases. An emphasis was placed on trust framework programmes as these had the most publicly accessible information published at the time. A trust framework established by a government is a set of rules, policies and standards that define the principles and mechanisms for establishing and verifying trust in digital identity and authentication processes within a specific jurisdiction [29].
Identifying Values
Drawing from this study’s literature review and identified success factors (see Table 1), it is essential to consider a range of values that underpin the ethical and effective development of universal digital identity systems. These values guide our analysis and ensure that the perspectives and needs of all stakeholders are comprehensively addressed. The following values are integral to this investigation:
  • Vision: The overarching goal and strategic direction for establishing a universal digital identity system, encompassing long-term objectives and desired outcomes for all stakeholders.
  • Privacy: The protection of personal and sensitive information from unauthorised access and misuse, ensuring an individual’s control over personal data and maintenance of confidentiality.
  • Security: Measures and protocols to safeguard digital identity systems against threats, breaches and vulnerabilities, ensuring the integrity, availability and protection of data and services.
  • Identity: The recognition and verification of individuals within the digital identity ecosystem, ensuring accurate representation and authentication of personal identities.
  • Education: Efforts to inform and train stakeholders about the digital identity system, its benefits, risks and proper usage, ensuring widespread understanding and informed participation.
  • Usability: The design and implementation of user-friendly interfaces and processes that make the digital identity system accessible, intuitive and easy to navigate for all stakeholders.
  • Design: The overall architecture and layout of the digital identity system, ensuring it meets functional requirements and aligns with the values and needs of stakeholders.
  • Trust: The confidence stakeholders have in the digital identity system’s reliability, security and ethical standards, fostering acceptance and reliance on the system.
  • Liability: The accountability mechanisms in place for managing and addressing failures, breaches or misuse of the digital identity system, ensuring that responsibilities are clearly defined and upheld.
  • Ethics: Despite not being identified as a success factor, ethics is added as a value to ensure the technology design process respects stakeholder rights, prevents harm, enhances trust, aligns with societal norms, and fosters responsible innovation.
Collectively, these values span technological concerns (e.g., security, identity, design), social concerns (e.g., usability, education, trust), and legal/institutional concerns (e.g., privacy, liability and governance), reflecting the socio-technical character of digital identity ecosystems.
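For later use (for instance, as metadata tags of the kind discussed in Section 5.3), these values can be encoded in a simple grouped structure. The sketch below follows the concern groupings stated above; treating vision and ethics as cross-cutting is our own assumption, as the text does not classify them explicitly.

```python
# Illustrative sketch: the study's values grouped by socio-technical
# concern. The encoding is an assumption of this sketch, not part of the
# study instrument; "cross_cutting" is our own placement for vision/ethics.
VALUE_CONCERNS = {
    "technological": ["security", "identity", "design"],
    "social": ["usability", "education", "trust"],
    "legal_institutional": ["privacy", "liability"],
    "cross_cutting": ["vision", "ethics"],
}

def concern_of(value: str) -> str:
    """Look up the concern category for a value tag."""
    for concern, values in VALUE_CONCERNS.items():
        if value in values:
            return concern
    raise KeyError(f"unknown value: {value}")
```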

5.1.3. Value Tensions

Value tensions are conflicts that arise when different stakeholders prioritise different values, leading to potential ethical and practical dilemmas in the design and implementation of technology. These tensions are a natural part of the design process, especially in complex systems like digital identity ecosystems, where multiple stakeholders with varying perspectives and priorities are involved.
Common examples of value tensions include:
  • Privacy vs. Security—Ensuring robust security measures often requires access to personal data, which can conflict with the value of privacy.
  • Usability vs. Robustness—Designing user-friendly systems can sometimes compromise the robustness and comprehensive functionality needed for certain applications.
  • Transparency vs. Confidentiality—Providing transparent processes and data use policies can conflict with the need to maintain confidentiality and protect sensitive information.
  • Individual Control vs. Social Good—Giving individuals control over their personal data might conflict with broader societal benefits that can be gained from data aggregation and analysis.
In this study, value tensions are surfaced through the questionnaire by capturing diverging stakeholder priorities across privacy, trust, usability, and governance. While the study does not implement the technical investigation stage of VSD in this cycle, the empirical results provide sufficient signal to discuss the most salient tensions indicated by the data. To enhance practical applicability, Section 6.8 synthesises these observed tensions using concrete examples from the survey tables and outlines candidate governance, policy, and design mechanisms that can mitigate them in future iterations.

5.2. Stage 2—Empirical Investigation

In line with VSD principles, this study emphasises the importance of involving key stakeholders—government, business, academia and consumers—early in the development process of a digital identity ecosystem. This early involvement is crucial for capturing diverse perspectives and values, which are then quantified and analysed to inform the design and implementation strategies of digital identity systems.
Consistent with VSD’s empirical investigation phase, our primary method for collecting data was a survey—an approach used in over 90% of technology acceptance studies [140].
The implementation of this survey involved several distinct steps, each essential to the application of quantitative methods and characterised by specific advantages and limitations. These steps are outlined below:
  • Survey Design: This stage involved developing a questionnaire with binary questions and 5-point Likert scale items, where a midpoint of 3 indicates neutrality. This format was chosen to simplify statistical evaluation and efficiently capture detailed responses. The structured questionnaire approach facilitates systematic data collection and analysis, offering significant advantages in terms of efficiency.
    However, the use of predetermined response options has inherent limitations. It may not capture the full depth of complex attitudes, possibly oversimplifying diverse stakeholder opinions and losing the finer nuances of respondent perceptions [141], which are essential for understanding subtle attitude differences. Nonetheless, this approach was selected for its practical balance between efficiency and depth. The structured format significantly aids in data handling and analysis, providing a reliable foundation for generating insights that outweigh the potential drawbacks of missing some complexities.
  • Survey Implementation: The implementation stage used online questionnaire distribution to ensure a broad and efficient reach across diverse demographic groups. This method was chosen for its cost-effectiveness and quick dissemination, which is suitable for effectively collecting a moderate number of responses. However, this approach can exclude populations with limited Internet access or low digital comfort, potentially introducing sample bias and affecting the representativeness of the findings [142].
    For this study, the advantages of reaching an audience quickly and cost-effectively outweighed the potential biases. This was especially pertinent in a pilot study focused on digital identity, where efficient data collection from a diverse sample was crucial. The online platform was considered acceptable, given the likelihood that participants were familiar with technology. More details on the questionnaire instrument are provided in Section 5.3 below.
  • Survey Validation: Survey validation ensured the reliability and accuracy of the instrument in capturing stakeholder perspectives on universal digital identity ecosystems. The validation process addressed content validity through expert reviews, ensuring that the survey comprehensively covered dimensions such as privacy, trust, and usability. Construct validity was established by aligning questions with socio-technical systems theory and Value-Sensitive Design (VSD) principles, while pilot testing refined question clarity and relevance. Internal consistency was measured using Cronbach’s alpha, confirming the reliability of Likert-scale items. To reduce biases, questions were neutrally worded and respondent anonymity was assured. These steps, along with feedback from diverse stakeholder groups, helped ensure that the survey instrument was robust, reliable, and effective in addressing the study’s research objectives.
  • Data Acquisition: The data acquisition stage utilised electronic collection methods to facilitate immediate data aggregation and minimise errors from manual entry. This approach enhances data accuracy and speeds up the collection process. However, the effectiveness of this method depends on robust questionnaire design and the honesty of participants, as poor question framing and respondent misrepresentation can skew the results. Despite these risks, this method was chosen for its efficiency and precision in managing the dataset.
  • Data Analysis: The data analysis stage applied statistical techniques to transform raw data into meaningful insights about stakeholder attitudes and perceptions. This method provides objective, quantifiable insights essential for identifying trends and patterns. However, it may overlook subtleties and qualitative nuances [141], possibly misrepresenting the complexity of stakeholder perceptions.
    Despite these limitations, statistical analysis was chosen for its ability to deliver rigorous, data-driven conclusions. The benefits of objectively measuring and analysing data to support decisions and policies based on empirical evidence were deemed to outweigh the potential drawbacks of missing finer details, particularly in large-scale studies.
  • Results Dissemination: The results dissemination stage synthesised and contextualised findings within the study’s objectives and theoretical frameworks, supporting evidence-based decision-making and policy formulation. However, the interpretation of quantitative data can be influenced by analysts’ perspectives and chosen methodologies, introducing potential biases [143]. These risks were mitigated using peer review and transparently outlining the methodology employed in this study.
Each stage was integral to the research process, offering structured insights into participants’ perceptions, while also presenting challenges that required careful consideration and methodological rigour to address.
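As a concrete illustration of the survey-design and data-analysis steps above, the following sketch shows how 5-point Likert responses (midpoint 3 = neutral) might be encoded numerically and summarised by stakeholder group. The item name and response data are invented for illustration and do not come from the study dataset.

```python
from collections import defaultdict
from statistics import mean

# Illustrative only: encode 5-point Likert responses numerically
# (midpoint 3 = neutral) and summarise one hypothetical item by group.
LIKERT = {
    "Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
    "Agree": 4, "Strongly agree": 5,
}

responses = [  # fabricated example rows
    {"group": "Government", "q_trust": "Agree"},
    {"group": "Consumer", "q_trust": "Strongly agree"},
    {"group": "Consumer", "q_trust": "Neutral"},
    {"group": "Business", "q_trust": "Disagree"},
]

scores = defaultdict(list)
for r in responses:
    scores[r["group"]].append(LIKERT[r["q_trust"]])

# Per-group mean score for the item (e.g., Consumer: (5 + 3) / 2 = 4).
group_means = {g: mean(v) for g, v in scores.items()}
```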

5.2.1. Sampling Mechanism

Convenience sampling was employed in this research as a practical means of participant recruitment [144]. The study leveraged the convenience of online platforms and social media channels to reach a wide audience quickly and efficiently. Potential participants were invited to take part in the research through various social media networks and online communities, and those who expressed interest or responded were included in the study.
Convenience sampling enabled efficient recruitment across multiple stakeholder communities, but it does not produce a probability-based sample. As a result, responses are susceptible to self-selection and coverage effects (e.g., overrepresentation of individuals who are digitally engaged, already interested in digital identity, or active in the online communities used for recruitment). Accordingly, the findings are interpreted as descriptive and hypothesis-generating, rather than as population-level estimates for any particular jurisdiction or demographic subgroup.
Participants were sought through a number of avenues including:
  • Social Media Platforms—Platforms including Facebook, Twitter and Reddit were used to collect consumer viewpoints. The objective was to engage a diverse demographic to acquire a range of perspectives from regular users of digital identity systems.
  • Online Working Groups and Forums—Engagement with members of specialised digital identity groups and forums was undertaken. This method proved effective in obtaining a variety of perspectives from both the business and government sectors.
  • Professional Platforms—LinkedIn was utilised as the principal platform for recruiting digital identity professionals. This approach facilitated targeted communication with industry experts, ensuring comprehensive representation of business perspectives.
  • Academic Forums—Interaction with academic circles was conducted, focusing on individuals with knowledge of digital identity. This method aimed to incorporate scholarly insights into the study.
  • Government Programmes—The United Nations e-Government Survey report [145] was used to identify advanced digital societies in the public sector. Correspondence was directed at government programmes in nations actively developing cross-border identity frameworks, including those adhering to the European eIDAS standard and members of the Digital Identity Working Group. This strategy aimed to collect insights from governmental stakeholders engaged in digital identity implementation and policymaking.
The survey was distributed via English-language online channels and did not capture respondents’ country or region; therefore, geographic distribution cannot be quantified in this pilot and should not be interpreted as jurisdictionally representative.
In line with VSD principles, this study used convenience sampling for its practicality in including diverse stakeholder perspectives. Although this method has limitations, such as not fully capturing the diversity of views, it was effective in quickly gathering initial core insights from government officials, business representatives, academics, and consumers.
This initial data collection phase provided a foundation for iterative refinement. The insights gained will guide more targeted and representative sampling strategies in future research, aligning with VSD’s commitment to iterative design and continuous stakeholder engagement. By reflecting on the limitations and planning for more inclusive follow-up studies, the research adheres to VSD’s ethical and inclusive principles.
A total of 243 participants were recruited, with 226 complete responses retained for analysis. While not statistically representative of any specific population, the sample was intentionally structured to reflect conceptual diversity across stakeholder roles in the digital identity ecosystem. Participants self-identified into four categories—government, business, academia, and consumers—based on Eden and Ackermann’s Power versus Interest Grid (see Section Stakeholder Groups). These categories were selected to capture archetypal roles rather than demographic or geographic subpopulations. Stakeholder categories are treated as self-reported analytic strata rather than independently verified affiliations. As a basic plausibility check, the resulting strata exhibit coherent differences on anchoring items (e.g., vision leadership and regulation preferences), supporting their use for descriptive comparison in this pilot.
As a pilot study, the aim was not generalisability but rather exploratory insight into how different stakeholder groups perceive values such as privacy, trust, and usability, as well as their attitudes toward ecosystem roles and areas of responsibility. The inferences drawn are descriptive and diagnostic in nature—intended to validate key concepts, reveal potential misalignments, and inform the refinement of instruments for future, more representative research cycles.

5.2.2. Questionnaire Instrument

In alignment with the study’s established values above, a structured questionnaire (see Table 4), derived from and linked to the identified digital identity success factors (see Table 1), was developed to gather comprehensive data on stakeholder perceptions and attitudes towards a universal digital identity. The instrument underwent a rigorous pilot testing phase to ensure its effectiveness and clarity. During this phase, and in alignment with VSD’s iterative approach, feedback was solicited to evaluate the instrument’s comprehensibility and reliability, allowing for necessary adjustments to improve its accuracy and respondent experience.
The final version of the questionnaire included a variety of question types to capture a broad spectrum of responses, reflecting VSD’s commitment to understanding stakeholder values in depth. It comprised both closed-ended questions for quantitative analysis and Likert-scale questions to measure the intensity of respondents’ attitudes. This diverse format facilitated a more nuanced understanding of stakeholders’ views. The questionnaire was administered using SurveyMonkey, a popular online tool that supports wide distribution and efficient data collection. It was available for a period of six weeks, during which time we monitored participation to avoid extreme imbalance across stakeholder strata and to guide additional outreach where responses were sparse.
The questionnaire successfully garnered a total of 243 responses. However, 17 responses were incomplete and subsequently excluded from the final analysis to maintain the integrity and quality of the data. The remaining 226 complete responses were subjected to quantitative analysis to derive meaningful insights. This process involved statistical techniques to identify trends, assess variances, and understand the core factors influencing stakeholders’ acceptance of digital identity systems. The comprehensive data analysis helped in formulating evidence-based conclusions that are critical for policymakers, designers, and developers involved in the shaping of digital identity frameworks.
The instrument also included an optional open-ended prompt (Q31) for additional comments; these responses were not systematically analysed in this pilot and are reserved for follow-on qualitative work.
By adhering to VSD principles, this approach ensured that the diverse values and perspectives of stakeholders were systematically integrated into the research process, providing a robust foundation for ethical and inclusive digital identity solutions.

5.2.3. Statistical Analysis and Reporting

All survey responses were exported from SurveyMonkey and imported into Jupyter Notebook (version 7.4.4) for analysis using Python (version 3.13). A reproducible pipeline was applied to ensure that all descriptive summaries and tables were generated programmatically from the same cleaned dataset. The workflow comprised:
  • Data cleaning and inclusion criteria: responses were screened for completeness and validity. Incomplete submissions were excluded from analysis (Section 5.2.2 reports the retained sample), and item-level missing responses were treated as missing-at-random and excluded on a pairwise basis for the affected question only.
  • Coding and normalisation: Likert-scale items were encoded numerically (1–5, with 3 neutral). Binary and multi-choice items were summarised as counts and percentages by stakeholder group. For ranking-style items, the scale was inverted to maintain consistent interpretive direction (higher scores indicate higher priority), consistent with the note reported alongside the relevant tables.
  • Stratified descriptive statistics: for each question, descriptive statistics were computed overall and by stakeholder group (Government, Business, Academia, Consumer). For Likert items we report mean ( μ ) and standard deviation ( σ ) to support comparability across stakeholder strata; for categorical items we report selection percentages.
  • Reporting: questions were grouped by the study’s thematic areas (e.g., vision, education, national identity, usability, design, data management, trust, liability) and presented in alignment tables. Narrative interpretation focuses on patterns of convergence/divergence between stakeholder groups and is intended as descriptive and hypothesis-generating rather than population-inferential, consistent with the pilot nature of the study.
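The coding and stratified-description steps above can be sketched as follows. The sample values are fabricated, and the use of the sample (rather than population) standard deviation is an assumption of the sketch; ranking-style items are inverted so that higher scores indicate higher priority, as noted above.

```python
from statistics import mean, stdev

def invert_rank(score: int, scale_min: int = 1, scale_max: int = 5) -> int:
    """Invert a ranking-style item so that higher = higher priority."""
    return scale_max + scale_min - score

def describe(values: list[float]) -> tuple[float, float]:
    """Return (mean, sample standard deviation) for a Likert item."""
    return mean(values), stdev(values)

# Fabricated per-group scores for one hypothetical Likert item.
strata = {
    "Government": [4, 5, 4, 3],
    "Consumer": [2, 3, 4, 3],
}
summary = {g: describe(v) for g, v in strata.items()}
```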

5.2.4. Controls

We employed several key controls within our research methodology to ensure the study’s validity and reliability. First, we established robust data cleaning and validation procedures, which included systematically checking the data for accuracy, completeness and consistency. As part of this process, incomplete questionnaires were discarded to ensure only complete and reliable data was used for analysis.
In addition to data-quality screening, instrument validation was addressed through (i) content validity checks using expert review and alignment of items to the study constructs and research questions (Table 4), and (ii) pilot testing to refine question clarity and minimise ambiguity (Section 5.2.2). Internal consistency was assessed for relevant multi-item Likert batteries using Cronbach’s alpha (with all groups scoring α > 0.75), with the intent of confirming that grouped items were coherent enough to be interpreted together within the descriptive analysis reported in Section 6. This step prevented skewed results that could have misrepresented the true perspectives of the stakeholders, aligning with VSD’s commitment to accurately reflecting stakeholder values.
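For reference, the internal-consistency check reported above can be computed with the standard Cronbach's alpha formula, α = (k/(k − 1))(1 − Σσᵢ²/σₜ²), where k is the number of items, σᵢ² the per-item variances, and σₜ² the variance of respondents' total scores. The sketch below uses fabricated item data, not the study's actual battery.

```python
from statistics import pvariance

def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha for a battery; items holds one list of
    respondent scores per questionnaire item."""
    k = len(items)
    item_vars = sum(pvariance(item) for item in items)
    # Total score per respondent across all items in the battery.
    totals = [sum(scores) for scores in zip(*items)]
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Three fabricated, positively correlated items from five respondents.
battery = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 3, 4, 1],
]
alpha = cronbach_alpha(battery)  # well above the 0.75 threshold used here
```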
Surveys are also susceptible to common method bias (CMB), which can compromise the reliability and validity of empirical findings. To mitigate this risk, the study implemented procedural (ex-ante) controls to prevent the occurrence of CMB [143]. Addressing such biases ensured that stakeholder feedback was genuinely represented—a key aspect of VSD’s emphasis on ethical considerations and integrity.
In terms of the questionnaire’s design, general structure, length, disclosure of survey progress, visual presentation, interactivity, and question/response format, all were carefully considered to avoid introducing additional biases [146,147]. This careful design process aligned with VSD’s iterative and user-centred approach, ensuring that the survey effectively captured the nuanced values and perspectives of stakeholders.
Furthermore, we maintained a consistent questionnaire administration process across all participant groups to minimise potential response bias. This consistency was crucial for reducing variations that could have influenced how participants responded, such as differences in questionnaire presentation or the environment in which they were conducted. By standardising the administration process—ensuring all participants received the same instructions and that questionnaires were conducted under similar conditions—we aimed to eliminate external factors that could have affected the participants’ responses [148]. This methodological rigour supported VSD’s goal of creating technologies that are not only effective but also ethically sound and reflective of stakeholder values.
These controls were rigorously applied to ensure high data quality and consistent questionnaire administration, providing a solid foundation for reliably analysing diverse stakeholder perspectives on digital identity. By adhering to these stringent methodological standards, we upheld VSD principles by ensuring that the empirical data accurately informed the iterative design process. This led to more ethical and inclusive technology development.

5.2.5. Ethical Considerations

Value-Sensitive Design (VSD) provides a value-led approach for integrating ethical considerations into technology research and development. However, critiques note that its flexibility can be overly open-ended, motivating the need for explicit procedural safeguards to ensure consistent ethical standards [149]. Yetim argues that ethical responsibility should not sit solely with researchers and developers, but should be supported through broader accountability and public legitimacy [150]. Accordingly, privacy and ethics were treated as core values throughout this study, consistent with VSD’s emphasis on systematically incorporating stakeholder values [151].
This study involved human participants completing an online survey. The study protocol was reviewed and approved by the University of Oxford Department of Computer Science Departmental Research Ethics Committee (DREC) (Reference: CS_C1A_021_025).
Before participation, individuals were provided with a detailed information sheet describing the study’s purpose, procedures, potential risks and benefits, and how responses would be handled. Contact details for the researchers and the ethics committee were provided on the introductory page of the survey. Participants were required to explicitly endorse consent items prior to commencing, reflecting established informed-consent practice for online questionnaires [152].
Given the sensitivity of digital identity topics, confidentiality was maintained throughout data collection and analysis. Survey configuration settings were used to anonymise responses and restrict access to response data, ensuring no personal identifiers were linked to survey outputs. Results are reported only in aggregated form. Participants’ autonomy was respected by allowing them to skip questions or withdraw at any time, thereby upholding the right to opt out and reducing the risk of discomfort or perceived coercion.
These measures collectively mitigate the ethical risks associated with online data collection while supporting responsible, privacy-respecting research into digital identity ecosystems.

5.3. Stage 3—Technical Investigation

Technical Investigation in VSD involves the actual design and development of technology that incorporates and balances the identified values through the creation of prototypes, iterative testing and refinement based on stakeholder feedback. While this stage is crucial for embedding values into the technical aspects of the design, it is beyond the scope of this pilot study.
To demonstrate the practical applicability of the structured stakeholder insights generated in this study, a minimal proof-of-concept implementation was developed. This prototype illustrates how the empirical results—particularly those pertaining to stakeholder roles, responsibilities, and value priorities—can be operationalised within a layered Retrieval-Augmented Generation (RAG) architecture. While not a central focus of the study, this implementation underscores the potential of using structured stakeholder-aligned data to enhance AI-supported reasoning and retrieval within complex socio-technical ecosystems such as digital identity.

5.3.1. Prototype Architecture, Functionality, and Operationalisation

The prototype is implemented as a lightweight Python notebook using LangChain, and is intended to demonstrate how stakeholder-aligned findings can be operationalised as structured constraints for AI-assisted retrieval and reasoning. The architecture comprises five conceptual components: (i) a small corpus of short knowledge fragments expressed as natural-language statements; (ii) a vector store that indexes these fragments using embeddings and layer-aligned metadata; (iii) a set of layer-resolution functions that infer (or accept) layer values from the user query (e.g., ecosystem, stakeholder group, and role); (iv) a metadata-driven retrieval routine that progressively narrows the candidate context in accordance with the layered model; and (v) a final prompt-augmentation and generation step in which the retrieved context is supplied to a large language model (LLM) to produce a response.
Operationalisation of study findings is performed by translating key conceptual and empirical outputs into these knowledge fragments and tagging them with metadata fields that mirror the six-layer structure described in Section 5.3.2. For example, alignment-table outputs can be represented as concise statements (e.g., where liability is primarily assigned to institutional actors rather than end-users) and then indexed for retrieval under the relevant stakeholder, role, and value scopes (see Section 6.8 for the empirical liability signal). This implementation is intentionally minimal: the listing shows two trimmed documents for illustration, but the same pattern generalises to indexing a larger set of fragments derived from the full set of alignment tables and selected literature-based role definitions.
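To make this tagging pattern concrete, the following minimal sketch (in plain Python; the fragment text and field values are hypothetical paraphrases for illustration, not verbatim study outputs) shows two knowledge fragments annotated with the six-layer metadata schema:

```python
# Illustrative knowledge fragments tagged with the six-layer metadata schema
# described in Section 5.3.2. Statements and tag values are hypothetical
# paraphrases for demonstration, not verbatim study outputs.

FRAGMENTS = [
    {
        "text": ("Liability for fraudulent use of a digital identity is "
                 "primarily assigned to institutional actors rather than "
                 "end-users."),
        "metadata": {
            "ecosystem": "digital-identity",
            "values": ["trust", "accountability"],
            "stakeholder": "consumer",
            "role": "purchaser",
            "responsibility": "liability",
            "artefact": None,  # not scoped to a specific identity artefact
        },
    },
    {
        "text": ("Consumers are typically not liable for fraud if they act "
                 "responsibly and report incidents promptly."),
        "metadata": {
            "ecosystem": "digital-identity",
            "values": ["trust"],
            "stakeholder": "consumer",
            "role": "purchaser",
            "responsibility": "liability",
            "artefact": None,
        },
    },
]
```

In the prototype itself, fragments of this shape would be wrapped as LangChain `Document` objects and indexed into the vector store together with their metadata.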

5.3.2. Layer Construction and Data Sources

The layered architecture was derived directly from the structure of the research design and findings, as follows:
  • Ecosystem Layer—This foundational layer captures the domain scope (i.e., universal digital identity).
  • Values Layer—Values were established during the conceptual investigation phase of the Value-Sensitive Design (VSD) methodology.
  • Stakeholder Layer—Stakeholder categorisation is drawn from the Power-Interest Grid adapted from Eden and Ackermann, comprising Government, Business, Academia, and Consumer groups.
  • Roles Layer—Functional roles (e.g., issuer, verifier, policy maker, auditor, relying party) are based on classifications derived from the literature review and pre-survey reports, and were used to formulate survey items (see Table 4).
  • Responsibilities Layer—This layer encodes empirical insights into stakeholder perceptions of role-based responsibilities, as elicited through survey responses.
  • Artefacts Layer—Although not a direct focus of the investigation, this layer was included to accommodate downstream identity artefacts such as verifiable credentials, digital wallets, authentication devices, and biometric records.
In implementation terms, each layer corresponds to an explicit metadata field (e.g., ecosystem, values, stakeholder, role, responsibility, artefact). Knowledge fragments are stored at the most specific level supported by the study output (e.g., a responsibility statement scoped to a stakeholder group and role), and retrieval begins with the ecosystem scope and progressively narrows as additional layer values are resolved.
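The progressive narrowing described above can be sketched as a simple filter-construction step. This is a dependency-free illustration; the function names and field values are assumptions of this sketch, not the prototype's actual API:

```python
def build_filter(ecosystem, stakeholder=None, role=None, responsibility=None):
    """Start at the ecosystem scope and narrow the metadata filter as
    additional layer values are resolved from the query."""
    layer_filter = {"ecosystem": ecosystem}
    if stakeholder is not None:
        layer_filter["stakeholder"] = stakeholder
    if role is not None:
        layer_filter["role"] = role
    if responsibility is not None:
        layer_filter["responsibility"] = responsibility
    return layer_filter


def matches(metadata, layer_filter):
    """A fragment is in scope when every resolved layer value agrees."""
    return all(metadata.get(key) == value for key, value in layer_filter.items())


# Narrowing: ecosystem scope only, then ecosystem + stakeholder + role.
broad = build_filter("digital-identity")
narrow = build_filter("digital-identity", stakeholder="consumer", role="purchaser")

# A responsibility statement scoped to a stakeholder group and role.
fragment_meta = {
    "ecosystem": "digital-identity",
    "stakeholder": "consumer",
    "role": "purchaser",
    "responsibility": "liability",
}
```

Here `matches(fragment_meta, broad)` and `matches(fragment_meta, narrow)` both hold, whereas a government-scoped fragment would pass only the broad filter; in the prototype, the filter dictionary plays the role of the metadata constraint passed to the vector store's filtered search.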

5.3.3. RAG Implementation

Using LangChain, an open-source Python framework for building applications with large language models (LLMs), a lightweight prototype was implemented to simulate chained retrieval and prompt generation across the six semantically defined layers. LangChain provides tools for managing prompt templates, linking multiple reasoning steps (chains), and performing semantic document retrieval through integration with vector databases. In the prototype, each layer is treated as a filter or metadata constraint applied sequentially, ensuring that retrieved knowledge fragments are aligned with the semantic scope defined at each level. The high-level logic is represented as follows:
At runtime, the prototype executes the following sequence: (1) infer the ecosystem scope for the query; (2) infer one or more relevant value tags (e.g., privacy, trust, usability) when applicable; (3) infer the stakeholder group (Government, Business, Academia, Consumer); (4) infer a functional role label where relevant (e.g., relying party, issuer, purchaser); (5) retrieve the top-k semantically similar fragments from the vector store subject to metadata filters consistent with the resolved layers; and (6) compose an augmented prompt in which the retrieved fragments are presented as the context for generation. This operationalises the study’s layered structure by constraining retrieval scope before generation, rather than allowing the model to answer from unconstrained background knowledge (see Listing 1).
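The runtime sequence above can be sketched end-to-end in a dependency-free form. The toy corpus, the first-person keyword heuristic for layer resolution, and the token-overlap scoring below are stand-ins for the prototype's LangChain components (LLM-assisted layer resolution and embedding similarity over a Chroma store), and the final LLM generation call is omitted:

```python
# Toy corpus with trimmed layer metadata (illustrative text only).
CORPUS = [
    {"text": "Consumers are typically not liable for fraud if they act "
             "responsibly and report incidents promptly.",
     "metadata": {"ecosystem": "digital-identity", "stakeholder": "consumer"}},
    {"text": "Liability is primarily assigned to institutional actors "
             "rather than end-users.",
     "metadata": {"ecosystem": "digital-identity", "stakeholder": "consumer"}},
    {"text": "Governments are expected to lead in establishing the digital "
             "identity vision.",
     "metadata": {"ecosystem": "digital-identity", "stakeholder": "government"}},
]


def resolve_stakeholder(query):
    """Steps (1)-(4) collapsed into one toy resolution: first-person cues
    suggest the consumer group (the prototype delegates this to an LLM)."""
    tokens = set(query.lower().replace("?", " ").replace(",", " ").split())
    return "consumer" if tokens & {"i", "my", "me"} else "government"


def retrieve(query, corpus, layer_filter, k=2):
    """Step (5): metadata-filtered retrieval, with naive token overlap
    standing in for embedding similarity in a vector store."""
    q_tokens = set(query.lower().split())
    in_scope = [frag for frag in corpus
                if all(frag["metadata"].get(key) == value
                       for key, value in layer_filter.items())]
    in_scope.sort(key=lambda f: len(q_tokens & set(f["text"].lower().split())),
                  reverse=True)
    return in_scope[:k]


def augment_prompt(query, retrieved):
    """Step (6): compose the augmented prompt supplied to the LLM."""
    context = " ".join(frag["text"] for frag in retrieved)
    return ("Answer the following question based on the context below:\n"
            f"Question: {query}\nContext: {context}")


query = "Someone has used my digital identity to make online purchases, am I liable?"
group = resolve_stakeholder(query)  # resolves to "consumer"
layer_filter = {"ecosystem": "digital-identity", "stakeholder": group}
prompt = augment_prompt(query, retrieve(query, CORPUS, layer_filter))
```

Because the resolved filter excludes government-scoped fragments before similarity ranking, the composed prompt contains only consumer-scoped liability statements, mirroring how the prototype constrains retrieval scope before generation.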
Listing 1. Prototype metadata-driven retrieval layer (LangChain + Chroma) used to resolve stakeholder liability queries (adapted from [153]).

5.3.4. Example Usage

The following example illustrates the input query, the augmented prompt, and the generated response for the consumer liability scenario shown in Listing 1. In this example, the query is resolved to the digital-identity ecosystem and the consumer stakeholder group; retrieval then selects liability-relevant fragments within that scope before the augmented prompt is constructed, illustrating how the pipeline can express the study’s empirical signal that consumer liability is not generally supported relative to institutional liability allocation (see Section 6.7 and Section 6.8).
  • Given the Input Query:
“Someone has used my digital identity to make online purchases, am I liable?”
  • The following Augmented Query is created:
“Answer the following question based on the context below:
Question: Someone has used my digital identity to make online purchases, am I liable?
Context: The subject of this query belongs to the consumer stakeholder group. Consumers are typically not liable for fraud if they act responsibly and report incidents promptly.”
  • Which delivers the final response:
“If your digital identity was used fraudulently for online purchases and you acted responsibly—such as securing your credentials and promptly reporting the incident—you are typically not liable. Most liability frameworks limit consumer responsibility in such cases, provided negligence cannot be demonstrated.”
This minimal implementation illustrates how layered metadata, derived from structured stakeholder analysis, can be used to constrain and contextualise AI-generated outputs in a semantically grounded manner.
Each retrieve function accesses a vector store or structured metadata index to isolate semantically relevant chunks of context from documents or survey-informed embeddings. The resulting context is then passed to a large language model (LLM) to generate an output constrained by the retrieved evidence.
While the prototype demonstrates the structural feasibility of one possible implementation of a stakeholder-layered RAG approach, no formal evaluation was conducted. Assessing retrieval accuracy, alignment, and response quality remains a key area for future research. Nevertheless, the structure illustrates how stakeholder alignment—when explicitly layered—can constrain AI outputs to relevant, policy-aligned, and role-specific knowledge boundaries.

6. Results and Discussion

In this section, we present the results from the questionnaire and discuss key observations in light of the additional context provided by our literature analysis and supplementary research.

6.1. Vision Alignment

With the previously mentioned considerations in mind, one of the greatest obstacles to achieving a universal digital identity is establishing an agreed vision with which any implementation will comply.
Establishing a vision for digital identity ecosystems requires a clear, shared understanding of objectives, encompassing security, privacy, interoperability and user-centricity. It involves aligning the interests of diverse stakeholders, including governments, businesses and individuals, to ensure inclusivity. However, this vision faces complexities such as varying regional legal and regulatory environments, technological differences, and cultural perspectives on privacy. Challenges include ensuring security and privacy, managing economic disparities, digital access, and national sovereignty concerns. A cohesive vision sets the foundation for successful technology implementation, legal framework development, and user engagement.
Additionally, achieving interoperability and widespread acceptance among diverse stakeholders with pre-existing systems is a significant technical and organisational task. Ethical issues, notably concerns about surveillance and the misuse of identity data are also important considerations. These diverse challenges highlight the need for a collaborative and comprehensive approach in developing a universal digital identity framework.
Canada’s vision for digital identity focuses on creating a streamlined, efficient process for accessing government services online. Emphasising simplicity and convenience, this approach allows individuals to use a trusted digital identity for quick verification and transaction processes. The goal is to enable Canadians to easily interact with government services from any device, anywhere. This vision is being collaboratively developed with both government and private sector stakeholders, aiming to establish robust digital identity standards nationally and internationally [31]. Similarly, Norden asserts that Iceland’s vision is to enhance digital governance and foster an inclusive and accessible e-government environment [154].
While a design within a single ecosystem may appear sound, the impact of dealing with expanded perimeters of concern can be substantial. Historically, driving on the left versus the right side of the road and necessary conversions between metric and imperial measurement systems are just two examples of how the absence of a unified vision can impact us long after implementation is complete.

Results

In Table 5, the combined mean of 4.0 indicates that the Government is regarded as most responsible for establishing a digital identity vision, a finding strengthened by the independent agreement of all four stakeholder groups. What may be surprising is that the Consumer group, with a stratified mean of 3.8, also scored highly, outscoring Business in all strata. Collectively, it seems governments are expected to lead in constructing a vision in consultation with consumers.
With regard to understanding the motivations of the various parties in establishing digital identity, the Consumer group appears to be lagging behind, with the Government and Business groups scoring higher. Whether this indicates the absence of a perceived agenda or a lack of consultation and education on digital identity is hard to tell; we examine education in the next section.

6.2. Education

The failed UK initiative to develop a national identity card scheme highlighted the crucial role of strong public awareness and education in the success of national digital identity programmes. It is essential to foster government and private sector collaboration, and implement comprehensive public education campaigns.
These campaigns should clearly inform the public about the benefits, uses and security measures of digital identity programmes, with particular emphasis on managing and protecting personal data under regulations like the GDPR. Moreover, there is a need for tailored educational approaches to reach diverse audiences, including segments with limited digital literacy.
Leveraging educational institutions to integrate digital identity education into curricula can also play a vital role in increasing understanding among younger generations. Continuous updates and effective communication channels for public engagement and feedback are essential to keep the public informed and address their concerns in the dynamic digital identity landscape [80].

Results

The results in Table 6 indicate that all stakeholder groups are aligned in their expectation that the Government will lead in educating users about digital identity technology. What is interesting, however, is the acknowledgement by all groups of the need for each stakeholder group to be involved in this education process—with the minimum stratified mean being a high 3.5. Elevated numbers in the Business stakeholder group may indicate a recognition of the need for further education in the digital identity space.
The feedback concerning achieved levels of education tells a different story. Results indicate levels of education well below perceived importance, with Consumer group participants in particular indicating that they are currently uneducated about digital identity technology. Together, these two questions demonstrate that consumers can only participate in digital identity education once they themselves understand the problem space.
To translate the education priorities in Table 6 into tangible ecosystem actions, future implementations should combine (i) plain-language public guidance on enrolment, data sharing, and recovery; (ii) transparency artefacts such as user-facing data-flow explanations and regular public reporting on audit outcomes and incident statistics; (iii) targeted training for service providers and support staff on exception handling and inclusive access; and (iv) multi-stakeholder communication channels (e.g., citizen panels, provider forums) to surface concerns early and build shared expectations. These activities focus on trust-building through demonstrated accountability and consistent user experience, rather than awareness campaigns alone.

6.3. National Identity

In June 2013, Edward Snowden released top-secret NSA documents about ‘Five Eyes’, a multinational intelligence alliance involving the United States, United Kingdom, Canada, Australia and New Zealand. The documents revealed how intelligence agencies from the alliance had unlawfully collected information about the alliance’s citizens [155].
In New Zealand, the release occurred shortly after it had become known that the Government Communications Security Bureau (GCSB) had illegally spied on 88 citizens and permanent residents since 2009 [156]. As the Government moved to pass the GCSB Amendment Bill 2013, which extended domestic surveillance powers, the public debated the GCSB’s lack of transparency and its powers. Although public opinion polls revealed a 75 per cent disapproval rating of the proposed bill, it eventually passed [155,157].
Presently, New Zealand does not have a mandatory identifier. Digital Identity New Zealand (DINZ) states that the New Zealand public remains wary of any system that may be used to track users’ movements online. This wariness extends to any future digital identity framework and to transparency concerning the sharing of data within the ecosystem.
DINZ adds that while a mandatory national identity has its benefits, it is not a prerequisite for digital identity. New Zealand’s complex socio-economic landscape means it is unlikely that central government will initiate a mandatory identifier—and there is certainly no public appetite for such a mechanism. Consequently, a self-sovereign ecosystem with multiple identity providers has become the chosen path forward for New Zealand. While a mandatory identity may accelerate adoption in the early phases, DINZ is confident that the richness of identity partners in the ecosystem, and their product offerings, will lead to a quicker maturity.
Canada, which has an identity programme that closely matches New Zealand’s, has seen a number of digital identity developments in recent years. The Digital Identification and Authentication Council of Canada (DIACC)—a not-for-profit organisation comprising both public and private sector interests—is delivering an open framework for the development of identity solutions. There are now public and private schemes in place, led by consortia of prominent banking and telecom companies in the private sector [80]. The DIACC has focused its efforts on the specification of standards—such as Transport Layer Security (TLS) from the Internet Engineering Task Force (IETF), and Decentralised Identifiers (DIDs) from the W3C—with a clear emphasis on open standards which are referenced in Canada’s trust framework. This approach appears to correspond well with New Zealand’s transparent and decentralised trust framework programme [31].

Results

In contrast to the movement made by governments in the United Kingdom, Australia, New Zealand and Canada towards a self-sovereign approach, a combined result of 3.5 in this questionnaire indicates that most users see merit in a national identity register. Scores were lower in all stakeholder groups when considering whether national identity should be mandatory, with a neutral combined mean of 3.0.
When asked whether they understood the history of the relevant nation’s identity schemes, participants responded at similar levels to the questions about education. This would indicate the importance of including historical schemes in the digital identity syllabus. Surprisingly, however, the Business group registered higher when responding to the public education question, which tends to suggest there is some commercial education taking place—perhaps as a result of product marketing that is not filtering through to the other groups.

6.4. Usability

In the context of future digital identity ecosystems, network topologies are important as they must adapt to support a multitude of users and/or devices, which are both static and mobile, online or offline, accessing services within Cloud, Edge and Fog network configurations at will [158]. For this roaming to be seamless, a highly secure and robust identification system must be established.
Once the network is in place, the key to obtaining a digital identity is to mitigate or remove accessibility barriers for the user. For example, the UK Government asserts that a lack of documentation (e.g., a passport or driver’s licence) should not be a barrier to obtaining a digital identity [46].
To remove similar accessibility barriers, Singapore has developed SingPass, a comprehensive digital identity system managed by the Government Technology Agency (GovTech) that provides access to over 400 digital services from both the government and private sectors. SingPass ensures robust security through encryption and multi-factor authentication, and features such as biometric verification in its mobile application contribute to a secure yet user-friendly experience. Integrated with this is MyInfo, a digital vault for personal data that enables automatic form-filling across services, exemplifying Singapore’s commitment to a seamless and secure digital society.
Similarly, myGovID—overseen by the Australian Digital Transformation Agency—is the country’s principal mechanism for digital identity verification. It parallels physical identification by enabling individuals to authenticate their identity using a mobile application that verifies information against accredited Australian identity documents.
Singapore and Australia have attempted to address the dichotomy between technical concerns and user-centric usability requirements that drive real-world implementation. Any universal digital identity approach must carefully balance these two perspectives. On this matter, Cameron asserts that a global identity solution does not need an identity provider. Rather, it needs a claim provider, which opposes traditional views that identity is a core component of any identity management ecosystem.
Cameron further states that technology solves the wrong problem: “We don’t need names or identifiers but proofs,” with the user in control of releasing the proofs [14]. Fundamentally, Cameron suggests that identity should be relegated from a core component to an optional credential—one of many that can be maintained in a digital wallet. In doing so, the identity ecosystem is simplified and numerous accessibility and usability concerns are addressed.

Results

As Table 7 shows, privacy and usability score highly as important concerns for a digital identity ecosystem. The former has a combined mean of 4.4, while the latter also scores highly with a combined mean of 4.1 across all strata.
Participants were asked whether digital identity should, or could, be implemented globally. Government scored significantly lower than the other stakeholder groups when asked whether a country’s digital identity should work in other countries. This may be quite telling: it could indicate that governments are concerned with protecting their citizens’ personal data while they are abroad. Academia and Consumers, meanwhile, felt it was likely unfeasible that digital identity would be implemented globally. Both findings warrant further research.
Regarding issuing, holding and controlling verifiable credentials, participants from all stakeholder groups agreed that governments should be most responsible for performing all three tasks. This is an interesting result given the perceived mistrust of central authority—with the exception of a marginally positive indicator that Consumers feel they should be in control of managing their credentials.
User experience insights from published case studies (literature triangulation). While Table 8 indicates that usability is a high-priority concern across stakeholder groups, published evaluations of deployed digital identity systems provide more granular evidence on where user friction typically arises and how this can affect access, trust, and secure behaviour. In the UK, official reviews of GOV.UK Verify reported low completion through identity-proofing and substantial variation by service context, illustrating that “usability” in national identity services extends beyond interface design to include proofing completion rates, recovery and exception-handling pathways, and the operational burden created when verification fails [159,160].
Comparable end-to-end usability and inclusion concerns are reported in biometric-centric identity schemes. Government reporting on Aadhaar biometric authentication explicitly documents non-zero failure rates and motivates attention to authentication performance and exception handling [161], while programme-level analysis highlights how operational constraints can translate into repeated attempts or temporary exclusion in specific service settings [162]. At the wallet layer, Sauer et al. apply the MEUSec method to the “Hidy” digital identity wallet and identify multiple usability and information-security issues, emphasising that misunderstandings of wallet concepts and workflows may contribute to insecure behaviour and reduced confidence [163]. Complementing these accounts, a service-platform case study (ProCIDA, coordinated with Italy’s SPID) reports positive pilot feedback for simplified service access using digital identity [164]. Survey-based evidence also finds usability to be a significant determinant of willingness to use national digital identity systems [165].
In addition, qualitative interview research on everyday “digital identity” and footprint management suggests that users often perceive only partial control over their online identity, accept a portion of activity as routine-driven, and may prioritise convenience over security practices while delegating data protection to services they trust in principle [166]. Collectively, these studies contextualise the high usability prioritisation observed in Table 8 by showing that usability in digital identity ecosystems is an end-to-end socio-technical property spanning enrolment, authentication reliability, exception handling, recovery/continuity, and user comprehension—aligning with ecosystem readiness considerations concerning smooth customer journeys and the removal of barriers to access (e.g., F16 and F19).

6.5. Design

Designing standards, rules and regulations in digital identity ecosystems is crucial for ensuring security, privacy and user trust. It facilitates interoperability across platforms and domains, and adherence to legal requirements, while being adaptable to future technological changes. Moreover, these regulations are essential for promoting equitable access and inclusivity, underlining their role as fundamental in maintaining the integrity and effectiveness of digital identity systems.
The New Zealand Government asserted that regulatory and operational gaps in providing digital identity services—including the lack of a regime to regulate those services—had been identified as risks to network integrity [115,167,168]. Specifically, it was believed an inconsistent application of data privacy, identification and security standards could lead to systematic issues and breaches—thus posing risks to both customers and businesses, further undermining trust and confidence in the digital identity ecosystem, and slowing adoption.
On 5 November 2018, the Cabinet considered the paper ‘Developing Options for a New Approach to Digital Identity’. It determined that the Government would continue working with citizens, agencies and the private sector to identify regulatory gaps. As a response to these challenges, New Zealand, Australia, Canada and the United Kingdom—which do not have single-identity registries—have identified trust frameworks as a way to address similar regulatory and operational gaps.
In response to similar challenges, the European Union has made strides through the design of its eIDAS (Electronic Identification, Authentication, and Trust Services) regulation. eIDAS was designed to establish a comprehensive and cohesive regulatory environment for electronic identification and trust services across the EU. It focuses on enhancing the security and interoperability of digital transactions within the internal market, addressing concerns similar to those identified by New Zealand.
The eIDAS regulation in the EU sets clear standards for electronic identification, aligning digital identity schemes across member states with a unified framework. This approach not only strengthens data privacy and security, but also builds user trust and confidence. By providing a legal framework that ensures the mutual recognition of electronic IDs in the EU, eIDAS effectively addresses the risks of inconsistent standards, thereby aiding in the smoother adoption and integration of digital identity services throughout Europe.

Results

As seen in Table 9, all stakeholder groups strongly indicated that standards, rules and regulations are important, and that governments should lead the way in regulating digital identity. Responses to this question from Academia indicate that the group believes it has an important part to play in designing these standards, rules and regulations. Furthermore, Academia rated Business low, signalling its belief in the importance of applying grounded academic rigour to universal digital identity design.
Conversely, all participants firmly indicated that governments should work with businesses to develop the technology required for digital identity solutions. Consumers scored lower than the other groups; this sits somewhat at odds with user-centricity, which values consumer involvement in design from the perspective of user acceptance.
  • Governance model implications and consumer roles. Responses to Q23–Q24 indicate strong support for regulation (92–100% across strata) and a preference for government-led regulation, suggesting that stakeholders expect a formal governance regime rather than a purely market-driven identity ecosystem (Table 7). Such regimes can be instantiated through different models: (i) centralised schemes where government sets rules and operates core identity services; (ii) federated/trust-framework approaches where multiple providers and relying parties are accredited against common assurance, security, and privacy requirements; and (iii) hybrid/self-sovereign designs where users hold credentials while governance authorities still maintain standards, registries, audits, and dispute mechanisms. A practical implication is that consumer roles extend beyond being identity subjects: consumers require institutionalised channels for consultation (e.g., citizen panels or consumer advisory functions), transparency over ecosystem rules, and accessible complaints and redress pathways. This is particularly important given that consumer involvement is rated lower than other groups in aspects of technology development (Table 7), implying that consumer input may be most effectively embedded through governance, oversight, and regulatory requirements on usability, accessibility, and accountability rather than through technical design participation alone.
These results are consistent with cross-jurisdictional practice in which regulation is increasingly operationalised through trust frameworks and accreditation/certification regimes rather than through a single mandated technical architecture. The examples summarised in Section 3.2 (e.g., EU eIDAS/EU Digital Identity Wallets, the UK Trust Framework, and the NZ and Australian trust/accreditation legislation) illustrate how policy can enable multiple implementations while still imposing baseline requirements for assurance, privacy, and accountability [105,106,107,109].

6.6. Data Management

Data management in digital identity ecosystems refers to the systematic process of collecting, storing, accessing and safeguarding personal and identity data within digital identity frameworks. This topic is explored in numerous papers, including the works of Beduschi and Faber et al.
Beduschi addresses the challenge faced by over a billion people lacking official identity documents, underlining the UN’s goal of legal identity for all by 2030. The article highlights the increasing role of technology in identification processes and its human rights implications, particularly concerning data protection, privacy and discrimination. Beduschi argues that the success of digital identity technologies depends on their compliance with strict data protection and privacy standards to avoid human rights violations [169].
Faber et al. propose a blockchain-based solution for managing personal data in digital identities, focusing on a decentralised approach that enhances user control and privacy, in line with GDPR principles [170]. Their model aims to improve security and transparency, thereby increasing trust in the digital identity ecosystem. By prioritising user control over personal data, this approach addresses current challenges in data privacy and security, setting a new standard for digital identity management that is secure, transparent and user-centric.
South Korea’s PASS system, developed by the Korea Financial Telecommunications and Clearings Institute (KFTC), is a key example of data management in digital identity ecosystems. As a mobile-based digital authentication system primarily used for financial transactions, PASS incorporates multi-modal authentication methods like biometrics and PINs, underpinned by strong encryption and high-level security protocols. It also demonstrates interoperability by allowing users to authenticate across various banks and services. Concurrently, there is an increasing focus in South Korea on using blockchain for decentralised digital identity solutions, highlighting a trend towards user-controlled personal data management—a perspective supported by numerous research papers [171,172,173,174].
Emerging technology implications (AI, biometrics, blockchain). Building on the technology considerations outlined in Section 3.2, the data-management results (Table 9) and trust-related findings (Table 10) suggest that emerging technologies can amplify both adoption benefits and surveillance concerns if deployed without explicit constraints. For AI-enabled verification and risk scoring, the practical implication is that governance should require transparency, auditability, and contestability where automated outcomes materially affect access to services [93]. For biometrics, the key concern is inclusion and remediability: ecosystems should support alternative methods and recovery pathways, and minimise biometric exposure given demographic performance variation and failure modes [94]. For blockchain-based registries and decentralised credential ecosystems, the main implication is privacy-preserving implementation under clear accountability: avoid placing personal data on-chain, minimise linkable metadata, and adopt standards-based DID/VC patterns within defined governance and liability arrangements [95,96].
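The off-chain principle above can be illustrated with a minimal sketch: the credential and its personal data remain in the user’s wallet, and only a salted commitment is anchored to a public registry. This is an illustrative pattern only, not a standards-conformant DID/VC implementation; the function names and data shapes are hypothetical.

```python
import hashlib
import json
import secrets

def issue_credential(claims: dict) -> tuple[dict, str]:
    """Issue a credential held by the user; only a salted commitment
    (not the personal data) is anchored to a public registry."""
    salt = secrets.token_hex(16)  # per-credential salt resists dictionary attacks
    payload = json.dumps(claims, sort_keys=True)
    commitment = hashlib.sha256((salt + payload).encode()).hexdigest()
    credential = {"claims": claims, "salt": salt}  # stays in the user's wallet
    return credential, commitment  # only `commitment` goes on-chain

def verify_credential(credential: dict, commitment: str) -> bool:
    """A relying party recomputes the commitment from the disclosed credential."""
    payload = json.dumps(credential["claims"], sort_keys=True)
    digest = hashlib.sha256((credential["salt"] + payload).encode()).hexdigest()
    return digest == commitment

cred, anchor = issue_credential({"name": "Alice", "over_18": True})
assert verify_credential(cred, anchor)
```

Because the registry holds only the commitment, no personal data is exposed on-chain, and any tampering with the wallet-held claims invalidates verification.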

Results

In this questionnaire, data management questions focused on data privacy and data sharing. All stakeholder groups felt that information should be shared between government departments, although the Consumer response, at 3.3, was notably lower than that of Government and Business, both at 3.7.
When it came to tracking user activity online, all stakeholder groups agreed that businesses were the most likely to conduct some form of tracking. Scores for Government, Business and Law Enforcement were all high, however, suggesting a high degree of scepticism amongst users that could hinder the adoption of digital identity technology.
Only government participants felt that existing customer data should be used to auto-enrol customers in digital identity schemes—with the other stakeholder groups opting for self-management of this aspect of their digital identities. Additional education would be required to increase trust in data sharing policies.

6.7. Trust

The issue of trust in digital identity systems is crucial, particularly in the context of their use by governments during both normal and crisis situations. Reflecting on the controversial historical uses of such systems, contemporary implementations must prioritise safeguarding citizens’ rights, which requires collaboration across various stakeholders. This approach involves a commitment to open standards, transparency in intellectual property, adaptability to new technologies, and enforced interoperability. As exemplified by Canada’s experience in creating a national identity ecosystem, the essence of these efforts is to build and maintain trust in digital identity projects [31].
Weaver, CEO of Digital Identity New Zealand (DINZ), underlines trust as a critical challenge in digital identity, fuelled by public concerns over privacy and the use of personal data. He notes the importance of transparency in building trust and the need for tech companies to adapt their business models to prioritise user privacy and information transparency [175].
Multiple countries are developing trust frameworks for digital identity systems to address these trust concerns. The Digital Identity Working Group (DIWG), formed in 2020, includes Australia, Canada, Finland, Israel, New Zealand, Singapore, the Netherlands, and the UK, and focuses on standardising digital identity protocols to ensure interoperability.

Results

Interestingly, the Business, Academia and Consumer groups did not believe their personal data would remain private in a digital identity network (see Table 11)—a result which may have a significant impact on the adoption of digital identity technology.
Government, Business and Academia trusted Government the most, whereas consumers elected to trust themselves marginally more than Government. Even though all stakeholder groups trusted Business least of all to maintain personal privacy, there was still support for Business to link identity across different contexts. This could indicate an acknowledgement of the need to integrate cloud service platforms, and warrants further research.

6.8. Liability

When government services act as the relying party in identity transactions, concerns are raised about liability and responsibility for illegitimate identity transactions. Whitley asserts that these concerns were never satisfactorily resolved with the previous UK identity system as it was never clear what liability the Government would face if a transaction relied on an official identity card [81].
The issue becomes more complicated in a decentralised self-sovereign ecosystem involving multiple identity providers. Assuming some level of liability is necessary, the subsequent concern would be whether liability could be reduced by incorporating additional security vectors such as biometric verification or standards that reduce risk for the relevant parties.
In Beduschi’s research into data privacy and human rights in a post-COVID world [169], she asserts that participation in a digital identity ecosystem carries a level of responsibility. Digital identity providers, both private and public, should be held accountable for their actions and omissions by end-users. On the operating side, DINZ states that its members are unlikely to initiate digital identity solutions unless the level of risk they carry is clarified. Following discussions with its members, DINZ now expects New Zealand’s trust framework to contain provisions for liability.

Results

Across all stakeholder groups, liability was, for the most part, shared between Government and Business (see Table 12). With a sample mean of 5.1%, there was limited support for consumers to be liable for the loss themselves. There was, however, more support (13.1%) for “None of the above”, which suggests that some participants felt a regulatory body or the offender should be liable.
  • I. Value tension analysis and candidate mitigation approaches. The survey results indicate several value tensions that are likely to affect the legitimacy and adoptability of universal digital identity ecosystems. These tensions do not imply mutually exclusive goals; rather, they highlight where design and governance choices must explicitly manage trade-offs.
Governance leadership vs privacy/surveillance concerns. Across the sample, government is consistently expected to lead in establishing a digital identity vision and is supported as a legitimate actor in national identity infrastructure (Table 3 and Table 6). At the same time, respondents express strong concern that digital identity may be used for online activity monitoring (Table 9) and report low confidence that personal data will remain private in a digital identity network (Table 10). This juxtaposition suggests a practical tension: stakeholders may accept government leadership and regulation, but only if governance arrangements credibly constrain surveillance and secondary use. Candidate mitigation approaches include purpose limitation and proportionality rules, independent oversight and auditability, and architectural separation between identity proofing, credential issuance, and service access so that behavioural data are not centrally aggregated.
Convenience and data reuse vs individual control/consent. The results also suggest tension between frictionless onboarding and autonomy. While respondents generally accept some degree of information sharing between government departments (Table 9), only the government subgroup supports the use of existing customer data for auto-enrolment, with other groups preferring self-management (Table 9). Practical responses to this tension include opt-in and revocable consent models, staged enrolment pathways, clear user-facing explanations of data flows, and privacy-preserving default settings that minimise data reuse unless explicitly authorised.
Interoperability and private-sector integration vs low trust in commercial actors. Interoperability and market participation often require commercial actors to play meaningful roles in identity ecosystems (e.g., technology delivery and cross-domain integration), yet business is consistently ranked as least trusted to manage identity while preserving privacy (Table 10) and is perceived as highly likely to track online activity (Table 9). This tension can be mitigated through stronger accreditation and compliance requirements for relying parties and identity providers, explicit liability and redress mechanisms (Table 11), and privacy-preserving interoperability patterns (e.g., context-specific identifiers and minimal disclosure of attributes) that reduce unnecessary cross-context linkability.
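One of the privacy-preserving interoperability patterns mentioned above, context-specific (pairwise) identifiers, can be sketched as a keyed derivation from a user-held secret: each relying party sees a stable identifier, but two parties cannot link the same user by comparing identifiers. This is a minimal illustration; the secret value and service names are assumptions, not part of any specific scheme.

```python
import hashlib
import hmac

def pairwise_identifier(master_secret: bytes, relying_party: str) -> str:
    """Derive a stable identifier that differs per relying party, reducing
    cross-context linkability while remaining consistent for each service."""
    return hmac.new(master_secret, relying_party.encode(), hashlib.sha256).hexdigest()

secret = b"user-held master secret"  # hypothetical; would live in the user's wallet
id_bank = pairwise_identifier(secret, "bank.example")
id_health = pairwise_identifier(secret, "health.example")

assert id_bank != id_health                                     # unlinkable across contexts
assert id_bank == pairwise_identifier(secret, "bank.example")   # stable per context
```

The same HMAC construction underlies pairwise pseudonymous identifiers in several federated identity deployments; what varies in practice is where the secret is held and how identifier rotation is governed.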
Security/robustness vs usability and inclusion. Finally, respondents rate both privacy and usability as important (Table 8), and the surrounding discussion emphasises the need to balance technical robustness with user-centric adoption requirements. In practice, overly burdensome security controls can increase exclusion or shift users toward insecure workarounds, whereas overly permissive designs can undermine trust. Candidate mitigation approaches include step-up authentication (risk-based escalation), inclusive recovery and exception-handling pathways, multi-channel access options, and targeted education interventions (Table 5) to reduce misunderstanding of safeguards and responsibilities.
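Step-up authentication, as referenced above, can be sketched as a risk-scoring function that escalates authentication requirements only when the context warrants it, keeping friction low for routine access. The signals, thresholds, and level names below are illustrative assumptions, not a recommended policy.

```python
from dataclasses import dataclass

@dataclass
class Context:
    new_device: bool
    transaction_value: float
    failed_attempts: int

def required_assurance(ctx: Context) -> str:
    """Map contextual risk signals to an authentication level,
    escalating (stepping up) only under elevated risk."""
    score = 0
    score += 2 if ctx.new_device else 0
    score += 2 if ctx.transaction_value > 1000 else 0
    score += ctx.failed_attempts
    if score >= 4:
        return "biometric-or-in-person"   # highest friction, reserved for high risk
    if score >= 2:
        return "password-plus-otp"        # moderate step-up
    return "password-only"                # low-friction default

assert required_assurance(Context(False, 20.0, 0)) == "password-only"
assert required_assurance(Context(True, 5000.0, 0)) == "biometric-or-in-person"
```

Pairing such escalation with inclusive fallback pathways (e.g., alternatives to biometrics) addresses the usability and inclusion concerns raised above.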
To summarise these tensions in a form useful to practitioners, Table 13 maps each observed tension to the survey evidence and to candidate mitigation approaches that can be evaluated in subsequent VSD iterations.
  • II. Liability allocation, consumer protection, and regulatory implications. Liability in digital-identity-based transactions can originate from distinct failure points, including (a) proofing or issuance errors (e.g., false acceptance), (b) credential compromise or wallet/device theft, (c) relying-party misuse or inadequate transaction controls, and (d) systemic failures in registries or authentication services. In centralised models, liability may be concentrated in the state operator; in federated or self-sovereign ecosystems, allocation is typically shared across accredited providers and relying parties, requiring explicit role-specific obligations and enforceable minimum controls. The survey results indicate limited support for consumers bearing financial loss directly (sample mean 5.1%) and non-trivial selection of “none of the above” (13.1%), implying an expectation that liability should largely sit with institutions (government/business) or with an accountable offender/regulatory mechanism rather than with end-users (Table 11).
To enhance practical applicability, these findings motivate governance and regulatory design features that protect consumers while clarifying responsibilities: (i) explicit allocation of liability across ecosystem roles (issuer, wallet provider, authentication service, relying party) and across error types; (ii) defined consumer duties that are proportionate and operationalisable (e.g., reasonable care and prompt reporting) coupled with clear “safe harbour” conditions when those duties are met; (iii) accessible dispute-resolution and redress mechanisms (including time-bound investigation and reimbursement pathways); and (iv) accreditation, audit, and (where appropriate) insurance/compensation requirements for participating providers. These mechanisms also connect to the broader trust and regulation findings, where stakeholders support regulation as a means to ensure privacy and network integrity (Table 7 and Table 10).

7. Limitations of the Study

This research takes an innovative approach to studying digital identity, despite the lack of a specific theoretical framework within this emerging field. Recognising this gap, our study takes an exploratory approach by strategically integrating guiding principles from established generalised research frameworks alongside insights from the more focused “Innovate Identity” report [80]. This combination enriches our questionnaire design, utilising standardised controls such as Likert scales to ensure consistent and meaningful analysis. A lack of rigorous testing of this approach, however, remains a limitation.
In this study, participants were recruited using convenience sampling via multiple online platforms and professional communities (Section 5.2.1). This approach is appropriate for exploratory work; however, it introduces selection and coverage biases. Respondents are more likely to be digitally engaged, comfortable completing online surveys, and already exposed to digital identity discourse. Individuals with limited internet access, lower digital literacy, or who are not active in the recruitment channels used are therefore underrepresented, consistent with the known limitations of online-only survey distribution [142]. In addition, stakeholder strata are uneven (e.g., government respondents comprise a smaller subgroup than consumers), which limits inferential comparison and may influence combined summaries.
Because country/region was not collected, we cannot assess whether particular regulatory contexts are over- or under-represented; future cycles will record jurisdiction and use quota/stratified recruitment across regions.
Future research can strengthen demographic and geographic diversity by adopting more structured recruitment designs, including: (i) stratified or quota-based sampling across stakeholder strata and key demographic variables (e.g., age, education, digital literacy, and urban/rural context); (ii) multi-jurisdiction sampling plans that explicitly target participation across regions and differing regulatory and identity-governance contexts; (iii) multilingual instruments and region-specific dissemination; and (iv) partnerships with public agencies, civil-society organisations, and professional panels to reach populations that are less visible in online convenience samples. Where population marginals are available, post-stratification/weighting can further reduce imbalance and improve the interpretability of cross-group comparisons.
Additionally, being able to comment on digital identity technology requires a certain amount of prerequisite knowledge and experience. Future studies should take further steps to assess participants’ level of expertise on the subject matter and provide additional information where possible in order to achieve clarity. This applies, to a greater degree, to non-Internet users.
Even within the stakeholder groupings we have used, there is a great deal of diversification. The commercial needs of the banking sector, for example, are entirely different from the requirements of the health sector. Further research should seek to unpack each group into a more granular structure and identify unique perspectives on the technology.

8. Conclusions

This study does not aim to generate statistically generalisable claims. Rather, it provides exploratory and diagnostic insights based on a conceptually structured sample of stakeholder roles within the digital identity ecosystem. The results reflect patterns of value alignment and divergence among archetypal groups—government, business, academia, and consumers—rather than any nationally or demographically bounded population. These findings should therefore be interpreted as indicative and hypothesis-generating, forming a foundation for more targeted empirical work in future phases.
Despite its exploratory nature and limited generalisability, the research yields empirically grounded insights into stakeholder attitudes toward digital identity systems, offering valuable insights into the assumptions that underpin their universal design. Analysis of the sample has led to several key conclusions that relate directly to our research questions:
  • Core Values (RQ1)—Stakeholders exhibit similar prioritisation of privacy and usability, as indicated by responses to Q3 (importance of privacy, usability, and trust). However, significant differences emerge in other core values, such as education (Q8, Q10), network regulation (Q23, Q24), and collaboration (Q7, Q16), reflecting diverse stakeholder priorities and perspectives. These discrepancies underscore the challenges of achieving alignment and the importance of fostering mutual understanding to create a balanced and inclusive digital identity ecosystem. The findings underscore the value of a unified digital identity approach that accommodates these diverse priorities.
  • Roles and Responsibilities (RQ2)—Responses to Q7 (involvement of groups in establishing vision) and Q16 (control and management of digital identity) reveal a general shared understanding among stakeholders about their roles and responsibilities within the ecosystem. However, significant differences were observed in responses to Q9 (understanding of motivations of various parties), Q14 (importance of government in issuing credentials), and Q15 (who should hold/store credentials), indicating misalignment in how stakeholders perceive the distribution of responsibilities and authority in the digital identity ecosystem. These differences highlight areas that require targeted education and negotiation to achieve greater alignment.
  • Trust (RQ3)—Concerns about data privacy persist across all stakeholder groups, as shown in responses to Q27 (use of digital identity to track activities) and Q28 (trust in data privacy within a digital identity network). These concerns emphasise trust as a critical barrier to widespread acceptance and highlight the necessity for robust data protection mechanisms.
Overall, this pilot demonstrates the feasibility of the survey instrument and analysis pipeline for eliciting patterns of stakeholder alignment and value tensions within a conceptually defined universal digital identity ecosystem. The findings are indicative and hypothesis-generating rather than ecosystem-level validation, and they motivate larger follow-on studies using stratified and multi-jurisdiction sampling and complementary qualitative methods to test the robustness of the observed patterns.
Additionally, the following more detailed indicators have been observed in conjunction with our overarching research question:
  • Government’s Leadership Role—Responses to Q14 (importance of government in issuing credentials) and Q16 (importance of government in controlling digital identity) indicate a strong expectation for governments to lead the development of the digital identity ecosystem.
  • Trust in Government—While concerns about government overreach were reflected in Q27 (online activity monitoring), responses to Q13 (understanding of national identity scheme history) and Q11 (support for a national identity register) show a paradox where participants trust governments to lead but remain cautious about potential surveillance.
  • Support for Regulation—Responses to Q23 (should digital identity networks be regulated) and Q24 (who should regulate digital identity) demonstrate strong support for regulating the digital identity network to ensure trust and privacy, addressing trust concerns associated with RQ3.
  • Collaboration Between Government and Business—In researching RQ2, collaboration between government and business entities is highlighted in responses to Q21 (establishing standards, rules, and regulations) and Q25 (liability for financial loss). This partnership and the associated roles appear to be accepted as essential for governing digital identity effectively.
  • Concerns About Online Monitoring—Responses to Q27 (use of digital identity to track activities) and Q28 emphasise elevated concerns about online monitoring, underscoring the need to address these apprehensions and enhance understanding of digital identity technology.
These key conclusions are based on a sample set of 243 participants. The conclusions drawn are indicative only and warrant further research with a larger sample set.

Contributions and Future Research

This paper makes three contributions to research and practice on digital identity ecosystems. First, it provides exploratory empirical evidence on where key stakeholder groups align or diverge on core socio-technical concerns—particularly privacy, trust, usability, and governance expectations—using survey results stratified across government, business, academia, and consumers. Second, it synthesises these findings into a stakeholder-aligned conceptual structure spanning ecosystem, values, stakeholders, roles, responsibilities, and artefacts, intended to support clearer reasoning about ecosystem readiness and to make value tensions more explicit during design and governance deliberation.
Third, it presents a lightweight proof-of-concept illustrating how this structure can be operationalised as sequential semantic constraints in a context-layered retrieval-augmented generation (RAG) pipeline, supporting ecosystem-aware and role-relevant AI outputs.
More specifically, within these headline contributions the study contributes initial insight into stakeholder consensus and leadership expectations (e.g., stakeholder groupings and vision alignment; Q1, Q7), government leadership and standards-setting roles (Q11, Q21), and trust dynamics (including privacy trust and relative trust allocation; Q28, Q29). It also surfaces evidence that complicates simple assumptions of uniform government mistrust (e.g., juxtaposing surveillance concerns with leadership expectations; Q27, Q29), and it provides preliminary support for hybrid governance positions that combine government oversight with broader stakeholder engagement (Q16, Q24). Finally, the findings on regulatory preferences and public–private coordination (Q23, Q24; Q7, Q22), and on privacy–usability tensions and role expectations (Q3, Q19; Q14–Q16), offer early signals that can inform subsequent, more comprehensive ecosystem design work.
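The context-layered RAG contribution described above can be sketched as metadata-constrained retrieval: chunks are tagged with an ecosystem layer and a stakeholder role, the layer constraints are applied first as sequential filters, and only the surviving chunks are ranked for the generator. In this minimal sketch, term overlap stands in for embedding similarity, and the class names, layer labels, and scoring are illustrative placeholders rather than the paper’s implementation.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    layer: str        # ecosystem layer, e.g. "values", "roles", "artefacts"
    stakeholder: str  # e.g. "government", "business", "academia", "consumer"

def layered_retrieve(chunks, query_terms, layer=None, stakeholder=None, k=3):
    """Apply context-layer metadata constraints before ranking, so the
    generator only ever sees ecosystem- and role-relevant evidence."""
    pool = [c for c in chunks
            if (layer is None or c.layer == layer)
            and (stakeholder is None or c.stakeholder == stakeholder)]
    scored = sorted(pool,
                    key=lambda c: sum(t in c.text.lower() for t in query_terms),
                    reverse=True)
    return scored[:k]

corpus = [
    Chunk("Consumers rate privacy above convenience.", "values", "consumer"),
    Chunk("Government is expected to set standards.", "roles", "government"),
    Chunk("Wallet credentials are user-held artefacts.", "artefacts", "consumer"),
]
hits = layered_retrieve(corpus, ["privacy"], layer="values", stakeholder="consumer")
assert hits[0].text.startswith("Consumers rate privacy")
```

Filtering before ranking is the key design choice: it guarantees that out-of-layer material cannot outrank in-layer material, which is what makes the retrieved context ecosystem-aware rather than merely topically similar.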
Based on the conclusions from the study on digital identity ecosystems, several promising areas for future research are identified:
  • Scale and Longitudinal Studies—Expanding the sample size and conducting longitudinal research to observe changes in stakeholder perceptions over time as digital identity technologies evolve. Insights from Q3 and Q28 indicate that stakeholder priorities and trust concerns may shift over time.
  • Governance Models—Investigating various governance frameworks, including self-sovereign, centralised, and hybrid models, to determine their effectiveness and stakeholder acceptance, as suggested by responses to Q24 and Q16.
  • Regulatory Frameworks—Developing and testing specific regulatory frameworks to evaluate their impact on the privacy, security, and usability of digital identities, based on the diverse stakeholder responses in Q23 and Q15.
  • Public–Private Partnerships—Examining effective structures for collaboration between government and business in creating liability models for digital identities, drawing from insights in Q25 and Q7.
  • Social Inclusion—Researching how digital identity systems affect social inclusion, particularly for marginalised groups, and the potential for either exacerbating or reducing social inequalities, as indicated by feedback to Q19 and Q20.
  • Technological Advancements—Continuously assessing the reception and impact of emerging technologies in digital identity systems, including AI-enabled verification and risk-based authentication, biometric authentication, and decentralised identifier/credential technologies (e.g., DID/VC). Future work should evaluate not only performance and usability, but also transparency, demographic fairness, and governance implications, as suggested by the technological usability concerns in Q30.
  • Privacy Concerns—Focusing on developing privacy-enhancing technologies and improving stakeholder education and trust in digital identity systems, as highlighted by responses to Q3 and Q12.
  • Cultural and Ethical Implications—Exploring the cultural and ethical considerations of digital identities across different societies to ensure global applicability and acceptance, a need underscored by diverse perspectives in Q6 and Q28.
  • Representative and Cross-Jurisdiction Sampling—Implementing stratified sampling across demographic groups and jurisdictions (including lower-connectivity settings and underrepresented populations) to test whether the alignment patterns observed in this pilot generalise under different cultural, regulatory, and identity-governance conditions.
These research directions can further refine the development of digital identity systems, ensuring they are secure, equitable and acceptable across various contexts.

Author Contributions

Conceptualization, M.C. and A.M.; methodology, M.C.; investigation, M.C.; formal analysis, M.C.; data curation, M.C.; writing—original draft preparation, M.C.; writing—review and editing, M.C. and A.M.; supervision, A.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Commonwealth Scholarship Commission in the UK (Reference: CSC CR-2019-67).

Institutional Review Board Statement

The study involved human participants through an online survey. Ethical review and approval were provided by the Department of Computer Science Departmental Research Ethics Committee (DREC), University of Oxford (Reference: CS_C1A_021_025).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available in this article. Further information is available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Longstaff, T.A.; Ellis, J.T.; Hernan, S.V.; Lipson, H.F.; Mcmillan, R.D.; Pesante, L.H.; Simmel, D. Security of the Internet. In The Froehlich/Kent Encyclopedia of Telecommunications; CRS Press: London, UK; New York, NY, USA, 1996; Volume 5, pp. 231–255. [Google Scholar]
  2. Internet Society. 2018 Cyber Incident and Breach Trends Report; Internet Society: Reston, VA, USA, 2018. [Google Scholar]
  3. Anderson, R.; Barton, C.; Böhme, R.; Clayton, R.; Gañán, C.; Grasso, T.; Levi, M.; Moore, T.; Vasek, M. Measuring the Changing Cost of Cybercrime. In Proceedings of the 2019 Workshop on the Economics of Information Security, Boston, MA, USA, 3–4 June 2019. [Google Scholar]
  4. Hasselbring, W. Information system integration. Commun. ACM 2000, 43, 32–38. [Google Scholar] [CrossRef]
  5. Comb, M.J.A. Achieving Business @ The Speed of Thought. Ph.D. Thesis, Massey University, Palmerston North, New Zealand, 2016. [Google Scholar]
  6. Ko, R.K.; Jagadpramana, P.; Mowbray, M.; Pearson, S.; Kirchberg, M.; Liang, Q.; Lee, B.S. TrustCloud: A framework for accountability and trust in cloud computing. In Proceedings of the 2011 IEEE World Congress on Services, Washington, DC, USA, 4–9 July 2011. [Google Scholar]
  7. Solove, D.J.; Schwartz, P.M. Information Privacy Law; Aspen Publishing: Waltham, MA, USA, 2020. [Google Scholar]
  8. Goodell, G.; Aste, T. A Decentralised Digital Identity Architecture. Front. Blockchain 2019, 2, 17. [Google Scholar] [CrossRef]
  9. Nakamoto, S. Bitcoin: A Peer-to-Peer Electronic Cash System. Technical Report, 2008. Available online: https://assets.pubpub.org/d8wct41f/31611263538139.pdf (accessed on 16 December 2023).
  10. Swan, M. Blockchain for Business: Next-Generation Enterprise Artificial Intelligence Systems. In Advances in Computers; Elsevier: Amsterdam, The Netherlands, 2018. [Google Scholar] [CrossRef]
  11. Laatikainen, G.; Kolehmainen, T.; Abrahamsson, P. Self-Sovereign Identity Ecosystems: Benefits and Challenges. Technical Report. 2021. Available online: https://jyx.jyu.fi/jyx/Record/jyx_123456789_77892 (accessed on 12 July 2023).
  12. Lyon, D. Surveillance, Snowden, and Big Data: Capacities, consequences, critique. Big Data Soc. 2014, 1, 2053951714541861. [Google Scholar] [CrossRef]
  13. Comb, M.; Martin, A. Mining digital identity insights: Patent analysis using NLP. EURASIP J. Inf. Secur. 2024, 2024, 21. [Google Scholar] [CrossRef]
  14. Cameron, K. The Laws of Identity. Microsoft Corp 2005, 12, 8–11. [Google Scholar] [CrossRef]
  15. Onesi-Ozigagun, O.; Ololade, Y.J.; Eyo-Udo, N.L.; Ogundipe, D.O. AI-driven biometrics for secure fintech: Pioneering safety and trust. Int. J. Eng. Res. Updat. 2024, 6, 001–012. [Google Scholar] [CrossRef]
  16. Michael, K.; Abbas, R.; Jayashree, P.; Bandara, R.J.; Aloudat, A. Biometrics and AI Bias. IEEE Trans. Technol. Soc. 2022, 3, 2–8. [Google Scholar] [CrossRef]
  17. Agrawal, G.; Kumarage, T.; Alghamdi, Z.; Liu, H. Can Knowledge Graphs Reduce Hallucinations in LLMs?: A Survey. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Mexico City, Mexico, 16–21 June 2024. [Google Scholar]
  18. Rau, D.; Wang, S.; Dejean, H.; Clinchant, S. Context Embeddings for Efficient Answer Generation in RAG. arXiv 2024, arXiv:2407.09252. [Google Scholar] [CrossRef]
  19. Buhler, G.; Entschew, E.; Selhorst, M. Security Versus Usability—User-Friendly Qualified Signatures Based on German ID Cards. In ISSE 2014 Securing Electronic Business Processes; Springer Fachmedien Wiesbaden: Wiesbaden, Germany, 2014; pp. 94–105. [Google Scholar] [CrossRef]
  20. Brunner, C.; Gallersdorfer, U.; Knirsch, F.; Engel, D.; Matthes, F. DID and VC: Untangling decentralized identifiers and verifiable credentials for the web of trust. In Proceedings of the 2020 3rd International Conference on Blockchain Technology and Applications, New York, NY, USA, 21 March 2020; pp. 61–66. [Google Scholar] [CrossRef]
  21. Mayer, S.; Guinard, D.; Wilde, E.; Kovatsch, M. The Seventh international workshop on the Web of Things. In Proceedings of the Seventh International Workshop on the Web of Things, Stuttgart, Germany, 7 November 2016; pp. 1–4. [Google Scholar] [CrossRef]
  22. Sartor, S.; Sedlmeir, J.; Rieger, A.; Roth, T.H. Love at First Sight? A User Experience Study of Self-Sovereign Identity Wallets. Technical Report, 2022. Available online: https://www.researchgate.net/publication/360021644_Love_at_First_Sight_A_User_Experience_Study_of_Self-Sovereign_Identity_Wallets (accessed on 21 December 2024).
  23. Sellung, R.; Kubach, M. Research on User Experience for Digital Identity Wallets: State-of-the-Art and Recommendations. In Proceedings of the Open Identity Summit 2023, Berlin, Germany, 15–16 June 2023; Volume P-335, pp. 39–50. [Google Scholar] [CrossRef]
  24. Cavoukian, A. Privacy by Design. Identity Inf. Soc. 2010, 3, 1–12. [Google Scholar]
  25. Mont, M.C.; Pearson, S.; Bramhall, P. Towards accountable management of identity and privacy: Sticky policies and enforceable tracing services. Proc.—Int. Workshop Database Expert Syst. Appl. DEXA 2003, 2003, 377–382. [Google Scholar] [CrossRef]
  26. Singh, P. Aadhaar and data privacy: Biometric identification and anxieties of recognition in India. Inf. Commun. Soc. 2021, 24, 978–993. [Google Scholar] [CrossRef]
  27. Cavoukian, A. Privacy by Design The 7 Foundational Principles Implementation and Mapping of Fair Information Practices. Technical Report. 2009. Available online: https://student.cs.uwaterloo.ca/~cs492/papers/7foundationalprinciples_longer.pdf (accessed on 15 July 2020).
  28. Khatchatourov, A.; Laurent, M.; Levallois-Barth, C. Privacy in Digital Identity Systems: Models, Assessment, and User Adoption; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
  29. Anonymous. Trusted Digital Identity Framework (TDIF) | Digital Identity, 2019. Available online: https://web.archive.org/web/20240805120217/https://www.digitalidsystem.gov.au/tdif (accessed on 11 July 2023).
  30. Frengley, B. How trustworthy is the Trusted Digital Identity Framework? Evaluating security and privacy in Australian digital identity. Master Thesis, University of Melbourne, Melbourne, Australia, 2020. [Google Scholar]
  31. Abraham, S. Building Trust: Lessons from Canada’s Approach to Digital Identity. ORF Issue Brief 2020, 367, 1–10. [Google Scholar]
  32. Josang, A.; Fabre, J.; Hay, B.; Dalziel, J.; Pope, S. Trust requirements in identity management. Conf. Res. Pract. Inf. Technol. Ser. 2005, 44, 99–108. [Google Scholar]
  33. Alhussain, T.; Drew, S. Towards User Acceptance of Biometric Technology in E-Government: A Survey Study in the Kingdom of Saudi Arabia. In Proceedings of the IFIP Advances in Information and Communication Technology, Lisboa, Portugal, 9–12 June 2009; Volume 305, pp. 26–38. [Google Scholar] [CrossRef]
  34. Alpár, G.; Hoepman, J.H.; Siljee, J. The Identity Crisis. Security, Privacy and Usability Issues in Identity Management. arXiv 2011, arXiv:1101.0427. [Google Scholar] [CrossRef]
  35. Hackett, M.; Hawkey, K. Security, Privacy and Usability Requirements for Federated Identity. Technical Report, 2012. Available online: https://www.researchgate.net/profile/Michael-Hackett/publication/225303840_Security_Privacy_and_Usability_Requirements_for_Federated_Identity/links/09e414fdb76a87d777000000/Security-Privacy-and-Usability-Requirements-for-Federated-Identity.pdf (accessed on 9 March 2020).
  36. Acquisti, A. Privacy and Security of Personal Information. In Economics of Information Security; Springer: Boston, MA, USA, 2006; pp. 179–186. [Google Scholar] [CrossRef]
  37. Jaatun, M.G.; Zhao, G.; Rong, C. Strengthen Cloud Computing Security with Federal Identity Management Using Hierarchical Identity-Based Cryptography; Technical Report; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  38. Alanzi, H.; Alkhatib, M. Towards Improving Privacy and Security of Identity Management Systems Using Blockchain Technology: A Systematic Review. Appl. Sci. 2022, 12, 12415. [Google Scholar] [CrossRef]
  39. Sule, M.J.; Zennaro, M.; Thomas, G. Cybersecurity through the lens of Digital Identity and Data Protection: Issues and Trends. Technol. Soc. 2021, 67, 101734. [Google Scholar] [CrossRef]
  40. Security, P.S.N. The Digital Identity Issue; Elsevier: Amsterdam, The Netherlands, 2015. [Google Scholar]
  41. Domingo, A.I.S.; Enriquez, A.M. Digital Identity: The Current State of Affairs. Technical Report, 2018. Available online: https://www.bbvaresearch.com/wp-content/uploads/2018/02/Digital-Identity_the-current-state-of-affairs.pdf (accessed on 15 July 2024).
  42. In, J. Introduction of a pilot study. Korean J. Anesthesiol. 2017, 70, 601–605. [Google Scholar] [CrossRef]
  43. Satchell, C.; Shanks, G.; Howard, S.; Murphy, J. Knowing me, knowing you: End user perceptions of identity management systems. In Proceedings of the 14th European Conference on Information Systems, 12–14 June 2006. [Google Scholar]
  44. Edu, J.; Hooper, M.; Maple, C.; Crowcroft, J. Moving Beyond Frameworks: Stakeholders’ Perceptions of Risk Assessment in National Electronic Identity System. 2023; preprint. [Google Scholar] [CrossRef]
  45. Kramer, M. Creating a value-driven Digital Identity Future Engaging multiple stakeholders in strategic dialogues to balance values in the emergent ecosystem of digital identity in Europe. Technical Report, 2023. Available online: https://repository.tudelft.nl/record/uuid:c33f31e5-86a0-4535-a5c1-8e6f07f45b5e (accessed on 11 May 2024).
  46. Elliott, J.; Birch, D.; Ford, M.; Whitcombe, A. Overcoming Barriers in the EU Digital Identity Sector; Institute for Prospective Technological Studies, European Commission: Ispra, Italy, 2007. [Google Scholar]
  47. Lam, W. Barriers to e-government integration. J. Enterp. Inf. Manag. 2005, 18, 511–530. [Google Scholar] [CrossRef]
  48. Olsen, T.; Mahler, T. Risk, responsibility and compliance in ‘Circles of Trust’—Part I. Comput. Law Secur. Rep. 2007, 23, 342–351. [Google Scholar] [CrossRef]
  49. Sullivan, C. Digital identity–From emergent legal concept to new reality. Comput. Law Secur. Rev. 2018, 34, 723–731. [Google Scholar] [CrossRef]
  50. Pattiyanon, C.; Aoki, T. Compliance SSI System Property Set to Laws, Regulations, and Technical Standards. IEEE Access 2022, 10, 99370–99393. [Google Scholar] [CrossRef]
  51. Beduschi, A. Digital identity: Contemporary challenges for data protection, privacy and non-discrimination rights. Big Data Soc. 2019, 6, 2053951719855091. [Google Scholar] [CrossRef]
  52. Manohar, A.K.; Briggs, J. Identity Management in the Age of Blockchain 3.0. HCI for Blockchain—CHI 2018 Workshop. 2018; pp. 1–9. Available online: https://web.archive.org/web/20191210185156/https://nrl.northumbria.ac.uk/34110/ (accessed on 26 April 2020).
  53. Borak, M. Leave Your Wallet at Home, WeChat Is Now Issuing ID Cards · TechNode, 2017. Available online: https://technode.com/2017/12/26/leave-wallet-home-wechat-now-issuing-id-cards/ (accessed on 28 September 2020).
  54. Maliki, T.E.; Seigneur, J.-M. A survey of user-centric identity management technologies. In Proceedings of the International Conference on Emerging Security Information, Systems, and Technologies (SECUREWARE 2007), Valencia, Spain, 14–20 October 2007. [Google Scholar]
  55. Bramhall, P.; Hansen, M.; Rannenberg, K.; Roessler, T. User-centric identity management. IEEE Secur. Priv. 2007, 5, 84–87. [Google Scholar] [CrossRef]
  56. Ackermann, F.; Eden, C. Strategic management of stakeholders: Theory and practice. Long Range Plan. 2011, 44, 179–196. [Google Scholar] [CrossRef]
  57. Bauer, D.; Blough, D.M.; Cash, D. Minimal information disclosure with efficiently verifiable credentials. In Proceedings of the ACM Conference on Computer and Communications Security, Alexandria, VA, USA, 27–31 October 2008; pp. 15–24. [Google Scholar] [CrossRef]
  58. Clarke, R. Roger Clarke’s ‘Digital Persona’, 1994. Available online: https://www.rogerclarke.com/DV/DigPersona.html (accessed on 20 May 2020).
  59. European Union. General Data Protection Regulation (GDPR), 2016. Available online: https://eur-lex.europa.eu/eli/reg/2016/679/oj (accessed on 12 March 2024).
  60. Isaak, J.; Hanna, M.J. User Data Privacy: Facebook, Cambridge Analytica, and Privacy Protection. Computer 2018, 51, 56–59. [Google Scholar] [CrossRef]
  61. Jenkins, R. Social Identity; Routledge: London, UK, 2014. [Google Scholar]
  62. Whitley, E.A.; Gal, U.; Kjaergaard, A. Who do you think you are? A review of the complex interplay between information systems, identification and identity. Eur. J. Inf. Syst. 2014, 23, 17–35. [Google Scholar] [CrossRef]
  63. Setsaas, J.E.; Cameron, K.; Birch, D. Distributed Identity—Should It Be the Way Forward? Online webinar, 20 May 2020. [Google Scholar]
  64. Selvanathan, N.; Jayakody, D.; Damjanovic-Behrendt, V. Federated identity management and interoperability for heterogeneous cloud platform ecosystems. In Proceedings of the ACM International Conference Proceeding Series, Limassol, Cyprus, 8–12 April 2019. [Google Scholar] [CrossRef]
  65. Mashima, D.; Ahamad, M. Towards A User-Centric Identity-Usage Monitoring System. In Proceedings of the 2008 The Third International Conference on Internet Monitoring and Protection, Bucharest, Romania, 29 June–5 July 2008. [Google Scholar]
  66. Motykowski, P. An Analysis of User-Centric Identity Technology Trends, OpenID’s First Act. Technical Report, 2011. Available online: https://regis.lunaimaging.com/luna/servlet/allCollections?homepageView=2 (accessed on 4 May 2024).
  67. Mainka, C.; Mladenov, V.; Schwenk, J.; Wich, T. SoK: Single Sign-On Security—An Evaluation of OpenID Connect. In Proceedings of the 2017 IEEE European Symposium on Security and Privacy (EuroS&P), Paris, France, 26–28 April 2017; pp. 251–266. [Google Scholar] [CrossRef]
  68. Sharif, A.; Carbone, R.; Sciarretta, G.; Ranise, S. Best current practices for OAuth/OIDC Native Apps: A study of their adoption in popular providers and top-ranked Android clients. J. Inf. Secur. Appl. 2022, 65, 103097. [Google Scholar] [CrossRef]
  69. Pieters, W.; Cleeff, A.V. The precautionary principle in a world of digital dependencies. Computer 2009, 42, 50–56. [Google Scholar] [CrossRef]
  70. Jericho Forum. “Identity” Commandments. Available online: https://collaboration.opengroup.org/jericho/commandments_v1.2.pdf (accessed on 7 April 2020).
  71. Toth, K.C.; Anderson-Priddy, A. Self-Sovereign Digital Identity: A Paradigm Shift for Identity. IEEE Secur. Priv. 2019, 17, 17–27. [Google Scholar] [CrossRef]
  72. Allen, C. The Path to Self-Sovereign Identity; 2016. Available online: https://www.lifewithalacrity.com/article/the-path-to-self-soverereign-identity/ (accessed on 26 July 2020).
  73. Avellaneda, O.; Bachmann, A.; Barbir, A.; Brenan, J.; Dingle, P.; Duffy, K.H.; Maler, E. Decentralized Identity: Where Did It Come From and Where Is It Going? IEEE Commun. Stand. Mag. 2019, 3, 10–13. [Google Scholar] [CrossRef]
  74. Lodderstedt, T.; Yasuda, K.; Looker, T. OpenID for Verifiable Credential Issuance—Editor’s Draft; 2024. Available online: https://web.archive.org/web/20240425220919/https://openid.github.io/OpenID4VCI/openid-4-verifiable-credential-issuance-wg-draft.html (accessed on 5 May 2024).
  75. Terbu, O.; Lodderstedt, T.; Yasuda, K.; Looker, T. OpenID for Verifiable Presentations—Editor’s Draft; 2024. Available online: https://openid.github.io/OpenID4VP/openid-4-verifiable-presentations-wg-draft.html (accessed on 4 May 2024).
  76. W3C. W3C Verifiable Credentials; W3C: Cambridge, MA, USA, 2019. [Google Scholar]
  77. European Parliament and Council. Regulation (EU) No 910/2014 of the European Parliament and of the Council of 23 July 2014 on Electronic Identification and Trust Services for Electronic Transactions in the Internal Market and Repealing Directive 1999/93/EC (eIDAS Regulation); European Parliament and Council: Brussels, Belgium, 2014. [Google Scholar]
  78. European Parliament and Council. Discover eIDAS | Shaping Europe’s Digital Future; European Parliament and Council: Brussels, Belgium, 2023. [Google Scholar]
  79. Harbach, M.; Fahl, S.; Rieger, M.; Smith, M. On the Acceptance of Privacy-Preserving Authentication Technology: The Curious Case of National Identity Cards. In International Symposium on Privacy Enhancing Technologies Symposium; Technical Report; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  80. Innovate Identity. Digital Identity in the UK: The Cost of Doing Nothing; Open Identity Exchange: London, UK, 2018. [Google Scholar]
  81. Whitley, E.A. Trusted Digital Identity Provision: GOV.UK Verify’s Federated Approach. In CGD Policy Paper; CGD: Boulder, CO, USA, 2018; pp. 94–120. [Google Scholar]
  82. Trist, E. The Evolution of Socio-Technical Systems; Ontario Quality of Working Life Centre: Toronto, ON, Canada, 1981. [Google Scholar]
  83. Alter, S. Dimensions of Integration in Sociotechnical Systems; Technical Report; Ontario Quality of Working Life Centre: Toronto, ON, Canada, 2020. [Google Scholar]
  84. Cristofaro, E.D.; Du, H.; Freudiger, J.; Norcie, G. A Comparative Usability Study of Two-Factor Authentication. arXiv 2013, arXiv:1309.5344. [Google Scholar]
  85. Jin, A.T.B.; Ling, D.N.C.; Goh, A. Biohashing: Two factor authentication featuring fingerprint data and tokenised random number. Pattern Recognit. 2004, 37, 2245–2255. [Google Scholar] [CrossRef]
  86. Lux, Z.A.; Thatmann, D.; Zickau, S.; Beierle, F. Distributed-Ledger-based Authentication with Decentralized Identifiers and Verifiable Credentials. In Proceedings of the 2020 2nd Conference on Blockchain Research & Applications for Innovative Networks and Services (BRAINS), Paris, France, 28–30 September 2020. [Google Scholar]
  87. Gelb, A.; Clark, J. Identification for development: The biometrics revolution. Cent. Glob. Dev. Work. Pap. 2013, 44. [Google Scholar] [CrossRef]
  88. Yli-Huumo, J.; Ko, D.; Choi, S.; Park, S.; Smolander, K. Where is current research on Blockchain technology?—A systematic review. PLoS ONE 2016, 11, e0163477. [Google Scholar] [CrossRef] [PubMed]
  89. Wolfond, G. A Blockchain Ecosystem for Digital Identity: Improving Service Delivery in Canada’s Public and Private Sectors. Technol. Innov. Manag. Rev. 2017, 7, 35–40. [Google Scholar] [CrossRef]
  90. Mir, U.; Kar, A.K.; Gupta, M.P. AI-enabled digital identity—Inputs for stakeholders and policymakers. J. Sci. Technol. Policy Manag. 2022, 13, 514–541. [Google Scholar] [CrossRef]
  91. Boldyreva, A.; Goyal, V.; Kumart, V. Identity-based encryption with efficient revocation. In Proceedings of the ACM Conference on Computer and Communications Security, Alexandria, VA, USA, 27–31 October 2008; pp. 417–426. [Google Scholar] [CrossRef]
  92. Kihara, M.; Iriyama, S. New Authentication Algorithm Based on Verifiable Encryption with Digital Identity. Cryptography 2019, 3, 19. [Google Scholar] [CrossRef]
  93. NIST. Artificial Intelligence Risk Management Framework (AI RMF 1.0); Technical Report; NIST: Gaithersburg, MD, USA, 2023.
  94. Grother, P.J.; Ngan, M.L.; Hanaoka, K.K. Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects; Technical Report; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2019.
  95. W3C. Decentralized Identifiers (DIDs) v1.0: Core Architecture, Data Model, and Representations; W3C Recommendation; W3C: Cambridge, MA, USA, 2022. [Google Scholar]
  96. W3C. Verifiable Credentials Data Model v2.0; W3C Recommendation; W3C: Cambridge, MA, USA, 2025. [Google Scholar]
  97. Kumar, V.; Bhardwaj, A. Identity Management Systems. Int. J. Strateg. Decis. Sci. 2018, 9, 63–78. [Google Scholar] [CrossRef]
  98. Singh, K.; Dib, O.; Huyart, C.; Toumi, K. A Novel Credential Protocol for Protecting Personal Attributes in Blockchain; Elsevier: Amsterdam, The Netherlands, 2020. [Google Scholar]
  99. Palfrey, J.; Gasser, U. Digital Identity Interoperability and eInnovation. Technical Report, 2007. Available online: https://dash.harvard.edu/entities/publication/73120378-7f84-6bd4-e053-0100007fdf3b (accessed on 9 July 2024).
  100. Li, W.; Wu, J.; Cao, J.; Chen, N.; Zhang, Q.; Buyya, R. Blockchain-based trust management in cloud computing systems: A taxonomy, review and future directions. J. Cloud Comput. 2021, 10, 35. [Google Scholar] [CrossRef]
  101. Sedlmeir, J.; Smethurst, R.; Rieger, A.; Fridgen, G. Digital Identities and Verifiable Credentials. Bus. Inf. Syst. Eng. 2021, 63, 603–613. [Google Scholar] [CrossRef]
  102. Husz, O. Bank Identity: Banks, ID Cards, and the Emergence of a Financial Identification Society in Sweden. Enterp. Soc. 2018, 19, 391–429. [Google Scholar] [CrossRef]
  103. Metcalf, K.N. How to build e-governance in a digital society: The case of Estonia. Rev. Catalana Dret Public 2019, 2019, 1–12. [Google Scholar] [CrossRef]
  104. Madon, S.; Schoemaker, E. Digital identity as a platform for improving refugee management. Inf. Syst. J. 2021, 31, 929–953. [Google Scholar] [CrossRef]
  105. European Commission. The European Digital Identity Regulation; European Commission: Brussels, Belgium, 2025.
  106. UK Government. UK Digital Identity and Attributes Trust Framework; UK Government: London, UK, 2025.
  107. New Zealand Department of Internal Affairs. Trust Framework for Digital Identity Legislation; New Zealand Department of Internal Affairs: Wellington, New Zealand, 2025.
  108. New Zealand Parliamentary Counsel Office. Digital Identity Services Trust Framework Regulations 2024; New Zealand Department of Internal Affairs: Wellington, New Zealand, 2024.
  109. Australian Government. Digital ID Act 2024; Australian Government: Canberra, Australia, 2024.
  110. Australian Government. Trusted Digital Identity Framework (TDIF); Australian Government: Canberra, Australia, 2024.
  111. Bazarhanova, A.; Smolander, K. The Review of Non-Technical Assumptions in Digital Identity Architectures. 2020. Available online: https://aisel.aisnet.org/hicss-53/st/design_responsible_system/2/ (accessed on 20 September 2025).
  112. Leung, D.; Nolens, B.; Arner, D.W.; Frost, J. Corporate Digital Identity No Silver Bullet, but a Silver Lining. 2022. Available online: https://www.bis.org/publ/bppdf/bispap126.pdf (accessed on 9 July 2024).
  113. Barnett, M.L.; Etter, M.A.; Hannigan, T.; Reger, R.K.; Zavyalova, A.A. Social Media and Social Evaluations. Acad. Manag. Proc. 2019, 2019, 13845. [Google Scholar] [CrossRef]
  114. Nyst, C.; Falchetta, T. The right to privacy in the digital age. J. Hum. Rights Pract. 2017, 9, 104–118. [Google Scholar] [CrossRef]
  115. New Zealand Government. Progressing Digital Identity: Establishing a Trust Framework; Technical Report; New Zealand Government: Wellington, New Zealand, 2020.
  116. Dixon, P. Digital Identity Ecosystems. Technical Report, 2019. Available online: https://worldprivacyforum.org/wp-content/uploads/2019/02/WPF_DigitalID_PositionPaper_2019fs.pdf (accessed on 9 July 2024).
  117. Atick, J.J. Digital Identity: The Essential Guide. Technical Report, 2016. Available online: https://www.id4africa.com/main/files/Digital_Identity_The_Essential_Guide.pdf (accessed on 7 September 2024).
  118. Bouncken, R.; Barwinski, R. Shared digital identity and rich knowledge ties in global 3D printing—A drizzle in the clouds? Glob. Strategy J. 2020, 11, 81–108. [Google Scholar] [CrossRef]
  119. Wang, F.; Filippi, P.D. Self-Sovereign Identity in a Globalized World: Credentials-Based Identity Systems as a Driver for Economic Inclusion. Front. Blockchain 2019, 2, 28. [Google Scholar] [CrossRef]
  120. Josang, A.; Zomai, M.A.; Suriadi, S. Usability and privacy in identity management architectures. Conf. Res. Pract. Inf. Technol. Ser. 2007, 68, 143–152. [Google Scholar]
  121. Zloteanu, M.; Harvey, N.; Tuckett, D.; Livan, G. Digital identity: The effect of trust and reputation information on user judgement in the sharing economy. PLoS ONE 2018, 13, e0209071. [Google Scholar] [CrossRef] [PubMed]
  122. Lyons, B.; Wessel, J.; Ghumman, S.; Ryan, A.M.; Kim, S. Applying models of employee identity management across cultures: Christianity in the USA and South Korea. J. Organ. Behav. 2014, 35, 678–704. [Google Scholar] [CrossRef]
  123. Fischer, G.; Herrmann, T. Socio-technical systems: A meta-design perspective. Int. J. Sociotechnology Knowl. Dev. (IJSKD) 2011, 3, 1–33. [Google Scholar] [CrossRef]
  124. Marangunic, N.; Granic, A. Technology acceptance model: A literature review from 1986 to 2013. Univers. Access Inf. Soc. 2015, 14, 81–95. [Google Scholar] [CrossRef]
  125. Hacker, K. Community-Based Participatory Research; Sage Publications: Newbury Park, CA, USA, 2013. [Google Scholar]
  126. Rip, A. Constructive Technology Assessment. In Futures of Science and Technology in Society; Springer Fachmedien Wiesbaden: Wiesbaden, Germany, 2018; pp. 97–114. [Google Scholar] [CrossRef]
  127. Doorn, N.; Schuurbiers, D.; de Poel, I.; Gorman, M.E. Early Engagement and New Technologies: Opening Up the Laboratory; Technical report; Springer: Dordrecht, The Netherlands, 2014. [Google Scholar]
  128. Friedman, B. Value-Sensitive Design. Interactions 1996, 3, 16–23. [Google Scholar] [CrossRef]
  129. Davis, J.; Nathan, L.P. Value Sensitive Design: Applications, Adaptations, and Critiques. Technical Report; In Handbook of Ethics, Values, and Technological Design; Springer: Dordrecht, The Netherlands, 2015. [Google Scholar]
  130. Freeman, R.E. Strategic Management: A Stakeholder Approach; Cambridge University Press: Cambridge, UK, 1984. [Google Scholar]
  131. Nutt, P.C.; Backoff, R.W. Strategic Management of Public and Third Sector Organizations: A Handbook for Leaders. 1992. Available online: https://cir.nii.ac.jp/crid/1971712334774012441 (accessed on 13 December 2023).
  132. Johnson, G.; Scholes, K. Exploring Corporate Strategy; Financial Times Prentice Hall: Hoboken, NJ, USA, 2002. [Google Scholar]
  133. Bryson, J.M. What to do when stakeholders matter: Stakeholder identification and analysis techniques. Public Manag. Rev. 2004, 6, 21–53. [Google Scholar] [CrossRef]
  134. Lewis, C.W.; Gilman, S.C. The Ethics Challenge in Public Service: A Problem-Solving Guide; John Wiley and Sons: Hoboken, NJ, USA, 2005. [Google Scholar]
  135. Nutt, P.C. Why decisions fail: Avoiding the blunders and traps that lead to debacles. In Academy of Management Perspectives; Academy of Management: Valhalla, NY, USA, 2003. [Google Scholar]
  136. Tuchman, B. The March of Folly: From Troy to Vietnam; Alfred A. Knopf, Inc.: New York, NY, USA, 1984; 461p. [Google Scholar]
  137. Bryson, J.M.; Bromiley, P.; Jung, Y.S. Influences of Context and Process on Project Planning Success. J. Plan. Educ. Res. 1990, 9, 183–195. [Google Scholar] [CrossRef]
  138. Bryson, J.M.; Bromiley, P. Critical factors affecting the planning and implementation of major projects. Strateg. Manag. J. 1993, 14, 319–337. [Google Scholar] [CrossRef]
  139. Margerum, R.D. Collaborative planning building consensus and building a distinct model for practice. J. Plan. Educ. Res. 2002, 21, 237–253. [Google Scholar] [CrossRef]
  140. Vogelsang, K.; Steinhuser, M.; Hoppe, U. A Qualitative Approach to Examine Technology Acceptance. Technical Report, 2013. Available online: https://aisel.aisnet.org/icis2013/proceedings/GeneralISTopics/7/ (accessed on 5 December 2024).
  141. Atieno, O.P. An analysis of the strengths and limitation of qualitative and quantitative research paradigms. Probl. Educ. 21st Century 2009, 13, 13. [Google Scholar]
  142. Lefever, S.; Dal, M.; Matthiasdottir, A. Online data collection in academic research: Advantages and limitations. Br. J. Educ. Technol. 2007, 38, 574–582. [Google Scholar] [CrossRef]
  143. Slattery, E.L.; Voelker, C.C.; Nussenbaum, B.; Rich, J.T.; Paniello, R.C.; Neely, J.G. A practical guide to surveys and questionnaires. Otolaryngol.—Head Neck Surg. 2011, 144, 831–837. [Google Scholar] [CrossRef]
  144. Roberts, K. Convenience Sampling Through Facebook; SAGE Publications, Ltd.: Newbury Park, CA, USA, 2014. [Google Scholar] [CrossRef]
  145. Li, J. E-Government Survey 2022; Technical report; United Nations: New York, NY, USA, 2022.
  146. Vicente, P.; Reis, E. Using questionnaire design to fight nonresponse bias in web surveys. Soc. Sci. Comput. Rev. 2010, 28, 251–267. [Google Scholar] [CrossRef]
  147. Gorrell, G.; Ford, N.; Madden, A.; Holdridge, P.; Eaglestone, B. Countering method bias in questionnaire-based user studies. J. Doc. 2011, 67, 507–524. [Google Scholar] [CrossRef]
  148. Kock, F.; Berbekova, A.; Assaf, A.G. Understanding and managing the threat of common method bias: Detection, prevention and control. Tour. Manag. 2021, 86, 104330. [Google Scholar] [CrossRef]
  149. Dantec, C.A.L.; Poole, E.S.; Wyche, S.P. Values as Lived Experience: Evolving Value Sensitive Design in Support of Value Discovery; ACM Press: New York, NY, USA, 2009. [Google Scholar]
  150. Yetim, F. Bringing Discourse Ethics to Value Sensitive Design: Pathways toward a Deliberative Future. Ais. Trans. Hum.-Comput. Interact. 2011, 3, 133–155. [Google Scholar]
  151. Friedman, B.; Kahn, P.H., Jr.; Borning, A. Value Sensitive Design and Information Systems. In Early Engagement and New Technologies: Opening Up the Laboratory; Springer: Dordrecht, The Netherlands, 2013; pp. 55–95. [Google Scholar]
  152. Knussen, C.; McFadyen, A. Ethical issues involved in using Survey Monkey. 2010. Available online: https://www.gcu.ac.uk/__data/assets/word_doc/0019/34426/surveymonkey_oct_2014.doc (accessed on 6 November 2023).
  153. LangChain contributors. LangChain. GitHub repository. Available online: https://github.com/langchain-ai/langchain (accessed on 10 October 2025).
  154. Norden. The Nordic Digital Ecosystem Actors, Strategies, Opportunities. Technical Report, 2015. Available online: https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1295202&dswid=5356 (accessed on 6 November 2023).
  155. Kuehn, K.M. Framing mass surveillance: Analyzing New Zealand’s media coverage of the early Snowden files. Journalism 2018, 19, 402–419. [Google Scholar] [CrossRef]
  156. Patman, R.G.; Southgate, L. National security and surveillance: The public impact of the GCSB amendment bill and the Snowden revelations in New Zealand. Intell. Natl. Secur. 2016, 31, 871–887. [Google Scholar] [CrossRef]
  157. The Huffington Post. New Zealand Spying Law Passes Allowing Surveillance of Citizens, 2013. Available online: https://www.huffingtonpost.co.uk/entry/new-zealand-passes-law-allowing-surveillance-of-citizens_n_5b57c47ee4b0cf38668fb472 (accessed on 1 January 2024).
  158. Stojmenovic, I. The Fog Computing Paradigm: Scenarios and Security Issues. In Proceedings of the Federated Conference on Computer Science and Information Systems, Warsaw, Poland, 7–10 September 2014; Volume 2, pp. 1–8. [Google Scholar] [CrossRef]
  159. National Audit Office. Investigation into Verify; Technical Report; HC 1926, Session 2017–2019 (5 March 2019); National Audit Office: London, UK, 2019.
  160. House of Commons Committee of Public Accounts. Accessing Public Services Through the Government’s Verify Digital System; Technical Report; Ninety-Fifth Report of Session 2017-19, HC 1748 (8 May 2019); House of Commons: London, UK, 2019. [Google Scholar]
  161. Rajya Sabha, Parliament of India. Failure Rate of Biometric Authentication, 2018. Available online: https://uidai.gov.in/images/rajyasabha/RSPQ400(Unstarred).pdf (accessed on 1 January 2026).
  162. Abraham, R.; Bennett, E.S.; Sen, N.; Shah, N.B. State of Aadhaar Report 2016–17, 2017. Available online: https://www.idinsight.org/wp-content/uploads/2021/10/State-of-Aadhaar-Report_2016-2017.pdf (accessed on 1 January 2026).
  163. Sauer, M.; Becker, C.; Kneis, L.; Oberweis, A.; Pfeifer, S.; Stark, A.; Sürmeli, J. A case study of the MEUSec method to enhance user experience and information security of digital identity wallets. I-Com 2025, 24, 125–143. [Google Scholar] [CrossRef]
  164. Bellini, F.; D’Ascenzo, F.; Dulskaia, I.; Savastano, M. Digital Identity: A Case Study of the ProCIDA Project. In Exploring Digital Ecosystems; Springer: Cham, Switzerland, 2020; Volume 33, pp. 315–327. [Google Scholar] [CrossRef]
  165. Hilowle, M.; Yeoh, W.; Grobler, M.; Pye, G.; Jiang, F. Improving National Digital Identity Systems Usage: Human-Centric Cybersecurity Survey. J. Comput. Inf. Syst. 2024, 64, 820–834. [Google Scholar] [CrossRef]
  166. Feher, K. Digital identity and the online self: Footprint strategies - An exploratory and comparative research study. J. Inf. Sci. 2021, 47, 192–205. [Google Scholar] [CrossRef]
  167. New Zealand’s Government Digital Services. Strategy for a Digital Public Service; Technical Report; New Zealand Government: Wellington, New Zealand, 2019.
  168. New Zealand’s Government Digital Services. Developing Options for a New Approach to Digital Identity; Technical Report; New Zealand Government: Wellington, New Zealand, 2018.
  169. Beduschi, A. Rethinking digital identity for post-COVID-19 societies: Data privacy and human rights considerations. Data Policy 2021, 3, e15. [Google Scholar] [CrossRef]
  170. Faber, B.; Michelet, G.; Weidmann, N.; Mukkamala, R.R.; Vatrapu, R. BPDIMS: A Blockchain-Based Personal Data and Identity Management System; 2019; pp. 6855–6864. Available online: https://aisel.aisnet.org/hicss-52/os/impact_of_blockchain/3/ (accessed on 28 December 2023).
  171. Buccafurri, F.; Lax, G.; Nicolazzo, S.; Nocera, A. eIDas public digital identity systems: Beyond online authentication to support urban security. In Proceedings of the Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST; Springer: Berlin/Heidelberg, Germany, 2018; Volume 189, pp. 58–65. [Google Scholar] [CrossRef]
  172. Sindi, A. Adoption Factors of a Blockchain Digital Identity Management System in Higher Education: Diffusing a Disruptive Innovation; California State University: Sacramento, CA, USA, 2019. [Google Scholar]
  173. Rivera, R.; Robledo, J.; Larios, V.; Avalos, J. How digital identity on blockchain can contribute in a smart city environment. In Proceedings of the International Smart Cities Conference, Wuxi, China, 14–17 September 2017. [Google Scholar] [CrossRef]
  174. Takemiya, M.; Vanieiev, B. Sora identity: Secure, digital identity on the blockchain. In Proceedings of the 2018 IEEE 42nd Annual Computer Software and Applications Conference (COMPSAC), Tokyo, Japan, 23–27 July 2018. [Google Scholar]
  175. Weaver, A. Digital Identity New Zealand, 2021. Available online: https://www.ubisecure.com/podcast/digital-identity-trust-framework-andrew-weaver-dinz-new-zealand/ (accessed on 4 April 2021).
Figure 1. Verifiable Credentials.
Figure 2. Digital Identity Research Dimensions.
Table 1. Open Identity Exchange Report—Identity Scheme Success Factors [81].
# | Success Factors | Failure Indicators
F1 | Private sector involvement (particularly banking sector) in the scheme design and delivery | Government-led with little or no private sector involvement
F2 | A shared vision for delivery between government and industry, and clarity of roles | No shared vision, competing roles
F3 | Wide range and availability of services utilising the ID | Limited service availability, lack of ubiquity
F4 | Banking services are accessible using the ID | Public services access only
F5 | Frequent use for low-authentication tasks (like website log-ins, age verification) and less frequent for high-authentication needs (such as opening bank accounts) | Low-frequency uses only
F6 | Existence of a mandatory ID for all citizens—e.g., social security number, national ID card | Voluntary or no national ID scheme in existence
F7 | An accepted history of national identity schemes | Public distrust of identity schemes/‘big brother’
F8 | Available to be used via a variety of channels including smart phone | Card-based only—particularly if a card reader is also required
F9 | Low-population states | Large-population states
F10 | Using existing KYC data (particularly bank data) to actively enrol customers with an ID | ‘Organic’ enrolment only
F11 | Public trust in the security of the scheme | Security breaches/questions
F12 | Public trust in how data will be used | Lack of trust in privacy rules
F13 | Liability model and trust framework addressed | Lack of clarity on liability
F14 | A clear business case—operational savings demonstrated and agreed by industry | Little economic benefit or higher costs than existing processes
F15 | Regulatory clarity/confidence | Regulatory ambiguity or barriers
F16 | Passive customer enrolment—a smooth customer journey | Difficult/long enrolment process—a poor customer journey
F17 | Well-connected/interoperable existing government IT and databases | Fragmented/unconnected/legacy government systems
F18 | Well-connected citizenry (wifi, mobile, broadband adoption and coverage rates) | Unconnected societies (not all channels need to be fully developed—e.g., mobile schemes in sub-Saharan Africa)
F19 | Barriers to accessing an ID removed or addressed | Low inclusion and low access rates
F20 | Strong public awareness and education—with government and the private sector working together to achieve this | Low awareness and/or education level, or lack of joined-up promotion
F21 | A national residential register | No central residential register
F22 | Government/civil services | Private sector services
Table 2. Digital Identity Trust Frameworks by Jurisdiction.
Jurisdiction | Instrument (Illustrative) | Regulatory Orientation (High-Level)
EU | eIDAS/European Digital Identity Regulation | Cross-border mutual recognition and wallet-based interoperability requirements [105].
UK | Digital Identity and Attributes Trust Framework | Standards-based trust framework with independent certification [106].
NZ | Digital Identity Services Trust Framework Act/Regulations | Statutory trust framework with implementing regulations and oversight [107,108].
AU | Digital ID Act 2024 (building on TDIF) | Economy-wide accreditation scheme for digital ID services [109,110].
Table 3. VSD execution summary for this pilot cycle.
VSD Stage | Implemented in This Study (This Cycle) | Not Implemented/Planned Next Iteration
Conceptual investigation | Stakeholder identification and grouping; identification of core values and initial value-tension hypotheses; mapping of constructs to survey questions and success factors (Table 1 and Table 4). | Refine stakeholder granularity (e.g., additional roles and cross-jurisdiction contexts); iterate value/tension hypotheses with broader advisory input.
Empirical investigation | Online survey across stakeholder strata; stratified descriptive analysis and alignment tables; explicit synthesis of salient value tensions with candidate mitigation approaches (Section 6.8). | Strengthen representativeness via quota/stratified and multi-jurisdiction sampling; add qualitative follow-up (e.g., interviews/focus groups) to explain drivers of tensions and to validate interpretations.
Technical investigation | A minimal prototype is included as a proof-of-concept to illustrate how stakeholder-aligned findings can be operationalised as layered metadata constraints for RAG (Section 5.3), without formal evaluation. | Implement and evaluate the technical stage (e.g., artefact schema, recovery/exception pathways, governance controls); run scenario-based evaluations and iterate with stakeholder feedback.
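The technical-investigation stage refers to operationalising stakeholder-aligned findings as layered metadata constraints for RAG (Section 5.3). As a minimal sketch only (the layer names such as "jurisdiction", the `Chunk` dataclass, and the matching rule below are our illustrative assumptions, not the paper's prototype schema), such constraints can be applied as a metadata pre-filter over the retrieval pool before similarity ranking:

```python
# Illustrative sketch: filter a retrieval pool by stakeholder-context
# metadata layers before similarity ranking. Field names ("jurisdiction")
# are hypothetical, not the article's actual schema.
from dataclasses import dataclass, field

@dataclass
class Chunk:
    text: str
    meta: dict = field(default_factory=dict)

def layered_filter(chunks, context):
    """Keep chunks whose metadata satisfies every context layer.

    A chunk passes a layer if it carries no value for that layer
    (treated as generic) or its value matches the query context.
    """
    def passes(chunk):
        return all(
            layer not in chunk.meta or chunk.meta[layer] == wanted
            for layer, wanted in context.items()
        )
    return [c for c in chunks if passes(c)]

pool = [
    Chunk("eIDAS wallet interoperability rules", {"jurisdiction": "EU"}),
    Chunk("UK trust-framework certification", {"jurisdiction": "UK"}),
    Chunk("General consent principles"),  # no layer tags: generic
]
hits = layered_filter(pool, {"jurisdiction": "EU"})
print([c.text for c in hits])  # EU-specific and generic chunks survive
```

In a full pipeline the surviving chunks would then be ranked by embedding similarity; the pre-filter simply guarantees that jurisdiction- or stakeholder-inappropriate context never reaches the generator.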
Table 4. Questionnaire.
# | Survey Question | Factor | Research Question Link
Q1 | As a user of digital identity technology, which is the first group you identify with? [Government, Business, Academia, Consumer] | — | Establishes stakeholder categorisation to compare perspectives, linked to overall stakeholder alignment (all questions).
Q2 | How important is it that the following groups NOT be able to link identity across different contexts? [Government, Business, Law] | F12 | Stakeholder perspectives on privacy.
Q3 | How important are the following to you in the context of your digital identity? [Privacy, Usability] | F12 | Stakeholder prioritisation of privacy, usability, and trust (RQ1).
Q4 | Should your digital identity work in other countries? | F14 | Perspectives on usability and global alignment (RQ2).
Q5 | Do you prefer to enter account information each time you sign up to an online service? | F16 | Stakeholder alignment on usability concerns (RQ1).
Q6 | Is it feasible that digital identity be implemented globally? | F14 | Perspectives on alignment for global implementation (RQ2).
Q7 | How important is the involvement of the following groups in establishing a vision for global digital identity? [Government, Business, Academia, Consumer] | F2 | Importance of stakeholder roles in alignment (RQ2).
Q8 | To what degree has the public been educated about digital identity? | F20 | Usability and trust-related concerns (RQ3).
Q9 | To what degree do you understand the motivation of various parties to establish digital identity solutions? | F2 | Alignment of roles and responsibilities (RQ2).
Q10 | How important is the involvement of the following groups in educating the public about digital identity? [Government, Business, Academia, Consumer/End User] | F20 | Stakeholder trust and usability (RQ3).
Q11 | How much do you agree that the Government should run a national identity register? | F6 | Alignment on roles and privacy concerns (RQ1, RQ2).
Q12 | How much do you agree that a digital identity be mandatory for all citizens? | F6 | Diverging views on usability and privacy (RQ1, RQ2).
Q13 | How well do you understand the history of your nation’s identity schemes? | F7 | Stakeholder awareness of roles and historical context (RQ2).
Q14 | How important is it for these groups to ISSUE a credential (certify something true about the identity)? [Government, Business, Academia, Consumer] | F12 | Perspectives on roles and alignment (RQ2).
Q15 | How important is it that the following groups HOLD/STORE a credential? [Government, Business, Academia, Consumer] | F12 | Stakeholder perspectives on data control and privacy (RQ1, RQ2).
Q16 | How important is it that the following groups CONTROL/MANAGE digital identity? [Government, Business, Academia, Consumer] | F12 | Stakeholder roles and alignment (RQ2).
Q17 | Which of the following services should digital identity be used with/for? [Passport, Driver’s Licence, Credit Card, Social Media Login, Health Services, Transportation, Other] | F3–F5 | Scope of stakeholder alignment/divergence.
Q18 | How easy is it for you to connect to the Internet (is the Internet available at all times)? | F18 | Usability concerns in digital identity ecosystems (RQ3).
Q19 | How easy is it for you to access and manage your digital identity? | F19 | Perspectives on usability and alignment (RQ1, RQ3).
Q20 | Should existing customer data be used to auto-enrol customers in digital identity schemes? | F10 | Trust and usability alignment (RQ3).
Q21 | How important is the involvement of the following groups in establishing the standards, rules and regulations that underpin digital identity infrastructure? [Government, Business, Academia, Consumer] | F15 | Alignment of roles and regulatory frameworks (RQ2).
Q22 | How important is the involvement of the following groups in developing the technology used to deliver digital identity infrastructure? [Government, Business, Academia, Consumer] | F8 | Perspectives on roles and alignment (RQ2).
Q23 | Do you think a digital identity network should be regulated? | F15 | Stakeholder alignment on regulatory frameworks and trust (RQ2, RQ3).
Q24 | Who should regulate digital identity? [Unregulated, Government, Business, Self-Regulation] | F15 | Stakeholder alignment on roles and responsibilities (RQ2).
Q25 | Who should be liable for financial loss if a digital-identity-based transaction goes wrong? [Government, Business, Academia, Consumer] | F13 | Stakeholder trust and liability concerns (RQ3).
Q26 | How often should information be shared between government departments? | F17 | Trust, privacy, and alignment (RQ1, RQ3).
Q27 | To what degree do you believe the following groups will use digital identity to track your activities online? [Government, Business, Academia, Consumer] | F12 | Stakeholder trust concerns (RQ3).
Q28 | To what degree do you trust that your personal data will remain private in a digital identity network? | F11 | Trust as a barrier to alignment (RQ3).
Q29 | Please rank the following groups in the order you trust them with the responsibility of managing your digital identity and ensuring your personal data will remain private. [Government, Business, Academia, Consumer] | F12 | Stakeholder trust concerns (RQ3).
Q30 | How available are the tools and technologies for managing your digital identity? | F8 | Usability and technology availability (RQ1, RQ3).
Q31 | Do you have any additional thoughts on digital identity and how it may be managed in the future? | — | Open-ended feedback for broader alignment on trust, privacy, and usability (all RQs).
Table 5. Vision Alignment Table.
Question | Option | Government (n = 19): μ, σ | Business (n = 65): μ, σ | Academia (n = 22): μ, σ | Consumer (n = 137): μ, σ
How important is the involvement of the following groups in establishing a vision for global digital identity? | Government | 3.8, 1.2 | 4.2, 0.8 | 4.3, 0.9 | 3.9, 1.2
| Business | 3.3, 1.1 | 4.0, 0.9 | 3.4, 1.2 | 3.2, 1.2
| Academia | 3.1, 1.2 | 3.7, 1.2 | 3.8, 1.2 | 3.4, 1.2
| Consumer | 3.7, 1.3 | 4.2, 0.9 | 4.0, 1.1 | 3.8, 1.1
To what degree do you understand the motivation of various parties to establish a digital identity solution? | — | 3.4, 1.2 | 3.4, 0.9 | 3.1, 1.0 | 2.8, 0.9
Answers to Likert questions use a 5-point scale ranging from 1 to 5, with 3 being neutral.
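The alignment tables report per-stratum means (μ) and standard deviations (σ) of 5-point Likert responses. As a minimal sketch of that computation, using made-up response data and assuming the sample standard deviation (the article does not state which estimator was used):

```python
# Illustrative sketch of the stratified descriptive statistics in the
# alignment tables: per-group mean and standard deviation of 5-point
# Likert scores (1-5, 3 = neutral). Response data are hypothetical.
from statistics import mean, stdev

responses = {  # hypothetical raw Likert scores per stakeholder group
    "Government": [4, 5, 3, 4, 4],
    "Consumer": [3, 2, 4, 5, 3, 4],
}

def stratified_stats(data):
    """Return {group: (mu, sigma)} rounded to one decimal place."""
    return {
        group: (round(mean(xs), 1), round(stdev(xs), 1))
        for group, xs in data.items()
    }

for group, (mu, sigma) in stratified_stats(responses).items():
    print(f"{group}: mu={mu}, sigma={sigma}")
```

The same pattern, applied per question and per answer option, yields the μ, σ pairs shown in Tables 5 through 11.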
Table 6. Education Alignment Table.
Question | Option | Government (n = 19): μ, σ | Business (n = 65): μ, σ | Academia (n = 22): μ, σ | Consumer (n = 137): μ, σ
How important is the involvement of the following groups in educating the public about digital identity? | Government | 3.9, 1.0 | 4.3, 0.8 | 4.2, 0.9 | 4.2, 1.1
| Business | 3.6, 1.2 | 4.1, 0.9 | 3.7, 0.8 | 3.5, 1.2
| Academia | 3.9, 1.0 | 4.1, 1.0 | 3.7, 1.1 | 3.9, 1.1
| Consumer | 3.8, 0.8 | 4.2, 0.9 | 3.7, 0.9 | 3.9, 1.1
To what degree has the public been educated about digital identity? | — | 2.6, 1.1 | 3.4, 0.9 | 2.4, 0.9 | 2.1, 0.8
Answers to Likert questions use a 5-point scale ranging from 1 to 5, with 3 being neutral.
Table 7. Usability Alignment Table.
Question | Option | Government (n = 19): μ, σ | Business (n = 65): μ, σ | Academia (n = 22): μ, σ | Consumer (n = 137): μ, σ
How important are the following to you in the context of your digital identity? | Privacy | 4.3, 1.1 | 4.4, 0.8 | 4.3, 0.8 | 4.5, 0.8
| Usability | 3.9, 1.1 | 4.3, 0.8 | 4.2, 0.7 | 4.1, 0.8
Is it feasible that digital identity be implemented globally? | Yes (%) | 57.9 | 79.4 | 72.0 | 73.1
Do you prefer to enter account information each time you sign up to an online service? | Yes (%) | 47.4 | 41.2 | 32.0 | 52.6
How important is it that the following groups ISSUE a credential (certify something true about the identity)? | Government | 4.3, 0.9 | 4.4, 0.7 | 4.3, 0.8 | 4.2, 1.1
| Business | 3.3, 1.2 | 3.7, 1.0 | 3.0, 1.2 | 3.2, 1.3
| Academia | 3.1, 1.3 | 3.3, 1.3 | 3.3, 1.3 | 3.2, 1.3
| Consumer | 3.2, 1.4 | 3.4, 1.4 | 3.3, 1.1 | 3.2, 1.4
How important is it that the following groups HOLD/STORE a credential? | Government | 3.6, 1.3 | 3.9, 1.1 | 3.8, 1.1 | 3.5, 1.3
| Business | 3.4, 1.2 | 3.3, 1.2 | 3.2, 1.1 | 2.9, 1.4
| Academia | 3.3, 1.5 | 3.0, 1.3 | 3.3, 1.4 | 2.7, 1.3
| Consumer | 3.1, 1.4 | 3.4, 1.4 | 3.6, 1.0 | 3.3, 1.5
How important is it that the following groups CONTROL/MANAGE digital identity? | Government | 3.7, 1.2 | 3.8, 1.1 | 3.8, 1.2 | 3.4, 1.5
| Business | 3.0, 1.3 | 3.4, 1.2 | 3.1, 1.2 | 2.7, 1.5
| Academia | 2.7, 1.5 | 3.0, 1.4 | 3.0, 1.3 | 2.8, 1.4
| Consumer | 3.2, 1.5 | 4.0, 1.1 | 3.8, 0.9 | 3.7, 1.4
To what degree do you believe the necessary tools and technologies are available to manage your digital identity? | Available | 3.1, 1.2 | 3.1, 1.2 | 3.3, 0.9 | 3.1, 1.1
Is it feasible that digital identity be implemented globally? | Yes (%) | 68.4 | 66.2 | 44.0 | 47.4
Answers to Likert questions use a 5-point scale ranging from 1 to 5, with 3 being neutral.
Table 8. National Identity Alignment Table.
Question | Government (n = 19): μ, σ | Business (n = 65): μ, σ | Academia (n = 22): μ, σ | Consumer (n = 137): μ, σ
How much do you agree that the Government should run a national identity register? | 3.6, 1.4 | 3.8, 1.2 | 3.7, 1.1 | 3.4, 1.2
How much do you agree that a digital identity be mandatory for all citizens? | 3.4, 1.3 | 3.2, 1.4 | 3.2, 1.1 | 2.9, 1.3
How well do you understand the history of your nation’s identity schemes? | 2.7, 1.5 | 2.9, 1.3 | 2.4, 1.1 | 2.2, 1.1
Answers to Likert questions use a 5-point scale ranging from 1 to 5, with 3 being neutral.
Table 9. Design Alignment Table.
Question | Option | Government (n = 19): μ, σ | Business (n = 65): μ, σ | Academia (n = 22): μ, σ | Consumer (n = 137): μ, σ
How important is the involvement of the following groups in establishing the standards, rules and regulations that underpin digital identity infrastructure? | Government | 4.2, 1.0 | 4.6, 0.8 | 4.5, 0.7 | 4.3, 1.1
| Business | 3.2, 1.1 | 3.9, 1.0 | 3.1, 1.3 | 3.2, 1.4
| Academia | 3.4, 1.2 | 3.9, 1.1 | 4.2, 0.7 | 3.6, 1.2
| Consumer | 3.2, 1.3 | 4.2, 1.0 | 3.8, 1.2 | 4.1, 1.1
How important is the involvement of the following groups in developing the technology used to deliver digital identity infrastructure? | Government | 4.2, 0.9 | 3.9, 1.2 | 4.0, 1.0 | 3.9, 1.2
| Business | 3.9, 1.2 | 4.4, 0.8 | 4.0, 1.2 | 3.7, 1.2
| Academia | 3.6, 1.3 | 3.9, 1.0 | 3.8, 1.2 | 3.8, 1.3
| Consumer | 3.0, 1.3 | 3.6, 1.4 | 3.2, 1.2 | 3.6, 1.3
Do you think a digital identity network should be regulated? | Yes (%) | 94.7 | 100 | 92.0 | 95.5
Who should regulate digital identity? (%) | Not regulated | 0 | 0 | 4.2 | 2.6
| Government | 94.7 | 67.7 | 62.5 | 66.7
| Business | 0 | 11.8 | 4.2 | 5.1
| Self-regulated | 5.3 | 16.2 | 20.8 | 18.0
| None of above | 0 | 4.4 | 8.3 | 7.7
Answers to Likert questions use a 5-point scale ranging from 1 to 5, with 3 being neutral.
Table 10. Data Alignment Table.
Question | Option | Government (n = 19): μ, σ | Business (n = 65): μ, σ | Academia (n = 22): μ, σ | Consumer (n = 137): μ, σ
How often should information be shared between government departments? | — | 3.7, 1.2 | 3.7, 1.2 | 3.6, 1.1 | 3.3, 1.2
To what degree do you believe the following groups will use digital identity to track your activities online? | Government | 4.0, 1.2 | 4.2, 0.9 | 3.8, 0.8 | 4.1, 1.0
| Business | 4.6, 0.5 | 4.3, 0.9 | 4.4, 1.0 | 4.4, 0.8
| Law Enforcement | 4.3, 0.8 | 4.1, 1.0 | 3.9, 1.0 | 4.2, 0.9
Should existing customer data be used to auto-enrol customers in digital identity schemes? | — | 3.1, 1.2 | 2.9, 1.3 | 2.5, 1.1 | 2.3, 1.2
Answers to Likert questions use a 5-point scale ranging from 1 to 5, with 3 being neutral.
Table 11. Trust Alignment Table.
Question | Option | Government (n = 19): μ, σ | Business (n = 65): μ, σ | Academia (n = 22): μ, σ | Consumer (n = 137): μ, σ
Trust that your personal data remains private | — | 3.1, 1.4 | 2.7, 1.3 | 2.5, 1.0 | 2.2, 1.0
Trust in managing digital identity and ensuring personal data remains private | Government | 2.0, 1.0 | 2.2, 1.0 | 2.1, 1.2 | 2.4, 1.2
| Business | 3.3, 0.7 | 3.1, 1.0 | 3.0, 1.1 | 3.1, 0.9
| Academia | 2.5, 0.9 | 2.5, 1.1 | 2.6, 1.0 | 2.4, 0.9
| Consumer | 2.3, 1.3 | 2.3, 1.2 | 2.3, 1.0 | 2.1, 1.2
Importance of NOT being able to link identity across different contexts | Government | 2.7, 1.5 | 2.6, 1.4 | 2.6, 1.2 | 2.3, 1.2
| Business | 2.4, 1.5 | 2.4, 1.3 | 2.2, 0.9 | 1.9, 1.0
| Law Enforcement | 3.6, 1.5 | 3.2, 1.4 | 3.0, 1.3 | 3.1, 1.3
Answers to Likert questions use a 5-point scale ranging from 1 to 5, with 3 being neutral. The ranking question is inverted.
Table 12. Liability Alignment Table.
Question | Option | Government (n = 19) % | Business (n = 65) % | Academia (n = 22) % | Consumer (n = 137) %
Who should be liable for financial loss if a digital-identity-based transaction goes wrong? | Government | 52.6 | 29.4 | 40.0 | 40.4
| Business | 42.1 | 47.1 | 28.0 | 41.0
| Academia | 0.0 | 4.4 | 8.0 | 0.0
| Consumer | 0.0 | 1.5 | 8.0 | 7.1
| None of above | 5.3 | 17.7 | 16.0 | 11.5
Values shown are percentages selecting each option.
Table 13. Tensions.
Value Tension | Evidence in This Study | Candidate Mitigation Approaches
Governance leadership vs. privacy/surveillance | Support for government leadership/registry alongside high concern about tracking and low privacy confidence (Table 3, Table 6, Table 9 and Table 10). | Purpose limitation; independent oversight; auditability; separation of identity proofing, issuance, and service access to reduce behavioural data aggregation.
Convenience/data reuse vs. consent/control | Moderate acceptance of intra-government sharing but low support (outside government) for auto-enrolment using existing data (Table 9). | Opt-in and revocable consent; staged enrolment; data-minimisation defaults; clear disclosure of data flows.
Interoperability/private-sector roles vs. low trust in business | Business perceived as likely to track activity and ranked least trusted to preserve privacy, yet still expected to contribute to technology and integration (Table 7, Table 8, Table 9 and Table 10). | Accreditation/compliance; liability/redress (Table 11); privacy-preserving interoperability (context-specific identifiers, minimal disclosure).
Security/robustness vs. usability/inclusion | High importance of privacy and usability (Table 8) and adoption sensitivity to onboarding friction. | Step-up authentication; inclusive recovery/exception handling; multi-channel access; education interventions (Table 5).
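One listed mitigation, privacy-preserving interoperability via context-specific identifiers, can be sketched concretely. The HMAC-based pairwise-identifier derivation below is a common approach in the literature, not a scheme proposed by this article; the key handling is deliberately simplified, and real deployments would also need key management, rotation, and recovery:

```python
# Sketch of context-specific (pairwise) identifiers: the same user gets
# unlinkable identifiers at different relying parties, addressing the
# cross-context linkage concern surveyed in Q2. HMAC-SHA256 derivation
# is one common approach; key management and recovery are omitted here.
import hashlib
import hmac

def pairwise_id(master_secret: bytes, user_id: str, relying_party: str) -> str:
    """Derive a stable, per-relying-party identifier for a user."""
    msg = f"{user_id}|{relying_party}".encode()
    return hmac.new(master_secret, msg, hashlib.sha256).hexdigest()[:16]

secret = b"demo-only-secret"  # in practice: protected key material
id_bank = pairwise_id(secret, "alice", "bank.example")
id_shop = pairwise_id(secret, "alice", "shop.example")
print(id_bank != id_shop)  # identifiers cannot be correlated by value
```

Because each relying party sees only its own derived identifier, colluding services cannot join their records on the identifier alone, while the issuer can still deterministically re-derive any identifier from the master secret.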

Share and Cite

MDPI and ACS Style

Comb, M.; Martin, A. Universal Digital Identity Stakeholder Alignment: Toward Context-Layered RAG Architectures for Ecosystem-Aware AI. Digital 2026, 6, 4. https://doi.org/10.3390/digital6010004
