Concept Paper

Digital Identities and the Social Realm: How AI-Driven Platforms Reshape Participation, Recognition, and Group Dynamics

by Oluwaseyi B. Ayeni 1,*, Isabella Musinguzi-Karamukyo 2, Oluwakemi T. Onibalusi 3 and Oluwajuwon M. Omigbodun 4

1 Department of Law, The University of Buckingham, Hunter St., Buckingham MK18 1EG, UK
2 Department of Management and Entrepreneurship, Imperial College Business School, Imperial College London, London SW7 2AZ, UK
3 Department of Government and Public Policy, School of Humanities and Social Sciences, University of Strathclyde, Glasgow G1 1XQ, UK
4 Department of Law, Afe Babalola University, Ado Ekiti 362101, Nigeria
* Author to whom correspondence should be addressed.
Societies 2026, 16(3), 96; https://doi.org/10.3390/soc16030096
Submission received: 8 January 2026 / Revised: 27 February 2026 / Accepted: 6 March 2026 / Published: 17 March 2026
(This article belongs to the Special Issue Societal Challenges, Opportunities and Achievement)

Abstract

This paper argues that digital identity in AI-mediated environments has become a central mechanism through which contemporary societies organise recognition, participation, and belonging. Digital identity is no longer simply a technical representation of the individual. It is produced through infrastructural processes of classification, ranking, and credibility signalling that determine who becomes visible, who is treated as legitimate, and who is able to participate meaningfully in social and civic life. The paper develops a conceptual framework that treats AI-driven platforms as social infrastructures rather than neutral intermediaries. It shows how identity is inferred through data-driven systems rather than negotiated through social interaction, how recognition is operationalised through visibility and credibility metrics rather than ethical judgement, and how participation becomes conditional on algorithmic allocation of attention rather than guaranteed by access alone. Visibility is identified as the key conversion point through which inferred identity becomes social consequence. Drawing on interdisciplinary literature, the analysis demonstrates that misrecognition, exclusion, and inequality in platform environments are not primarily the result of isolated error or intentional bias. They are patterned outcomes of ordinary optimisation processes that distribute legitimacy and opportunity unevenly across social groups. These dynamics reshape group formation, harden social boundaries, and concentrate risk among populations that are already more vulnerable to misrecognition and reduced contestability. The paper concludes that governing digital identity is a societal challenge rather than a purely technical one. As platforms increasingly perform institutional functions without equivalent accountability, digital identity governance becomes a critical site of social ordering. Addressing this challenge requires public standards for how visibility, recognition, and participation are allocated, meaningful avenues for contestation, and protections against the normalisation of stratified belonging in AI-mediated societies.

1. Introduction: Digital Identity, Power, and the Reconfiguration of the Social Realm

Digital identity is still commonly described as a technical wrapper around the person: a profile, a username, a verification badge, or a login credential. In AI-driven platform environments, this description no longer captures what digital identity actually does. On major platforms, identity is increasingly produced through infrastructural processes that classify people, infer attributes, and regulate visibility at scale. It is shaped not only by what individuals claim about themselves, but by what systems predict, stabilise, and circulate about them, often in the form of behavioural profiles and credibility cues that influence how others respond [1,2]. Once these inferences shape who is seen, who is trusted, and who is treated as socially legitimate, identity ceases to function as neutral representation and becomes a mechanism of social ordering.
This paper starts from an observation that has become unremarkable precisely because it is now routine. Many of the spaces in which people form relationships, build reputations, seek opportunity, and participate in public life are organised by systems that decide what becomes visible. Platforms do not simply distribute information. They allocate attention and credibility through ranking, recommendation, and enforcement mechanisms. This allocation constitutes a form of governance, even when it is framed as personalisation, optimisation, or user choice [1]. The governing effect is strongest where identity and visibility intersect, because what is rendered visible becomes socially real, while what remains unseen becomes socially marginal.
The central argument developed here is that AI-driven platforms increasingly operate as social infrastructures. They perform functions traditionally associated with institutional power: classifying social actors, setting behavioural boundaries through moderation, and structuring participation through exposure rules [1]. Digital identity is where these functions converge. It is the interface through which individuals encounter the platform, but also the channel through which platforms sort people into categories that carry durable social consequences. These consequences are rarely experienced as explicit decisions. They appear instead as reach, traction, reputational standing, friction, or quiet disappearance. Yet their cumulative effects are political in a precise sense: they distribute legitimacy, opportunity, and voice unevenly across society [3,4]. Figure 1 visualises this conceptual pathway, tracing how AI-driven identity construction feeds into differential visibility, participation, recognition, and downstream societal outcomes.
Before proceeding, it is necessary to clarify how four core constructs are used throughout this paper, since they can blur conceptually despite being analytically distinct. Identity, as used here, refers not to self-understanding or subjective experience, but to the socially consequential classificatory profile that AI-driven systems infer, assign, and stabilise around individuals—the form in which a person becomes legible and actionable to the platform. Visibility refers to the degree to which a classified identity is surfaced, amplified, and circulated within the platform environment; it is the primary mechanism through which inferred identity acquires social reality and consequence. Recognition refers to the social acknowledgement of an individual as a legitimate participant in a given domain—distinct from visibility in that it involves the ascription of credibility, authority, and worth, not merely presence or reach. Participation refers to meaningful engagement in social, civic, or public life, distinguished from formal access or technical presence by the capacity to influence outcomes and be taken seriously by others. These four constructs form a sequential but interconnected logic that structures the paper’s argument: identity is inferred by the system, visibility is allocated through ranking and recommendation, recognition is conferred or withheld on the basis of visibility signals, and participation is enabled or constrained accordingly.
Three closely linked societal concerns follow from this framing. The first concerns participation. Platforms can offer access widely while still rationing meaningful voice through ranking systems and credibility signals. Participation becomes conditional when visibility is unevenly allocated and when certain identities systematically face higher friction or lower amplification as a routine outcome of optimisation [4,5]. The second concern is recognition. In AI-mediated environments, misrecognition is not limited to occasional technical error. It can become structural when identity is inferred through proxies and translated into persistent credibility and visibility outcomes. Research on recommender systems and group dynamics shows how algorithmic processes can produce patterned differences between in-groups and out-groups in who is surfaced, legitimised, and taken seriously [3]. The third concern relates to group formation and social cohesion. When platforms shape who encounters whom through link recommendation and clustering, they can amplify segmentation and polarisation not only through content, but through the architecture of exposure itself [6].
The core contribution of this paper is to bring these dynamics into a single analytical frame by treating visibility as a form of social power. Visibility is the conversion point where classification becomes consequence. It is the mechanism through which inferred identity is translated into attention, legitimacy, opportunity, or, conversely, marginality and social erasure. This matters for governance because a narrow focus on accuracy or isolated bias fails to capture how systems can be technically competent while still producing patterned disadvantage across social groups. A susceptibility lens helps clarify why harms concentrate rather than distribute randomly, since some identities and social positions are systematically more exposed to algorithmic disadvantage and less able to contest its outcomes [5].
What is novel in this framework is not the individual concepts it draws upon, but the analytical integration it proposes. Prior scholarship has treated platform governance, recognition theory, and participation as largely separate domains. This paper argues that visibility is the structural hinge that connects them—the mechanism through which AI-driven platforms exercise social power and through which identity classification is converted into durable social consequence. By locating visibility within the broader logic of infrastructural governance, the framework moves beyond descriptive accounts of algorithmic bias to offer an explanatory account of how social inequality is produced through ordinary platform operations. This reframing carries direct implications for governance: rather than asking only whether a system is accurate, it invites the question of how visibility is distributed, contested, and made publicly accountable.
A clarification of scope and analytical boundaries is warranted before proceeding. This paper does not aim to resolve empirical questions about the prevalence or magnitude of the phenomena it describes, nor does it seek to evaluate specific platforms or adjudicate between competing governance architectures. It does not claim that all platforms produce identical effects, or that the dynamics described apply uniformly across all social contexts and political settings. Its purpose is conceptual: to construct an analytical frame that renders the structural dimensions of digital identity governance visible and to identify the questions that empirical and normative research must address. Readers approaching from empirical traditions should treat the argument as a generative resource for research design rather than as a set of directly testable propositions.
This paper is, more precisely, a conceptual reconstruction. It does not report original data, test hypotheses, or evaluate specific platform cases. Instead, it works by systematically integrating dispersed insights from platform governance, recognition theory, digital identity scholarship, and participation research in order to construct a unified analytical framework. The method is theoretical synthesis and conceptual elaboration: existing findings are reinterpreted and reorganised under a new explanatory logic rather than reproduced. The result is a framework that makes explicit the structural connections between phenomena that existing literature tends to treat as separate, and that identifies the governance implications that follow from treating those connections as central. Readers should therefore engage with the paper as an analytical contribution—one intended to generate research questions, inform empirical inquiry, and reframe policy debate—rather than as a report of findings.
The sections that follow develop this argument cumulatively. The paper first situates the analysis within theories of identity, recognition, and participation, before reframing AI-driven platforms as governing social infrastructures. It then examines algorithmic identity production through the politics of visibility, the conditionality of participation, the structural nature of misrecognition and social harm, and the re-engineering of group relations through curation and clustering. The final sections synthesise these strands to identify the societal risks of concentrated power and unevenly distributed harm, and to outline principles for governing digital identity as a societal challenge rather than a purely technical issue.

2. Theoretical Anchors: Identity, Recognition, and Social Participation

Classical and contemporary theories of identity converge on a foundational assumption: identity is socially produced through interaction, interpretation, and recognition within institutional and relational contexts. Whether framed as stable, fluid, or performative, identity is assumed to emerge through human judgement operating within shared norms that make social meaning intelligible. Abramson’s account of identity in the information society situates identity within systems of identification that link individuals to social meaning and institutional recognition, rather than reducing identity to technical classification [6,7,8]. Beduschi’s normative treatment similarly emphasises recognition as an ethical and social process through which individuals acquire dignity, legitimacy, and standing within a community [9,10,11,12,13]. Across these perspectives, identity is understood as something negotiated, contested, and socially validated. Table 1 distils these assumptions and highlights where they begin to break down under AI-mediated conditions.
This interpretive foundation becomes increasingly unstable in platform environments. Smith’s analysis of algorithmic filtering demonstrates that digital identities on major platforms are no longer primarily the result of intentional self-presentation. Instead, they are assembled through predictive categorisation, behavioural inference, and commercial optimisation [4]. Identity is decomposed into datafied signals that are actionable for ranking, targeting, and monetisation, rather than intelligible as social meaning. Joseph extends this diagnosis by examining the reflexive effects of algorithmic feedback loops. Platform-generated signals of relevance, popularity, and value do not merely reflect identity; they reshape self-understanding itself, as individuals internalise system outputs as cues about worth, legitimacy, and belonging [5]. These accounts document transformation, but their deeper implication is more radical: identity formation is no longer merely socially mediated. It is increasingly infrastructurally produced.
Social identity theory offers an important but incomplete response to this shift. Whelan’s networked social identity theory explicitly acknowledges that identity formation now occurs within digitally mediated networks, where technological architectures influence alignment, affiliation, and group belonging [1]. This move is analytically significant because it recognises that identity is shaped by mediated environments rather than confined to face-to-face interaction. However, the framework remains conceptually optimistic. Networks are treated primarily as amplifiers of social processes, not as sites where identity categories and group boundaries are pre-structured through automated classification. Empirical research complicates this position. Carrasco-Farré et al. show that recommender systems systematically privilege in-group content while suppressing out-group visibility through optimisation dynamics embedded in platform design [14]. Their findings indicate that group identity is not merely reinforced through interaction but actively produced through algorithmic sorting mechanisms that operate prior to, and independently of, user intent. This exposes a critical theoretical gap: existing social identity theory lacks an account of algorithmic pre-grouping, in which group boundaries are inferred rather than chosen.
Recognition theory faces a deeper rupture still. Normative accounts of recognition presuppose a recogniser capable of moral evaluation and accountability. Recognition is understood as an ethically charged act tied to dignity, legitimacy, and mutual respect. In algorithmic environments, this assumption no longer holds. Akpinar et al. demonstrate that algorithmic visibility regimes reward epistemic conformity and penalise deviation, producing systematic exclusion without deliberate intent [15]. Recognition becomes a statistical outcome of ranking and amplification rather than a moral judgement. Masiero sharpens this critique by conceptualising digital identity as platform-mediated surveillance, where recognition is inseparable from traceability, compliance, and control [3]. Under these conditions, misrecognition is not an isolated error that can be corrected through appeal. It is often opaque, persistent, and embedded in data infrastructures that diffuse responsibility across technical pipelines. Classical recognition theory offers limited resources for analysing this structural displacement of judgement.
The literature on participation reveals a parallel theoretical lag. Traditional accounts link participation to agency, deliberation, and influence within civic and social institutions. Papa and Ioannidis show how algorithmic curation shapes civic engagement on platforms such as Facebook by selectively amplifying certain forms of participation while marginalising others [16]. Yet participation is still often treated as a downstream effect of exposure. Cardullo and Kitchin push further, arguing that platform citizenship frequently substitutes substantive participation with procedural interaction, offering visibility without influence and engagement without power [17,18]. Selcan provides empirical evidence that engagement-driven recommendation systems can intensify polarisation rather than support collective deliberation, fragmenting publics into antagonistic clusters [19]. These studies indicate that participation under AI governance is no longer a pathway to agency, but a conditional outcome of algorithmic visibility.
These strands reveal a consistent pattern. Classical and contemporary theories conceptualise identity, recognition, and participation as social achievements situated within institutional contexts. Recent empirical research shows that AI-driven platforms reorder these processes by intervening upstream, before social interaction occurs. Identity is inferred through data correlations rather than negotiated meaning. Recognition is operationalised through ranking, visibility, and credibility metrics rather than ethical judgement. Participation is filtered through algorithmic gatekeeping rather than guaranteed by access. Table 1 synthesises these shifts and clarifies why existing theoretical frameworks struggle to account for them.
The central theoretical contribution of this paper is to argue that these transformations cannot be addressed by simply extending existing frameworks into digital contexts. They require a reconceptualisation of identity, recognition, and participation as processes governed by algorithmic social infrastructures. Rather than merely mediating social life, AI-driven platforms increasingly define its conditions of possibility. This reframing moves beyond descriptive accounts of digital influence and provides a foundation for analysing how power, exclusion, and social ordering are produced in contemporary AI-mediated societies.
In terms of its specific contributions to platform governance scholarship, this paper does three things that, taken together, are not replicated by existing work. First, it proposes visibility as the analytical pivot that connects identity production, recognition, and participation—a connection that current accounts address only in partial or domain-specific ways. By treating visibility as the conversion point between classification and social consequence, the framework makes legible a mechanism that cuts across the studies on algorithmic bias, recognition, and civic participation simultaneously. Second, it develops a structural susceptibility lens—drawing on Lopez [20,21]—that explains why algorithmic disadvantage concentrates predictably among certain social positions rather than distributing as random error. This moves the governance conversation beyond individual incident response toward systemic risk management. Third, the paper reframes digital identity governance as an institutional responsibility requiring public standards for visibility allocation, contestability, and structural fairness—rather than a technical matter of model accuracy or user privacy. These three contributions jointly provide a foundation for empirical and policy work that existing platform governance frameworks, focused primarily on content moderation, consent, or transparency, do not yet supply.
Table 1. Theoretical perspectives on identity and participation and their limitations in AI-mediated contexts.
Analytical anchor: Identity formation
Core assumption in pre-AI social theory: Identity is produced through social interaction, interpretation, and institutional recognition. It is negotiated and made meaningful through human judgement and shared norms.
How AI-driven platforms reconfigure the process: Identity is inferred upstream from behavioural data and predictive proxies, stabilised through platform categories, and rendered actionable through ranking systems. Identity becomes an infrastructural output rather than a negotiated social achievement.
Key literature signals: Abramson frames identity within systems of social identification and meaning [6]. Beduschi ties identity to dignity and ethical recognition [9]. Smith shows algorithmic filtering fragments and commercialises self-presentation [4]. Joseph shows feedback loops reshape self-understanding [5].
Conceptual gap this paper addresses: Existing theory cannot explain identity formation when classification precedes interaction. This paper reframes identity as infrastructurally produced through algorithmic classification and visibility allocation.

Analytical anchor: Recognition
Core assumption in pre-AI social theory: Recognition is a moral and social act through which individuals acquire legitimacy and dignity. Misrecognition can be contested because a recogniser and judgement process exist.
How AI-driven platforms reconfigure the process: Recognition is operationalised as visibility, ranking, and credibility cues. Misrecognition becomes structural, persistent, and opaque, embedded in automated pipelines rather than discrete decisions.
Key literature signals: Beduschi treats recognition as dignity-bearing [9]. Akpinar et al. show exclusion emerges through routine visibility dynamics [15]. Masiero shows recognition is tied to surveillance, traceability, and control [3].
Conceptual gap this paper addresses: Recognition theory breaks down when recognition is automated and responsibility is diffused. This paper reconceptualises recognition as platform-mediated visibility governance rather than interpersonal validation.

Analytical anchor: Visibility as social power
Core assumption in pre-AI social theory: Visibility emerges through social norms, institutional practices, and collective attention. Being seen is socially negotiated.
How AI-driven platforms reconfigure the process: Visibility is allocated through ranking and recommendation systems that convert classification into consequence. Being seen becomes a governed resource rather than a social outcome.
Key literature signals: Akpinar et al. model visibility as a mechanism of exclusion [15]. Carrasco-Farré et al. show systematic in-group visibility advantage in recommender systems [14]. Özmen et al. link personalisation to social identity harm [22].
Conceptual gap this paper addresses: The literature treats visibility as an outcome. This paper makes visibility the central explanatory mechanism linking identity inference to social stratification and exclusion.

Analytical anchor: Group membership and social identity
Core assumption in pre-AI social theory: Groups form through shared identification, interaction, and meaning-making. Boundaries are socially constructed through comparison and affiliation.
How AI-driven platforms reconfigure the process: Platforms infer and pre-structure group boundaries through clustering, recommendation, and categorisation, shaping affiliation before conscious choice.
Key literature signals: Whelan theorises networked social identity [1]. Carrasco-Farré et al. show algorithmic production of in-group/out-group dynamics [14]. Selcan Burcu shows recommendation systems intensify clustering and polarisation [19].
Conceptual gap this paper addresses: Social identity theory under-theorises algorithmic pre-grouping. This paper treats group boundaries as partially engineered by visibility and exposure infrastructures.

Analytical anchor: Participation
Core assumption in pre-AI social theory: Participation is tied to agency, deliberation, and influence within social and civic institutions. Access implies voice.
How AI-driven platforms reconfigure the process: Participation becomes conditional on visibility and amplification. Users may engage, but influence is rationed through algorithmic gatekeeping.
Key literature signals: Papa and Ioannidis show civic participation is shaped by algorithmic curation [16]. Cardullo and Kitchin show platform citizenship substitutes interaction for influence [17]. Selcan Burcu shows engagement optimisation fragments publics [19].
Conceptual gap this paper addresses: Participation theory over-relies on access. This paper reframes participation as visibility-dependent and recognition-conditioned, not guaranteed by platform entry.

Analytical anchor: Misrecognition and social harm
Core assumption in pre-AI social theory: Misrecognition is an error or injustice that can be identified and remedied through institutional processes.
How AI-driven platforms reconfigure the process: Misrecognition becomes a patterned condition produced by ordinary optimisation, with harms concentrating on predictable groups.
Key literature signals: Akpinar et al. show exclusion without intent [15]. Lopez conceptualises susceptibility to algorithmic disadvantage [21]. Masiero shows identity infrastructures normalise control [3].
Conceptual gap this paper addresses: Existing accounts treat harm as exceptional. This paper conceptualises misrecognition as a structural feature of algorithmic identity governance.

Analytical anchor: Power and governance
Core assumption in pre-AI social theory: Social power is exercised through identifiable institutions with formal accountability mechanisms.
How AI-driven platforms reconfigure the process: Power is exercised through opaque infrastructures that allocate visibility, legitimacy, and participation while diffusing responsibility.
Key literature signals: Törnberg shows platforms govern through design and optimisation [2]. Lu highlights accountability gaps in algorithmic governance [23].
Conceptual gap this paper addresses: This paper reframes AI governance as a societal issue of infrastructural power, not a narrow technical or compliance problem.

3. AI-Driven Platforms as Social Infrastructures

A persistent error in public and policy debate is to treat AI-driven platforms as tools that operate outside society and merely act upon it. That framing is no longer adequate. Contemporary platforms increasingly function as social infrastructures: systems that organise everyday life by structuring visibility, participation, and legitimacy in ways comparable to older institutions. What distinguishes them is not simply that they are digital or privately owned, but that they govern through computational processes that translate social relations into technical outputs while presenting those outputs as neutral, personalised, or convenient.
Törnberg’s account of platform governance is central to this shift because it rejects the idea of platforms as passive containers for social interaction. Platforms do not merely host social life; they regulate it by embedding rules into design choices, performance metrics, and automated enforcement systems [2,24]. These forms of regulation are not peripheral. They shape what counts as acceptable speech, credible identity, and legitimate participation. Platforms therefore function as institutions, not because they resemble legislatures or courts, but because they perform core institutional functions: norm setting, boundary drawing, and behavioural steering at scale [2,24]. Their infrastructural power lies in becoming backgrounded, normalised, and difficult to exit without significant social cost.
This institutional capacity rests on specific governing mechanisms. Platform governance is enacted primarily through algorithmic classification, ranking, and moderation. Classification renders continuous social life into discrete categories that can be managed. Ranking transforms those categories into hierarchies of attention, opportunity, and credibility. Moderation enforces the boundaries of permissible participation. Together, these mechanisms establish the conditions under which identity is recognised, and participation becomes visible at all. This is where the theoretical argument of Section 2 becomes concrete: when recognition and participation depend on ranking and classification, the platform ceases to be a medium. It becomes an authority.
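The joint operation of these mechanisms can be made concrete with a deliberately minimal sketch. The following Python toy model is illustrative only: the account labels, risk scores, engagement values, and thresholds are invented assumptions, not a description of any actual platform's pipeline. Its purpose is simply to show how classification (a risk label), moderation (a threshold gate), and ranking (a score ordering) combine to determine what surfaces.

```python
# Toy sketch (not any platform's actual pipeline): how classification,
# ranking, and moderation jointly decide what becomes visible.
# All categories, weights, and thresholds here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    inferred_risk: float         # classification: a system-assigned risk proxy
    predicted_engagement: float  # ranking input: a system-predicted score

def moderation_gate(post: Post, risk_threshold: float = 0.7) -> bool:
    """Boundary-drawing: posts above the risk threshold are withheld."""
    return post.inferred_risk < risk_threshold

def visibility_score(post: Post) -> float:
    """Ranking: attention is allocated by predicted engagement,
    discounted by inferred risk ('soft friction')."""
    return post.predicted_engagement * (1.0 - post.inferred_risk)

posts = [
    Post("established_account", inferred_risk=0.1, predicted_engagement=0.6),
    Post("new_account",         inferred_risk=0.5, predicted_engagement=0.6),
    Post("flagged_account",     inferred_risk=0.8, predicted_engagement=0.6),
]

# Identical content value (same predicted engagement), divergent outcomes:
surfaced = sorted(
    (p for p in posts if moderation_gate(p)),
    key=visibility_score, reverse=True,
)
for p in surfaced:
    print(f"{p.author}: visibility={visibility_score(p):.2f}")
# flagged_account never appears: it is excluded not by a decision about this
# post, but by an upstream classification of its author.
```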
Masiero sharpens this point by showing how digital identity infrastructures operate through surveillance and compliance rather than mutual recognition [3]. In platform environments, recognition is not only social approval; it is legibility to the system. Individuals become recognisable through traceability, stable identifiers, and behavioural consistency. This form of recognition is structurally different from the dignity-based recognition assumed in normative theory. It functions as governance and carries sanctions. Reduced visibility, shadow banning, limited reach, account restrictions, and reputational downgrades are not merely user experiences. They are enforcement mechanisms that regulate social participation through infrastructural power [3].
Smith’s analysis of algorithmic filtering adds a complementary dimension by exposing the cultural and economic logic underpinning this governance. Algorithmic filtering commercialises self-presentation by translating identity into monetisable signals and segments [4]. What appears as personal expression is reformatted for prediction, targeting, and optimisation. Identity is no longer simply performed to others; it is shaped for system legibility and platform value extraction. This is why treating platforms as neutral intermediaries is analytically insufficient. They actively organise how identities are rendered visible, which identities become salient, and which are treated as commercially or socially valuable [4]. This is institutional work because it reshapes the social meaning of identity, not merely the distribution of content.
The institutional role of platforms becomes even clearer when participation is examined. In classical accounts, participation is linked to agency, influence, and deliberation. In platform environments, participation becomes conditional on visibility and amplification. Papa and Ioannidis show that algorithmic curation mediates civic participation by shaping what people encounter and what becomes socially actionable, thereby altering who participates, how, and to what effect [16]. Participation is no longer a direct expression of civic intent. It is an outcome of platform architecture.
Cardullo and Kitchin reach a similar conclusion through their critique of platform citizenship. They show that civic platforms often offer procedural participation without substantive influence, producing engagement without power [17]. This gap between interaction and impact is a classic feature of institutional authority. It signals that platforms do not simply enable participation; they stage it within boundaries that preserve control over outcomes. Legitimacy is produced through the appearance of openness and responsiveness, while decision-making power remains concentrated and opaque [17].
Authority and legitimacy are therefore embedded in technical operations rather than explicit decisions. The most consequential governance acts occur through ranking, moderation, and exposure rather than formal rulemaking. Scholarship on participatory governance in a datafied society underscores the importance of oversight precisely because data-driven participation can shift from democratic input to managed compliance when platforms control the channels through which participation is expressed and evaluated [25]. Oversight is difficult because these mechanisms are proprietary, continuously evolving, and framed as product features rather than governance instruments.
These effects extend beyond individual participation to collective social life. Selcan shows that link recommendation algorithms can produce polarisation dynamics in online networks [19]. This demonstrates that platforms do not simply reflect social divisions; they can amplify and stabilise them. Polarisation is not only an outcome of content or ideology. It can be an infrastructural effect of optimisation systems that reward engagement, clustering, and repeated exposure. In institutional terms, this constitutes a form of social ordering that reshapes group boundaries, intergroup recognition, and the distribution of trust [19].
A further defining feature of platform governance is the diffusion of accountability. Traditional institutions, however imperfect, typically have identifiable offices, procedures, and lines of responsibility. Platform governance disperses responsibility across automated systems, policy teams, content moderators, and machine learning pipelines. Lu’s analysis of regulating algorithmic harms highlights the difficulty of locating responsibility when decisions emerge from layered systems rather than discrete acts [23]. As a result, individuals experience governance without a clearly identifiable governor, undermining accountability and weakening the possibility of contestation.
This accountability problem is especially visible where platform identity infrastructures intersect with public services and development contexts. Masiero and Bailur argue that digital identity systems must be analysed through justice and power rather than efficiency alone [26]. Krishna’s study of Aadhaar use among informal workers shows how datafied identity systems can reshape access, vulnerability, and social position [27]. Although these cases differ from social media platforms, they reveal a shared structural dynamic: when identity becomes infrastructural, it becomes a gate through which rights, opportunities, and recognition flow. That gate is governed by design choices, data practices, and classification logics that may not align with dignity, fairness, or social inclusion [26,27].
These studies support a clear conclusion. AI-driven platforms function as social infrastructures because they control the conditions of legibility, visibility, and participation. Their authority is exercised through classification, ranking, and moderation. Their legitimacy is sustained through narratives of neutrality, personalisation, and user choice, even when those narratives obscure asymmetries of power and limited avenues for contestation [2,3,4]. Accountability is weakened because governance is embedded in systems rather than expressed through transparent decisions [23,25]. The result is a reconfiguration of the social realm in which platform architectures operate as governing arrangements.
This section sets up the next analytical step. If platforms are social infrastructures, then digital identity is not merely a profile or credential. It is the output of infrastructural processes that allocate visibility and social value. The following section examines this mechanism directly by tracing how algorithmic identity production operates through visibility regimes and how those regimes shape recognition, participation, and exclusion in practice.

4. Algorithmic Identity Production and the Politics of Visibility

Where earlier social theory treats identity as something negotiated through interaction and recognition, identity in AI-mediated platform environments is increasingly produced through a quieter but more consequential mechanism: automated inference. What users experience as self-expression is only one input into a broader identity pipeline in which platforms capture behavioural traces, infer probabilistic attributes, assign category membership, and circulate these outputs through ranking systems that determine who is seen, by whom, and under what conditions. Algorithmic identity, in this sense, is not simply a profile. It is a governance artefact that is continuously updated, operationalised, and monetised through the allocation of visibility [4,5]. Figure 2 illustrates this pipeline by tracing the movement from data capture and classification to visibility distribution and downstream social consequences.
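For readers who find the pipeline easier to grasp operationally, the following toy sketch traces its stages in code. Every feature name, coefficient, and category here is an invented assumption; the sketch illustrates the logic of capture, inference, classification, and visibility weighting, not any real system.

```python
# Minimal illustrative identity pipeline (all features, labels, and weights
# are invented for exposition): behavioural traces become inferred
# attributes, then a governable category, then a visibility weight.

behaviour_log = {
    "late_night_sessions": 14,
    "news_clicks": 3,
    "shopping_clicks": 41,
}

def infer_attributes(log: dict) -> dict:
    """Inference step: probabilistic attributes from behavioural proxies."""
    total = sum(log.values())
    return {
        "p_shopper": log["shopping_clicks"] / total,
        "p_news_reader": log["news_clicks"] / total,
    }

def assign_category(attrs: dict) -> str:
    """Classification step: a governable category, not a self-description."""
    return "commercial_segment_A" if attrs["p_shopper"] > 0.5 else "general"

CATEGORY_VISIBILITY = {"commercial_segment_A": 1.3, "general": 1.0}  # assumed

attrs = infer_attributes(behaviour_log)
category = assign_category(attrs)
print(category, CATEGORY_VISIBILITY[category])
# The person never asserted this identity; it was inferred, assigned, and
# made consequential through the visibility weight attached to its category.
```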
Smith’s analysis of algorithmic filtering offers an early but still incisive account of this transformation. He shows that platforms commercialise self-presentation by routing identity through systems optimised for engagement and marketability, translating complex social selves into legible, tradable segments rather than preserving them as lived meaning [4]. Joseph extends this analysis by focusing on reflexive effects. The algorithmic self is not only assigned by platforms; it is learned and inhabited by users themselves. Signals of relevance, popularity, and value generated by ranking systems shape how individuals interpret their own worth, legitimacy, and social standing [5]. Algorithmic identity production is therefore not merely representational. It is formative. It alters how people construct and regulate the self in response to what platforms reward and penalise.
This is where the politics of visibility becomes central. Visibility is often framed as a neutral outcome of reach or engagement. In practice, it functions as a form of social power because it governs access to recognition, participation, and basic social legibility. Visibility is not simply accumulated through effort or merit; it is allocated through ranking systems that decide what content is surfaced, whose identities are amplified, and which groups become socially salient. Once visibility is understood as an allocative mechanism rather than a passive outcome, ranking itself becomes political, even when framed as personalisation or relevance optimisation.
Akpinar et al. make this mechanism explicit by modelling how algorithmic systems shape visibility within epistemic communities. Their simulation study shows that visibility is systematically uneven and that exclusion can be produced through ordinary ranking dynamics rather than overt censorship or removal [15,28]. The significance of this finding lies not in its specific setting, but in what it reveals about platform governance more generally. Visibility operates as an infrastructural resource that platforms ration, and exclusion can arise as a cumulative effect of optimisation rather than as an exceptional intervention.
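The qualitative dynamic can be reproduced with a toy simulation far simpler than the models in this literature. All parameters below are arbitrary assumptions; what matters is the cumulative-advantage pattern they generate.

```python
# A minimal rich-get-richer sketch in the spirit of (but much simpler than)
# the visibility models discussed above. Parameters are arbitrary.

import random

random.seed(1)
N_USERS, ROUNDS, TOP_K = 100, 500, 5
attention = [1.0] * N_USERS  # everyone starts equal

for _ in range(ROUNDS):
    # Ranking surfaces the current top-K (ordinary optimisation, no bans).
    ranked = sorted(range(N_USERS), key=lambda u: attention[u], reverse=True)
    for u in ranked[:TOP_K]:
        attention[u] += 1.0  # surfaced users accumulate further attention
    # Everyone keeps posting; unsurfaced users gain almost nothing.
    attention[random.randrange(N_USERS)] += 0.1  # small exogenous noise

top_share = sum(sorted(attention, reverse=True)[:TOP_K]) / sum(attention)
print(f"Share of all attention held by top {TOP_K} users: {top_share:.0%}")
# No user was removed or censored, yet most users end up socially absent:
# exclusion as a cumulative effect of ranking, not an explicit sanction.
```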
Carrasco-Farré et al. deepen this analysis by demonstrating how recommender systems generate patterned identity outcomes through in-group and out-group visibility dynamics [14]. Their work shows that recommendation architectures can systematically privilege in-group content while marginalising out-groups, even in the absence of explicit discriminatory intent. This challenges a common assumption that identity harms are primarily caused by prejudiced users or malicious actors. Instead, the architecture of recommendation itself can reproduce group advantage and disadvantage by shaping what becomes visible and therefore socially legitimate [14]. When visibility is unevenly distributed, recognition becomes uneven. When recognition becomes uneven, social stratification follows.
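A comparably minimal sketch illustrates the recommender case. The only assumptions are an 80/20 group split and a modest in-group scoring bonus standing in for homophily in engagement signals; the aggregate visibility gap emerges from the ranking alone.

```python
# Toy homophily-biased recommender (illustrative only): items from a user's
# inferred in-group score slightly higher, so majority-group content
# dominates exposure with no explicit discriminatory rule anywhere.

import random
random.seed(7)

# Assumed population: 80 majority-group and 20 minority-group item authors.
ITEM_GROUPS = ["majority"] * 80 + ["minority"] * 20

def recommend(user_group: str, k: int = 10) -> list[str]:
    # Score = random base relevance + a modest in-group bonus: the only
    # 'bias' in the model.
    scored = [(random.random() + (0.3 if g == user_group else 0.0), g)
              for g in ITEM_GROUPS]
    return [g for _, g in sorted(scored, reverse=True)[:k]]

exposure = {"majority": 0, "minority": 0}
for user_group in ITEM_GROUPS:          # users mirror the 80/20 split
    for g in recommend(user_group):
        exposure[g] += 1

total = sum(exposure.values())
for g, n in exposure.items():
    print(f"{g} share of total exposure: {n / total:.0%}")
# Minority content ends up well below its 20% share of the inventory:
# the visibility gap is an emergent property of ranking, not of intent.
```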
Research on personalisation and identity harm reinforces this conclusion by linking visibility allocation to the stability of self-understanding. Özmen et al. explicitly examine whether personalised recommendations can harm social identity, treating identity outcomes as a dependent variable rather than an assumed constant [22]. Jawad et al. similarly show that AI personalisation influences self-perception, group identity, and online social interaction [29]. These studies indicate that identity harms are not limited to misclassification or isolated errors. They include longer-term effects on belonging, self-worth, and group alignment that emerge through repeated exposure to algorithmic cues about what is valued, normal, and socially rewarded.
This literature points to two tightly coupled processes at the core of algorithmic identity production. The first is classification and inference. Platforms translate complex social life into categories and predicted attributes because categories are governable. The second is circulation through ranking. These categories become consequential because they are used, directly or indirectly, to allocate attention, structure recommendations, and determine social legibility. The political stakes lie in the fact that both processes embed normative judgments about relevance, credibility, and authenticity, even when they are presented as technical or neutral operations.
Lopez’s framework of susceptibility to algorithmic disadvantage is particularly useful here because it shifts attention from isolated bias incidents to patterned vulnerability [21]. Some individuals and groups are more exposed to algorithmic disadvantage because their social position, behavioural signals, or data patterns are more likely to be misread, discounted, or treated as risky. This reframes identity harm as a structural phenomenon rather than an accidental one. Algorithmic identity production does not affect all users equally, and the resulting inequalities are not random. They are patterned and predictable [21].
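The patterned character of this exposure can be illustrated with a small hypothetical model in which one uniformly applied credibility rule, with assumed distributions and thresholds, yields sharply different outcomes for groups whose behavioural signals differ.

```python
# Illustrative sketch of 'patterned susceptibility' (all numbers assumed):
# one scoring rule, applied uniformly, misreads the group whose behavioural
# signals its proxies were not built around.

import random
random.seed(3)

def credibility_score(posting_regularity: float, profile_completeness: float) -> float:
    # A single, ostensibly neutral rule: rewards regularity and completeness.
    return 0.6 * posting_regularity + 0.4 * profile_completeness

def sample_user(group: str) -> tuple[float, float]:
    if group == "well_resourced":    # stable connectivity, full documentation
        return random.uniform(0.6, 1.0), random.uniform(0.7, 1.0)
    else:                            # e.g. informal workers: irregular access,
        return random.uniform(0.1, 0.7), random.uniform(0.2, 0.8)  # sparse records

THRESHOLD = 0.6  # assumed credibility cut-off for full participation
for group in ["well_resourced", "precarious"]:
    passed = sum(
        credibility_score(*sample_user(group)) >= THRESHOLD
        for _ in range(10_000)
    )
    print(f"{group}: {passed / 10_000:.0%} cleared the credibility threshold")
# Same rule, same threshold, sharply different pass rates: disadvantage is
# patterned and predictable, not a random scattering of individual errors.
```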
These dynamics become even more pronounced when identity is tied to traceability and compliance. Masiero’s analysis of digital identity as platform-mediated surveillance shows how identity infrastructures operate through monitoring and control, making legibility to the system a condition for access and recognition [3]. When this logic enters platform environments, visibility and legitimacy become conditional on behavioural conformity, data completeness, and stable identifiers. Individuals who cannot or will not meet these expectations may remain technically present while becoming socially marginal, less visible, or less credible. The result is a subtle but powerful form of stratification that operates through legibility itself.
Visibility allocation also reshapes group relations. Recommendation systems influence what publics perceive as normal, representative, or popular. When clustering is intensified and cross-group exposure is reduced, group boundaries harden. Santos et al. show that link recommendation algorithms can shape polarisation dynamics by reorganising network structures through repeated, patterned exposure [19]. This connects directly back to identity. Uneven group visibility and engineered interaction patterns can stabilise antagonistic identities, render out-groups more distant or threatening, and distribute recognition along increasingly polarised lines.
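A toy network simulation, with invented parameters, makes this structural mechanism visible: starting from a mildly homophilous network, a simple friend-of-friend recommender steadily reduces the share of cross-group ties.

```python
# Toy network sketch (parameters invented): a friend-of-friend link
# recommender applied to a mildly clustered network reduces cross-group
# ties, hardening group boundaries by structure alone.

import random
from itertools import combinations

random.seed(5)
N = 60
group = {i: i % 2 for i in range(N)}            # two equal social groups
edges = set()
for i, j in combinations(range(N), 2):          # mildly homophilous start
    if random.random() < (0.12 if group[i] == group[j] else 0.06):
        edges.add((i, j))

def neighbours(u):
    return {b if a == u else a for a, b in edges if u in (a, b)}

def cross_share():
    return sum(group[a] != group[b] for a, b in edges) / len(edges)

print(f"cross-group ties before: {cross_share():.0%}")
for _ in range(300):                            # recommendation rounds
    u = random.randrange(N)
    nbrs = neighbours(u)
    candidates = [v for v in range(N) if v != u and v not in nbrs]
    if not candidates:
        continue
    # Friend-of-friend rule: suggest the non-neighbour with the most
    # common neighbours; the user always accepts.
    v = max(candidates, key=lambda w: len(nbrs & neighbours(w)))
    edges.add((min(u, v), max(u, v)))
print(f"cross-group ties after:  {cross_share():.0%}")
# Triadic closure mostly recommends within-group links, so the network
# segregates structurally without any change in opinions or content.
```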
The central implication is that algorithmic identity production functions as a mechanism of social stratification because it allocates three interdependent resources: legibility, visibility, and recognition. Legibility determines whether the system can classify an individual in actionable terms. Visibility determines whether that individual appears in the social field. Recognition determines whether they are treated as legitimate, credible, and worth engaging. These resources are allocated through systems that are opaque, proprietary, and optimised for platform objectives rather than public values [4,14]. As a result, platforms make consequential decisions about social standing and inclusion while diffusing responsibility across automated pipelines that are difficult to contest.
This section advances a core claim that the subsequent sections develop further. The politics of visibility is not an accessory to digital identity. It is the mechanism through which digital identities become socially real. AI-driven platforms produce, stabilise, and circulate identities through feedback loops and ranking systems that determine who counts, who is seen, and who is ignored [14,15,21]—reshaping recognition, participation, and social order.

5. Participation Reconfigured: Inclusion, Exclusion, and Conditional Belonging

Participation on AI-driven platforms is often framed as inherently democratic. Access is typically open, and users are encouraged to post, comment, mobilise, and organise. Yet the social reality of platform participation is far more selective. Participation has become increasingly conditional, not because access is restricted, but because platforms determine what participation counts as meaningful through algorithmic gatekeeping, ranking, and performance metrics. The ability to speak is not equivalent to the ability to be heard, and visibility is neither neutral nor evenly distributed. This section argues that platform participation is best understood as a managed social process in which inclusion is offered widely, while influence and belonging are allocated unevenly. Table 2 summarises the principal mechanisms through which this conditionality is produced and the typical pathways through which exclusion is experienced.
A useful starting point is the distinction between formal access and effective participation. Work in the International Journal of Communication on participatory governance in a datafied society captures a broader shift from participation as input toward participation as oversight. Participation cannot be assessed solely by whether individuals are able to submit views; it must also be judged by whether systems respond to those views in ways that enable accountability and influence [25]. In platform environments, this responsiveness is often absent because the central levers of participation are not deliberation or representation, but algorithmic amplification and suppression. Participation thus becomes an engineered output of visibility rules rather than a straightforward expression of civic or social intent.
Algorithmic curation plays a central role in this transformation. Papa and Ioannidis show that on platforms such as Facebook, civic participation is mediated through algorithmic curation and platform affordances, meaning the platform does not merely host participation but actively shapes what forms of civic activity users encounter and engage with [16]. This creates a participation environment in which some voices and issues gain traction while others are effectively buried, even when formal access is equal. Participation is increasingly organised through exposure architectures rather than through collective priorities or deliberative processes.
Cardullo and Kitchin extend this critique through their analysis of platform citizenship. They argue that platforms frequently substitute substantive participation with procedural interaction, offering engagement opportunities that generate data and legitimacy without transferring meaningful influence [17]. Feedback tools, reporting interfaces, and engagement prompts create the appearance of inclusion, while decision-making authority remains tightly controlled. When participation is reduced to measurable engagement, metrics become the governing language of belonging. Users learn that visibility and influence depend on performing in ways that align with platform optimisation logics, rather than on the civic or social value of their contributions.
Empirical work in institutional contexts reinforces this pattern. Richards’ study of local councils’ use of digital platforms for citizen engagement shows that platforms can widen the surface area of participation while narrowing its substantive impact through design, moderation, and agenda-setting choices [30]. Even where participatory intentions exist, platform architectures tend to prioritise efficiency, risk management, and reputational control over deliberative depth. Participation persists, but its capacity to shape outcomes is constrained.
Once participation is routed through algorithmic systems, it becomes conditional in at least three structurally distinct ways.
First, participation becomes conditional on visibility. Users may speak, but algorithmic ranking systems determine whether that speech enters the social field at all. Akpinar et al. show that exclusion can occur through ordinary ranking dynamics rather than explicit bans or removals [15]. Individuals can remain technically present while becoming socially absent because their contributions are not surfaced to others. This form of exclusion is particularly powerful because it is difficult to recognise and even harder to contest. It is experienced as silence rather than sanction.
Second, participation becomes conditional on legibility. Platforms reward identities and behaviours that are easy to classify, predict, and monetise. Masiero’s analysis of digital identity as platform-mediated surveillance helps clarify why: legibility to the system is tied to traceability, stability, and behavioural conformity [3]. Individuals who do not fit clean categories, resist stable identifiers, or are flagged as risky through profiling systems may find their participation constrained through subtle frictions framed as safety or policy enforcement. Inclusion thus becomes selective without appearing overtly discriminatory.
Third, participation becomes conditional on alignment with optimisation goals. Selcan shows that recommendation and link algorithms can drive polarisation dynamics in online networks [19]. From a participation perspective, this indicates that certain styles of engagement—particularly those that are emotionally charged, conflictual, or identity-signalling—are structurally advantaged because they perform well under engagement-based ranking. Other forms of participation, including deliberative, contextual, or minority-oriented engagement, are systematically disadvantaged regardless of their social or civic value, as the sketch below illustrates.
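The engagement rates in the following sketch are assumed values chosen only to expose the mechanism: when ranking optimises predicted engagement, the compositional bias against deliberative content follows mechanically.

```python
# Minimal sketch (engagement rates assumed): under engagement-optimised
# ranking, high-arousal posts structurally outcompete deliberative ones
# regardless of their civic value.

import random
random.seed(11)

# Assumed inventory: half high-arousal, half deliberative posts, with
# invented click-through rates reflecting engagement asymmetry.
POSTS = [("high_arousal", 0.10), ("deliberative", 0.03)] * 50

def feed(posts, k=10):
    # Rank purely by noisy predicted engagement (the optimisation target).
    return sorted(posts, key=lambda p: p[1] * random.uniform(0.8, 1.2),
                  reverse=True)[:k]

counts = {"high_arousal": 0, "deliberative": 0}
for _ in range(1_000):                  # 1,000 simulated feed loads
    for style, _ in feed(POSTS):
        counts[style] += 1

total = sum(counts.values())
print({style: f"{n / total:.0%}" for style, n in counts.items()})
# Deliberative posts are half the inventory yet capture essentially none
# of the exposure: the disadvantage sits in the ranking objective itself,
# not in the quality or civic value of the content.
```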
These conditions do not affect all users equally. Vulnerability to algorithmic disadvantage is patterned rather than random. Lopez’s framework of susceptibility to algorithmic disadvantage is instructive here because it shifts attention away from isolated bias incidents and toward structural exposure [21]. Some individuals and groups are more likely to experience misclassification, downranking, or credibility discounting because of how their behaviours are interpreted, the data proxies applied to them, or their lack of network advantages that ranking systems implicitly reward. Participation thus becomes a mechanism through which existing inequalities are reproduced, even in environments that appear formally open.
The central conclusion is difficult to avoid. Platform participation is not a neutral opportunity space. It is a stratified environment in which belonging is conditional on how platforms render identities visible, legible, and valuable. Access is inexpensive. Influence is scarce. That scarcity is managed through algorithmic systems that shift responsibility away from governance structures and onto individual performance. Participation persists, but on governed terms.
Table 2. Modes of Participation and Exclusion in AI-Mediated Social Environments.
Mode of participation: Expressive participation (posting, commenting)
User’s intent: “I am speaking or contributing”
What the platform operationalises: Predicted engagement, policy risk, account reputation
Visibility gatekeeping mechanism: Feed ranking and moderation triage
Typical mode of exclusion: Downranking, limited distribution, soft friction (warnings, reduced reach) [15,25]
How exclusion is experienced: Being ignored rather than banned; silence without explanation
Groups more exposed (patterned susceptibility): New users; low-network users; stigmatised identities subject to elevated risk inference [3,21]

Mode of participation: Networked participation (audience-building, publics)
User’s intent: “I am building community”
What the platform operationalises: Network centrality, retention value, virality likelihood
Visibility gatekeeping mechanism: Recommendation systems and follow suggestions
Typical mode of exclusion: Cumulative advantage and preferential attachment [17]
How exclusion is experienced: Persistent invisibility despite effort; dependence on platform signals
Groups more exposed (patterned susceptibility): Users lacking social capital; minority language communities; peripheral regions [21]

Mode of participation: Civic participation (advocacy, mobilisation)
User’s intent: “I am engaging politically”
What the platform operationalises: Engagement intensity, controversy level, safety thresholds
Visibility gatekeeping mechanism: Algorithmic curation of political content
Typical mode of exclusion: Selective exposure, throttling of sensitive or marginal issues [16]
How exclusion is experienced: Certain causes never gain traction; issue visibility feels arbitrary
Groups more exposed (patterned susceptibility): Activists; marginalised groups; dissenting or minority viewpoints treated as risky [3,16]

Mode of participation: Procedural participation (reporting, feedback, complaints)
User’s intent: “I am holding platforms accountable”
What the platform operationalises: Compliance signals, efficiency metrics, liability exposure
Visibility gatekeeping mechanism: Moderation workflows and policy enforcement systems
Typical mode of exclusion: Performative responsiveness without substantive change [17,25]
How exclusion is experienced: “I reported it, nothing happened”
Groups more exposed (patterned susceptibility): Users with low visibility or status; targets of harassment bearing reporting burdens [21,25]

Mode of participation: Deliberative participation (reasoned dialogue, debate)
User’s intent: “I am discussing to persuade or learn”
What the platform operationalises: Engagement velocity, emotional response, watch time
Visibility gatekeeping mechanism: Engagement-optimised ranking
Typical mode of exclusion: Displacement by high-arousal or polarising content [19]
How exclusion is experienced: Nuance fails to travel; conversation feels distorted
Groups more exposed (patterned susceptibility): Bridge-builders; experts; minority viewpoints generating lower engagement [19]

Mode of participation: Identity-based participation (speaking from lived identity)
User’s intent: “I am being seen as myself”
What the platform operationalises: Identity proxies, credibility cues, inferred trust or risk
Visibility gatekeeping mechanism: Profiling systems and credibility signalling
Typical mode of exclusion: Misrecognition, stereotyping, increased friction [3,14,21]
How exclusion is experienced: Being misread; feeling unsafe; self-censorship
Groups more exposed (patterned susceptibility): Marginalised groups; non-normative identities; users vulnerable to misclassification [3,21]

Mode of participation: Economic participation (creator labour, monetisation)
User’s intent: “I am earning through the platform”
What the platform operationalises: Revenue prediction, brand safety, retention performance
Visibility gatekeeping mechanism: Monetisation eligibility rules and ranking
Typical mode of exclusion: Demonetisation, opaque thresholds, unequal exposure [2,3]
How exclusion is experienced: Income volatility; rules feel arbitrary
Groups more exposed (patterned susceptibility): Creators outside dominant categories; politically sensitive or precarious workers [3,21]

Mode of participation: Institution-linked participation (services via platforms)
User’s intent: “I am accessing services or rights”
What the platform operationalises: Identity assurance, fraud risk, compliance status
Visibility gatekeeping mechanism: Identity verification and eligibility scoring
Typical mode of exclusion: Automated exclusion via documentation or traceability [15,26,27]
How exclusion is experienced: Blocked access; forced compliance; bureaucratic opacity
Groups more exposed (patterned susceptibility): Migrants, refugees, informal workers, those with unstable documentation [15,27]

Mode of participation: Reputational participation (credibility, searchability)
User’s intent: “I am building trust and standing”
What the platform operationalises: Reputational proxies, identity labels, verification status
Visibility gatekeeping mechanism: Search ranking and badges/labels
Typical mode of exclusion: Status stratification and reputational sorting [3,4]
How exclusion is experienced: Diminished credibility; reduced opportunity
Groups more exposed (patterned susceptibility): Newcomers; stigmatised communities; users denied verification or legibility [3,4]

6. Recognition, Misrecognition, and Social Harm in Algorithmic Environments

Recognition is often treated as a secondary social good: desirable, but analytically subordinate to rights, access, or participation. In AI-mediated environments, this hierarchy reverses. Recognition becomes a structural resource because it determines who is rendered legible, credible, and socially real within platform publics. Misrecognition, correspondingly, is not an occasional technical failure. It is a patterned outcome of systems that translate people into data proxies, optimise those proxies for platform objectives, and distribute visibility and credibility unevenly. The central claim of this section is that recognition and misrecognition must be analysed as infrastructural conditions of social life rather than as interpersonal acts or isolated errors.
Classical and normative theories of recognition presuppose an identifiable recogniser capable of moral evaluation and accountability. Beduschi, for example, frames recognition as dignity-bearing and constitutive of personhood, grounded in ethical judgement and social validation [9]. This baseline is analytically useful because it clarifies what changes under algorithmic governance. Platforms do not primarily recognise persons as moral subjects. They recognise signals. Recognition is operationalised through ranking, labelling, verification cues, and credibility indicators generated by layered technical pipelines rather than discrete human judgements. As a result, misrecognition acquires a distinctive character. It is experienced by individuals as personal and consequential yet produced by systems in which no single recogniser exists and no clear point of appeal is available.
Once recognition is operationalised in this way, misrecognition becomes structural. Akpinar et al. demonstrate how algorithmic visibility regimes can systematically shape who becomes visible within epistemic communities, producing exclusion through ordinary ranking dynamics rather than explicit censorship or removal [15]. The significance of this finding is not limited to reduced reach. It shows that exclusion can occur without any identifiable sanctioning decision, making misrecognition cumulative, normalised, and difficult to contest. Individuals may remain formally present on a platform while becoming socially absent, not because they are silenced, but because they are not surfaced.
Recommender systems further entrench this dynamic. Carrasco-Farré et al. show that algorithmic recommendation architectures can generate patterned in-group and out-group visibility effects, systematically privileging some identities while marginalising others through optimisation logics rather than explicit discrimination [14]. In recognition terms, this matters because repeated amplification functions as legitimacy. Identities that are consistently surfaced come to appear representative, credible, and normal. Those that are consistently downranked become peripheral, suspicious, or invisible. Misrecognition thus operates as a hierarchy of attention that reshapes social standing and group boundaries, not merely as a problem of incorrect labelling.
The harms that follow are concrete and unevenly distributed. In platform environments, recognition is tightly coupled to participation and opportunity. Misrecognition can reduce reach, trigger credibility discounting, increase friction, or subject individuals to heightened scrutiny, all of which constrain effective participation. Lopez’s framework of susceptibility to algorithmic disadvantage is particularly useful here because it shifts the analysis from isolated bias incidents to patterned exposure to harm [21]. Certain groups face higher risks of misrecognition because profiling proxies map poorly onto their lived realities or because system priors treat them as higher risk or lower credibility by default. The result is not simply unequal outcomes, but a predictable distribution of vulnerability across social positions.
Masiero’s analysis of digital identity as platform-mediated surveillance extends this harm analysis beyond visibility alone. Recognition, in these systems, becomes inseparable from traceability and compliance rather than dignity [3]. Legitimacy is awarded through system legibility: stable identifiers, behavioural predictability, and data completeness. Those who cannot or will not conform to these expectations are more likely to be treated as suspicious, ineligible, or less worthy of trust. The harm is not only exclusion from visibility or opportunity. It is coercion through infrastructural conditions that make belonging contingent on being rendered governable.
Personalisation intensifies these dynamics through feedback loops that stabilise misrecognition over time. Özmen et al. demonstrate that personalised recommendations can have measurable negative effects on social identity, treating identity outcomes as an empirical variable rather than a neutral background condition [22]. Jawad et al. similarly show how AI-driven personalisation reshapes self-perception, group identity, and social interaction, reinforcing the point that ranking systems do not simply distribute content; they structure interpretive environments [29]. As individuals adapt to what platforms reward, misrecognition becomes internalised. Users self-censor, narrow identity expression, or strategically perform identities that secure visibility. Misrecognition thus operates simultaneously as an external classification and an internal constraint.
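The stabilising force of these feedback loops can be made concrete with a deliberately minimal simulation. The sketch below is illustrative only: the two groups, the small initial measurement gap, and the super-linear exposure rule are assumptions introduced for exposition, not a model of any actual platform.

```python
# Minimal feedback-loop sketch (all parameters hypothetical): two groups of
# content with identical intrinsic quality; the ranker allocates exposure
# super-linearly in past engagement, and exposure generates the next
# round's engagement signal.
quality = {"A": 0.5, "B": 0.5}           # identical true quality
engagement = {"A": 105.0, "B": 100.0}    # 5% initial measurement gap

for _ in range(50):
    # Winner-take-most ranking: exposure weight grows super-linearly with
    # past engagement (the exponent > 1 is the only "bias" in the loop).
    weights = {g: engagement[g] ** 2 for g in engagement}
    total = sum(weights.values())
    for g in engagement:
        impressions = 1000 * weights[g] / total    # exposure allocated by rank
        engagement[g] += impressions * quality[g]  # clicks feed back as signal

share_a = engagement["A"] / sum(engagement.values())
print(f"Group A's share of accumulated visibility: {share_a:.2f}")  # well above 0.50
```

Nothing in the loop refers to identity, yet the initial gap compounds with every iteration. This is the mechanical sense in which misrecognition is stabilised by ordinary optimisation rather than produced by any discrete discriminatory decision.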
These harms scale beyond individual experience. Misrecognition undermines trust along two dimensions. First, individuals lose trust in the fairness and accountability of the environment when outcomes appear arbitrary and difficult to challenge. Second, groups lose trust in one another because visibility regimes shape distorted perceptions of who “people like us” are and what “they” are doing. Santos et al. show that recommendation and link algorithms can intensify polarisation dynamics, suggesting that optimisation-driven exposure patterns can harden group boundaries and fragment publics [19]. Misrecognition contributes directly to this fragmentation when out-groups are persistently rendered as noise, extremity, or threat.
What makes algorithmic misrecognition particularly corrosive is its insulation from accountability. In conventional institutional settings, recognition failures can at least be contested through identifiable procedures and decision-makers. In platform governance, contestation is structurally weakened because recognition outcomes emerge from proprietary, adaptive, and distributed systems. Lu’s analysis of regulating algorithmic harms captures this difficulty: responsibility is diffused across technical pipelines, making remedy and redress hard to locate and enforce [23]. Users experience governance without clear governors, and misrecognition becomes a durable form of power that resists meaningful challenge.
The conclusion, therefore, is not that recognition theory is obsolete, but that it requires recalibration. Recognition and misrecognition in AI-mediated environments are no longer primarily interpersonal achievements. They are infrastructural outcomes produced through classification, ranking, and credibility signalling. Social harm follows because these infrastructures allocate visibility, legitimacy, and belonging in patterned ways, with disproportionate costs borne by marginalised groups and those most susceptible to algorithmic disadvantage [21]. Once recognition is understood as a mode of governance rather than a matter of courtesy or representation, the political stakes become clear. Algorithmic misrecognition is not merely a representational error. It is a mechanism through which inequality, mistrust, and social fragmentation are reproduced in contemporary AI-mediated societies.

7. Group Dynamics and the Re-Engineering of Social Relations

Group life has always been shaped by institutions such as schools, workplaces, religious organisations, neighbourhoods, and political parties. What changes in AI-mediated environments is not the existence of groups, but the mechanism through which group boundaries form, stabilise, and acquire social force. Platforms do not merely mirror existing affiliations. They actively organise social exposure, recommend connections, and structure attention. In doing so, they participate in the re-engineering of social relations by shaping who encounters whom, which differences become salient, and which identities are rewarded with visibility and legitimacy.
A key analytical shift is that group formation increasingly moves from shared social experience to structured exposure. Classical social identity dynamics emphasise interaction, comparison, and meaning-making as the basis of group belonging. In platform environments, however, individuals can be routed into group-like publics without deliberate choice and repeatedly exposed to in-group cues without sustained interaction or reflection. Whelan’s account of networked social identity is helpful insofar as it recognises that identity formation now occurs within digitally mediated networks rather than exclusively face-to-face contexts [1]. Yet the more consequential implication is that networks themselves are increasingly engineered. Recommendation and ranking systems shape which social proximities are likely to form in the first place, pre-structuring group alignment before conscious affiliation occurs.
This is where curation functions as a group-making technology. Recommendation systems do not simply surface content; they propose affiliation. They suggest who to follow, which communities are relevant, which issues deserve attention, and which cultural signals are worth adopting. Santos et al. demonstrate that link recommendation algorithms can drive polarisation dynamics in online networks by intensifying clustering over time [19]. The critical point is not that individuals naturally prefer like-minded others. It is that platforms can convert modest preferences into durable segmentation by repeatedly rewarding homophilous connections and limiting cross-cutting exposure. In effect, algorithmic optimisation scales a social tendency into a social architecture.
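How modest preferences become durable segmentation can be sketched in a few lines of code. The acceptance probabilities, network size, and candidate-pool mechanics below are hypothetical choices made for exposition; the sketch illustrates the dynamic Santos et al. identify rather than reproducing their model [19].

```python
# Sketch: agents accept same-group ties slightly more often than cross-group
# ties (0.6 vs 0.4, assumed values). A recommender that proposes whichever
# candidate has the highest predicted acceptance almost always proposes
# same-group ties, so realised segregation far exceeds the preference.
import random

N = 200
group = [i % 2 for i in range(N)]   # two equal-sized groups
P_SAME, P_CROSS = 0.6, 0.4          # modest homophily in tie acceptance

def in_group_share(optimised: bool) -> float:
    random.seed(7)
    ties = []
    for _ in range(2000):
        a = random.randrange(N)
        pool = [c for c in range(N) if c != a]
        if optimised:
            # Recommender step: rank candidates by predicted acceptance.
            candidates = random.sample(pool, 10)
            b = max(candidates,
                    key=lambda c: P_SAME if group[c] == group[a] else P_CROSS)
        else:
            b = random.choice(pool)  # baseline: random proposals
        if random.random() < (P_SAME if group[a] == group[b] else P_CROSS):
            ties.append((a, b))
    return sum(group[a] == group[b] for a, b in ties) / len(ties)

print(f"random proposals:      {in_group_share(False):.2f}")  # ~0.60, mirrors preference
print(f"optimised recommender: {in_group_share(True):.2f}")   # ~1.00, engineered closure
```

The recommender never discriminates; it only optimises predicted acceptance. Yet the realised network is far more segregated than the preferences of any individual in it, which is the precise sense in which optimisation scales a social tendency into a social architecture.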
Carrasco-Farré et al. clarify why this matters for group boundaries and social recognition. Their work shows that recommender systems can generate systematic in-group and out-group visibility patterns, amplifying in-group salience while marginalising out-groups through optimisation dynamics rather than explicit discrimination [14]. Once visibility and legitimacy are unevenly distributed across groups, intergroup relations shift. In-groups appear more representative and credible, while out-groups become peripheral, stereotyped, or socially distant. This is not simply polarisation as disagreement. It is polarisation as an infrastructural condition in which the social field itself is organised to normalise some identities and marginalise others.
These dynamics are reinforced by engagement-driven incentives. Platforms optimise for behavioural signals such as clicks, watch time, rapid reactions, and repeat engagement. Törnberg’s analysis of platform governance is relevant here because it highlights how platforms regulate social life through design choices embedded in product architectures rather than explicit rules [2]. When engagement becomes the dominant success metric, content that provokes strong affective responses often receives preferential distribution. This does not require intentional promotion of hostility. It is sufficient that high-arousal group signalling performs well under optimisation. Over time, identity performances that fit the engagement economy are rewarded with visibility, shaping both the tone of group interaction and the boundaries of acceptable belonging.
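The structural point can be shown with a toy scoring rule. The weights below are invented for exposition and describe no real platform; the only claim is that a rule rewarding reaction velocity and reshares mechanically advantages high-arousal content without containing any term that favours hostility as such.

```python
# Toy engagement score (all weights hypothetical): fast reactions and
# reshares dominate, so a high-arousal post outranks a nuanced one even
# though the nuanced post holds attention for much longer.
def engagement_score(reactions_per_min: float, watch_time_s: float,
                     reshares: int) -> float:
    return 3.0 * reactions_per_min + 0.02 * watch_time_s + 5.0 * reshares

nuanced = engagement_score(reactions_per_min=0.8, watch_time_s=240, reshares=2)
outrage = engagement_score(reactions_per_min=6.0, watch_time_s=45, reshares=14)
print(f"nuanced: {nuanced:.1f}, high-arousal: {outrage:.1f}")  # 17.2 vs 88.9
```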
The implications for diversity and pluralism are significant. Liberal conceptions of pluralism depend on cross-cutting encounter and the capacity to recognise difference without construing it as threat. Algorithmic curation can weaken these conditions by reducing the frequency and quality of intergroup contact. Papa and Ioannidis show that algorithmic curation shapes civic participation and political exposure, implying that what individuals come to treat as relevant or urgent is partly a function of platform selection [16]. If civic agendas are curated through ranking and recommendation, publics are curated as well. Exposure narrows, interpretive frames diverge, and citizens increasingly inhabit distinct informational and moral environments reinforced by group-specific visibility cues.
At the micro-level, these processes reconfigure everyday social relations. Group membership becomes less about stable affiliation and more about dynamic clustering around content, affect, and identity signals. Individuals may participate in multiple overlapping publics, yet algorithmic systems can still produce coherent segmentation by aligning content streams around predicted preferences. Joseph’s analysis of the algorithmic self is instructive here, showing how users adapt self-presentation based on what secures visibility and affirmation [5]. In group contexts, this encourages identity performances calibrated to audience expectations rather than grounded in durable social commitments. Authenticity becomes strategically negotiated, and conformity pressures intensify even in spaces that ostensibly celebrate individual expression.
The harms become more pronounced when group dynamics intersect with unequal visibility. Akpinar et al. show how algorithmic visibility regimes can systematically exclude certain participants within epistemic communities [15]. This exclusion is not merely interpersonal. It alters who shapes group norms, whose knowledge is treated as legitimate, and whose contributions become part of collective memory. When out-group voices are persistently downranked or de-amplified, groups become epistemically closed. The result is social hardening: communities with strong internal coherence but diminished capacity for dialogue, correction, or mutual recognition.
From a governance perspective, what is most striking is that these transformations occur without the accountability mechanisms typically associated with institutions that shape social cohesion. Research on participatory governance in a datafied society highlights the need for oversight when digital systems structure participation and public life [25]. Yet platform group-making largely occurs through opaque ranking systems and design choices that are difficult to observe, contest, or deliberate over publicly. Lu’s analysis of regulating algorithmic harms captures this challenge: responsibility is dispersed across complex technical systems, making social harms easier to produce than to remedy [23]. In group contexts, this translates into diffuse but persistent shifts in solidarity and trust that no actor has explicitly authorised, yet many people experience.
The analytical conclusion is that AI-driven curation reshapes social relations through three interlinked processes. First, it structures exposure, determining who becomes socially proximate. Second, it rewards group signalling, steering identity expression toward performances that secure visibility. Third, it stabilises segmentation through feedback loops that convert small preferences into durable group boundaries. These processes can weaken pluralism by narrowing intergroup contact, amplify polarisation by hardening group identities, and erode social cohesion by reducing mutual recognition across difference [14,19]. The point is not that algorithms create division from nothing, but that they systematically reconfigure the conditions under which division becomes more likely and cohesion becomes harder to sustain.
This prepares the next step of the argument. If platforms function as social infrastructures that allocate visibility and engineer group proximity, then questions of fairness, accountability, and democratic life cannot be treated as external regulation applied after the fact. They are internal to the way contemporary social relations are being organised.

8. Risk, Power, and Inequality in AI-Mediated Societies

The preceding sections have shown that AI-driven platforms do not merely influence social life; they shape the conditions under which identity becomes legible, participation becomes consequential, and group boundaries stabilise. The question that follows is therefore not simply whether these systems generate harm, but how power is organised through them and why risk is distributed unevenly across social groups. This section argues that AI-mediated societies are structured by a political economy of visibility and control in which platforms concentrate governing power through technical infrastructures while displacing social risk onto users, communities, and institutions with limited capacity to contest or redesign the system. Table 3 synthesises the main concentrations of infrastructural power identified across the paper and maps their associated societal risks.
Power in AI-mediated environments is often described as “influence,” but this language understates what is at stake. Platforms exercise governing power because they control the architectures through which social standing, credibility, and participation are allocated. Authority is thus relocated from publicly contestable decision-making into product design and model pipelines that are privately managed, rapidly iterated, and largely opaque. When ranking and recommendation systems determine who becomes visible, credible, or marginal, platforms exercise a form of institutional power without corresponding transparency or accountability obligations.
This power is intensified by the central role of digital identity governance. Masiero’s analysis of digital identity as platform-mediated surveillance shows how identity infrastructures prioritise traceability, predictability, and control, making legibility to the system a condition of access and legitimacy [3]. In such arrangements, power operates through differential governability. Individuals who are easily classified, verifiable, and behaviourally predictable are treated as lower risk and granted smoother participation. Those who are harder to classify or stabilise are treated as higher risk and subjected to friction, exclusion, or credibility discounting. This is a structural inequality, not a moral one. It arises from how uncertainty is operationalised as risk within automated systems and how risk becomes a basis for unequal treatment.
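The operationalisation of uncertainty as risk can likewise be sketched. The legibility measure and thresholds below are hypothetical; the point is only that friction attaches to how confidently the system can classify a person, not to anything the person has done.

```python
# Sketch of differential governability (measure and thresholds assumed):
# participation friction is a function of classifier confidence and data
# completeness, i.e., of legibility to the system.
from dataclasses import dataclass

@dataclass
class IdentitySignals:
    confidence: float     # classifier confidence in the inferred identity
    completeness: float   # fraction of expected profile data present

def participation_friction(s: IdentitySignals) -> str:
    legibility = s.confidence * s.completeness
    if legibility > 0.8:
        return "frictionless participation"
    if legibility > 0.5:
        return "additional verification required"
    return "restricted pending manual review"

print(participation_friction(IdentitySignals(0.95, 0.90)))  # predictable, data-rich user
print(participation_friction(IdentitySignals(0.55, 0.60)))  # hard-to-classify user
```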
Over time, this logic produces a quiet but durable form of stratification. Lopez’s framework of susceptibility to algorithmic disadvantage helps explain why harms cluster rather than distribute randomly [21,31,32]. Algorithmic systems interact with existing social inequalities. Individuals with weaker networks, less institutional protection, lower digital literacy, or marginalised social identities are more likely to be misrecognised, downranked, or excluded, and they are less able to contest those outcomes. Reduced visibility limits opportunity; reduced opportunity weakens signals of credibility; weakened credibility further reduces visibility. What appears as neutral optimisation thus hardens social hierarchies while preserving a narrative of technical objectivity.
The risks do not remain at the level of individual disadvantage. They scale into collective vulnerabilities, including erosion of trust, fragmentation of publics, and weakened social cohesion. Santos et al. show that recommendation systems can intensify polarisation dynamics through the structured organisation of exposure [19]. This matters for inequality because polarisation is unevenly experienced. Groups already vulnerable to misrecognition are more likely to become targets of amplified hostility or exclusion, while groups favoured by visibility and legitimacy cues gain disproportionate agenda-setting power. Carrasco-Farré et al.’s findings on in-group and out-group visibility patterns reinforce this point: when algorithms systematically amplify in-group salience while marginalising out-groups, they shape not only discourse but the distribution of social legitimacy itself [14]. As summarised in Table 3, control over visibility and group proximity functions as a mechanism of power with long-term consequences for pluralism and social trust.
A further source of risk lies in accountability. Platform power is strengthened by the difficulty of contestation. In conventional institutions, harmful decisions can at least be challenged through identifiable procedures and decision-makers. In platform environments, harms emerge through interactions among ranking, moderation, and profiling systems rather than discrete acts. Lu’s analysis of regulating algorithmic harms highlights this accountability gap: when decisions are embedded in complex technical pipelines, responsibility becomes diffuse and remedies difficult to design and enforce [23]. Users and communities experience governance outcomes they can feel but cannot meaningfully interrogate.
These accountability deficits interact with behavioural adaptation to produce deeper social effects. Once individuals internalise that visibility, credibility, and opportunity depend on algorithmic rules, they adapt accordingly. Joseph’s analysis of the algorithmic self captures this reflexive dynamic, showing how users adjust identity performance in response to system feedback [5]. At scale, this produces a society in which conformity to platform incentives becomes a condition of participation. The risk is not only exclusion, but behavioural steering: identity expression, civic engagement, and group affiliation are shaped by incentives that are not democratically defined.
The inequality effects are especially pronounced where platform identity infrastructures intersect with welfare, labour, and access to basic services. Krishna’s study of Aadhaar among informal workers shows how datafied identity systems can reshape vulnerability and justice outcomes, particularly for populations with limited capacity to manage system risk [27]. Masiero and Bailur similarly argue that digital identity for development must be analysed through justice and power rather than efficiency alone [26]. These cases demonstrate that identity governance is not merely a platform issue. It is an emerging mechanism of societal ordering with material consequences for who accesses rights, opportunities, and protection, and who bears the costs of system error, suspicion, or exclusion.
The synthesis is clear. AI-driven platforms concentrate power by controlling infrastructures of visibility, legibility, credibility, and accountability. Risks and harms are unevenly distributed because susceptibility to disadvantage is patterned by existing inequalities and because mechanisms of contestation are weaker than the power these systems exercise. As Table 3 illustrates, this produces a governance environment in which recognition, participation, and legitimacy increasingly flow through opaque, proprietary systems optimised for platform objectives rather than public values [2,3,21]. The following section turns from diagnosis to response.
Table 3. Concentrations of Infrastructural Power and Associated Societal Risks in AI-Mediated Environments.
Concentration of Power | How the Power Is Exercised (Mechanism) | Groups Most Exposed to Risk | Long-Term Societal Risks | Why Contestation Is Structurally Difficult
Control of visibility | Ranking, recommendation, downranking, amplification [2,19] | Low-network users; marginalised identities; dissenting voices [21] | Stratification of voice and opportunity; agenda capture; distorted publics | Decision logic is opaque, adaptive, and continuously changing
Control of legibility | Profiling, inference, identity labels, risk scoring [3,21] | People with unstable documentation; non-normative identities; precarious workers [27] | Exclusion through “risk”; normalisation of surveillance-based legitimacy | Classification is probabilistic; errors are difficult to detect or prove
Control of behavioural boundaries | Moderation pipelines, enforcement rules, friction, bans [2,23] | Groups subject to over-policing; targets of coordinated harassment | Chilling effects; uneven speech constraints; distrust in fairness | Enforcement is inconsistent; appeal processes lack transparency
Control of group proximity | Clustering, link recommendation, social graph shaping [19] | Groups targeted by misinformation or hostility | Polarisation; weakened pluralism; social fragmentation | Exposure patterns are indirect and not traceable to single decisions
Control of credibility cues | Verification badges, labels, reputational ranking [3,14] | Newcomers; marginalised groups; users denied verification | Legitimacy hierarchies; reputational inequality | Cues appear neutral while encoding platform priorities
Control of identity infrastructure in public life | ID requirements, eligibility gates, data-sharing ecosystems [26,27] | Migrants, refugees, informal workers, low-capacity communities | Exclusion from services; rights conditional on legibility | Institutional dependence on identity systems limits alternatives
Control of accountability conditions | Proprietary systems, limited audits, weak transparency [23] | The public at large; especially harmed groups with limited resources | Governance without accountability; erosion of trust | Responsibility is diffused across complex technical pipelines

9. Governing Digital Identity as a Societal Challenge

If the preceding analysis is correct, then the core governance problem is not occasional algorithmic error or isolated bias. It is that digital identity has become a primary mechanism through which legitimacy, visibility, and belonging are allocated in contemporary social life, while the infrastructures performing this allocation are privately designed, weakly contestable, and largely shielded from public accountability [1,2,23,33]. In this context, treating AI governance as a narrow technical exercise focused on model performance or compliance misses the institutional nature of the challenge. The central question is not whether systems function as intended, but who defines the rules of social visibility, under what standards, and with what obligations to those whose lives are being classified, ranked, and rendered legible.
What is being governed must therefore be specified clearly. Digital identity governance is not limited to credentials, verification, or authentication. It is governance of classification, ranking, and credibility signalling. These mechanisms determine who is legible to systems, who is amplified within publics, and who is treated as trustworthy or marginal. They are not neutral because they structure participation and recognition at scale. Törnberg’s account of platforms as governing systems is instructive here because it makes clear that regulation already occurs within infrastructures through design, optimisation, and enforcement, regardless of whether it is formally acknowledged as governance [2]. Masiero extends this argument by showing how identity infrastructures often fuse recognition with traceability and control, making legitimacy conditional on being monitorable and compliant rather than on being treated as a rights-bearing subject [3]. When recognition is defined in these terms, the distribution of dignity becomes a system outcome rather than a moral commitment.
A recognition-aware governance approach therefore requires a shift in evaluative focus. Instead of asking only whether systems are accurate or efficient, governance must ask how recognition is distributed and whether misrecognition concentrates on particular groups in predictable ways [15,21]. Akpinar et al. are critical here because they demonstrate that exclusion can be produced through routine visibility dynamics rather than exceptional censorship [15]. This is precisely why individual appeals and case-by-case remedies are insufficient. When misrecognition is generated by ordinary ranking and amplification, it is structural by design, even in the absence of malicious intent.
Visibility governance must therefore sit at the centre of any serious framework. Platforms frequently describe ranking as personalisation, but ranking allocates social presence. Carrasco-Farré et al. show how recommender systems can privilege in-group content and suppress out-group visibility through optimisation dynamics, reproducing group advantage without explicit discrimination [14]. Santos et al. further demonstrate that recommendation and link systems can reshape network structure in ways that intensify polarisation, producing societal harms that cannot be reduced to user preference alone [19]. If visibility allocation can predictably amplify some groups, marginalise others, and fragment publics, then ranking is not merely a product feature. It is a public ordering mechanism that requires public standards [1,19].
From this diagnosis, governance can be articulated as four interlinked obligations that correspond to the social power platforms exercise.
The first obligation is visibility accountability. Platforms should be required to provide meaningful, comparable information about how visibility is distributed and what trade-offs shape that distribution. This does not require the disclosure of proprietary code. It requires evidence about outcomes. As Papa and Ioannidis show, algorithmic curation shapes which forms of civic participation become visible in the first place, meaning governance that ignores ranking effectively ignores participation itself [16]. At a minimum, platforms should be required to produce ranking impact statements in high-salience domains, documenting the signals optimised, the categories affected, and observed distributional effects across groups [16,19]. Without such reporting, public debate remains focused on symptoms while the allocation mechanism remains hidden.
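What outcome evidence in such a statement might minimally look like can be gestured at in code. The field names and figures below are hypothetical; the substantive requirement is the comparison itself, setting each group’s share of exposure against its share of eligible content.

```python
# Sketch of a distributional exposure report (all data hypothetical):
# a representation ratio below 1.0 means a group is surfaced less often
# than its share of eligible content would predict.
from collections import Counter

impressions = ["A"] * 7200 + ["B"] * 1900 + ["C"] * 900  # logged exposure events
eligible_share = {"A": 0.55, "B": 0.30, "C": 0.15}       # share of eligible items

shown = Counter(impressions)
total = sum(shown.values())
for g, base in eligible_share.items():
    exposure = shown[g] / total
    print(f"group {g}: exposure {exposure:.2f} vs eligible {base:.2f} "
          f"-> representation ratio {exposure / base:.2f}")
```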
The second obligation is contestability with due process. Recognition failures are most damaging when they cannot be challenged. Current platform remedies often treat redress as a customer service issue, but research on platform citizenship shows how procedural interaction can substitute for meaningful influence, particularly in complaint and feedback systems that absorb voice without altering outcomes [17]. Contestability must be understood as a governance requirement, not a discretionary support feature. Individuals should be able to know when identity-related inference or credibility cues affect their visibility, challenge those inferences, and obtain timely review through processes not designed to exhaust or deter them [23]. Lu’s work underscores the importance of enforceable procedural rights in contexts where responsibility is diffused across complex systems [23].
The third obligation is structural risk management rather than incident response. Lopez’s susceptibility framework explains why inequality persists even when explicit bias is denied: disadvantage concentrates on those whose social position and data patterns are more likely to be misread or discounted [21]. Governance should therefore require ongoing evaluation of distributional effects, including recurrent audits for patterned disadvantage, stress testing of marginal cases, and enforceable corrective constraints when unequal exposure is detected [14,21]. Treating inequality as a system property changes the evidentiary threshold. Intent is no longer the standard. Persistent, patterned harm is sufficient to trigger redesign [14].
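The shift from intent to pattern can also be stated operationally. In the sketch below, the representation-ratio floor of 0.8 and the three-window persistence rule are placeholders for thresholds that would need to be publicly negotiated, not recommendations.

```python
# Sketch of a persistence-based audit trigger (thresholds assumed): no
# question of intent is asked; corrective redesign is required once a
# group's representation ratio stays below the floor for several
# consecutive audit windows.
def requires_redesign(ratios_by_window: list[float],
                      floor: float = 0.8, persistence: int = 3) -> bool:
    streak = 0
    for ratio in ratios_by_window:
        streak = streak + 1 if ratio < floor else 0
        if streak >= persistence:
            return True
    return False

print(requires_redesign([0.95, 0.78, 0.76, 0.74, 0.81]))  # True: sustained disadvantage
print(requires_redesign([0.78, 0.95, 0.77, 0.96, 0.75]))  # False: isolated dips
```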
The fourth obligation is public interest alignment where identity infrastructures intersect with rights and basic access. Research on digital identity in development contexts is instructive because it shows how exclusion can arise when legitimacy is defined through system legibility rather than social rights [26,27]. That insight applies directly to platform governance where access to civic participation, work, or public services is mediated through platform identity systems. In such cases, governance standards must be higher. Public institutions should not treat platform infrastructures as neutral delivery channels. They should require alternatives, ensure that access to rights is not conditional on platform legibility, and prevent private identity systems from becoming de facto gates to public life [26,27].
These obligations imply differentiated responsibilities.
Public institutions must set minimum standards for recognition, contestability, and distributional fairness where identity and visibility governance shape civic life and public services [25,26]. Crucially, they must also create enforcement capacity so that obligations are not voluntary [23].
Platforms bear institutional responsibility proportional to their infrastructural power. Because they govern through design and ranking, they must justify how their systems align with inclusion and pluralism and must redesign when visibility allocation produces predictable stratification [1,14,19].
Civil society provides essential social intelligence. Many harms first appear as lived patterns of persistent invisibility, repeated friction, and credibility discounting. Since exclusion can emerge without explicit sanctions, the capacity to detect and aggregate such patterns is itself a governance resource, requiring protections for researchers and meaningful access for independent scrutiny [15,23].
Citizens are not merely consumers who can opt out. Platform infrastructures are embedded in opportunity structures and public life. The relevant question is whether people can shape the conditions of participation rather than simply exit them. Participatory governance approaches are therefore essential because they shift attention from symbolic inclusion to genuine oversight and responsiveness [25].
Taken together, the conclusion is clear. Governing digital identity is a societal challenge because it is governance of recognition, participation, and belonging through infrastructures that allocate visibility and credibility. When these allocations are opaque, they can reproduce inequality through routine optimisation, harden group boundaries, and erode public trust even when formal access remains open [14,19]. A serious governance agenda cannot stop at accuracy, privacy, or user choice. It must regulate visibility allocation, guarantee contestability, manage patterned vulnerability, and align identity infrastructures with public values wherever they intersect with civic life and social rights [21,23,26].
While the present paper is conceptual in orientation, the framework lends itself to empirical operationalisation in future research. Visibility can be examined through comparative audit studies that measure differential reach and ranking outcomes across demographically defined user groups, holding content characteristics constant. Legibility can be approached by testing how variations in profile completeness, identity stability, or behavioural predictability affect algorithmic treatment and exposure rates. Recognition can be assessed through analysis of credibility signal distribution—verification rates, recommendation frequencies, and moderation patterns—across social groups and content types. These strategies illustrate, without exhausting, how the conceptual triad of visibility, legibility, and recognition can anchor empirical inquiry into the infrastructural politics of digital identity, without requiring the present paper to become an empirical study.
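To illustrate the first of these strategies, a matched-pair audit could hold content constant and vary only the inferred group of the poster. The figures below are hypothetical and stand in for logged reach data; only the audit design is being illustrated.

```python
# Sketch of a matched-pair reach audit (all figures hypothetical): each
# pair posts identical content from accounts differing only in inferred
# group, so reach differences isolate the ranker's treatment of identity.
pairs = [  # (reach of group-A account, reach of group-B account)
    (1200, 940), (860, 610), (450, 470), (2100, 1500), (990, 700),
]

ratios = [b / a for a, b in pairs]
mean_ratio = sum(ratios) / len(ratios)
print(f"mean B/A reach ratio on matched content: {mean_ratio:.2f}")  # ~0.79
# A ratio persistently below 1.0 indicates differential reach that cannot
# be attributed to the content itself.
```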

10. Discussion: Implications for Contemporary Societies

The preceding analysis has established that digital identities are produced through infrastructural processes that allocate visibility, credibility, and belonging in patterned and contested ways. This supports the claim that platforms govern through embedded design and optimisation logics rather than through formal public rules [1]. The broader implication is that contemporary societies are drifting toward an institutional arrangement in which recognition, participation, group formation, and public trust are increasingly organised through privately owned infrastructures that exercise institutional power without equivalent institutional accountability [1,23,34].
The first implication is institutional. Sociological theories of social order typically assume that authority is exercised through recognisable institutions, explicit norms, and contestable procedures. AI-driven identity systems introduce a parallel ordering mechanism: governance through classification, ranking, and moderation architectures that operate below the level of public reasoning. This shift is not simply about efficiency or scale. It represents a relocation of rule-making into optimisation pipelines that are difficult to scrutinise and easy to normalise as technical necessity [1,23]. When identity becomes infrastructural, societies are reorganised around legibility, traceability, and scalable categorisation, with consequences for who is recognised as a full social subject and on what terms [2,35].
A second implication concerns citizenship as a lived condition rather than a formal legal status. Citizenship depends on recognition, voice, and the ability to participate without persistent friction. The analysis suggests that platform environments increasingly produce conditional belonging: individuals may be formally present while being substantively marginalised through downranking, misrecognition, and reputational sorting. This dynamic becomes more acute when platform identity systems intersect with public services. Rodríguez and Núñez’s work on digital identity and human rights underscores that trust and security in identity infrastructures are normative commitments tied to dignity and legitimacy, not merely technical goals [36]. Sociologically, this means that identity governance becomes a site where inclusion is either protected as a right or redefined as a privilege contingent on system legibility [35,36].
Participation is therefore also reconfigured. Platforms can widen access while narrowing influence. Research on citizen engagement through public entity platforms shows that participation is shaped not only by willingness to engage but by how input is filtered, prioritised, and acted upon within platform architectures [34]. Esteves similarly argues that when participation is mediated through digital systems, legitimacy depends on responsiveness and accountability rather than on interaction alone [37]. This reinforces a core claim of the paper: access is inexpensive, but influence is scarce, and scarcity is managed through infrastructural rules rather than democratic contestation [1,34,37].
A fourth implication concerns democratic representation. The paper has argued that ranking and visibility allocation function as governance because they organise what becomes publicly present. Rymon’s analysis of AI and democratic representation is useful here because it highlights a shift from representation as institutional mediation to representation as algorithmic mediation, where the appearance of “the public” is constructed through platform curation and system-generated salience [38]. When platforms serve as default arenas for civic attention, visibility regimes shape which issues appear urgent, which groups appear representative, and which claims acquire legitimacy. Algorithmic identity governance thus participates in shaping not only individual standing but the perceived composition and priorities of the public itself [37,38].
The implications extend to the changing geography of participation and identity. As civic interaction increasingly occurs in virtual and immersive environments, identity and participation become even more dependent on platform design. Nosikov’s work on metaverses and public policy signals that future forms of civic life may unfold in spaces where identity is deeply system-mediated and governance questions of inclusion and legitimacy become more acute [39]. This development amplifies the paper’s central concern: as participation migrates into deeper platform infrastructures, belonging risks becoming conditional on compliance with proprietary identity architectures rather than grounded in publicly defined standards [35,39].
Inequality emerges throughout the analysis as a durable feature of AI-mediated citizenship rather than an accidental outcome. By framing disadvantage as patterned susceptibility, the paper shows how identity infrastructures can embed unfairness structurally. Masiero’s Unfair ID is particularly resonant here, demonstrating how identity systems can reproduce exclusion as an operating condition when legitimacy is tied to system legibility [35]. The sociological implication is that those most exposed to misrecognition and visibility constraints are often those with the least capacity to contest outcomes, making inequality self-reinforcing across digital and institutional domains [35,36].
Public trust must therefore be understood not as an attitudinal problem but as a governance outcome. Trust depends on intelligibility and contestability: people must be able to understand why decisions affect them and have credible avenues for challenge. Scholarship on algorithmic harms shows that diffused responsibility in complex systems weakens accountability and remedy [23]. When public institutions rely on platform infrastructures for engagement, legitimacy risks spill over. Declining trust is not only a product of misinformation or polarisation; it can also arise from routine experiences of being unheard, downranked, or treated as risky without explanation or recourse [23,34,35].
These implications suggest that digital citizenship is becoming a central site of social stratification. Selvakumar et al.’s work on the global impacts of AI on digital citizenship reinforces the need to treat citizenship as technologically mediated and uneven across contexts rather than as a uniform condition [40]. The paper’s broader contribution lies here: digital identity governance is not an auxiliary policy concern. It is an emerging institutional layer through which societies organise recognition and the public sphere. If left to private optimisation logics, it risks producing a stratified visibility order in which some groups experience frictionless participation and credibility, while others encounter persistent marginality as the default condition of social life [1,35,36].
The discussion therefore points toward a clear directional conclusion. AI-driven digital identities reveal that contemporary societies are being re-institutionalised through platform infrastructures. This re-institutionalisation reshapes citizenship as a system-mediated lived condition, reshapes representation through algorithmic salience, and reshapes trust through gaps in contestability and responsiveness [23,37,38]. Whether this trajectory deepens inequality and weakens cohesion or supports more inclusive and accountable participation depends on whether societies insist on public standards for identity governance, visibility allocation, and the conditions of meaningful participation wherever platforms function as social and civic infrastructures [34,36,37].

11. Conclusions

This paper has argued that digital identity in AI-driven environments has moved beyond representation and become a mechanism of social organisation. Identity is no longer only something people claim or negotiate with others. It is increasingly produced through infrastructures that classify, rank, and signal credibility, and that decide who becomes visible, who is treated as legitimate, and who is able to participate without friction. Once identity is produced in this way, it functions as a form of governance.
Treating platforms as social infrastructures makes this shift visible. Through ordinary design choices and optimisation processes, platforms allocate recognition and participation at scale. These allocations are rarely neutral. They translate data-driven inferences into real social consequences, shaping reputations, opportunities, and group boundaries. Exclusion and misrecognition do not need to be intentional to be consequential. They emerge through routine system operations that reward some identities, marginalise others, and stabilise inequality through repeated exposure and feedback.
The broader significance of this analysis is that societies are being reorganised around infrastructures that exercise institutional power without institutional accountability. Functions that once depended on public norms and contestable procedures are increasingly performed by private systems whose governing effects are difficult to see and harder to challenge. As a result, belonging becomes conditional, participation becomes uneven, and legitimacy flows through systems that are optimised for platform objectives rather than public values.
The framework developed here advances existing debates in a specific way. It does not simply synthesise scholarship on algorithmic bias, platform governance, and recognition—it proposes an integrative analytical structure in which visibility serves as the pivot connecting these fields. Existing accounts have demonstrated that algorithmic systems can produce bias, that platforms wield power, and that recognition matters for participation. What this paper adds is the argument that these phenomena share a structural logic: identity classification is converted into social consequence through the governing control of visibility, and this conversion currently operates without adequate public accountability. Naming this mechanism precisely is a precondition for governing it.
The contribution of this paper is to reframe digital identity as a societal challenge rather than a technical one. By centring visibility as a form of social power, it shows why governance debates focused narrowly on accuracy, privacy, or user choice are insufficient. What requires attention is how recognition, participation, and credibility are distributed, how misrecognition becomes patterned, and how vulnerability concentrates among those least able to contest system outcomes. The paper’s distinctive move is to show that these three dimensions—recognition, participation, and belonging—are not separate concerns but expressions of a single governing logic through which platforms allocate the social resource of visibility.
The stakes are not abstract. As digital identity infrastructures become embedded in civic life, work, and access to public services, they shape who counts and on what terms. Whether this produces deeper inequality and weaker social cohesion, or more inclusive and accountable forms of participation, depends on whether societies treat identity governance as an institutional responsibility. If visibility and legitimacy continue to be allocated through opaque optimisation, marginality risks becoming a normal condition of social life. If these infrastructures are governed with public standards and meaningful accountability, they can be made compatible with dignity, pluralism, and democratic participation.

Author Contributions

Conceptualization, O.B.A.; Methodology, O.B.A. and O.T.O.; Investigation, O.B.A.; Resources, O.B.A., I.M.-K. and O.M.O.; Data curation, O.B.A., I.M.-K. and O.M.O.; Writing—original draft, O.B.A., I.M.-K., O.T.O. and O.M.O.; Writing—review and editing, O.B.A., I.M.-K., O.T.O. and O.M.O.; Visualisation, O.B.A.; Supervision, O.B.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Whelan, B. Toward networked social identity theory: Reconceptualizing social identity in the digital age. AMS Rev. 2025, 15, 502–518.
2. Törnberg, P. How platforms govern: Social regulation in digital capitalism. Big Data Soc. 2023, 10, 20539517231153808.
3. Masiero, S. Digital identity as platform-mediated surveillance. Big Data Soc. 2023, 10, 20539517221135176.
4. Smith, C.H. Corporatised identities ≠ digital identities: Algorithmic filtering on social media and the commercialisation of presentations of self. Philos. Stud. Ser. 2020, 140, 55–80.
5. Joseph, J. The algorithmic self: How AI is reshaping human identity, introspection, and agency. Front. Psychol. 2025, 16, 1645795.
6. Abramson, W. Identity and Identification in an Information Society. 2022. Available online: https://napier-repository.worktribe.com/ (accessed on 14 December 2025).
7. Beduschi, A. Personal identity. In A Companion to Digital Ethics; Wiley: New York, NY, USA, 2025; pp. 35–46.
8. Jennings, B.; Finkelstein, A. Digital identity and reputation in the context of a bounded social ecosystem. In Lecture Notes in Business Information Processing; Springer: Berlin/Heidelberg, Germany, 2009; Volume 17, pp. 687–697.
9. Deh, D.; Glođović, D. The construction of identity in digital space. Aesthet. Media Stud. 2018, 16, 101–111.
10. Hildebrandt, K.; Couros, A. Digital selves, digital scholars: Theorising academic identity in online spaces. J. Appl. Soc. Theory 2016. Available online: https://socialtheoryapplied.com/journal/jast/article/view/16/10/ (accessed on 13 December 2025).
11. Davison, C. Presentation of Digital Self in Everyday Life: Towards a Theory of Digital Identity; RMIT University: Melbourne, Australia, 2024.
12. Lu, P.; Zhou, L.; Fang, X. Platform governance and sociological participation. J. Chin. Sociol. 2023, 10, 3.
13. Masiero, S.; Bailur, S. Digital identity for development: The quest for justice and a research agenda. Inf. Technol. Dev. 2021, 27, 1–12.
14. Mir, U.; Kar, A.K.; Gupta, M.P. AI-enabled digital identity—Inputs for stakeholders and policymakers. J. Sci. Technol. Policy Manag. 2022, 13, 514–541.
15. Madon, S.; Schoemaker, E. Digital identity as a platform for improving refugee management. Inf. Syst. J. 2021, 31, 929–953.
16. Lian, Y. Digital identity. In Sovereignty Blockchain 2.0; Springer: Berlin/Heidelberg, Germany, 2022; pp. 87–125.
17. Akpinar, N.-J.; Fazelpour, S. Authenticity and exclusion: A simulation study of how social media algorithms shape visibility in epistemic communities. Synthese 2025, 206, 205.
18. Carrasco-Farré, C.; Grimaldi, D.; Torrens, M.; Longobuco, E. Social identity theory and algorithmic bias: Ingroup and outgroup acrophily in recommender systems. J. Manag. Inf. Syst. 2025, 42, 1017–1054.
19. Burcu, S. Do Specific Personalized Recommendations Cause More Harm Than Good to Social Identity? A Moderated Mediation Model. 2025. Available online: https://openaccess.bilgi.edu.tr/items/5909af66-e1c6-4590-a8f3-a52c2b9facb9 (accessed on 17 December 2025).
20. Jawad, M.; Talreja, K.; Bhutto, S.A.; Faizan, K. Investigating how AI personalization algorithms influence self-perception, group identity, and social interactions online. Rev. Appl. Manag. Soc. Sci. 2024, 7, 533–550.
21. Idowu, A. Algorithmic Bias and Its Impact on Student Identity and Academic Pathways. 2025. Available online: https://www.researchgate.net/ (accessed on 17 December 2025).
22. Lopez, P. More than the sum of its parts: Susceptibility to algorithmic disadvantage as a conceptual framework. In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT), Rio de Janeiro, Brazil, 3–6 June 2024; pp. 909–919.
23. Papa, V.; Ioannidis, N. Reviewing the impact of Facebook on civic participation: The mediating role of algorithmic curation and platform affordances. Mass Commun. Soc. 2023, 26, 277–299.
24. Dean, R. Participatory governance in the digital age: From input to oversight. Int. J. Commun. 2023, 17, 3562–3581.
25. Cardullo, P.; Kitchin, R. Provincialising platform citizenship: Citizen participation in and through civic platforms. Digit. Geogr. Soc. 2025, 8, 100123.
26. Paulis, E.; Kies, R.; Östling, A. Public Deliberation in the Digital Age: Platforms, Participation, and Legitimacy; Springer: Berlin/Heidelberg, Germany, 2025.
27. Richards, A. Platform Participation: Investigating the Dynamics of Platform Uses for Citizen Engagement and Digital Democracy by Two Local Councils in the UK; Cardiff University: Cardiff, UK, 2023.
28. Di Tore, P.A.; Schiavo, F.; Di Domenico, M.; Mangione, G.R. Algorithmic citizenship: Fostering democracy, inclusion and explainability in the era of artificial intelligence. In Integrated Science; Springer: Berlin/Heidelberg, Germany, 2024; pp. 265–275.
29. Krishna, S. Digital identity, datafication and social justice: Understanding Aadhaar use among informal workers in South India. Inf. Technol. Dev. 2021, 27, 67–90.
30. Santos, F.P.; Lelkes, Y.; Levin, S.A. Link recommendation algorithms and dynamics of polarization in online social networks. Proc. Natl. Acad. Sci. USA 2021, 118, e2102141118.
31. Png, M.T. At the tensions of South and North: Critical roles of Global South stakeholders in AI governance. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, 21–24 June 2022.
32. Mukabbir, M.N. Predictive algorithms and social inequality: A sociological analysis of bias, governance, and digital surveillance. Br. J. Multidiscip. Stud. 2025, 3, 40–48.
33. Lu, S. Regulating Algorithmic Harms; University of Michigan Law School: Ann Arbor, MI, USA, 2024. Available online: https://repository.law.umich.edu/ (accessed on 19 December 2025).
34. Rodríguez, C.; Núñez, G. Digital identity systems and human rights: A legal framework for trust and security. Law Secur. Digit. Age 2023, 2, 49–63.
35. Rymon, Y. Of the people, by the algorithm: How AI transforms democratic representation. arXiv 2025, arXiv:2508.19036.
36. Esteves, T. Reimagining citizen engagement in the digital age: Platforms, participation, and public trust. Public Adm. Gov. Glob. 2025, 6, 93–115.
37. Nosikov, A. Metaverses and Public Policy: Prospects for Virtual Civic Participation; Springer: Berlin/Heidelberg, Germany, 2026.
38. Selvakumar, P.; Sudheer, P.; Kannan, N. The Global Impact of AI on Digital Citizenship; IGI Global: Hershey, PA, USA, 2025.
39. Masiero, S. Unfair ID; SAGE Publications Limited: Thousand Oaks, CA, USA, 2024.
40. Huanca, J.R.R. Digital platforms of public entities and use by citizens: A systematic review. Localis—J. Local Self-Gov. 2025, 23, 3458–3471.
Figure 1. Conceptual framework of AI-driven platforms, digital identity construction, participation, recognition, and societal outcomes.
Figure 2. Algorithmic identity production and the pathways from classification to social visibility and consequence.