Algorithm Awareness: Opportunities, Challenges and Impacts on Society

A special issue of Societies (ISSN 2075-4698).

Deadline for manuscript submissions: 30 September 2026 | Viewed by 67224

Special Issue Editor


Dr. Cristiano Felaco
Guest Editor
Department of Social Sciences, University of Naples Federico II, C.so Umberto I, 40, 80138 Napoli, Italy
Interests: critical studies of algorithms; human and technology interactions; social and digital inequalities; digital research methods; methodology of social sciences

Special Issue Information

Dear Colleagues,

Algorithms have become pervasive in contemporary society, influencing various aspects of our lives, from human interactions and online shopping recommendations to employment decisions and healthcare management. While these algorithms offer unprecedented opportunities for efficiency, personalization, and convenience, they also pose significant risks related to privacy, bias, and autonomy. Moreover, algorithm awareness is key to addressing the digital divide, as it equips individuals with the knowledge to navigate and critically evaluate the digital algorithms that shape their online experiences. Without this awareness, marginalized groups may remain at a disadvantage, unable to fully engage with or benefit from digital technologies. Thus, investigating how users understand algorithms and how they interact with them is both a social and technical issue.

Recently, several studies have focused on operationalizing the concept of algorithmic awareness, examining sense-making processes around algorithms and forms of user engagement, defining research methods to investigate interactions with algorithms, and conducting case studies on specific populations within various social and digital contexts.

The purpose of this Special Issue is to contribute to the expanding social science literature that seeks to better understand algorithm awareness: how people make sense of the opacity of algorithms, and how their assumptions may shape their understanding of and daily engagement with algorithmic systems, with the risks and opportunities this entails. It also aims to provide insights into methodological advancements in algorithm awareness research, including new opportunities and challenges as well as innovative approaches, methods, and techniques.

Potential topics include, but are not limited to, the following:

  • People’s perceptions and understandings of algorithms.
  • Human–algorithm interactions.
  • Digital divide and algorithm awareness.
  • Algorithmic literacy.
  • Factors that contribute to users’ algorithm awareness.
  • Case studies and empirical research on algorithm awareness in single or multiple contexts.
  • Innovative methods for studying algorithm awareness.
  • Methodological advancements in algorithm awareness.

Contributions must fall within one of the journal's three paper categories (article, conceptual paper, or review) and address the topic of the Special Issue.

Dr. Cristiano Felaco
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and conceptual papers are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a double-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Societies is an international, peer-reviewed, open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • algorithm awareness
  • algorithmic literacy
  • algorithms
  • artificial intelligence
  • agency
  • digital divide
  • user–algorithm interaction method

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (7 papers)

Research

43 pages, 3956 KB  
Article
Meta-Identity and Algorithmic Mediation on Digital Platforms: A Comparative Analysis of AI–Human Content Categorization
by Allan Herison Ferreira, Ana Carolina Trevisan, Carla Maria Baptista, Rubén Ramos-Antón, Álvaro Augusto Comin, Henrique F. Carvalho, Silvestre Vendrell and Valéria Oliveira Sá
Societies 2026, 16(4), 132; https://doi.org/10.3390/soc16040132 - 20 Apr 2026
Viewed by 954
Abstract
This article examines how algorithmic classification systems participate in the production of meta-identities, understood as operational classificatory constructs that mediate the visibility, circulation, and interpretation of digital content and its authors. The study employs a mixed-methods design combining controlled analytical simulation with qualitative interpretive analysis, systematic thematic coding, and comparative statistical procedures. Empirical data are derived from the analysis of 150 audiovisual works produced in formative workshops and interpreted by four types of agents: authors, peers, specialized human analysts, and two Large Language Model-based AI systems (ChatGPT and Gemini). Interpretations were analyzed across micro, meso, and macro levels, using a consolidated system of thematic categories with hierarchical weighting and normalization procedures to ensure inter-agent comparability. The results demonstrate a systematic and structural divergence between human and algorithmic classifications. While human agents preserve semantic plurality and contextual anchoring, AI systems tend to reorganize thematic hierarchies through semantic aggregation and stabilization, thereby privileging broad, reusable categories. This process produces recurring, opaque classificatory patterns that serve as infrastructural references for subsequent algorithmic decisions. The article contributes methodologically by offering a replicable framework for comparing human and algorithmic regimes of meaning production in digital environments. Full article
(This article belongs to the Special Issue Algorithm Awareness: Opportunities, Challenges and Impacts on Society)

26 pages, 2728 KB  
Article
Identification of Road Safety Behavior Patterns in Colombia Using Explainable Artificial Intelligence
by Hugo Ordoñez, Cristian Ordoñez, Carlos Cordoba and Luis Revelo
Societies 2026, 16(4), 104; https://doi.org/10.3390/soc16040104 - 24 Mar 2026
Viewed by 503
Abstract
This study identifies and explains road safety behavior patterns in Colombia using explainable artificial intelligence (XAI). Based on 9232 records and 38 variables from the Territorial Survey of Road Safety Behavior, the CRISP-DM methodology was applied, including data cleaning, normalization, encoding, and feature selection. XGBoost, Random Forest, Bagging, and AdaBoost models were evaluated, incorporating three domain-specific indices: Distraction Index (DI), Risky Road Interaction Index (RRI), and Normative Compliance Index (NCI). AdaBoost achieved the best overall balance (Precision = 0.78; Recall = 0.75; F1-score = 0.77), simultaneously reducing false positives and false negatives. SHAP analysis revealed that environmental and infrastructure factors (lighting, traffic signals, intersections, congestion, perceived crime) explain more variance than self-reported behaviors (mobile phone use, alcohol consumption, speeding). The complementary indices indicated above-average distraction levels, high exposure to risky interactions, and low compliance in specific segments. These findings enable the prioritization of targeted interventions (improvements in lighting and crossings, focused enforcement, and educational campaigns) and support operation with thresholds adjusted to error costs, providing traceable decision support for public road safety policies. Overall, the proposed approach integrates prediction and explainability to enable actionable decisions and continuous monitoring aimed at reducing traffic accidents. Full article
(This article belongs to the Special Issue Algorithm Awareness: Opportunities, Challenges and Impacts on Society)

30 pages, 625 KB  
Article
AI in Everyday Life: How Algorithmic Systems Shape Social Relations, Opportunity, and Public Trust
by Oluwaseyi B. Ayeni, Isabella Musinguzi-Karamukyo, Oluwakemi T. Onibalusi and Oluwajuwon M. Omigbodun
Societies 2026, 16(2), 59; https://doi.org/10.3390/soc16020059 - 12 Feb 2026
Viewed by 1523
Abstract
Artificial intelligence is often framed as a neutral technical tool that enhances efficiency and consistency in institutional decision-making. This article challenges that framing by showing that automated systems now operate as social and institutional actors that reshape recognition, opportunity, and public trust in everyday life. Focusing on employment screening, welfare administration, and digital platforms, the study examines how algorithmic systems mediate social relations and reorganise how individuals are evaluated, classified, and legitimised. Drawing on regulatory and policy materials, platform governance documents, technical disclosures, and composite vignettes synthesised from publicly documented evidence, the article analyses how automated judgement acquires institutional authority. It advances three core contributions. First, it develops a sociological framework explaining how delegated authority, automated classification, and procedural opacity transform institutional power and individual standing. Second, it demonstrates a dual logic of inequality: automated systems both reproduce historical disadvantage through patterned data and generate new forms of exclusion through data abstraction and optimisation practices that detach individuals from familiar legal, social, and moral categories. Third, it shows that automation destabilises procedural justice by eroding relational recognition, producing trust deficits that cannot be resolved through technical fairness or explainability alone. The findings reveal that automated systems do not merely support institutional decisions; they redefine how institutions perceive individuals and how individuals interpret institutional legitimacy. The article concludes by outlining governance reforms aimed at restoring intelligibility, accountability, inclusion, and trust in an era where automated judgement increasingly structures social opportunity and public authority. Full article
(This article belongs to the Special Issue Algorithm Awareness: Opportunities, Challenges and Impacts on Society)

21 pages, 262 KB  
Article
Encountering Generative AI: Narrative Self-Formation and Technologies of the Self Among Young Adults
by Dana Kvietkute and Ingunn Johanne Ness
Societies 2026, 16(1), 26; https://doi.org/10.3390/soc16010026 - 13 Jan 2026
Viewed by 2300
Abstract
This paper examines how young adults integrate generative artificial intelligence chatbots into everyday life and the implications of these engagements for the constitution of selfhood. Whilst existing research on AI-mediated subjectivity has predominantly employed identity frameworks centered on social positioning and role enactment, this study foregrounds selfhood—understood as the organization of subjective experience through narrative coherence, interpretive authority, and practices of self-governance. Drawing upon Paul Ricœur’s theory of narrative self and Michel Foucault’s concept of technologies of the self, the analysis proceeds through in-depth qualitative interviews with sixteen young adults in Norway to investigate how algorithmic systems participate in autobiographical reasoning and self-formative practices. The findings reveal four dialectical tensions structuring participants’ engagements with ChatGPT: between instrumental efficiency and existential unease; between algorithmic scaffolding and relational displacement; between narrative depth and epistemic superficiality; and between agency and deliberative outsourcing. The analysis demonstrates that AI-mediated practices extend beyond instrumental utility to reconfigure fundamental dimensions of subjectivity, raising questions about interpretive authority, narrative authorship, and the conditions under which selfhood is negotiated in algorithmic environments. These findings contribute to debates on digital subjectivity, algorithmic governance, and the societal implications of AI systems that increasingly function as interlocutors in meaning-making processes. Full article
(This article belongs to the Special Issue Algorithm Awareness: Opportunities, Challenges and Impacts on Society)
18 pages, 249 KB  
Article
Algorithms in Scientific Work: A Qualitative Study of University Research Processes Between Engagement and Critical Reflection
by Maria Carmela Catone
Societies 2025, 15(12), 349; https://doi.org/10.3390/soc15120349 - 12 Dec 2025
Viewed by 958
Abstract
This study examines the role of algorithms—particularly artificial intelligence—in scientific research processes and how automation intersects with expert knowledge and the autonomy of the researcher. Drawing on 25 qualitative interviews with Italian university scholars in the social sciences and humanities, the research explores how academics either incorporate or resist AI at various stages in their scientific work, the strategies they employ to manage the relationship between professional expertise and algorithmic systems and the forms of trust, caution or scepticism that characterise these interactions. The findings reveal diverse patterns of use, non-use and critical engagement, ranging from instrumental and efficiency-oriented adoption to dialogical experimentation and from identity-based resistance to systemic reflexivity regarding the institutional implications of AI. The study also highlights the need to thoroughly examine the characteristics of disciplinary scientific cultures, while highlighting the importance of promoting algorithmic awareness to support scientific rigour in the digital age. Full article
(This article belongs to the Special Issue Algorithm Awareness: Opportunities, Challenges and Impacts on Society)
33 pages, 766 KB  
Article
Algorithmic Burnout and Digital Well-Being: Modelling Young Adults’ Resistance to Personalized Digital Persuasion
by Stefanos Balaskas, Maria Konstantakopoulou, Ioanna Yfantidou and Kyriakos Komis
Societies 2025, 15(8), 232; https://doi.org/10.3390/soc15080232 - 20 Aug 2025
Cited by 8 | Viewed by 9072
Abstract
In an era when AI systems curate increasingly fine-grained aspects of everyday media use, understanding algorithmic fatigue and resistance is essential for safeguarding user agency. Within the horizon of a more algorithmic and hyper-personalized advertising environment, knowing how people resist algorithmic advertising is of immediate importance. This research formulates and examines a structural resistance model for algorithmic advertising, combining psychological and cognitive predictors such as perceived ad fatigue (PAF), digital well-being (DWB), advertising literacy (ADL), and perceived relevance (PR). Based on a cross-sectional survey of 637 participants, the research employs Partial Least Squares Structural Equation Modeling (PLS-SEM) and mediation and multi-group analysis to uncover overall processes and group-specific resistance profiles. Findings show that DWB, ADL, and PR are strong positive predictors of resistance to persuasion, while PAF has no direct effect. PAF has significant indirect influences through both PR and ADL, with full mediation providing support for the cognitive filter function of resistance. DWB demonstrates partial mediation, indicating that it has influence both directly and through enhanced literacy and relevance attribution. Multi-group analysis also indicates that there are notable differences in terms of age, gender, education, social media consumption, ad skipping, and occurrence of digital burnout. Interestingly, younger users and those who have higher digital fatigue are more sensitive to cognitive mediators, whereas gender and education level play a moderating role in the effect of well-being and literacy on resistance pathways. The research provides theory-informed, scalable theory to enhance the knowledge of online resistance. 
Practical implications are outlined for policymakers, marketers, educators, and developers of digital platforms based on the extent to which psychological resilience and media literacy underpin user agency. In charting resistance contours, this article seeks to maintain the voice of the user in a world growing increasingly algorithmic. Full article
(This article belongs to the Special Issue Algorithm Awareness: Opportunities, Challenges and Impacts on Society)

Other

17 pages, 1610 KB  
Systematic Review
Trap of Social Media Algorithms: A Systematic Review of Research on Filter Bubbles, Echo Chambers, and Their Impact on Youth
by Mukhtar Ahmmad, Khurram Shahzad, Abid Iqbal and Mujahid Latif
Societies 2025, 15(11), 301; https://doi.org/10.3390/soc15110301 - 30 Oct 2025
Cited by 9 | Viewed by 49143
Abstract
This systematic review synthesizes a decade of peer-reviewed research (2015–2025) examining the interplay of filter bubbles, echo chambers, and algorithmic bias in shaping youth engagement within social media. A total of 30 studies were analyzed, using the PRISMA 2020 framework, encompassing computational audits, simulation modeling, surveys, ethnographic accounts, and mixed-methods designs across diverse platforms, including Facebook, YouTube, Twitter/X, Instagram, TikTok, and Weibo. Results reveal three consistent patterns: (i) algorithmic systems structurally amplify ideological homogeneity, reinforcing selective exposure and limiting viewpoint diversity; (ii) youth demonstrate partial awareness and adaptive strategies to navigate algorithmic feeds, though their agency is constrained by opaque recommender systems and uneven digital literacy; and (iii) echo chambers not only foster ideological polarization but also serve as spaces for identity reinforcement and cultural belonging. Despite these insights, the evidence base suffers from geographic bias toward Western contexts, limited longitudinal research, methodological fragmentation, and conceptual ambiguity in key definitions. This review highlights the need for integrative, cross-cultural, and youth-centered approaches that bridge empirical evidence with lived experiences. Full article
(This article belongs to the Special Issue Algorithm Awareness: Opportunities, Challenges and Impacts on Society)
