Artificial Truth: Algorithmic Power, Epistemic Authority, and the Crisis of Democratic Knowledge
Abstract
1. Introduction
2. Truth Regimes, Epistemic Authority, and Algorithmic Mediation
2.1. Foucault and Regimes of Veridiction
2.2. Symbolic Capital and Algorithmic Distinction: A Bourdieusian Perspective
2.3. Distributed Agency and Sociotechnical Assemblages: Actor-Network Theory
2.4. Toward an Integrated Framework: Algorithmic Systems as Epistemic Devices
3. Algorithmic Trust and Epistemic Capital in Digital Platforms: A Sociotechnical Transformation of Authority
3.1. Trust, Expertise, and Epistemic Authority: Theoretical Foundations for Digital Analysis
3.2. The Dual Transformation: Institutional Disintermediation and Algorithmic Re-Intermediation
3.3. New Criteria of Legitimation: From Expertise to Engagement, from Distance to Affective Proximity
3.4. Algorithmic Trust as Computational Epistemic Delegation
4. Generative Artificial Intelligence as an Epistemic Infrastructure: Agency, Performativity, and Synthetic Truth
4.1. Generative AI as an Epistemic Actor: Redistribution of Agency and the Performativity of Knowledge
4.2. Synthetic Truth as a Regime of Veridiction: From Correspondence to Statistical Plausibility
4.3. Automation of Epistemic Practices and the Normalization of Machine Authority
4.4. Sociotechnical Conditions of Acceptability: Algorithmic Trust, Automation Bias, and Institutional Legitimation
5. Generative Artificial Intelligence and Synthetic Truth: The Configuration of a New Epistemic Regime
5.1. Introduction: Algorithmic Verification Systems as Epistemic Infrastructures
5.2. Computational Veridiction: From Argumentative Truth to Probabilistic Calculation
5.3. The Political Construction of Ground Truth
6. Artificial Truth, Algorithmic Power, and the Crisis of Democratic Epistemology
6.1. Artificial Truth as Epistemic Governance: A Theoretical Framework
6.2. The Privatization of Epistemic Authority and the Platformization of Knowledge
6.3. The Transformation of Epistemic Citizenship: From Critical Agency to Passive Consumption
6.4. Epistemic Injustice, Data Colonialism, and Geopolitical Asymmetries
6.5. Democratic Implications: Accountability, Depoliticization, and Pluralism
7. Conclusions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Orwell, G. 1984; Arnoldo Mondadori Editore: Milan, Italy, 1989.
- Dick, P.K. How to Build a Universe That Doesn’t Fall Apart Two Days Later; ISOLARII: New York, NY, USA, 2024.
- Aïmeur, E.; Amri, S.; Brassard, G. Fake news, disinformation and misinformation in social media: A review. Soc. Netw. Anal. Min. 2023, 13, 30.
- Cvrtila, L. Truth politics and social media. Politička Misao 2024, 61, 7–30.
- Foucault, M. La Volonté de Savoir; Gallimard: Paris, France, 1976.
- Foucault, M. Le jeu de Michel Foucault. In Dits et Écrits II, 1976–1988; Gallimard: Paris, France, 1977; pp. 298–329.
- Cobbe, J. Algorithmic censorship by social platforms: Power and resistance. Philos. Technol. 2021, 34, 739–766.
- Sahakyan, H.; Gevorgyan, A.; Malkjyan, A. From disciplinary societies to algorithmic control: Rethinking Foucault’s human subject in the digital age. Philosophies 2025, 10, 73.
- Jarke, J.; Prietl, B.; Egbert, S.; Boeva, Y.; Heuer, H. Knowing in algorithmic regimes: An introduction. In Algorithmic Regimes: Methods, Interactions, and Politics; Jarke, J., Prietl, B., Egbert, S., Boeva, Y., Heuer, H., Arnold, M., Eds.; Amsterdam University Press: Amsterdam, The Netherlands, 2024; pp. 7–34.
- Bourdieu, P. Raisons Pratiques: Sur la Théorie de l’Action; Seuil: Paris, France, 1994.
- Ling, C.; Shen, Y.; Shanahan, E.A. Algorithmic meta-capital and the narrative policy framework. Policy Stud. J. 2025, 53, 1108–1122.
- Bourdieu, P. Les Règles de l’Art: Genèse et Structure du Champ Littéraire; Seuil: Paris, France, 1992.
- Latour, B. Reassembling the Social: An Introduction to Actor-Network-Theory; Oxford University Press: Oxford, UK, 2005.
- Bucher, T. If…Then: Algorithmic Power and Politics; Oxford University Press: Oxford, UK, 2018.
- Mackenzie, A. Machine Learners: Archaeology of a Data Practice; MIT Press: Cambridge, MA, USA, 2017.
- Seaver, N. Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data Soc. 2017, 4, 1–12.
- Seaver, N. Captivating algorithms: Recommender systems as traps. J. Mater. Cult. 2019, 24, 421–436.
- Crawford, K. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence; Yale University Press: New Haven, CT, USA, 2021.
- Luhmann, N. Trust and Power; Wiley: Hoboken, NJ, USA, 1979.
- Giddens, A. The Consequences of Modernity; Stanford University Press: Stanford, CA, USA, 1990.
- Carlson, M. Journalistic Authority: Legitimating News in the Digital Era; Columbia University Press: New York, NY, USA, 2017.
- Broersma, M. Journalism as performative discourse. In Journalism and Meaning-Making: Reading the Newspaper; Rupar, V., Ed.; Hampton Press: Cresskill, NJ, USA, 2010; pp. 15–35.
- Kiousis, S. Public trust or mistrust? Perceptions of media credibility in the information age. Mass Commun. Soc. 2001, 4, 381–403.
- van Dalen, A. Journalism, trust, and credibility. In The Handbook of Journalism Studies, 2nd ed.; Wahl-Jorgensen, K., Hanitzsch, T., Eds.; Routledge: London, UK, 2019; pp. 356–371.
- Neuberger, C.; Bartsch, A.; Fröhlich, R.; Hanitzsch, T.; Reinemann, C.; Schindler, J. The digital transformation of knowledge order: A model for the analysis of the epistemic crisis. Ann. Int. Commun. Assoc. 2023, 47, 180–201.
- Sperber, D.; Clément, F.; Heintz, C.; Mascaro, O.; Mercier, H.; Origgi, G.; Wilson, D. Epistemic vigilance. Mind Lang. 2010, 25, 359–393.
- Gillespie, T. Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media; Yale University Press: New Haven, CT, USA, 2018.
- van Dijck, J.; Poell, T.; de Waal, M. The Platform Society: Public Values in a Connective World; Oxford University Press: Oxford, UK, 2018.
- Carlson, M. Fake news as an informational moral panic: The symbolic deviancy of social media during the 2016 US presidential election. Inf. Commun. Soc. 2020, 23, 374–388.
- Waisbord, S. Truth is what happens to news: On journalism, fake news, and post-truth. Journal. Stud. 2018, 19, 1866–1878.
- Vos, T.P.; Thomas, R.J. The discursive construction of journalistic authority in a post-truth age. Journal. Stud. 2018, 19, 2001–2010.
- Scheufele, D.A.; Krause, N.M. Science audiences, misinformation, and fake news. Proc. Natl. Acad. Sci. USA 2019, 116, 7662–7669.
- Ross Arguedas, A.; Robertson, C.; Fletcher, R.; Nielsen, R. Echo Chambers, Filter Bubbles, and Polarisation: A Literature Review; Reuters Institute for the Study of Journalism: Oxford, UK, 2022; Available online: https://ora.ox.ac.uk/objects/uuid:6e357e97-7b16-450a-a827-a92c93729a08 (accessed on 19 February 2026).
- Dubey, H.V. The use–trust loop: Reel culture, semi-news narratives, and credibility in hyperlocal journalism. ShodhKosh J. Vis. Perform. Arts 2023, 4, 4719–4725.
- Garajamirli, N. Algorithmic gatekeeping and democratic communication: Who decides what the public sees? Eur. J. Commun. Media Stud. 2025, 4, 54–67.
- van Dijck, J. The Culture of Connectivity: A Critical History of Social Media; Oxford University Press: Oxford, UK, 2013.
- Gillespie, T. The relevance of algorithms. In Media Technologies: Essays on Communication, Distribution, and Difference; Gillespie, T., Boczkowski, P.J., Foot, K.A., Eds.; MIT Press: Cambridge, MA, USA, 2014; pp. 167–193.
- Abidin, C. Internet Celebrity: Understanding Fame Online; Emerald Publishing: Bingley, UK, 2018.
- Couldry, N.; Hepp, A. The Mediated Construction of Reality; Polity: Cambridge, UK, 2018.
- Lewis, J.D.; Weigert, A. The social dynamics of trust: Theoretical and empirical research, 1985–2012. Soc. Forces 2012, 91, 25–31.
- Miller, B.; Record, I. Justified belief in a digital age: On the epistemic implications of secret internet technologies. Episteme 2013, 10, 117–134.
- Coady, D.; Chase, J. (Eds.) The Routledge Handbook of Applied Epistemology; Routledge: London, UK, 2019.
- Callon, M.; Law, J. After the individual in society: Lessons on collectivity from science, technology and society. Can. J. Sociol. 1997, 22, 165–182.
- Munn, L.; Magee, L.; Arora, V. Truth machines: Synthesizing veracity in AI language models. AI Soc. 2024, 39, 2759–2773.
- Heersmink, R.; de Rooij, B.; Clavel Vázquez, M.J.; Colombo, M. A phenomenology and epistemology of large language models: Transparency, trust, and trustworthiness. Ethics Inf. Technol. 2024, 26, 41.
- Haraway, D. Simians, Cyborgs, and Women: The Reinvention of Nature; Routledge: New York, NY, USA, 1991.
- Hayles, N.K. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics; University of Chicago Press: Chicago, IL, USA, 1999.
- Floridi, L.; Chiriatti, M. GPT-3: Its nature, scope, limits, and consequences. Minds Mach. 2020, 30, 681–694.
- Bender, E.M.; Gebru, T.; McMillan-Major, A.; Shmitchell, S. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21); Association for Computing Machinery: New York, NY, USA, 2021; pp. 610–623.
- Jakesch, M.; Hancock, J.T.; Naaman, M. Human heuristics for AI-generated language are flawed. Proc. Natl. Acad. Sci. USA 2023, 120, e2208839120.
- Markowitz, D.M. From complexity to clarity: How AI enhances perceptions of scientists and the public’s understanding of science. PNAS Nexus 2024, 3, pgae387.
- Wiesner, F.; Koopman, B.; Gupta, S.; Yang, Y. Hallucinate or memorize? The two sides of probabilistic learning in large language models. arXiv 2025, arXiv:2511.08877.
- Verma, R.K. The code of society: Constructing social theory through large language models. Preprints 2025, preprints202505.1792.
- Huang, L.; Yu, W.; Ma, W.; Zhong, W.; Feng, Z.; Wang, H.; Chen, Q.; Peng, W.; Feng, X.; Qin, B.; et al. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. ACM Trans. Inf. Syst. 2025, 43, 42.
- Zhou, J.; Zhang, Y.; Luo, Q.; Parker, A.G.; De Choudhury, M. Synthetic lies: Understanding AI-generated misinformation and evaluating algorithmic and human solutions. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23); Association for Computing Machinery: New York, NY, USA, 2023.
- Shanahan, M. Talking about large language models. arXiv 2023, arXiv:2212.03551.
- Li, A.; Sinnamon, L. Generative AI search engines as arbiters of public knowledge: An audit of bias and authority. Proc. Assoc. Inf. Sci. Technol. 2024, 61, 205–217.
- Napoli, P.M. Social media and the public interest: Governance of news platforms in the realm of individual and algorithmic gatekeepers. Telecommun. Policy 2015, 39, 751–760.
- Gaube, S.; Suresh, H.; Raue, M.; Merritt, A.; Berkowitz, S.J.; Lermer, E.; Coughlin, J.F.; Guttag, J.V.; Colak, E.; Ghassemi, M. Do as AI say: Susceptibility in deployment of clinical decision-aids. NPJ Digit. Med. 2021, 4, 31.
- Vasconcelos, H.; Jörke, M.; Grunde-McLaughlin, M.; Gerstenberg, T.; Bernstein, M.S.; Krishna, R. Explanations can reduce overreliance on AI systems during decision-making. Proc. ACM Hum.-Comput. Interact. 2023, 7, 129.
- Cabitza, F.; Campagner, A.; Balsano, C. Bridging the “last mile” gap between AI implementation and operation: “data awareness” that matters. Ann. Transl. Med. 2020, 8, 501.
- Zuboff, S. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power; PublicAffairs: New York, NY, USA, 2019.
- Star, S.L.; Ruhleder, K. Steps toward an ecology of infrastructure: Design and access for large information spaces. Inf. Syst. Res. 1996, 7, 111–134.
- Kay, J.; Kasirzadeh, A.; Mohamed, S. Epistemic injustice in generative AI. In Proceedings of the 2024 AAAI/ACM Conference on AI, Ethics, and Society (AIES 2024); Association for the Advancement of Artificial Intelligence: Washington, DC, USA, 2024; pp. 684–697.
- Rieder, B. Engines of Order: A Mechanology of Algorithmic Techniques; Amsterdam University Press: Amsterdam, The Netherlands, 2020.
- Shin, D. Automating epistemology: How AI reconfigures truth, authority, and verification. AI Soc. 2025, 41, 1553–1559.
- Domínguez Hernández, A.; Owen, R.; Nielsen, D.S.; McConville, R. Ethical, political and epistemic implications of machine learning (mis)information classification: Insights from an interdisciplinary collaboration between social and data scientists. J. Responsible Innov. 2023, 10, 2222514.
- Thorne, J.; Vlachos, A. Automated fact checking: Task formulations, methods and future directions. arXiv 2018, arXiv:1806.07687.
- van der Velden, M.A.C.G.; Loecherbach, F.; van Atteveldt, W.; Fokkens, A.; Reuver, M.; Welbers, K. Whose truth is it anyway? An experiment on annotation bias in times of factual opinion polarization. Commun. Methods Meas. 2025, 19, 332–349.
- Qureshi, K.A.; Malick, R.A.S.; Sabih, M. Social media and microblogs credibility: Identification, theory driven framework, and recommendation. IEEE Access 2021, 9, 137744–137781.
- Schuster, T.; Schuster, R.; Shah, D.J.; Barzilay, R. The limitations of stylometry for detecting machine-generated fake news. Comput. Linguist. 2020, 46, 499–530.
- Glockner, M.; Staliūnaitė, I.; Thorne, J.; Vallejo, G.; Vlachos, A.; Gurevych, I. AmbiFC: Fact-checking ambiguous claims with evidence. Trans. Assoc. Comput. Linguist. 2024, 12, 1–18.
- D’Ignazio, C.; Klein, L.F. Data Feminism, Open Access ed.; The MIT Press: Cambridge, MA, USA, 2020.
- Rubin, V.L. Content verification for social media: From deception detection to automated fact-checking. In The SAGE Handbook of Social Media Research Methods; Quan-Haase, A., Sloan, L., Eds.; SAGE Publications: London, UK, 2022; pp. 393–414.
- Thibault, C.; Tian, J.-J.; Péloquin-Skulski, G.; Curtis, T.L.; Zhou, J.; Laflamme, F.; Guan, L.Y.; Rabbany, R.; Godbout, J.-F.; Pelrine, K. A guide to misinformation detection data and evaluation. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining; ACM: New York, NY, USA, 2025; pp. 5801–5809.
- Graves, L. Deciding What’s True: The Rise of Political Fact-Checking in American Journalism; Columbia University Press: New York, NY, USA, 2016.
- Amazeen, M.A. Journalistic interventions: The structural factors affecting the global emergence of fact-checking. Journalism 2020, 21, 95–111.
- Gebru, T.; Morgenstern, J.; Vecchione, B.; Wortman Vaughan, J.; Wallach, H.; Daumé, H., III; Crawford, K. Datasheets for datasets. Commun. ACM 2021, 64, 86–92.
- Roberts, S.T. Behind the Screen: Content Moderation in the Shadows of Social Media; Yale University Press: New Haven, CT, USA, 2019.
- Couldry, N.; Mejias, U.A. The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism; Stanford University Press: Stanford, CA, USA, 2019.
- Park, J.; Ellezhuthil, R.; Arunachalam, R.; Feldman, L.; Singh, V. Toward fairness in misinformation detection algorithms. In Proceedings of the Workshop on News Media and Computational Journalism (MEDIATE), 16th International AAAI Conference on Web and Social Media (ICWSM 2022); Association for the Advancement of Artificial Intelligence: Washington, DC, USA, 2022.
- Neumann, T.; De-Arteaga, M.; Fazelpour, S. Justice in misinformation detection systems: An analysis of algorithms, stakeholders, and potential harms. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22); Association for Computing Machinery: New York, NY, USA, 2022; pp. 1504–1515.
- Gandini, A.; Gerosa, A.; Gobbo, B.; Keeling, S.; Leonini, L.; Mosca, L.; Orofino, M.; Reviglio, U.; Splendore, S. The algorithmic public opinion: A literature review. SocArXiv 2022.
- Schneiders, P.; Stark, B. Ensuring news quality in platformized news ecosystems: Shortcomings and recommendations for an epistemic governance. Media Commun. 2025, 13, 10042.
- Habermas, J. Storia e Critica dell’Opinione Pubblica; Laterza: Bari, Italy, 1971.
- Elkin-Koren, N.; Perel, M. Democratic friction in speech governance by AI. In Research Handbook on the Law of Artificial Intelligence; Edward Elgar Publishing: Cheltenham, UK, 2023; pp. 1029–1050.
- Nez, E.; Quintana, I.P. Algoritmos e racionalidade pública: Análise da influência dos sistemas automatizados na deliberação democrática à luz da Teoria Habermasiana. Logeion Filos. Inf. 2024, 11, e7383.
- Mendonça, R.F.; Amaral, E.F.L.; Almada, M.P. Algorithms and politics. In The Oxford Handbook of Deliberative Democracy; Bächtiger, A., Dryzek, J.S., Mansbridge, J., Warren, M.D., Eds.; Oxford University Press: Oxford, UK, 2023; pp. 123–138.
- Tobi, A. Towards an epistemic compass for online content moderation. Philos. Technol. 2024, 37, 791–815.
- Romanishyn, S.; Koval, V.; Petrenko, O. AI-driven disinformation: Policy recommendations for democratic resilience. Front. Artif. Intell. 2025, 8, 1569115.
- Aytac, U. Big Tech, algorithmic power, and democratic control. J. Polit. 2024, 86, 1431–1445.
- Tabarés, R. Plataformización, automatización y aceleración en los medios sociales: Amenazas para la deliberación democrática. Daimon Rev. Int. Filosof. 2024, 93, 127–142.
- Öğuç, Ç. Dead internet hypothesis: AI, censorship, and the decline of human-centered digital discourse. İmgelem 2025, 9, 89–112.
- Hyzen, A.; Loosen, W.; Reimer, J. Epistemic welfare and algorithmic recommender systems: Overcoming the epistemic crisis in the digitalized public sphere. Commun. Theory 2025, 35, qtaf018.
- García, C.S.; Calvo, P. Opinión pública masiva: Colonización algorítmica y sintetificación de la esfera pública. Rev. CIDOB Afers Int. 2024, 138, 73–98.
- Radsch, C.C. On the frontlines of the information wars: How algorithmic gatekeepers and national security impact journalism. In National Security, Journalism, and Law in an Age of Information Warfare; Ambinder, M., Henrichsen, J.R., Rosati, C., Eds.; Oxford University Press: Oxford, UK, 2024; pp. 277–300.
- Monteiro, R.L.; Almeida, V.A.F.; Doneda, D. Legitimidade democrática na governança algorítmica: Uma análise tridimensional. Rev. Dir. Fundam. Democr. 2024, 29, 8–32.
- Coeckelbergh, M. Artificial intelligence, the common good, and the democratic deficit in AI governance. AI Ethics 2024, 4, 1089–1098.
- Büscher, B. Artificial intelligence, platform capitalist power, and the impact of the crisis of truth on ethnography. Annu. Rev. Anthropol. 2025, 54, 253–269.
- Zhakin, S.; Mukan, N. Digital technologies and the reformatting of values in the post-truth era. Bull. Karaganda Univ. Hist. Philos. Ser. 2025, 30, 241–247.
| Theoretical Perspective | Key Concept | Algorithmic Operation | Epistemic Effect | Resulting Transformation |
|---|---|---|---|---|
| Foucault: Regimes of truth and governmentality | Regime of truth as the set of rules that distinguish true/false and attach effects of power to truth | Algorithms institute veridictional logics by determining the visibility, credibility, and authoritativeness of content through heterogeneous dispositifs (machine learning, policy, moderation) | Production of “truth effects”: structuring of public reality through the selection, amplification, and marginalization of narratives | Algorithmic governmentality: real-time behavioral modulation, attention orientation, and optimization of visibility rather than explicit censorship |
| Bourdieu: Symbolic capital and fields | Symbolic capital as credit and authority derived from collective recognition of legitimacy | Algorithms reconfigure the bases of capital accumulation, shifting from institutional credentials to “algorithmic meta-capital” (capacity to optimize for visibility) | Redefinition of epistemic authority: from formal expertise toward popularity, emotional resonance, perceived sincerity, and affective performativity | Restructuring of fields: platforms operate as meta-fields mediating across different fields, imposing algorithmic metrics and commercial logics; structural volatility of algorithmic capital |
| Latour: Actor-Network Theory and distributed agency | Knowledge as an emergent product of heterogeneous networks of associations between human and non-human actors | Algorithms act as agentic actants: they inscribe patterns, classify, predict, and recommend, producing systematic “betrayals” along chains of translation | Black-boxing of algorithmic authority: naturalization of computational outputs through the opacification of internal mechanisms | Extended sociotechnical assemblages: global supply chains including data, architectures, interfaces, infrastructures, and policy; power asymmetries in the inscription of interests into technical configurations |
| Integrated framework (Synthesis) | Algorithms as epistemic devices: sociotechnical assemblages that institutionalize regimes of truth while reordering symbolic capital economies | Simultaneous operation across three registers: (1) non-human epistemic agency; (2) operationalization of power/knowledge regimes; (3) restructuring of symbolic capital | Production of “artificial truth”: truths constituted through technical dispositifs with epistemic logics radically different from those of modern knowledge institutions | Reconfiguration of epistemic infrastructures: criteria of authority shift from objectivity to credibility/sincerity, from institutional distance to affective proximity, and from transparent verifiability to computational opacity |
| Transformation | Theoretical Lens (Section 2) | Mechanism Documented | Mechanism of Artificial Truth | Democratic Implication |
|---|---|---|---|---|
| Trust economies and epistemic capital | Bourdieu: symbolic capital and fields | Displacement of institutional expertise by engagement-based and affective forms of platform-native capital | Algorithmic trust as epistemic delegation: credibility detaches from certification and attaches to visibility metrics | Erosion of institutional authority; audiences structurally exposed to non-certified epistemic actors |
| Generative AI as epistemic actant | Latour: Actor-Network Theory and distributed agency | Emergence of generative AI as non-human actant producing synthetic authority through fluency and procedural confidence | Synthetic truth: plausibility competes with correspondence as the dominant criterion of validity | Normalization of machine authority; automation bias; cognitive delegation to opaque infrastructures |
| Computational veridiction | Foucault: regimes of truth and governmentality | Institutionalization of automated fact-checking as a dispositif translating contested epistemic judgments into probabilistic classifications | Computational veridiction: truth constituted through technical procedures that embed power relations and structural asymmetries | Privatization of epistemic authority; epistemic injustice; exclusion of non-hegemonic epistemologies |
| Artificial Truth (synthesis) | Integrated framework | Convergence of the three transformations into a systemic configuration of algorithmic epistemic governance | Artificial Truth: validation of knowledge claims delegated to privatized infrastructures optimized for engagement rather than democratic deliberation | Epistocratic drift; erosion of democratic friction; structural compression of epistemic citizenship |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Cite as: Palese, R. Artificial Truth: Algorithmic Power, Epistemic Authority, and the Crisis of Democratic Knowledge. Societies 2026, 16, 102. https://doi.org/10.3390/soc16030102

