Seeing the Message but Not the Machine: Digital Skepticism and AI Discernment in Online Information Environments
Abstract
1. Introduction
2. Literature Review
2.1. Algorithmic and AI-Mediated Information Environments
2.2. Misinformation, Uncertainty, and Skepticism in Digital Information Environments
2.3. AI Awareness, Algorithmic Literacy, and Discernment
2.4. Positioning the Present Study
3. Methodology
3.1. Research Design
3.2. Data Source and Context
3.3. Data Collection Procedure
Sampling Strategy and Thread Selection
3.4. Data Preparation and Units of Analysis
3.5. Analytical Framework and Coding Procedure
3.5.1. Coding Procedure and Reliability Assessment
3.5.2. Temporal Context and Analytical Scope
3.6. Analytical Procedures
- Stage 1: Comment-Level Classification
- Stage 2: Thread-Level Aggregation
- Stage 3: Contextual Comparison
4. Results
4.1. Distribution of Digital Skepticism and AI Discernment
4.2. Digital Skepticism and AI Discernment by Score-Based Visibility Context
4.3. Digital Skepticism and AI Discernment Across Discourse Contexts
4.4. Consistency Across Levels of Analysis
4.5. Temporal Comparison of Digital Skepticism and AI Discernment
4.6. Low Prevalence of Discursive Skepticism and AI Attribution
5. Discussion
5.1. Interpreting the Observed Distributional Difference Between Digital Skepticism and AI Discernment
5.2. Visibility and Contextual Conditions of Discursive AI Attribution
5.3. Implications for Information Quality and Digital Citizenship
5.4. Contributions to Information Science
6. Conclusions
7. Limitations and Future Research
7.1. Limitations
7.2. Computational and Experimental Research Directions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A. Codebook and Rule-Based Indicators for Digital Skepticism and AI-Related Discernment
Appendix A.1. General Skepticism
- Conceptual Definition
- Operational Criteria
  - Included:
    - Explicit requests for sources, evidence, or verification
    - Direct questioning of factual accuracy or plausibility
    - Statements expressing disbelief or uncertainty about whether the content is true
  - Excluded:
    - Accepts the content without questioning credibility
    - Expresses emotional reaction or opinion without evaluative doubt
    - Criticizes actors, events, or topics without addressing truthfulness
- Illustrative Examples (Anonymized)
  - Coded TRUE:
    - “Is there any reliable source confirming this?”
    - “I’m not convinced this actually happened.”
    - “Do we have proof for these claims?”
  - Coded FALSE:
    - “This is shocking.”
    - “I don’t like what they’re doing.”
    - “That politician is terrible.”
Appendix A.2. Structural Suspicion
- Conceptual Definition
- Operational Criteria
  - Included:
    - Claims that content is staged, scripted, rehearsed, or artificially arranged
    - Statements that content “doesn’t feel real,” “looks fake,” or “seems constructed”
    - Allegations of misleading or fabricated format without reference to AI
  - Excluded:
    - Only questions factual accuracy (L1 only)
    - Explicitly attributes content to AI or synthetic generation (L3)
    - Criticizes bias or ideology without alleging fabrication or staging
- Illustrative Examples (Anonymized)
  - Coded TRUE:
    - “This whole thing looks staged.”
    - “It feels scripted, like it was set up.”
    - “The video doesn’t seem authentic at all.”
  - Coded FALSE:
    - “Can someone verify this?” (L1 only)
    - “This is clearly AI-generated.” (L3)
    - “The media is biased.”
Appendix A.3. Explicit AI-Related Discernment
- Conceptual Definition
- Operational Criteria
  - Included:
    - Explicit mention of “AI,” “artificial intelligence,” “deepfake,” or “synthetic”
    - Direct reference to AI-generated voice, video, image, or narration
    - Clear attribution of content to automated or algorithmic generation
  - Excluded:
    - Expresses doubt or suspicion without mentioning AI
    - Uses metaphorical or colloquial language unrelated to AI mediation
    - Refers to technology in general without attributing content creation to AI
- Illustrative Examples (Anonymized)
  - Coded TRUE:
    - “This sounds like an AI-generated voice.”
    - “Another deepfake spreading online.”
    - “Pretty sure this video was made by AI.”
  - Coded FALSE:
    - “This looks fake.” (L2)
    - “Technology is scary these days.”
    - “I don’t trust this video.” (L1 or L2)
Appendix A.4. Coding Rules and Implementation Notes
- Each indicator (L1, L2, L3) was coded as an independent binary variable (TRUE/FALSE).
- Indicators are non-mutually exclusive; a single comment may satisfy multiple criteria.
- Coding prioritized explicit textual cues rather than inferred intent, tone, or assumed author knowledge.
- The identical rule set was implemented in a deterministic Python-based classifier and applied consistently in both automated coding and human reliability validation.
- Illustrative examples above are complemented by the complete lexical pattern specification reported in Appendix A.5.
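The coding rules above can be sketched as a small Python function. This is a minimal illustration, not the study's actual classifier: the cue lists are abbreviated placeholders rather than the full lexicon specified in Appendix A.5, and the function name is invented for the example. It shows the two properties the rules require: the three indicators are coded as independent binaries that may co-occur, except that explicit AI attribution routes a comment to L3 rather than L2 (the Appendix A.2 exclusion constraint).

```python
# Illustrative sketch of the coding scheme: three binary indicators per
# comment. Cue lists are abbreviated placeholders, not the full
# Appendix A.5 lexicon.

L1_CUES = ("is this true", "any source", "can anyone verify", "i don't buy this")
L2_CUES = ("staged", "scripted", "set up", "doesn't feel real")
L3_CUES = ("ai-generated", "deepfake", "made by ai", "synthetic voice")

def code_comment(text: str) -> dict[str, bool]:
    """Return independent TRUE/FALSE codes for L1, L2, and L3."""
    t = text.lower()
    l3 = any(cue in t for cue in L3_CUES)
    # Appendix A.2 exclusion: explicit AI attribution is coded L3, not L2
    l2 = (not l3) and any(cue in t for cue in L2_CUES)
    l1 = any(cue in t for cue in L1_CUES)
    return {"L1": l1, "L2": l2, "L3": l3}

# Non-mutual exclusivity: a single comment may satisfy multiple criteria.
codes = code_comment("Is this true? It feels scripted, like it was set up.")
# codes["L1"] and codes["L2"] are both True; codes["L3"] is False
```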
Appendix A.5. Full Rule-Based Lexical Patterns Implemented in the Classifier
| Indicator | Inclusion Patterns | Exclusion Constraints | Notes |
|---|---|---|---|
| L1 General skepticism | “is this true”, “any source”, “any evidence”, “proof?”, “I don’t buy this”, “I don’t believe this”, “sounds fake”, “no evidence”, “doesn’t add up”, “hard to believe”, “can anyone verify” | Exclude metaphorical usage; exclude emotional reactions without credibility questioning; exclude criticism of actors/events without addressing accuracy | Targets explicit doubt or requests for verification |
| L2 Structural suspicion | “staged”, “scripted”, “set up”, “manufactured”, “doesn’t feel real”, “looks fake”, “seems constructed”, “coordinated”, “propaganda”, “manipulated” | Exclude explicit AI references (L3); exclude generic bias/ideology claims without fabrication | Targets suspicion toward content construction or presentation |
| L3 Explicit AI-related discernment | “AI-generated”, “generated by AI”, “written by AI”, “made by AI”, “deepfake”, “synthetic”, “synthetic voice”, “AI voiceover”, “AI narration”, “artificial intelligence generated” | Exclude metaphorical/colloquial uses; exclude general technology references without attribution | Captures explicit attribution to AI or automated generation |
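Once comments carry the binary codes produced by a rule set like the one tabulated above, the prevalence figures reported in the Results tables reduce to counting flagged comments. A minimal sketch (the function name and input shape are assumptions for illustration, not the study's actual pipeline):

```python
# Corpus-level prevalence from per-comment binary codes: for each
# indicator, report the count of flagged comments and the percentage of
# all comments, mirroring the "n (%)" columns in the Results tables.

def prevalence(codes: list[dict[str, bool]]) -> dict[str, tuple[int, float]]:
    """Return (count, percentage) of flagged comments per indicator."""
    n = len(codes)
    out = {}
    for label in ("L1", "L2", "L3"):
        flagged = sum(c[label] for c in codes)  # True counts as 1
        out[label] = (flagged, round(100 * flagged / n, 2))
    return out

# e.g. 48 L1-flagged comments among 3040 yields (48, 1.58), matching the
# higher-score-thread row of the visibility table
```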
References
- Ahmmad, M.; Shahzad, K.; Iqbal, A.; Latif, M. Trap of Social Media Algorithms: A Systematic Review of Research on Filter Bubbles, Echo Chambers, and Their Impact on Youth. Societies 2025, 15, 301.
- Rieder, B.; Matamoros-Fernández, A.; Coromina, Ò. From ranking algorithms to ‘ranking cultures’: Investigating the modulation of visibility in YouTube search results. Convergence 2018, 24, 50–68.
- Vraga, E.K.; Bode, L. Defining Misinformation and Understanding its Bounded Nature: Using Expertise and Evidence for Describing Misinformation. Political Commun. 2020, 37, 136–144.
- Metzger, M.J.; Flanagin, A.J. Credibility and trust of information in online environments: The use of cognitive heuristics. J. Pragmat. 2013, 59, 210–220.
- Wiggins, W.F.; Tejani, A.S. On the Opportunities and Risks of Foundation Models for Natural Language Processing in Radiology. Radiol. Artif. Intell. 2022, 4, e220119.
- Floridi, L. Content Studies: A New Academic Discipline for Analysing, Evaluating, and Designing Content in a Digital and AI-Driven Age. Philos. Technol. 2025, 38, 41.
- Ferreira, G.B. The Aesthetics of Algorithmic Disinformation: Dewey, Critical Theory, and the Crisis of Public Experience. Journal. Media 2025, 6, 168.
- Skandali, D. Social Media Ethics: Balancing Transparency, AI Marketing, and Misinformation. Encyclopedia 2025, 5, 86.
- Kozyreva, A.; Smillie, L.; Lewandowsky, S. Incorporating Psychological Science into Policy Making: The Case of Misinformation; Hogrefe Publishing: Göttingen, Germany, 2023; Volume 28, pp. 206–224.
- Zobel, G. Review of “Algorithms of oppression: How search engines reinforce racism,” by Noble, S.U. (2018). New York, New York: NYU Press. Commun. Des. Q. Rev. 2019, 7, 30–31.
- Gillespie, T. Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media; Yale University Press: London, UK, 2018.
- Hastuti, H.; Maulana, H.F.; Lawelai, H.; Suherman, A. Algorithmic influence and media legitimacy: A systematic review of social media’s impact on news production. Front. Commun. 2025, 10, 1667471.
- Messing, S.; Westwood, S.J. Selective Exposure in the Age of Social Media: Endorsements Trump Partisan Source Affiliation When Selecting News Online. Commun. Res. 2014, 41, 1042–1063.
- Van Dijck, J.; Poell, T.; De Waal, M. The Platform Society: Public Values in a Connective World; Oxford University Press: Oxford, UK, 2018.
- Pariser, E. The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think; Penguin: London, UK, 2011.
- Southwell, B.G.; Thorson, E.A.; Sheble, L. Introduction: Misinformation among Mass Audiences as a Focus for Inquiry. In Misinformation and Mass Audiences; Southwell, B.G., Thorson, E.A., Sheble, L., Eds.; University of Texas Press: Austin, TX, USA, 2018; pp. 1–12.
- Nyhan, B.; Reifler, J. When Corrections Fail: The Persistence of Political Misperceptions. Political Behav. 2010, 32, 303–330.
- Paquin, R.S.; Boudewyns, V.; Betts, K.R.; Johnson, M.; O’Donoghue, A.C.; Southwell, B.G. An Empirical Procedure to Evaluate Misinformation Rejection and Deception in Mediated Communication Contexts. Commun. Theory 2022, 32, 25–47.
- Lewandowsky, S.; Ecker, U.K.H.; Seifert, C.M.; Schwarz, N.; Cook, J. Misinformation and Its Correction: Continued Influence and Successful Debiasing. Psychol. Sci. Public Interest 2012, 13, 106–131.
- Ecker, U.K.H.; Lewandowsky, S.; Cook, J.; Schmid, P.; Fazio, L.K.; Brashier, N.; Kendeou, P.; Vraga, E.K.; Amazeen, M.A. The psychological drivers of misinformation belief and its resistance to correction. Nat. Rev. Psychol. 2022, 1, 13–29.
- Pennycook, G.; Rand, D.G. Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition 2019, 188, 39–50.
- Wardle, C.; Derakhshan, H. Information Disorder: Toward an Interdisciplinary Framework for Research and Policymaking; Council of Europe: Strasbourg, France, 2017; Volume 27.
- Bucher, T. If… Then: Algorithmic Power and Politics; Oxford University Press: Oxford, UK, 2018.
- Dogruel, L.; Masur, P.; Joeckel, S. Development and Validation of an Algorithm Literacy Scale for Internet Users. Commun. Methods Meas. 2022, 16, 115–133.
- Long, D.; Magerko, B. What is AI Literacy? Competencies and Design Considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–16.
- Cox, A. Algorithmic Literacy, AI Literacy and Responsible Generative AI Literacy. J. Web Librariansh. 2024, 18, 93–110.
- Eder, M.; Sehl, A. Being aware of algorithmic personalization? Insights from three European Countries. Inf. Commun. Soc. 2025, 1–18.
- Ochmann, J.; Michels, L.; Tiefenbeck, V.; Maier, C.; Laumer, S. Perceived algorithmic fairness: An empirical study of transparency and anthropomorphism in algorithmic recruiting. Inf. Syst. J. 2024, 34, 384–414.
- Shin, D. The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. Int. J. Hum.-Comput. Stud. 2021, 146, 102551.
- Huang, Y.; Liu, L. The impact of algorithm awareness on the acceptance of personalized social media content recommendation based on the technology acceptance model. Acta Psychol. 2025, 259, 105383.
- Vosoughi, S.; Roy, D.; Aral, S. The spread of true and false news online. Science 2018, 359, 1146–1151.
- Pennycook, G.; Bear, A.; Collins, E.T.; Rand, D.G. The Implied Truth Effect: Attaching Warnings to a Subset of Fake News Headlines Increases Perceived Accuracy of Headlines Without Warnings. Manag. Sci. 2020, 66, 4944–4957.
- Cho, Y.Y.; Woo, H. Heuristic and Systematic Processing on Social Media: Pathways from Literacy to Fact-Checking Behavior. Journal. Media 2025, 6, 198.
- Gillespie, T.; Boczkowski, P.J.; Foot, K.A. Media Technologies: Essays on Communication, Materiality, and Society; MIT Press: Cambridge, MA, USA, 2014.
- Gehrmann, S.; Strobelt, H.; Rush, A. GLTR: Statistical Detection and Visualization of Generated Text. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, Florence, Italy, 28 July–2 August 2019; Association for Computational Linguistics: Stroudsburg, PA, USA, 2019; pp. 111–116.
- Kirchenbauer, J.; Geiping, J.; Wen, Y.; Katz, J.; Miers, I.; Goldstein, T. A Watermark for Large Language Models. In Proceedings of the 40th International Conference on Machine Learning, Honolulu, HI, USA, 23–29 July 2023; pp. 17061–17084.
- Mitchell, E.; Lee, Y.; Khazatsky, A.; Manning, C.D.; Finn, C. DetectGPT: Zero-shot machine-generated text detection using probability curvature. In Proceedings of the 40th International Conference on Machine Learning, Honolulu, HI, USA, 23–29 July 2023; p. 1038.
- Zellers, R.; Holtzman, A.; Rashkin, H.; Bisk, Y.; Farhadi, A.; Roesner, F.; Choi, Y. Defending Against Neural Fake News. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019.
- Cohen, J. A Coefficient of Agreement for Nominal Scales. Educ. Psychol. Meas. 1960, 20, 37–46.
- Krippendorff, K. Content Analysis: An Introduction to Its Methodology, 4th ed.; Sage Publications: Thousand Oaks, CA, USA, 2018.
- Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of NAACL-HLT 2019, Minneapolis, MN, USA, 2–7 June 2019; Association for Computational Linguistics: Stroudsburg, PA, USA, 2019; pp. 4171–4186.
- Glenski, M.; Pennycuff, C.; Weninger, T. Consumers and Curators: Browsing and Voting Patterns on Reddit. IEEE Trans. Comput. Soc. Syst. 2017, 4, 196–206.
- Eslami, M.; Rickman, A.; Vaccaro, K.; Aleyasen, A.; Vuong, A.; Karahalios, K.; Hamilton, K.; Sandvig, C. “I always assumed that I wasn’t really that close to [her]”: Reasoning about Invisible Algorithms in News Feeds. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Republic of Korea, 18–23 April 2015; pp. 153–162.


| Item | Description |
|---|---|
| Data source | Reddit (public subreddits) |
| Data access method | Official Reddit API (read-only) |
| Discourse type | News-related public online discussions |
| Platform characteristics | Topic-based communities, threaded comments, engagement-based visibility |
| Number of discussion threads | 305 |
| Number of comments | 6065 |
| Primary unit of analysis | Comment-level textual content |
| Secondary unit of analysis | Thread-level aggregation |
| Engagement indicators | Thread-level mean comment score (visibility/valuation proxy under fixed-cap sampling) |
| Contextual identifiers | Subreddit affiliation |
| Language | English |
| Study design | Exploratory, observational |
| Time frame | August 2010–August 2025 |
| Category | Code | Operational Definition | Example Indicators * |
|---|---|---|---|
| General Skepticism | L1 | Expressions of doubt about credibility, accuracy, or authenticity without reference to AI or content-generation mechanisms | Requests for sources; questioning plausibility; expressions of disbelief |
| Structural Suspicion | L2 | Suspicion toward structure, presentation, or framing without explicit attribution to AI | References to scripted, staged, misleading, or propagandistic formats |
| Explicit AI Discernment | L3 | Direct attribution of content to AI mediation or generation | Mentions of AI-generated text, synthetic narration, AI voiceovers, deepfakes |
| Label | TP | FP | FN | TN | Precision | Recall | F1-Score | Support |
|---|---|---|---|---|---|---|---|---|
| L1 | 130 | 6 | 14 | 150 | 0.956 | 0.903 | 0.929 | 144 |
| L2 | 16 | 0 | 7 | 277 | 1.000 | 0.696 | 0.821 | 23 |
| L3 | 23 | 0 | 4 | 273 | 1.000 | 0.852 | 0.920 | 27 |
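The reported metrics follow the standard definitions of precision, recall, and F1 from the confusion counts. As a check, any row above can be reproduced directly (the function name here is illustrative):

```python
# Precision, recall, and F1 from confusion counts, rounded to three
# decimals to match the table above. TN is not needed for these metrics.

def prf(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    precision = tp / (tp + fp)          # share of flagged items that were correct
    recall = tp / (tp + fn)             # share of true items that were flagged
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return (round(precision, 3), round(recall, 3), round(f1, 3))

# L1 row: TP=130, FP=6, FN=14
# prf(130, 6, 14) → (0.956, 0.903, 0.929)
```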
| Visibility Context | Threads | Comments (N) | L1 n (%) | L2 n (%) | L3 n (%) |
|---|---|---|---|---|---|
| Higher-score threads | 152 | 3040 | 48 (1.58) | 6 (0.20) | 10 (0.33) |
| Lower-score threads | 153 | 3025 | 88 (2.91) | 10 (0.33) | 13 (0.43) |
| Subreddit | Threads | Comments (N) | L1 n (%) | L2 n (%) | L3 n (%) |
|---|---|---|---|---|---|
| Futurology | 62 | 1227 | 41 (3.33) | 0 (0.00) | 16 (1.30) |
| Technology | 45 | 896 | 10 (1.11) | 1 (0.11) | 5 (0.56) |
| Politics | 101 | 2009 | 47 (2.33) | 3 (0.15) | 2 (0.10) |
| Worldnews | 40 | 795 | 11 (1.38) | 9 (1.13) | 0 (0.00) |
| News | 57 | 1138 | 27 (2.37) | 3 (0.26) | 0 (0.00) |
| Category | Early Period (2019–2021) % | Recent Period (2023–2025) % | Absolute Difference (%) (Recent–Early) |
|---|---|---|---|
| L1 General skepticism | 1.43 | 1.98 | 0.55 |
| L2 Structural suspicion | 0.95 | 0.15 | −0.81 |
| L3 Explicit AI-related discernment | 0.00 | 0.59 | 0.59 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Phothong, L.; Sukprasert, A.; Shutimarrungson, N.; Obthong, M. Seeing the Message but Not the Machine: Digital Skepticism and AI Discernment in Online Information Environments. Information 2026, 17, 295. https://doi.org/10.3390/info17030295

