Recoding Reality: A Case Study of YouTube Reactions to Generative AI Videos
Abstract
1. Introduction
- (RQ1) What do ordinary users say they think about AI-generated videos on major public platforms?
- (RQ2) Can these natural responses be grouped into a small set of recurring thematic categories?
- (RQ3) Which themes dominate everyday discourse and which themes are comparatively rare?
2. Literature Review
| Author(s) | Research Focus | Methodology & Data | Key Findings |
|---|---|---|---|
| [6] | Mapping public discourse and sentiment polarity regarding deepfakes on Reddit. | Method: LDA topic modeling and sentiment analysis. Data: ~17,720 Reddit posts and comments. | Identified a clear thematic split between creative/entertainment uses and ethical/legal concerns. Confirmed a highly polarized public sentiment (~47% positive vs. ~37% negative). |
| [18] | Investigate how AI-generated tourism videos are perceived and whether perceptions affect tourist intentions. | Method: Mixed methods using experimental stimuli and survey responses. Data: Controlled datasets of user reactions to specific prompts. | Highlighted the crucial role of perceived authenticity, trust, and context (e.g., hedonic vs. utilitarian) in shaping user attitudes. Demonstrated that transparency is a key moderator of persuasion. |
| [19] | Compare user reactions to AI-generated vs. human images in service contexts; measure attitudes and behavioral intent. | Method: Topic modeling (LDA combined with confirmatory tests) to extract emergent themes from open-ended responses. Data: Experimental stimuli and open-ended user responses (empirical dataset). | AI-generated visuals can be as persuasive as real images in some hedonic, high-involvement contexts; however, transparency and perceived authenticity are decisive moderators. Topic analysis surfaces concerns about trust, esthetics, and perceived value. |
| [21] | Methodological demonstration of a pipeline for extracting and labeling topics from YouTube comments using LDA and LLMs. | Method: LDA topic modeling combined with LLM-assisted labeling for interpretation. Data: A YouTube comment dataset on AI-related topics. | Provided a methodological proof-of-concept, showing that coherent topics can be extracted at scale. LDA uncovers coherent word clusters, but human validation remains important for short, noisy comments. |
| [22] | Small-scale qualitative analysis of user comments on specific deepfake videos on YouTube. | Method: Manual content analysis and descriptive sentiment scoring. Data: A limited sample of YouTube comments from selected videos. | Identified a core tension between user amusement/admiration and fear/anxiety regarding deepfakes. Positioned YouTube as a rich site for analyzing organic user reactions. |
| [20] | Broad mapping of academic literature on generative AI. | Method: Large-scale topic modeling (BERTopic) of scholarly articles. Data: A bibliographic corpus of ~1319 academic records from Scopus. | Provides a macro-level taxonomy of research clusters (e.g., images, text, ethics, detection). Methodological takeaway: while LDA remains common, transformer-based, contextual topic methods and human-in-the-loop labeling are increasingly used to handle short social texts. |
3. Conceptual Model
3.1. ABC Framework of Attitudes
3.1.1. Affective Responses to AI-Generated Videos
3.1.2. Behavioral Consequences and Intentions
3.1.3. Cognitive Evaluations: Authenticity, Trust and Epistemic Concerns
3.2. Socio-Technical Systems and Platforms
3.3. AI-Generated Content and Esthetics
3.4. Societal and Ethical Implications
- The analysis begins at the most immediate level: AI-Generated Content and Esthetics, focusing on direct, personal interactions with the media artifact itself.
- It then progresses to the intermediate level of Socio-Technical Systems and Platforms, which addresses reactions to the technologies and corporate actors behind the content.
- Finally, it culminates at the macro-level with Societal and Ethical Implications, concerning the abstract, wide-ranging impacts of technology on society, culture, and truth.
4. Methodology
4.1. Topic Modeling
4.2. Data Extraction
(e.g., ‘Veo 3 vs. Kling 2.1 Master Cinematic Showdown—Who Wins?’ by Aivoxy, ID: gwhSPf3S89M); and (d) Journalistic explorations of the technology’s impact by established media outlets (e.g., ‘We Tested Google Veo and Runway…’ by The Wall Street Journal, ID: US2gO7UYEfY). A visual summary of this video dataset is provided in Figure 2.
4.3. Data Preparation and Cleaning
- Tokenization: The raw text of each comment was first segmented into individual tokens based on whitespace splitting.
- Text Normalization: A series of deterministic normalization steps were applied to each token. This included converting all text to lowercase, removing punctuation, trimming extraneous whitespace, and eliminating tokens with fewer than two characters.
- Stopword Filtering: We applied a comprehensive stopword removal process using two main resources: a standard English stopword list (e.g., ‘the’, ‘is’, ‘a’, ‘and’) and a second, researcher-constructed list of terms that were frequent but thematically irrelevant to the study’s focus (e.g., ‘youtube’, ‘video’, ‘comment’).
- Numeric Token Conversion: All tokens consisting of digits were converted to their English word equivalents (e.g., “3” became “three”) using a standard number-to-word library. These newly generated word-form numbers were then re-filtered against the stopword lists.
- Content-Based Filtering and Bias Assessment: A critical step was to exclude comments that lacked sufficient semantic content for robust topic modeling. Therefore, any comments with fewer than three remaining tokens after the preceding stages were removed from the dataset. This rule is standard practice to filter out very short, non-substantive comments (e.g., “lol”, “wow”, or single emojis) that contribute noise rather than thematic signal. This filtering process resulted in the exclusion of 4377 comments. Consequently, our final analytic dataset comprised 11,418 comments, representing a retention rate of approximately 72.3% from the initial corpus. While this step is necessary to improve model quality, we acknowledge that it may introduce a bias against purely affective, low-effort reactions and focus the analysis on more deliberative comments.
- Domain-Specific Rooting: To consolidate morphological and lexical variants, tokens were mapped to curated root forms. This process grouped related words to capture their shared semantic core (e.g., mapping ‘generation’, ‘generated’, and ‘generative’ to a common root).
- N-gram Construction: To capture meaningful multi-word phrases (collocations) that often represent a single concept (e.g., “uncanny valley”, “will smith”), the token set for each comment was expanded. Using co-occurrence statistics, we identified and included significant bigram and trigram combinations alongside the original unigrams.
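The cleaning steps above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the study's actual code: the stopword set, the digit-to-word map (standing in for a full number-to-word library), and the bigram frequency threshold are toy stand-ins, and the domain-specific rooting step is omitted for brevity.

```python
import re
from collections import Counter

# Illustrative stand-ins (assumptions): the study used a full English
# stopword list plus a researcher-built domain list, and a proper
# number-to-word library rather than this tiny map.
STOPWORDS = {"the", "is", "a", "and", "of", "to", "it", "this",
             "youtube", "video", "comment"}
NUM_WORDS = {"1": "one", "2": "two", "3": "three"}

def preprocess(comment: str) -> list[str]:
    """Tokenize, normalize, convert digits, and filter one comment."""
    out = []
    for tok in comment.split():                  # 1. whitespace tokenization
        tok = re.sub(r"[^\w]", "", tok.lower())  # 2. lowercase, strip punctuation
        if tok.isdigit():                        # 4. digits -> word equivalents
            tok = NUM_WORDS.get(tok, tok)
        if len(tok) < 2 or tok in STOPWORDS:     # 2./3. length + stopword filters
            continue
        out.append(tok)
    return out

def filter_corpus(comments: list[str], min_tokens: int = 3) -> list[list[str]]:
    """5. Drop comments with fewer than `min_tokens` surviving tokens."""
    processed = (preprocess(c) for c in comments)
    return [p for p in processed if len(p) >= min_tokens]

def add_bigrams(docs: list[list[str]], min_count: int = 2) -> list[list[str]]:
    """7. Expand each document with frequent adjacent pairs; a crude
    co-occurrence criterion standing in for the bigram/trigram statistics."""
    pair_counts = Counter((a, b) for doc in docs for a, b in zip(doc, doc[1:]))
    frequent = {p for p, c in pair_counts.items() if c >= min_count}
    return [doc + [f"{a}_{b}" for a, b in zip(doc, doc[1:]) if (a, b) in frequent]
            for doc in docs]
```

A comment such as “This is a test of the AI model!” reduces to `["test", "ai", "model"]` and survives the three-token threshold, while a bare “lol” is dropped, mirroring the retention logic described above.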
4.4. Determining the Optimal Number of Topics in LDA
5. Findings and Results
- Tokens: Serves as a proxy for a topic’s prevalence or “mass” within the corpus, approximating the total number of words assigned to a topic across all documents [45];
- Exclusivity: Measures how unique a topic’s top words are relative to other topics. High values indicate lexical distinctiveness [94];
- Cosine Distance: Measures the average dissimilarity of a topic from all others in the word probability space [95]. As formalized in the equation below, this metric is calculated by averaging the cosine distance (1 − cosine similarity) between the word-probability distribution of a given topic, P_t, and that of every other topic, P_s. Given that LDA topics are represented by high-dimensional and relatively sparse Bag-of-Words (BoW) vectors, a higher cosine distance indicates stronger lexical separation among topics. This is interpreted as a positive outcome, confirming that the model has successfully identified thematically distinct clusters.
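The averaging described in the Cosine Distance bullet can be written as follows; this is a reconstruction from the verbal description, with T denoting the total number of topics and P_t, P_s the topic-word probability vectors:

```latex
\mathrm{CosineDistance}(t) \;=\; \frac{1}{T-1} \sum_{\substack{s=1 \\ s \neq t}}^{T} \left( 1 \;-\; \frac{P_t \cdot P_s}{\lVert P_t \rVert \,\lVert P_s \rVert} \right)
```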
5.1. Visualization of Topics
5.2. Hierarchical Classifications of Public Discourse: From Topics to Themes
5.3. The Psychological Dynamics of Public Discourse: An ABC-Integrated Analysis
5.4. Key Findings in Response to Research Questions
5.4.1. (RQ1) What Do Ordinary Users Say They Think About AI-Generated Videos on Major Public Platforms?
- Socio-Technical Systems and Platforms: A significant portion of the discourse centers on the underlying infrastructure that produces and delivers AI content. This includes practical user concerns about accessibility, such as geo-restrictions and the need for specific software (Topic 0), as well as a critical awareness of corporate behavior, as seen in user skepticism toward curated demonstrations (Topic 6). Furthermore, users are keenly aware of the market dynamics, frequently discussing the competitive “arms race” between different AI models (Topic 11) and the broader commercial and business applications of the technology (Topic 14).
- AI-Generated Content and Esthetics: This theme captures users’ immediate and direct reactions to the media artifacts themselves. The commentary is rich with esthetic and affective evaluations, ranging from feelings of unease with the hyper-realistic yet flawed “uncanny valley” effect (Topic 2) to the cognitive struggle to distinguish authentic from synthetic media, a core element of Topic 7 (“The Artificial Nature of AI and User Reactions”). Users also find amusement at the emergent, often accidental humor in technical glitches (Topic 9) and express appreciation for AI’s unique creative capabilities, such as its ability to generate surreal and mythical content (Topic 12). This discourse extends to how AI content is integrated into cultural practices—becoming a memetic benchmark for technological progress (Topic 3) or being applied in creative industries like music, advertising, and film (Topic 5, Topic 10).
- Societal and Ethical Implications: Finally, users grapple with the broad, macro-level consequences of generative AI. These discussions are grounded in deep-seated ethical and normative concerns. Chief among them is the technology’s perceived impact on jobs, human creativity, and the very fabric of reality (Topic 1). This theme also reflects profound anxieties about the erosion of trust and the potential for manipulation. Users express significant fear that AI will make it impossible to trust video evidence, leading to a collapse of “epistemic security” (Topic 8), a concern that broadens to include collapsing trust in institutional media (Topic 13). These anxieties cohere around the significant ethical questions posed by generative AI, mapping directly onto its power to influence public belief and the potential for widespread societal harm (Topic 4).
5.4.2. (RQ2) Can These Natural Responses Be Grouped into a Small Set of Recurring Thematic Categories?
- Socio-Technical Systems and Platforms: This category consolidates discussions about the technology itself—the tools, the companies that control them, their accessibility, and the competitive market dynamics.
- AI-Generated Content and Esthetics: This groups topics focused on the media artifacts produced by AI, including their esthetic qualities (e.g., uncanny, comical, surreal), cultural impact, and genre applications.
- Societal and Ethical Implications: This theme encompasses the broader societal consequences, including debates on labor displacement, the erosion of truth and trust, and widespread fear about the future of media.
5.4.3. (RQ3) Which Themes Dominate Everyday Discourse and Which Themes Are Comparatively Rare?
- Dominant Themes: The discourse is most heavily dominated by topics centered on specific cultural artifacts and immediate, visceral reactions. The single most discussed topic is “The ‘Will Smith Eating Spaghetti’ Meme Benchmark” (Topic 3) with 1652 comments, indicating that tangible, memetic touchstones are powerful drivers of conversation. This is followed closely by discussions of “The Uncanny Nature of AI-Generated Content” (Topic 2) with 1240 comments, highlighting the prevalence of esthetic and affective user responses. Practical issues also generate significant engagement, with “Technical Support and Global Access to AI Tools” (Topic 0) attracting 1064 comments.
- Comparatively Rare Themes: In contrast, topics requiring more abstract or industry-specific knowledge receive significantly less direct engagement. The least discussed topic is “Skepticism Towards AI Demonstrations (Google Veo)” (Topic 6), with only 395 comments, suggesting that critique of specific corporate demos is a niche conversation. Similarly, discussions around “AI’s Role in the Film and Entertainment Industry” (Topic 10) (514 comments) and “Belief, Influence, and the Cost of AI” (Topic 4) (524 comments) represent smaller sub-discourses compared to the dominant themes.
6. Conclusions
6.1. Theoretical Contributions
- An affect-driven cascade defines immediate reactions to AI-generated media artifacts, where visceral feelings (e.g., the unease of the “uncanny valley”) precede and trigger cognitive appraisal (e.g., questioning authenticity) and subsequent behavioral engagement (e.g., memetic sharing).
- A behavior-centric process characterizes interactions with the underlying socio-technical platforms, where practical, goal-oriented actions (e.g., navigating access barriers or comparing tools) shape strategic cognitive judgments about corporate actors and resultant affective states like frustration or excitement.
- A cognitive-affective feedback loop structures the abstract societal and ethical debates, where deeply held cognitive beliefs about AI’s impact (e.g., on jobs and truth) and potent affective responses (e.g., fear and anxiety) mutually reinforce one another, culminating in behavioral intentions such as calls for regulation.
- Esthetic Engagement vs. Epistemic Anxiety: Users are fascinated by the creative frontiers of AI video but are equally fearful that the same technology is eroding the foundations of trust in visual media. The more realistic the content, the greater the anxiety. This tension directly reflects the foundational concept of the ‘uncanny valley’ [33], where hyper-realism triggers affective discomfort, a phenomenon that our findings show is now intrinsically linked to the cognitive fear of pervasive, undetectable deepfakes [24].
- Democratization of Creation vs. Centralization of Power: While AI tools promise to democratize media production, the public discourse reveals an acute awareness that the technology’s development and control are concentrated in the hands of a few powerful corporate actors. This dichotomy speaks directly to the core tenets of platform studies, which interrogate the non-neutral role of corporate actors in shaping public discourse and access [54,93]. While the technology promises democratization, the public is keenly aware that the underlying infrastructure operates within a framework of corporate governance and market logic, echoing concerns raised in the literature about platform politics.
6.2. Practical Contribution
7. Limitations and Future Studies
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| AI | Artificial Intelligence |
| LDA | Latent Dirichlet Allocation |
| ABC | Affective, Behavioral, and Cognitive |
| ITU | International Telecommunication Union |
| C2PA | Coalition for Content Provenance and Authenticity |
| RQ | Research Question |
| NPMI | Normalized Pointwise Mutual Information |
| TAM | Technology Acceptance Model |
| t-SNE | t-Distributed Stochastic Neighbor Embedding |
Appendix A
| Video ID | Title | Channel Title | Published At | View | Like |
|---|---|---|---|---|---|
| 7o3mZuhbse8 | Google’s Veo 3 Comparison | Chrissie | 24 May 2025 | 180,290 | 1206 |
| Iv24AUN8Yd0 | Real Or Google Veo 3 AI? Watch the tutorial! | Adrian Viral AI Marketing | 3 June 2025 | 221,233 | 714 |
| H7GC_qee6E4 | A.I. Has Officially Gone Too Far \| Google Veo 3 is INSANE! | Edited By Geo | 1 June 2025 | 3,433,096 | 79,878 |
| j4CT5dZe8ZA | Bigfoot—Born to be Bushy (Official Music Video) \| Google Veo 3 | demonflyingfox | 1 June 2025 | 1,738,347 | 19,551 |
| -UW6nMGN2Bw | Impossible Challenges 2 (Google Veo 3) | demonflyingfox | 29 May 2025 | 337,533 | 5646 |
| hqlHrK5SEuc | Google Veo 3 Street Interview | GOD | 21 May 2025 | 223,454 | 839 |
| TmsK_Ym8kD4 | Google Veo 3 Demo \| Cinematic Scenes and Character Voices (Honest Filmmaker Review) | Black Mixture | 23 May 2025 | 242,238 | 1545 |
| gcZwE5cM4xs | Google’s new AI video tool Veo 3 is WILD! | Impekable | 29 May 2025 | 314,684 | 1733 |
| j8VGP5pr9OQ | Cinematic Glitches. Veo 3 + Midjourney V7 | VaigueMan | 7 June 2025 | 333,074 | 6575 |
| gwhSPf3S89M | Veo 3 vs. Kling 2.1 Master Cinematic Showdown—Who Wins? | Aivoxy | 28 May 2025 | 410,472 | 4074 |
| 6j1TqZDn6xM | Made with Google’s Veo 3 model, they look you in the eye and break the fourth wall, | EDUCATION & TECHNOLOGY | 11 June 2025 | 2,995,522 | 40,437 |
| US2gO7UYEfY | We Tested Google Veo and Runway to Create This AI Film. It Was Wild. \| WSJ | The Wall Street Journal | 28 May 2025 | 1,025,022 | 23,085 |
| CxX92BBhHBw | Impossible Challenges (Google Veo 3) | demonflyingfox | 27 May 2025 | 853,874 | 14,547 |
| McFChYae6p8 | Google Veo 3 Ai Gives You An Existential Crisis | Wulfranz | 25 May 2025 | 291,317 | 3473 |
| rwUt22HTTx0 | Anchors away. Veo 3 is rolling out in 70+ countries and Google AI Pro subscribers can try it too | | 24 May 2025 | 684,430 | 4567 |
| HQ6BDMoKHcs | Google Veo 3 is sooo cool! #ai #veo3 #prompttheory | World Update 3.0 | 30 May 2025 | 643,198 | 44,048 |
| 01Fm4mqIq08 | Meet Beatboxing Blobfish, made by @AlexanderChen @MathewRayMakes with Veo 3 | | 31 May 2025 | 1,653,519 | 55,513 |
| UC_Cw9xqIuE | FREE Veo 3 AI Video Generator: How to Use It WORLDWIDE | How To In 5 Minutes | 2 June 2025 | 496,922 | 7159 |
| XkpGkAa1nCY | Google’s Veo 3 can now generate audio. | The Verge | 20 May 2025 | 302,518 | 3443 |
| 2T-ZiEdMHvw | Veo3 test // non-existent car show | László Gaál | 22 May 2025 | 391,251 | 2573 |
| ODyROOW1dCo | Meet Veo 3, our latest video generation model | | 31 May 2025 | 141,807 | 1204 |
| DY5vnaCx_KE | A Time Traveler’s VLOG \| Google VEO 3 AI Short Film + Assets Available | uisato | 4 June 2025 | 172,575 | 3873 |
References
- Kharvi, P.L. Understanding the Impact of AI-Generated Deepfakes on Public Opinion, Political Discourse, and Personal Security in Social Media. IEEE Secur. Priv. 2024, 22, 115–122. [Google Scholar] [CrossRef]
- Sala, A. Standards and Policy Considerations for Multimedia Authenticity. 2025. Available online: https://www.itu.int/hub/2025/07/standards-and-policy-considerations-for-multimedia-authenticity/ (accessed on 10 October 2025).
- Florance, M.S. Survey Reveals Concerns and Adoption Trends Around AI’s Rising Influence; Rutgers Office of Communications: New Brunswick, NJ, USA, 2024; pp. 1–6. Available online: https://www.rutgers.edu/news/survey-reveals-concerns-and-adoption-trends-around-ais-rising-influence (accessed on 10 October 2025).
- Gottfried, J. About Three-Quarters of Americans Favor Steps to Restrict Altered Videos and Images; Pew Research Center: Washington, DC, USA, 2019; pp. 16–18. Available online: https://www.pewresearch.org/short-reads/2019/06/14/about-three-quarters-of-americans-favor-steps-to-restrict-altered-videos-and-images (accessed on 10 October 2025).
- Hynek, N.; Gavurova, B.; Kubak, M. Risks and benefits of artificial intelligence deepfakes: Systematic review and comparison of public attitudes in seven European Countries. J. Innov. Knowl. 2025, 10, 100782. [Google Scholar] [CrossRef]
- Xu, Z.; Wen, X.; Zhong, G.; Fang, Q. Public perception towards deepfake through topic modelling and sentiment analysis of social media data. Soc. Netw. Anal. Min. 2025, 15, 16. [Google Scholar] [CrossRef]
- Henry, A.; Glick, J. WITNESS and MIT Open Documentary Lab, Just joking! Deepfakes, Satire and the Politics of Synthetic Media. 2021. Available online: https://cocreationstudio.mit.edu/just-joking/ (accessed on 10 October 2025).
- Le Poidevin, O. UN Report Urges Stronger Measures to Detect AI-Driven Deepfakes; Reuters: London, UK, 2025; Available online: https://www.reuters.com/business/un-report-urges-stronger-measures-detect-ai-driven-deepfakes-2025-07-11/ (accessed on 10 October 2025).
- Capstick, E. Chapter 8: Public Opinion. In Artificial Intelligence Index Report 2025; Stanford Institute for Human-Centered Artificial Intelligence (HAI): Stanford, CA, USA, 2025; pp. 1–21. Available online: https://hai.stanford.edu/assets/files/hai_ai-index-report-2025_chapter8_final.pdf (accessed on 10 October 2025).
- Mcclain, B.Y.C.; Kennedy, B.; Gottfried, J.; Anderson, M.; Pasquini, G. How the U.S. Public and AI Experts View Artificial Intelligence. 2025. Available online: https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/ (accessed on 10 October 2025).
- Puntoni, S.; Reczek, R.W.; Giesler, M.; Botti, S. Consumers and Artificial Intelligence: An Experiential Perspective. J. Mark. 2020, 85, 131–151. [Google Scholar] [CrossRef]
- Demmer, T.R.; Kühnapfel, C.; Fingerhut, J.; Pelowski, M. Does an emotional connection to art really require a human artist? Emotion and intentionality responses to AI- versus human-created art and impact on aesthetic experience. Comput. Hum. Behav. 2023, 148, 107875. [Google Scholar] [CrossRef]
- Lazer, D.M.J.; Baum, M.A.; Benkler, Y.; Berinsky, A.J.; Greenhill, K.M.; Menczer, F.; Metzger, M.J.; Nyhan, B.; Pennycook, G.; Rothschild, D.; et al. The science of fake news. Science 2018, 359, 1094–1096. [Google Scholar] [CrossRef] [PubMed]
- Vosoughi, S.; Roy, D.; Aral, S. The spread of true and false news online. Science 2018, 359, 1146–1151. [Google Scholar] [CrossRef]
- Berger, J.; Milkman, K.L. What Makes Online Content Viral? J. Mark. Res. 2012, 49, 192–205. Available online: https://journals.sagepub.com/doi/10.1509/jmr.10.0353 (accessed on 10 October 2025). [CrossRef]
- Chesney, B.; Citron, D. Deep fakes: A looming challenge for privacy, democracy, and national security. Calif. Law Rev. 2019, 107, 1753–1820. [Google Scholar] [CrossRef]
- Holleman, G.A.; Hooge, I.T.C.; Kemner, C.; Hessels, R.S. The ‘Real-World Approach’ and Its Problems: A Critique of the Term Ecological Validity. Front. Psychol. 2020, 11, 1–12. [Google Scholar] [CrossRef]
- Seo, I.T.; Liu, H.; Li, H.; Lee, J.S. AI-infused video marketing: Exploring the influence of AI-generated tourism videos on tourist decision-making. Tour Manag. 2025, 110, 105182. [Google Scholar] [CrossRef]
- Belanche, D.; Ibáñez-Sánchez, S.; Jordán, P.; Matas, S. Customer reactions to generative AI vs. real images in high-involvement and hedonic services. Int. J. Inf. Manag. 2025, 85, 102954. [Google Scholar] [CrossRef]
- Gupta, P.; Ding, B.; Guan, C.; Ding, D. Generative AI: A systematic review using topic modelling techniques. Data Inf. Manag. 2024, 8, 100066. [Google Scholar] [CrossRef]
- Sun, Y.; Tsuruta, H.; Kumagai, M.; Kurosaki, K. YouTube-based topic modeling and large language model sentiment analysis of Japanese online discourse on nuclear energy. J. Nucl. Sci. Technol. 2025, 1–13. [Google Scholar] [CrossRef]
- Kaya, S. Investigation of User Comments on Videos Generated by Deepfake Technology. Acta Infologica 2025, 9, 208–222. [Google Scholar] [CrossRef]
- Solomon, M.R.; Bamossy, G.J.; Askegaard, S.T.; Hogg, M.K. Consumer Behaviour: A European Perspective; Pearson Education: London, UK, 2013; 672p. [Google Scholar]
- PBS News Hour. The Potentially Dangerous Implications of an AI Tool Creating Extremely Realistic Video. 2024. Available online: https://www.pbs.org/newshour/show/the-potentially-dangerous-implications-of-an-ai-tool-creating-extremely-realistic-video (accessed on 14 August 2025).
- Grewal, D.; Guha, A.; Satornino, C.B.; Schweiger, E.B. Artificial intelligence: The light and the darkness. J. Bus. Res. 2021, 136, 229–236. [Google Scholar] [CrossRef]
- Ma, Y.M.; Dai, X.; Deng, Z. Using machine learning to investigate consumers’ emotions: The spillover effect of AI defeating people on consumers’ attitudes toward AI companies. Internet Res. 2024, 34, 1679–1713. [Google Scholar] [CrossRef]
- Mirbabaie, M.; Brünker, F.; Möllmann Frick, N.R.J.; Stieglitz, S. The rise of artificial intelligence—Understanding the AI identity threat at the workplace. Electron Mark. 2022, 32, 73–99. [Google Scholar] [CrossRef]
- Rana, N.P.; Chatterjee, S.; Dwivedi, Y.K.; Akter, S. Understanding dark side of artificial intelligence (AI) integrated business analytics: Assessing firm’s operational inefficiency and competitiveness. Eur. J. Inf. Syst. 2022, 31, 364–387. [Google Scholar] [CrossRef]
- Gligor, D.M.; Pillai, K.G.; Golgeci, I. Theorizing the dark side of business-to-business relationships in the era of AI, big data, and blockchain. J. Bus. Res. 2021, 133, 79–88. [Google Scholar] [CrossRef]
- Sun, Y.; Li, S.; Yu, L. The dark sides of AI personal assistant: Effects of service failure on user continuance intention. Electron Mark. 2022, 32, 17–39. [Google Scholar] [CrossRef]
- Castillo, D.; Canhoto, A.I.; Said, E. The dark side of AI-powered service interactions: Exploring the process of co-destruction from the customer perspective. Serv. Ind. J. 2021, 41, 900–925. [Google Scholar] [CrossRef]
- Tong, S.; Jia, N.; Luo, X.; Fang, Z. The Janus face of artificial intelligence feedback: Deployment versus disclosure effects on employee performance. Strateg. Manag. J. 2021, 42, 1600–1631. [Google Scholar] [CrossRef]
- Mori, M. The uncanny valley. IEEE Robot. Autom. Mag. 2012, 19, 98–100. [Google Scholar] [CrossRef]
- Liu, Q.; Lian, Z.; Osman, L.H. Can Artificial Intelligence-Generated Sponsored Vlogs Trigger Online Shopping? Exploring the Impact on Consumer Purchase Intentions. J. Promot. Manag. 2025, 31, 798–830. [Google Scholar] [CrossRef]
- Rahman, A.; Naji, J. The Era of AI-Generated Video Production Exploring Consumers’ Attitudes. 2024. Available online: https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1868060&dswid=-6145 (accessed on 14 August 2025).
- Arango, L.; Singaraju, S.P.; Niininen, O. Consumer Responses to AI-Generated Charitable Giving Ads. J. Advert. 2023, 52, 486–503. [Google Scholar] [CrossRef]
- Roman, D. AI-Generated Videos: Innovation, Risks & Rewards. 2023. Available online: https://wearebrain.com/blog/era-of-ai-generated-videos/ (accessed on 14 August 2025).
- Madathil, J.C. Generative AI advertisements and Human–AI collaboration: The role of humans as gatekeepers of humanity. J. Retail. Consum. Serv. 2025, 87, 104381. [Google Scholar] [CrossRef]
- Cao, G.; Duan, Y.; Edwards, J.S.; Dwivedi, Y.K. Understanding managers’ attitudes and behavioral intentions towards using artificial intelligence for organizational decision-making. Technovation 2021, 106, 102312. [Google Scholar] [CrossRef]
- Kshetri, N.; Dwivedi, Y.K.; Davenport, T.H.; Panteli, N. Generative artificial intelligence in marketing: Applications, opportunities, challenges, and research agenda. Int. J. Inf. Manag. 2024, 75, 102716. [Google Scholar] [CrossRef]
- Latikka, R.; Bergdahl, J.; Savela, N.; Oksanen, A. AI as an Artist? A Two-Wave Survey Study on Attitudes Toward Using Artificial Intelligence in Art. Poetics 2023, 101, 101839. [Google Scholar] [CrossRef]
- Yu, T.; Tian, Y.; Chen, Y.; Huang, Y.; Pan, Y.; Jang, W. How Do Ethical Factors Affect User Trust and Adoption Intentions of AI-Generated Content Tools? Evidence from a Risk-Trust Perspective. Systems 2025, 13, 461. [Google Scholar] [CrossRef]
- Lao, Y.; Hirvonen, N.; Larsson, S. AI and authenticity: Young people’s practices of information credibility assessment of AI-generated video content. J. Inf. Sci. 2025. [CrossRef]
- Stamkou, C.; Saprikis, V.; Fragulis, G.F.; Antoniadis, I. User Experience and Perceptions of AI-Generated E-Commerce Content: A Survey-Based Evaluation of Functionality, Aesthetics, and Security. Data 2025, 10, 89. [Google Scholar] [CrossRef]
- Blei, D.M.; Ng, A.Y.; Jordan, M.I. Latent Dirichlet Allocation. J. Mach. Learn. Res. 2003, 3, 993–1022. [Google Scholar]
- Chang, J.; Boyd-Graber, J.; Gerrish, S.; Wang, C.; Blei, D.M. Reading tea leaves: How humans interpret topic models. In Proceedings of the Advances in Neural Information Processing Systems 22: 23rd Annual Conference on Neural Information Processing Systems 2009, Vancouver, BC, Canada, 7–10 December 2009; Curran Associates Inc.: Red Hook, NY, USA, 2009; pp. 288–296. [Google Scholar]
- Roberts, M.E.; Stewart, B.M.; Tingley, D. Stm: An R package for structural topic models. J. Stat. Softw. 2019, 91, 1–40. [Google Scholar] [CrossRef]
- Laureate, C.D.P.; Buntine, W.; Linger, H. A systematic review of the use of topic models for short text social media analysis. Artif. Intell. Rev. 2023, 56, 14223–14255. [Google Scholar] [CrossRef]
- Muthusami, R.; Mani Kandan, N.; Saritha, K.; Narenthiran, B.; Nagaprasad, N.; Ramaswamy, K. Investigating topic modeling techniques through evaluation of topics discovered in short texts data across diverse domains. Sci. Rep. 2024, 14, 1–13. [Google Scholar] [CrossRef]
- Bernhard-Harrer, J.; Ashour, R.; Eberl, J.M.; Tolochko, P.; Boomgaarden, H. Beyond standardization: A comprehensive review of topic modeling validation methods for computational social science research. Polit. Sci. Res. Methods 2025, 1–19. [Google Scholar] [CrossRef]
- Bertoni, E.; Fontana, M.; Gabrielli, L.; Signorelli, S.; Vespe, M. Handbook of Computational Social Science for Policy; Springer Nature: Berlin/Heidelberg, Germany, 2023; pp. 1–490. [Google Scholar]
- Lundberg, J. Towards a Conceptual Framework for System of Systems. In Proceedings of the Doctoral Consortium Papers Presented at the 35th International Conference on Advanced Information Systems Engineering (CAiSE 2023), CEUR Workshop Proceedings, Zaragoza, Spain, 12–16 June 2023; Volume 3407, pp. 18–24. [Google Scholar]
- Sartori, L.; Bocca, G. Minding the gap(s): Public perceptions of AI and socio-technical imaginaries. AI Soc. 2023, 38, 443–458. [Google Scholar] [CrossRef]
- Gorwa, R.; Binns, R.; Katzenbach, C. Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data Soc. 2020, 7. [Google Scholar] [CrossRef]
- Cinelli, M.; de Francisci Morales, G.; Galeazzi, A.; Quattrociocchi, W.; Starnini, M. The echo chamber effect on social media. Proc. Natl. Acad. Sci. USA 2021, 118, e2023301118. [Google Scholar] [CrossRef]
- Bakshy, E.; Messing, S.; Adamic, L.A. Exposure to ideologically diverse news and opinion on Facebook. Science 2015, 348, 1130–1132. [Google Scholar] [CrossRef]
- Leitch, A.; Chen, C. Unlimited Editions: Documenting Human Style in AI Art Generation. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 26 April–1 May 2025; Volume 1. [Google Scholar]
- Becker, C.; Conduit, R.; Chouinard, P.A.; Laycock, R. Can deepfakes be used to study emotion perception? A comparison of dynamic face stimuli. Behav. Res. Methods 2024, 56, 7674–7690. [Google Scholar] [CrossRef]
- Eiserbeck, A.; Maier, M.; Baum, J.; Abdel Rahman, R. Deepfake smiles matter less—The psychological and neural impact of presumed AI-generated faces. Sci. Rep. 2023, 13, 16111. [Google Scholar] [CrossRef]
- Brady, W.J.; Wills, J.A.; Jost, J.T.; Tucker, J.A.; Van Bavel, J.J.; Fiske, S.T. Emotion shapes the diffusion of moralized content in social networks. Proc. Natl. Acad. Sci. USA 2017, 114, 7313–7318. [Google Scholar] [CrossRef]
- Diel, A.; Lalgi, T.; Schröter, I.C.; MacDorman, K.F.; Teufel, M.; Bäuerle, A. Human performance in detecting deepfakes: A systematic review and meta-analysis of 56 papers. Comput. Hum. Behav. Rep. 2024, 16, 100538. [Google Scholar] [CrossRef]
- Groh, M.; Epstein, Z.; Firestone, C.; Picard, R. Deepfake detection by human crowds, machines, and machine-informed crowds. Proc. Natl. Acad. Sci. USA 2022, 119, e2110013119. [Google Scholar] [CrossRef] [PubMed]
- Pennycook, G.; Cannon, T.D.; Rand, D.G. Prior exposure increases perceived accuracy of fake news. J. Exp. Psychol. Gen. 2018, 147, 1865–1880. [Google Scholar] [CrossRef] [PubMed]
- Whittlestone, J.; Nyrup, R.; Alexandrova, A.; Cave, S. The Role and Limits of Principles in AI Ethics. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, New York, NY, USA, 27–28 January 2019; ACM: New York, NY, USA, 2019; pp. 195–200. [Google Scholar] [CrossRef]
- Crawford, K.; Calo, R. There is a blind spot in AI research. Nature 2016, 538, 311–313. [Google Scholar] [CrossRef]
- Singhal, A.; Neveditsin, N.; Tanveer, H.; Mago, V. Toward Fairness, Accountability, Transparency, and Ethics in AI for Social Media and Health Care: Scoping Review. JMIR Med. Inform. 2024, 12, e50048. [Google Scholar] [CrossRef]
- Jobin, A.; Ienca, M.; Vayena, E. Artificial Intelligence: The global landscape of ethics guidelines. arXiv 2019, arXiv:1906.11668. [Google Scholar] [CrossRef]
- Blei, D.M. Probabilistic topic models. Commun. ACM 2012, 55, 77–84. [Google Scholar] [CrossRef]
- Griffiths, T.L.; Steyvers, M. Finding scientific topics. Proc. Natl. Acad. Sci. USA 2004, 101 (Suppl. S1), 5228–5235. [Google Scholar] [CrossRef]
- Blei, D.M.; Lafferty, J.D. Dynamic topic models. In Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, USA, 25–29 June 2006; ACM: New York, NY, USA, 2006; Volume 148, pp. 113–120. [Google Scholar]
- Arora, S.; Ge, R.; Halpern, Y.; Mimno, D.; Moitra, A.; Sontag, D.; Wu, Y.; Zhu, M. A practical algorithm for topic modeling with provable guarantees. In Proceedings of the 30th International Conference on Machine Learning, PMLR, Atlanta, GA, USA, 17–19 June 2013; Volume 28, pp. 939–947. [Google Scholar]
- Grimmer, J.; Stewart, B.M. Text as data: The promise and pitfalls of automatic content analysis methods for political texts. Polit Anal. 2013, 21, 267–297. [Google Scholar] [CrossRef]
- Bayılmış, O.Ü.; Orhan, S.; Bayılmış, C. Unveiling Gig Economy Trends via Topic Modeling and Big Data. Systems 2025, 13, 553. [Google Scholar] [CrossRef]
- Çallı, L.; Çallı, F. Understanding Airline Passengers during COVID-19 Outbreak to Improve Service Quality: Topic Modeling Approach to Complaints with Latent Dirichlet Allocation Algorithm. Transp. Res. Rec. J. Transp. Res. Board 2022, 2677, 036119812211120. [Google Scholar] [CrossRef]
- Çallı, L. Exploring mobile banking adoption and service quality features through user-generated content: The application of a topic modeling approach to Google Play Store reviews. Int. J. Bank Mark. 2023, 41, 428–454. [Google Scholar] [CrossRef]
- Alma Çallı, B.; Ediz, Ç. Top concerns of user experiences in Metaverse games: A text-mining based approach. Entertain. Comput. 2023, 46, 100576. [Google Scholar] [CrossRef]
- Çallı, L.; Çallı, B.A. Value-centric analysis of user adoption for sustainable urban micro-mobility transportation through shared e-scooter services. Sustain. Dev. 2024, 32, 6408–6433. [Google Scholar] [CrossRef]
- Blei, D.M.; Lafferty, J.D. Topic models. In Mining Text Data; Srivastava, A., Sahami, M., Eds.; Springer International Publishing: Cham, Switzerland, 2009. [Google Scholar]
- Lau, J.H.; Newman, D.; Baldwin, T. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, EACL, Gothenburg, Sweden, 26–30 April 2014; pp. 530–539. [Google Scholar]
- Röder, M.; Both, A.; Hinneburg, A. Exploring the space of topic coherence measures. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, Shanghai, China, 2–6 February 2015; pp. 399–408. [Google Scholar]
- Mimno, D.; Wallach, H.M.; Talley, E.; Leenders, M.; McCallum, A. Optimizing semantic coherence in topic models. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, EMNLP, Edinburgh, UK, 27–31 July 2011; pp. 262–272. [Google Scholar]
- Wallach, H.M.; Mimno, D.; McCallum, A. Rethinking LDA: Why priors matter. In Proceedings of the 23rd International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 7–10 December 2009; pp. 1973–1981. [Google Scholar]
- Roberts, M.E.; Stewart, B.M.; Tingley, D.; Lucas, C.; Leder-Luis, J.; Gadarian, S.K.; Albertson, B.; Rand, D.G. Structural topic models for open-ended survey responses. Am. J. Pol. Sci. 2014, 58, 1064–1082. [Google Scholar] [CrossRef]
- Yin, R.K. Case Study Research and Applications: Design and Methods; Sage: Thousand Oaks, CA, USA, 2018. [Google Scholar]
- Bruns, A.; Burgess, J. Twitter hashtags from ad hoc to calculated publics. In Hashtag Publics: The Power and Politics of Discursive Networks; Peter Lang Inc.: New York, NY, USA, 2015; Volume 103. [Google Scholar]
- Newman, D.; Lau, J.H.; Grieser, K.; Baldwin, T. Automatic evaluation of topic coherence. In Proceedings of the Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Los Angeles, CA, USA, 2–4 June 2010; pp. 100–108. [Google Scholar]
- Murakami, R.; Chakraborty, B. Investigating the Efficient Use of Word Embedding with Neural-Topic Models for Interpretable Topics from Short Texts. Sensors 2022, 22, 852. [Google Scholar] [CrossRef]
- Davis, F.D. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Q. 1989, 13, 319–340. [Google Scholar]
- Acemoglu, D.; Restrepo, P. Automation and new tasks: How technology displaces and reinstates labor. J. Econ. Perspect. 2019, 33, 3–30. [Google Scholar] [CrossRef]
- startuphub.ai. Sora 2: A Glimpse into Generative Video’s Uncanny Valley and Creative Frontier. 2025. Available online: https://www.startuphub.ai/ai-news/ai-video/2025/sora-2-a-glimpse-into-generative-videos-uncanny-valley-and-creative-frontier/ (accessed on 10 October 2025).
- Jenkins, H. Convergence Culture: Where Old and New Media Collide; New York University Press: New York, NY, USA, 2006. [Google Scholar]
- Reddit. Will Smith Eating Spaghetti—2.5 Years Later. 2025. Available online: https://www.reddit.com/r/ChatGPT/comments/1o22zh9/will_smith_eating_spaghetti_25_years_later/ (accessed on 10 October 2025).
- Gillespie, T. The politics of “platforms”. New Media Soc. 2010, 12, 347–364. [Google Scholar] [CrossRef]
- Bischof, J.M.; Airoldi, E.M. Summarizing topical content with word frequency and exclusivity. In Proceedings of the 29th International Conference on Machine Learning, ICML, Madison, WI, USA, 26 June–1 July 2012; Volume 1, pp. 201–208. [Google Scholar]
- Manning, C.D.; Raghavan, P.; Schütze, H. Introduction to Information Retrieval; Cambridge University Press: New York, NY, USA, 2008. [Google Scholar]
- van der Maaten, L.; Hinton, G. Visualizing Data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
- Hagen, L. Content analysis of e-petitions with topic modeling: How to train and evaluate LDA models? Inf. Process. Manag. 2018, 54, 1292–1307. [Google Scholar] [CrossRef]
- Neuendorf, K.A. The Content Analysis Guidebook, 2nd ed.; SAGE Publications, Inc.: Thousand Oaks, CA, USA, 2017. [Google Scholar]
- Scott, W.A. Reliability of content analysis: The case of nominal scale coding. Public Opin. Q. 1955, 19, 321. [Google Scholar] [CrossRef]
- Krippendorff, K. Content Analysis: An Introduction to its Methodology; SAGE Publications, Inc.: Thousand Oaks, CA, USA, 2013. [Google Scholar]
- Cohen, J. A Coefficient of Agreement for Nominal Scales. Educ. Psychol. Meas. 1960, 20, 37–46. [Google Scholar] [CrossRef]
- Landis, J.R.; Koch, G.G. The Measurement of Observer Agreement for Categorical Data. Biometrics 1977, 33, 159. [Google Scholar] [CrossRef]
- Braun, V.; Clarke, V. Using thematic analysis in psychology. Qual. Res. Psychol. 2006, 3, 77–101. [Google Scholar] [CrossRef]
| Category | Definition | What to Look For in Comments? |
|---|---|---|
| Socio-Technical Systems and Platforms | This category is not about the video itself, but about the tool that made it and the company that provides it. What is a Platform? It is the service or website that delivers the technology to us (e.g., Google, YouTube, TikTok). Think of it as the stage where the content is presented. | Comments about the company: “Google is just after money again.” Comments about the tool: “This AI is better than the other one.” Comments about access: “Why doesn’t this work in my country?” |
| AI-Generated Content and Esthetics | This category is about the video itself. How does it look? How does it make you feel? Is it funny, beautiful, or creepy? In other words, it is about our immediate, personal reactions while watching. | Comments about the visuals: “This looks so realistic!” or “The hands are messed up again.” Comments about the feeling: “I love this!” or “This video is so uncanny.” Comments about the humor: “The scene where the car melts is hilarious.” |
| Societal and Ethical Implications | This category is about the “big picture”. It is not about one person, but what this technology will do to all of society, our future, and our sense of truth. “Epistemic anxiety” is the fear that we can no longer trust our own eyes and ears. | Comments about truth/fakes: “We can’t trust any video anymore.” (perfect epistemic comment) Comments about jobs: “Artists are going to lose their jobs.” Comments about safety and rules: “This needs to be banned!” |
ABC Model of Attitudes

| Thematic Category | Affective Dimension (Feelings and Emotions) | Behavioral Dimension (Actions and Intentions) | Cognitive Dimension (Beliefs and Judgments) |
|---|---|---|---|
| AI-Generated Content and Esthetics | Instant, image-driven feelings (awe, unease, amusement). (e.g., “That face is so real it gave me chills.”/“This scene was hilarious—I laughed out loud.”) | Active use or creative reuse of content (save, share, remix into memes). (e.g., “I downloaded it and made my own version for my story.”) | Judging quality and authenticity (originality, craftsmanship, realism). (e.g., “Looks great—is it real or generated?”) |
| Socio-Technical Systems and Platforms | Emotions toward the tech and companies (hope, suspicion, discomfort). (e.g., “I don’t trust that company; they’re hiding things.”) | Practical steps to access or use the tech (subscribe, use VPN, switch platforms). (e.g., “It’s paywalled—I used a VPN or moved to another site.”) | Strategic beliefs about platform motives and power (monopoly, profit-driven behavior). (e.g., “The algorithm pushes this because it makes money from ads.”) |
| Societal and Ethical Implications | Deep feelings about broader consequences (fear, anxiety, distrust). (e.g., “I’m scared—how can we trust video evidence anymore?”) | Civic or adaptive actions driven by norms (advocate for regulation, change sharing habits). (e.g., “I signed a petition and stopped sharing political clips.”) | Core ethical judgments about society, truth, and labor (fairness, job displacement). (e.g., “Could this tech take creators’ jobs—is that fair?”) |
| Topic | Titles | Keywords | Total Comments |
|---|---|---|---|
| 0 | Technical Support and Global Access to AI Tools | talking, exact, close, twenti (twenty), law, text | 1064 |
| 1 | AI’s Impact on Reality, Jobs, and Human Creativity | real, jobs, imagin (imagination), control, creat (create) | 929 |
| 2 | The Uncanny Nature of AI-Generated Content | ai generated, light, nobodi (nobody), uncanni (uncanny), details | 1240 |
| 3 | The ‘Will Smith Eating Spaghetti’ Meme Benchmark | will_smith_spaghetti, prompt, audio, smith eating, dangerous | 1652 |
| 4 | Belief, Influence, and the Cost of AI | believ (believe), eye, influenc (influence), ads, cooked, expens (expensive) | 524 |
| 5 | AI in Music, Advertising, and Entertainment | music, ad, perfect, song, fun | 681 |
| 6 | Skepticism Towards AI Demonstrations (Google Veo) | googl, veo, pay, tools, access | 395 |
| 7 | The Artificial Nature of AI and User Reactions | crazi (crazy), whats real, voic (voice), artifici (artificial), cant tell | 670 |
| 8 | Fear, Evidence, and Distrust in AI | scare, evidenc (evidence), tell differ, flow, weird | 748 |
| 9 | Absurd and Comical AI-Generated Scenarios | funny, hand, background, war, soldier | 624 |
| 10 | AI’s Role in the Film and Entertainment Industry | film_movi (film/movie), act, creat (create), cost, industri (industry) | 514 |
| 11 | The AI Video Generation Arms Race (Veo vs. Kling) | veo, kling, creat (create), art, generat (generate) | 617 |
| 12 | AI’s Depiction of Surreal and Mythical Content | love, bigfoot, enjoy, lose, motion | 585 |
| 13 | Trust, News, and Technological Advancements in Media | camera, cooked, news, trust, technolog (technology) | 590 |
| 14 | Commercial and Business Use of AI Software | commerci (commercial), idea, softwar (software), studi (studio), peopl ai | 585 |
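The “Total Comments” column above reflects the standard LDA convention of assigning each comment to its single most probable topic. The following pure-Python sketch illustrates that assignment step; the document-topic probabilities are invented for illustration and do not reproduce the study’s actual model outputs.

```python
# Toy sketch: deriving per-topic comment counts from an LDA
# document-topic matrix by assigning each document (comment)
# to its highest-probability topic. All numbers are invented.

def dominant_topic(theta_row):
    """Index of the topic with the highest probability for one document."""
    return max(range(len(theta_row)), key=lambda k: theta_row[k])

def count_dominant_topics(theta, n_topics):
    """For each topic k, the number of documents whose dominant topic is k."""
    counts = [0] * n_topics
    for row in theta:
        counts[dominant_topic(row)] += 1
    return counts

# Three invented comments over three invented topics.
theta = [
    [0.70, 0.20, 0.10],  # clearly topic 0
    [0.15, 0.25, 0.60],  # clearly topic 2
    [0.10, 0.55, 0.35],  # clearly topic 1
]
print(count_dominant_topics(theta, 3))  # -> [1, 1, 1]
```

In a real pipeline the `theta` matrix would come from a fitted LDA model (e.g., its per-document topic distribution), and thresholding rather than a plain argmax is sometimes used for weakly assigned documents.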
| Topic | Titles | Coherence | Tokens | Exclusivity | Cosine_Dist |
|---|---|---|---|---|---|
| 0 | Technical Support and Global Access to AI Tools | 0.3733 | 17,952 | 0.8969 | 0.9791 |
| 1 | AI’s Impact on Reality, Jobs, and Human Creativity | 0.7912 | 21,735 | 0.6754 | 0.9697 |
| 2 | The Uncanny Nature of AI-Generated Content | 0.2980 | 16,142 | 0.9336 | 0.9872 |
| 3 | The ‘Will Smith Eating Spaghetti’ Meme Benchmark | 0.4166 | 15,565 | 0.9389 | 0.9862 |
| 4 | Belief, Influence, and the Cost of AI | 0.2609 | 15,620 | 0.9744 | 0.9837 |
| 5 | AI in Music, Advertising, and Entertainment | 0.3188 | 13,953 | 0.9143 | 0.9834 |
| 6 | Skepticism Towards AI Demonstrations (Google Veo) | 0.5436 | 16,484 | 0.7596 | 0.9728 |
| 7 | The Artificial Nature of AI and User Reactions | 0.1379 | 15,286 | 0.8807 | 0.9797 |
| 8 | Fear, Evidence, and Distrust in AI | 0.2216 | 14,859 | 0.8843 | 0.9829 |
| 9 | Absurd and Comical AI-Generated Scenarios | 0.8970 | 17,816 | 0.8296 | 0.9756 |
| 10 | AI’s Role in the Film and Entertainment Industry | 0.6488 | 30,320 | 0.6726 | 0.9713 |
| 11 | The AI Video Generation Arms Race (Veo vs. Kling) | 0.7379 | 34,003 | 0.8663 | 0.9553 |
| 12 | AI’s Depiction of Surreal and Mythical Content | 0.3302 | 13,576 | 0.8737 | 0.9851 |
| 13 | Trust, News, and Technological Advancements in Media | 0.5150 | 16,259 | 0.9052 | 0.9744 |
| 14 | Commercial and Business Use of AI Software | 0.3179 | 12,942 | 0.9430 | 0.9885 |
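Two of the diagnostics reported above, exclusivity and cosine distance between topic-word distributions, can be illustrated on a toy topic-word matrix. The functions below implement generic textbook versions of these measures, not necessarily the exact formulas used in this study, and the two-topic distributions are invented.

```python
import math

# Toy illustration of exclusivity (how much of a word's probability
# mass belongs to one topic) and cosine distance between topic-word
# vectors. Generic formulas; the tiny distributions are invented.

def exclusivity(topic_idx, word_idx, phi):
    """Fraction of a word's total probability mass that sits in one topic."""
    total = sum(row[word_idx] for row in phi)
    return phi[topic_idx][word_idx] / total

def cosine_distance(u, v):
    """1 minus the cosine similarity of two topic-word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

# Two invented topics over a four-word vocabulary (each row sums to 1).
phi = [
    [0.6, 0.2, 0.1, 0.1],  # topic 0: mass concentrated on word 0
    [0.1, 0.1, 0.4, 0.4],  # topic 1: mass concentrated on words 2 and 3
]

print(round(exclusivity(0, 0, phi), 3))           # -> 0.857 (word 0 belongs mostly to topic 0)
print(round(cosine_distance(phi[0], phi[1]), 3))  # -> 0.577 (topics are fairly distinct)
```

The near-1.0 cosine distances in the table above indicate well-separated topics in the fitted model, while exclusivity values near 1.0 mean a topic’s top words rarely appear among other topics’ top words.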
| Category | Rationale | Included Topics |
|---|---|---|
| Socio-Technical Systems and Platforms | This category consolidates topics focused on the AI technologies themselves—the platforms, tools, and the competitive dynamics of their development and accessibility. | 0, 6, 11, 14 |
| AI-Generated Content and Esthetics | This category groups topics that analyze the AI-generated media artifacts—their esthetic qualities, genre applications, cultural impact, and the audience’s interpretation of them. | 2, 3, 5, 7, 9, 10, 12 |
| Societal and Ethical Implications | This category encompasses the broad, societal consequences of generative AI, including debates on labor, truth, trust, fear, and the future of media. | 1, 4, 8, 13 |
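The consolidation in this table can be checked arithmetically against the per-topic comment counts reported earlier. The short sketch below aggregates those published counts by category; both the counts and the topic-to-category mapping are taken directly from the tables in this paper.

```python
# Aggregate the published per-topic comment counts into the three
# thematic categories, using the topic-to-category mapping above.

TOPIC_COUNTS = {
    0: 1064, 1: 929, 2: 1240, 3: 1652, 4: 524, 5: 681, 6: 395, 7: 670,
    8: 748, 9: 624, 10: 514, 11: 617, 12: 585, 13: 590, 14: 585,
}

CATEGORY_TOPICS = {
    "Socio-Technical Systems and Platforms": [0, 6, 11, 14],
    "AI-Generated Content and Esthetics": [2, 3, 5, 7, 9, 10, 12],
    "Societal and Ethical Implications": [1, 4, 8, 13],
}

def category_totals():
    """Total comments per thematic category."""
    return {cat: sum(TOPIC_COUNTS[t] for t in topics)
            for cat, topics in CATEGORY_TOPICS.items()}

for cat, total in category_totals().items():
    print(f"{cat}: {total}")
# Socio-Technical Systems and Platforms: 2661
# AI-Generated Content and Esthetics: 5966
# Societal and Ethical Implications: 2791
```

The esthetics category dominates with 5966 of the 11,418 classified comments, consistent with the paper’s finding that immediate reactions to the content itself are the most common form of discourse.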
ABC Model of Attitudes

| Thematic Category | Affective Dimension (Feelings and Emotions) | Behavioral Dimension (Actions and Intentions) | Cognitive Dimension (Beliefs and Judgments) |
|---|---|---|---|
| AI-Generated Content and Esthetics | Awe and Unease: The visceral, gut-level reaction to the content itself. Topic 2: The emotion of unease from the “uncanny valley.” Topic 9: The feeling of amusement from absurd glitches. Topic 12: The emotion of enjoyment from surreal and mythical content. | Curation and Creation: The active engagement with and use of AI media. Topic 3: The act of memetic participation using the ‘Will Smith’ benchmark. Topic 5: The practice of integrating AI into cultural products like music and ads. | Technical Critique: The analytical evaluation of the artifact’s quality and authenticity. Topic 7: The cognitive struggle to determine “what’s real.” Topic 10: The critical judgment on AI’s disruptive role in the film industry. |
| Socio-Technical Systems and Platforms | Anticipation and Skepticism: The emotional orientation toward the companies and tools. This is a less dominant dimension, often linked to cognitive beliefs about the platform. | Adoption and Navigation: The tangible actions related to using and accessing technology. Topic 0: The practical action of dealing with access barriers (VPNs, etc.). Topic 11: The behavior of comparison in the “arms race” between tools. Topic 14: The intended action of commercial and business use. | Strategic Evaluation: The formation of beliefs about platform motives and market dynamics. Topic 6: The skeptical judgment of curated corporate demos. |
| Societal and Ethical Implications | Fear and Anxiety: The deep-seated emotional response to AI’s potential societal harms. Topic 8: The raw emotion of fear that evidence can no longer be trusted. | Advocacy and Adaptation: The intentions that arise from ethical concerns, driving calls for new behaviors. While no single topic is purely behavioral, the cognitive beliefs below are the direct precursors to actions like calling for regulation or changing media consumption habits. | Worldview Formation: The core beliefs and judgments about AI’s fundamental impact. Topic 1: The belief about AI’s impact on jobs and reality. Topic 4: The judgment on AI’s power to influence and deceive. Topic 13: The cognitive conclusion that trust in news media is collapsing. |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Çalli, L.; Alma Çalli, B. Recoding Reality: A Case Study of YouTube Reactions to Generative AI Videos. Systems 2025, 13, 925. https://doi.org/10.3390/systems13100925