As artificial intelligence systems proliferate across critical societal domains, understanding the nature, patterns, and evolution of AI-related harms has become essential for effective governance. Despite growing incident repositories, systematic computational analysis of AI incident discourse remains limited, with prior research constrained by small samples, single-method approaches, and an absence of temporal analyses spanning major capability advances. This study addresses these gaps through a comprehensive multi-method text analysis of 3,494 AI incident records from the OECD AI Policy Observatory, spanning January 2014 through October 2024. Six complementary analytical approaches were applied: Latent Dirichlet Allocation (LDA) and Non-negative Matrix Factorization (NMF) topic modeling to discover thematic structures; K-Means and BERTopic clustering for pattern identification; VADER sentiment analysis to assess emotional framing; and LIWC psycholinguistic profiling to analyze cognitive and communicative dimensions. Cross-method comparison quantified the robustness of categorization across all four clustering and topic modeling approaches. Key findings reveal dramatic temporal shifts and systematic risk patterns. Incident reporting increased 4.6-fold following ChatGPT’s November 2022 release (from 12.0 to 95.9 monthly incidents), accompanied by a vocabulary shift from embodied AI terminology (facial recognition, autonomous vehicles) toward generative AI discourse (ChatGPT, hallucination, jailbreak). Six robust thematic categories emerged consistently across methods: autonomous vehicles (84–89% cross-method alignment), facial recognition (66–68%), deepfakes, ChatGPT/generative AI, social media platforms, and algorithmic bias. Risk concentration is pronounced: 49.7% of incidents fall within two harm categories (system safety, 29.1%; physical harms, 20.6%); private sector actors account for 70.3% of incidents; and 48% occur in the United States. Sentiment analysis reveals that physical safety incidents receive notably negative framing (autonomous vehicles: −0.077; child safety: −0.326), while coverage of policy and generative AI trends positive (+0.586 to +0.633). These findings have direct governance implications. The thematic concentration supports sector-specific regulatory frameworks: mandatory audit trails for hiring algorithms, simulation testing for autonomous vehicles, transparency requirements for recommender systems, accuracy standards for facial recognition, and output labeling for generative AI. Cross-method validation demonstrates which incident categories are robust enough for standardized regulatory classification and which require context-dependent treatment. The rapid emergence of generative AI incidents underscores the need for governance mechanisms that can respond to capability advances within months rather than years.
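The sketch below illustrates the kind of multi-method pipeline summarized above, not the authors' actual implementation: it runs LDA, NMF, and K-Means with six components (mirroring the six reported thematic categories) and VADER sentiment scoring over a small placeholder list of incident descriptions. The corpus, vectorizer settings, and parameter choices are illustrative assumptions; the BERTopic and LIWC steps are omitted because they depend on heavier or proprietary tooling.

```python
# Illustrative sketch (assumed pipeline, not the study's code): topic modeling,
# clustering, and VADER sentiment scoring over AI-incident descriptions.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation, NMF
from sklearn.cluster import KMeans
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# Placeholder incident texts, one per reported thematic category;
# the actual study analyzes 3,494 OECD AI Policy Observatory records.
docs = [
    "Autonomous vehicle failed to detect a pedestrian crossing in low light",
    "Facial recognition system misidentified a suspect, leading to wrongful arrest",
    "Deepfake video impersonated a political candidate before the election",
    "ChatGPT hallucination produced fabricated legal citations in a court filing",
    "Social media platform recommender amplified harmful content to teenagers",
    "Hiring algorithm showed bias against applicants from minority groups",
]

# Bag-of-words counts for LDA; TF-IDF weights for NMF and K-Means.
count_vec = CountVectorizer(stop_words="english")
tfidf_vec = TfidfVectorizer(stop_words="english")
X_counts = count_vec.fit_transform(docs)
X_tfidf = tfidf_vec.fit_transform(docs)

n_themes = 6  # mirrors the six robust thematic categories reported above
lda = LatentDirichletAllocation(n_components=n_themes, random_state=0).fit(X_counts)
nmf = NMF(n_components=n_themes, init="nndsvd", random_state=0).fit(X_tfidf)
km = KMeans(n_clusters=n_themes, n_init=10, random_state=0).fit(X_tfidf)

def top_terms(model, feature_names, n=5):
    """Return the n highest-weighted vocabulary terms for each topic/component."""
    return [[feature_names[i] for i in comp.argsort()[::-1][:n]]
            for comp in model.components_]

print("LDA topics:", top_terms(lda, count_vec.get_feature_names_out()))
print("NMF topics:", top_terms(nmf, tfidf_vec.get_feature_names_out()))
print("K-Means labels:", km.labels_.tolist())

# VADER compound scores (-1 = most negative, +1 = most positive) per incident,
# the same scale as the sentiment figures quoted in the abstract.
vader = SentimentIntensityAnalyzer()
print("Sentiment:", [vader.polarity_scores(d)["compound"] for d in docs])
```

Cross-method alignment of the kind reported above could then be estimated by comparing, for each document, the dominant LDA/NMF topic with its K-Means cluster assignment (e.g., via an agreement rate or adjusted Rand index), though the exact comparison metric used in the study is not specified here.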