Using KeyGraph and ChatGPT to Detect and Track Topics Related to AI Ethics in Media Outlets
Abstract
1. Introduction
2. Literature Review
2.1. AI Ethics Literature Review
2.2. Chance Discovery Theory
- Establishing and uncovering innovative models and variables: Incorporating contextual factors to identify emerging variables in specific situations, thereby improving relevance and accuracy.
- Identifying tail events: Detecting rare but high-impact occurrences through focused observation and analysis.
- Relying on human–AI interaction for interpretation and judgment: Leveraging human expertise to evaluate whether a tail event constitutes a genuine chance, given its rarity and inherent ambiguity.
2.3. Double Helix Model: Human–Machine Collaborative Framework for Chance Discovery
- Human-driven process: Setting analysis parameters by inputting the article dataset and initializing the KeyGraph algorithm. The researcher uses the Polaris visualization tool to configure initial parameters, such as the number of bridging nodes (red nodes) and high-frequency black keyword nodes (Phase 1 in Figure 1).
- Computer-driven process: Conducting data mining and constructing the keyword network. The system runs the KeyGraph algorithm, calculates keyword co-occurrence frequencies and structural relationships, and builds the network graph, thereby extracting latent knowledge structures from large-scale textual datasets (Phase 2 in Figure 1).
- Computer-driven process: The algorithm generates a network graph visualization, producing an interpretable structure that illustrates keyword relationships, with red nodes serving as potential bridges for semantic interpretation (Phase 3 in Figure 1).
- Human-driven process: Interpreting and refining results. The researcher evaluates ChatGPT’s topic detection and semantic interpretations based on the visualized graphs. If illogical results occur (e.g., “I beer”), Polaris parameters—such as the number of black or red nodes—are adjusted until coherent outputs emerge (e.g., “I love to drink beer”) (Phase 4 in Figure 1).
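The four phases above form an iterative human–machine loop. The following minimal sketch illustrates that loop's control flow; the function names, the stand-in mining step, and the coherence check are our own illustrative assumptions, not the Polaris interface or the study's actual implementation.

```python
# A minimal sketch of the four-phase double-helix loop (Phases 1-4 in
# Figure 1). The parameter names and the coherence check are
# illustrative assumptions, not the Polaris interface.
def run_keygraph(params, articles):
    # Phases 2-3 stand-in: pretend to mine the corpus and return a
    # network interpretation whose coverage grows with the node budget.
    return " ".join(articles[: params["black_nodes"]])

def human_judges_coherent(output):
    # Phase 4 stand-in: a human would inspect the visualized graph;
    # here we simply accept outputs with at least three keywords.
    return len(output.split()) >= 3

def double_helix(articles, black_nodes=1, red_nodes=1, max_rounds=10):
    params = {"black_nodes": black_nodes, "red_nodes": red_nodes}  # Phase 1
    for _ in range(max_rounds):
        output = run_keygraph(params, articles)                    # Phases 2-3
        if human_judges_coherent(output):                          # Phase 4
            return output, params
        params["black_nodes"] += 1  # widen the graph and iterate
    return output, params
```

In the paper's example, an incoherent reading such as "I beer" would fail the Phase 4 check, prompting a parameter adjustment until a coherent reading such as "I love to drink beer" emerges.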
2.4. KeyGraph Algorithm Overview
2.5. Research Gap and Contributions
- A longitudinal, cross-source corpus (2022–2024). Twenty-four authoritative reports support the comparison of stable versus shifting themes across years and sources.
- Operationalization via chance-anchored diffusion. We formalize semantic diffusion paths from chance (bridging) keywords to clustered high-frequency terms, producing cluster-level topic summaries grounded in the source texts.
- Dual-layer reliability checks. We combine expert-informed review (semantic logic, consistency, keyword coverage, inter-rater agreement) with cross-model semantic similarity. Summaries are independently generated by two LLMs, with sentence-level alignment measured using multiple embedding models. We also assess how structural complexity (single versus combined clusters) affects stability.
- From structure to governance. We link detected patterns to actionable AI-governance insights (e.g., bias and privacy risk chains, transparency and explainability needs, responsibility allocation, and implications of generative AI deployment).
2.6. Research Questions
- RQ1 (Effectiveness for topic detection). Can the integrated KeyGraph–LLM workflow deliver reliable topic detection—specifically, coherent and context-faithful cluster-level summaries—without relying on domain experts? Operationalization: Human evaluation of semantic logic, consistency, and keyword coverage with inter-rater agreement (e.g., Cohen’s κ), supplemented by convergence evidence from cross-model semantic similarity.
- RQ2 (Longitudinal thematic evolution). Across 2022–2024, what stable and shifting themes characterize AI ethics discourse, and how do chance (bridging) keywords reveal emerging or cross-cutting issues? Operationalization: Year-over-year analysis of KeyGraph-derived cluster structures and chance-anchored diffusion paths.
- RQ3 (Reliability vs. structural complexity). Given identical inputs and prompts, to what extent do two LLMs produce convergent topic interpretations, and does topic structure complexity (single versus combined clusters) systematically affect cross-model similarity? Operationalization: Sentence-level cosine similarity using multiple embedding models, with statistical tests for differences by cluster configuration.
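The RQ3 operationalization can be sketched in a few lines. In this hedged example, the toy vectors stand in for real sentence embeddings produced by an embedding model, and the aggregation (mean cosine similarity over aligned sentence pairs) is our assumption about a reasonable procedure, not the study's exact protocol.

```python
import numpy as np

# Sentence-level cosine similarity between two LLM summaries of the
# same cluster. Toy vectors stand in for real sentence embeddings;
# mean-over-aligned-pairs aggregation is an illustrative assumption.
def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def cross_model_similarity(emb_a, emb_b):
    """Mean cosine similarity over aligned sentence pairs."""
    return float(np.mean([cosine(a, b) for a, b in zip(emb_a, emb_b)]))

# Toy embeddings for two 2-sentence summaries of the same cluster.
summary_a = [np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
summary_b = [np.array([1.0, 0.2, 0.9]), np.array([0.1, 1.0, 1.0])]
score = cross_model_similarity(summary_a, summary_b)
```

Repeating this for single versus combined clusters yields the similarity distributions that the statistical tests in RQ3 compare.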
3. Methodology
3.1. Data Collection
3.2. Data Preprocessing
3.3. Construction of the Keyword Co-Occurrence Network
3.3.1. Chance Discovery in AI Ethics Using KeyGraph
- Keyword frequency and co-occurrence calculation: First, the occurrence frequency of all words in the articles is calculated and sorted. The top consecutive high-frequency words are selected as keywords, representing the core foundational concepts of the articles. Using paragraphs or sentences as the calculation units, the co-occurrence relationships between all keywords are computed and applied to establish connections.
- Node role classification and keyword clustering: Based on the frequency of keyword occurrences and their structural positions in the co-occurrence network, nodes are classified into three categories, which lays the foundation for chance discovery.
  - High-frequency keywords: Keywords with high occurrence frequency that are concentrated in specific topic clusters represent the primary concepts of the topics. In this study, these are consistently represented by high-frequency black nodes.
  - Chance keywords: These keywords (known as bridging words) have lower occurrence frequencies but are associated with multiple topic clusters. They typically indicate emerging concepts or interdisciplinary issues and are valuable for discovering latent topics. In this study, they are represented by red nodes.
  - General terms: Keywords lacking structural significance are excluded from the visualization network.
- Keyword co-occurrence network construction and thematic cluster identification: A keyword association graph is constructed with keywords as nodes and the co-occurrence strength as weighted edges. This method aggregates high-frequency terms and forms thematic clusters.
- Keyword network visualization: The nodes and links are visualized using tools (e.g., Polaris) which map co-occurrence relationships between keywords to construct their association network graphs. By adjusting parameters (e.g., frequency thresholds, co-occurrence strength, and the number of nodes), different levels of keyword structures are explored to enhance the understanding of potential keyword clusters and association pathways.
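The first two steps above (frequency-sorted keyword selection and sentence-level co-occurrence weighting) can be sketched compactly. The tiny corpus and the top-k threshold below are illustrative assumptions; KeyGraph's full node-role scoring (Appendix A) is not reproduced here.

```python
from collections import Counter
from itertools import combinations

# Illustrative corpus: each sentence is a tokenized calculation unit.
sentences = [
    ["ai", "ethics", "bias"],
    ["ai", "privacy", "risk"],
    ["ethics", "bias", "risk"],
    ["ai", "ethics", "risk"],
]

# Step 1: frequency-sorted terms; keep the top-k as keyword candidates
# (the high-frequency "black node" pool).
freq = Counter(w for s in sentences for w in s)
keywords = {w for w, _ in freq.most_common(4)}

# Step 2: weighted edges from within-sentence co-occurrence of keywords;
# edge weight = number of sentences in which the pair co-occurs.
edges = Counter()
for s in sentences:
    for a, b in combinations(sorted(set(s) & keywords), 2):
        edges[(a, b)] += 1
```

The resulting weighted edge list is the input that visualization tools such as Polaris render as the keyword association graph.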
3.3.2. Analysis of Keyword Network Node Density and Topic Detection Accuracy
3.4. Selection of High-Frequency Keyword Clusters
3.5. Employing ChatGPT for Topic Detection
3.5.1. Limitations of Previous Methods
- Topic cluster identification and core concept summarization: KeyGraph identifies high-frequency keywords in articles and designates them as high-frequency nodes (i.e., black nodes) in the keyword network structure. Based on the co-occurrence relationships between these keywords, tightly connected clusters naturally form, reflecting the primary themes or subdomains in articles. Researchers can summarize representative thematic labels based on the characteristics and co-occurrence patterns of keywords in each cluster, producing an initial thematic summary and classification of the core article content.
- Chance keyword identification and pairwise semantic relationship mining: The uniqueness of KeyGraph lies in its ability to identify chance keywords that, despite their low frequency, connect multiple thematic clusters. Although these keywords appear infrequently, they serve as bridging nodes linking thematic clusters in the keyword network. Researchers conduct in-depth analyses of these chance keywords by tracing their contextual usage back to the original articles, manually interpreting their semantic roles and how they connect with multiple thematic clusters. This process facilitates identifying emerging topics, interdisciplinary integration points, or potential trends.
- Topic summarization heavily relies on manual interpretation, resulting in subjectivity and inconsistency: Although traditional keyword network graphs can visually present co-occurrence relationships between high-frequency keywords, their semantic connections often lack systematic explanatory mechanisms, typically relying on researchers’ expertise and experience for semantic interpretation and topic detection. This process is time-consuming, labor-intensive, and prone to inconsistencies due to variations in interpreters’ knowledge, affecting the objectivity of topic summarization. These problems become pronounced when analyzing multiple articles or conducting comparative analyses over time.
- Limited ability to identify low-frequency, high-value keywords, making latent topic detection difficult: Traditional text mining methods using statistical frequency focus on topic clusters formed by high-frequency keywords, often overlooking low-frequency keywords and chance nodes that play bridging or transitional roles in the keyword structure. These low-frequency keywords often represent emerging concepts, topic intersections, or contextual shifts, holding significant value for uncovering latent research topics and policy chance information. However, traditional methods struggle to identify and interpret their semantic roles systematically, limiting the efficiency and usefulness of topic exploration.
- Difficulty tracking dynamic contexts hinders automating topic-evolution pattern analysis: When managing cross-temporal texts, such as AI ethics articles from 2022 to 2024, traditional keyword network analysis often requires a manual comparison of keyword structural changes at various time points and cannot effectively or automatically track how topic keywords undergo semantic shifts or experience topic merging and splitting as the context evolves. This limitation hinders researchers’ understanding and forecasting of topic evolution trajectories, resulting in analyses without the capacity to present temporal and dynamic characteristics.
- Visualization maps are challenging to convert into structured data for inference: Although keyword network graphs offer a high degree of visual intuitiveness and help reveal thematic contexts and lexical and relational structures in texts, their results are often presented as images. When the number of keyword nodes in topic clusters is high, the clarity and readability of these visuals significantly decrease, leading to blurred outcomes or difficulty in interpretation during advanced analyses (e.g., topic classification, semantic comparison, or cross-validation).
3.5.2. Technical Background: Semantic Comprehension and Topic Extraction in ChatGPT
- Comprehension of keyword network structures and semantic interpretation: ChatGPT tokenizes the input text, including the original AI ethics articles and translated descriptions of the KeyGraph keyword network structure, and processes it via its multi-layer transformer model for deep syntactic and semantic analyses. The built-in attention mechanism in ChatGPT accurately captures complex relationships between tokens and their contextual meaning, constructing a comprehensive, detailed semantic representation. This approach enables the model to understand the meaning of individual tokens and their positions and roles in the keyword network.
- Topic identification: The model identifies frequently recurring keywords and their semantic relationships in the text, grouping them into coherent thematic clusters. Notably, ChatGPT applies its strong contextual reasoning to generate semantically complete and representative thematic descriptions, facilitating the discovery of core concepts in the network structure.
- Semantic interpretation and text summarization: ChatGPT extracts critical insight from text based on semantic logic and generates contextually coherent and concise summaries. Researchers can control the content and length of these summaries using precise prompt engineering (e.g., restricting the summary to the imported text) to meet specific analytical requirements. This control considerably enhances the efficiency of extracting insight from complex network graphs [75,76].
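The prompt-engineering control described above can be illustrated with a cluster-to-prompt builder. The wording, field names, and parameters below are our own assumptions for a sketch, not the study's actual prompt; in practice the resulting string would be sent to ChatGPT together with the source articles.

```python
# Hypothetical prompt builder: translates one KeyGraph cluster into a
# constrained summarization prompt that restricts the model to the
# imported text. Wording and parameters are illustrative assumptions.
def build_cluster_prompt(cluster_id, black_nodes, red_nodes, max_sentences=3):
    return (
        f"Cluster {cluster_id} from a KeyGraph keyword network.\n"
        f"High-frequency (black) keywords: {', '.join(black_nodes)}.\n"
        f"Chance (red, bridging) keywords: {', '.join(red_nodes)}.\n"
        f"Summarize the topic of this cluster in at most {max_sentences} "
        "sentences, using only the keywords and source text provided; "
        "do not introduce outside information."
    )

prompt = build_cluster_prompt(
    "A-1",
    black_nodes=["driver", "vehicles", "autonomous", "task"],
    red_nodes=["automaker"],
)
```

Constraining the summary to the imported text in this way is what grounds the cluster-level summaries in the source articles rather than in the model's background knowledge.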
3.5.3. Method: Integrating KeyGraph and ChatGPT for Topic Detection
3.6. R1 Semantic Diffusion Path
4. Result Analysis
4.1. Yearly Analysis of Topic Evolution and Keyword Structures (2022–2024)
- Cluster A-1: The semantic cluster around the red node automaker focuses on the implementation of autonomous driving technology and the ethical challenges faced by AI in automotive applications. This red node extends through its connection to self to include the keywords based, car, driver, vehicles, and autonomous, outlining application scenarios involving HCI. The keywords driver, task, and autonomous intertwine, reflecting issues of responsibility allocation and control authority. In situations where automated and manual control are combined, the attribution of responsibility for accidents (whether borne by the driver or system) requires further clarification via regulatory frameworks and technical design. Furthermore, task transparency and the interface design are also critical. For example, whether drivers can quickly grasp the operational status and decision rationale of the system directly affects their safety judgments and behavioral responses. Establishing trust and risk perception cannot be overlooked. An insufficient HCI design and information transmission may cause excessive driver trust or erroneous reliance, increasing safety risks. Overall, the keyword structure emphasizes several topics, including the behavior prediction of autonomous technologies, system safety, and user responsibility attribution.
- Cluster A-2: The red node behavior forms a keyword network related to AI risk prediction, system deployment, and ethical practices. Through its connections to consequences, the network gradually expands to include the keywords risk, privacy, discrimination, design, and capabilities, reflecting the multifaceted and uncertain outcomes of AI system behavior. Notably, discrimination is intertwined with risk, indicating that failure to address data sources and algorithmic bias properly in real-world applications may reinforce existing societal inequalities and trigger ethical crises of systemic discrimination. The association between design and foundational highlights the need to judiciously consider fundamental principles and ethical values during the initial stages of AI development. Overall, this cluster maps the potential externalities that may arise during AI deployment, emphasizing that developers must assume the corresponding responsibility for the potential social and ethical consequences of system behavior.
- Cluster A-3: The keyword network extended from the red node statistical focuses on the computational logic and algorithmic architecture of AI systems. The strong co-occurrence relationships, with the keywords computational, learning, machine, critical, and implementation, reveal core problems, including statistical biases, risk governance, and explainability in current AI technology. The direct and indirect connections between the keywords issues, concerns, ethical, implementation, and critical reflect that AI ethics is not merely a conceptual discussion but is involved in the development, design, and deployment stages of AI systems. Furthermore, the connections emphasize that the realization of AI ethics must integrate value judgments and ethical norms as essential foundations for technical practice. This cluster demonstrates the role of ethical issues in institutional frameworks, industrial applications, and technical design, indicating that ethical practice has become a critical factor that cannot be overlooked in the development of responsible technology.
- Cluster A-4: The keyword network constructed around the red node dignity focuses on human rights protection and ethical principles. This red node displays high co-occurrence frequencies with the keyword responsibility, trust, justice, transparency, principles, and ethics, reflecting that current AI technology developers should assume the corresponding moral responsibilities to avoid problems (e.g., bias, discrimination, and structural inequality). Ensuring the transparency of algorithms and data processing allows users to understand the decision-making logic and behavioral patterns of AI systems, safeguarding human dignity and fundamental rights. The connections between justice, guidelines, and harm highlight the necessity of designing AI ethical frameworks and indicate that the lack of appropriate ethical judgment and operational guidance may harm individuals or society, causing discrimination or unfairness. Overall, this cluster focuses on protecting human rights and strengthening ethical norms and institutional justice as core principles, constructing an AI governance mechanism characterized by social legitimacy and long-term trust.
- Combined cluster of A-2 and A-3: The keyword network reveals a significant intersection and complementary structure, highlighting the dual technological and societal dimensions of AI ethics issues. Through red chance nodes, including bias, risk, understand, issues, and implementation, cross-cluster bridging nodes emerge, uncovering a risk propagation chain that spans from statistical logic to behavioral consequences. Bias often originates from flaws in algorithm design and training data and further permeates the societal domain after system deployment, leading to concrete and potentially escalating ethical consequences. This analysis indicates that AI ethics challenges must be examined from an integrated, multi-layered perspective spanning technical construction and societal influence. Accordingly, ethical practice in AI should focus on identifying and mitigating potential ethical risks during the early stages of technological development (e.g., data preprocessing and model training). A comprehensive ethics governance framework encompassing bias detection, transparency enhancement, and regulatory mechanisms must be promoted to ensure responsible and sustainable AI applications.
- Combined cluster of A-2 and A-4: The analysis reveals that AI behavior must be guided and constrained by ethical principles to prevent harm to human dignity and privacy, enabling the deployment of trustworthy and responsible AI. The behavioral logic of AI systems should be grounded in human rights protection and ethical values, with corresponding regulations (e.g., bias detection and privacy protection standards) introduced during the early design stages to ensure legitimacy and credibility during deployment. The consequences of AI behavior (e.g., bias and privacy infringement) must be directed by ethical principles and implemented through technical practice. This interactive relationship emphasizes that ethics should not be treated as an external constraint to technology but as an internal structure embedded throughout the life cycle of AI design, development, and application, advancing responsible and human-centered AI development. This perspective aligns with the discourse in the 2022 AI ethics articles, including The 2022 AI Index: AI’s Ethical Growing Pains and AI Ethics and AI Law: Grappling with Overlapping and Conflicting Ethical Factors Within AI, and identifies the integration of bias management and privacy protection into a unified ethical framework as an emerging research chance.
- Combined cluster of A-2, A-3, and A-4: The semantic co-construction of these three clusters reveals that AI ethics challenges cannot be viewed as problems confined to a single level. The behavioral risks of AI systems (e.g., technical bias, discriminatory outcomes, and privacy infringement) are closely linked to their underlying statistical construction logic, indicating that once deployed, AI may produce irreversible and substantive ethical consequences. If such consequences are not addressed through institutionalized ethical safeguards that ensure prevention and accountability, AI technology risks losing social trust and legitimacy. Moreover, ethical AI practice must adopt a cross-level integration approach to address these challenges, spanning from model training and system deployment to institutional regulation, constructing a full-process ethical governance framework based on the triad of technology, behavior, and values. This structure is critical for preserving human dignity and developing trustworthy and responsible AI.
- Cluster B-1: With trained as the primary node, the network extends to the keyword data and further expands to the keyword models, privacy, and customer, reflecting early-stage concerns in AI development regarding the legitimacy of data sources and the protection of user information. The node models branch out to include intelligence, ChatGPT, generative, and bias, indicating attention to the algorithmic biases embedded in generative AI models (e.g., ChatGPT). The bidirectional links between privacy, customer, and system highlight ethical considerations regarding user privacy and data security in AI application contexts. The connections between system and the keywords customer, create, and generative reveal the interplay between system design and generative technology in practice, raising concerns about technological transparency and ethical accountability. The keyword artificial is linked to intelligence, lead, and ChatGPT, forming a semantic structure centered on AI model generation and leadership in application. This cluster reveals deep ethical concerns related to the legitimacy of data usage, model bias, privacy protection, and user participation during the training and deployment phases of AI systems.
- Cluster B-2: The primary node develop connects with systems and human, revealing the bidirectional relationship of HCI in technological construction. Systems further expands to make and decisions, reflecting the role AI systems play in decision-making processes. Decisions links to making, humans, and believe, forming a cluster centered on how AI decision making influences human beliefs. Technology co-occurs with the terms ethics and concerns, indicating heightened attention to ethical regulations and institutional policies during AI development. Through the node concerns, the keyword ethics connects to potential, business, and responsibility, outlining the importance businesses place on ethical risks and responsibilities when applying AI technology. Overall, this semantic group illustrates the institutional and ethical challenges faced during AI development, emphasizing the importance of bias governance, technical regulation, and establishing user trust.
- Cluster B-3: With misuse as the red node, the initial connection to government further extends to industry and society, forming a semantic cluster focusing on institutional roles. The node industry links to insurance, which connects to using, policy, and responsible, highlighting an ethical discourse focused on risk transfer mechanisms and institutional responsibility. The keyword policy is a central node connecting responsible, insurance, and using, indicating that policy should address AI misuse risks via clear responsibility allocation, technical application guidelines, and industry-level risk management, especially concerning privacy protection and social impact. The keywords ethical, ensure, responsible, and using are closely interlinked, underscoring that ethical principles must be embedded in technical usage and institutional regulation. These principles, when supported by accountability structures and protective measures, can mitigate risks of misuse, particularly in areas related to data privacy and societal consequences. The connection between impact and society further indicates the potential and far-reaching effects of technological misuse on social structures. Overall, this semantic cluster illustrates that AI ethical principles should be integrated into institutional design and technological application processes and that clear accountability and regulatory mechanisms are critical for reducing the potential negative influences of AI misuse on societal systems.
- Combined cluster of B-1 and B-2: These two clusters, centered on the red node machine, focus on model training and system development, respectively, revealing, through the lens of practical application, the crucial ethical challenges spanning the AI life cycle, from data training and system development to deployment. Both clusters emphasize data ethics (e.g., privacy and bias) and the governance of potential negative influences of AI systems on society and humanity, including decision-making influence and responsibility attribution. Together, these semantic clusters reveal that the core of AI ethics lies in the technology itself and, more critically, in the processes of interaction between AI, humans, and society, particularly regarding risk management and the realization of accountability. The clusters collectively emphasize that achieving a vision of AI development that balances innovation and responsibility requires the parallel construction of responsible governance mechanisms throughout the innovation process.
- Combined cluster of B-2 and B-3: In the KeyGraph keyword network, Clusters 2 and 3 are centered on the keywords develop and misuse, respectively, illustrating an ethical link from AI technology development to its potential misuse. The keyword structures revealed by these two clusters reflect that AI ethics challenges originate from individual acts of technical development and extend across broader societal institutions and governance dimensions. The ethical risks posed by AI technology can be effectively addressed only by constructing an integrated accountability framework encompassing development, deployment, and misuse prevention, ensuring that advancement contributes to positive and sustainable social value.
- Combined cluster of B-1, B-2, and B-3: These three semantic clusters correspond to three stages of AI ethical risk (model training, system development, and actual misuse with its social impact), forming a progressive chain from ethical considerations to governance responses. The keyword structures reveal a trajectory that begins with micro-level concerns, including data bias and generative misinformation, and extends to challenges of decision-making and ethical design during the development process. The structures indicate misuse risks and governance responsibility at the societal level. This progression reflects that AI ethics issues are not isolated incidents but constitute a foreseeable and preventable chain of ethical risks. An integrated ethical framework must be established that encompasses data governance, technical design, and misuse prevention, enabling the realization of an AI development vision guided by social values to address multi-level challenges (e.g., bias, manipulation, and misuse).
- Cluster C-1: With media as the red node, the network connects to social, which links to content, genAI, and used, revealing that generative AI technology has been widely integrated into social platforms and public communication spaces. The connection between social and ethical, further extending to risks and then deployment, challenges, technology, and responsible, indicates that societal concerns have shifted beyond technical applications to the ethical risks and responsibility attribution involved in deployment processes, especially regarding misinformation, information manipulation, and bias problems arising in social media environments. The keywords digital, technology, innovation, industry, and development converge at the nodes essential, become, and important, demonstrating that generative AI has become a core driving force behind contemporary digital innovation and industrial transformation, with its ethical challenges escalating into systemic problems. Overall, this semantic cluster highlights that AI ethics attention has moved toward ethical challenges triggered by the application of generative AI in social and media contexts, emphasizing the importance of responsible technological deployment in these settings.
- Cluster C-2: The red node security co-occurs with technologies and extends to data, models, and training, forming a semantic cluster. The keyword biases forms a triangular co-occurrence structure with these three terms, indicating that data sources and processing methods underpin AI system security, and that biases hidden within training data influence model behavior, representing the intersection of ethics and security. Transparency and privacy connect through technologies and further co-occur with regulatory, reflecting that AI ethics discourse has reached institutional dimensions and emphasizing the reliance on and necessity of regulatory mechanisms for system transparency and privacy protection. Via ensure and tools, the keyword decision links to generative and ChatGPT and is associated with businesses and trust, revealing the critical role of explainability and trust mechanisms in generative AI decision-making processes in corporate and societal applications. Overall, this keyword network reveals the interdisciplinary interconnection of AI ethics issues in 2024, providing a structured analytical perspective for technology development, policy regulation, and industry practice.
- Combined cluster of C-1 and C-2: The integration of these two clusters highlights the increasingly multi-layered and interdisciplinary complexity of AI ethics issues in 2024. The AI ethical themes are no longer confined to a single domain but require addressing systemic governance challenges while promoting AI development, especially generative AI, in the fields of digital content and media. These challenges include core concerns such as data privacy, algorithmic bias, social trust, lack of transparency, and regulatory compliance. The importance of achieving trustworthy and responsible AI governance via the collaborative operation of social and technical dimensions is emphasized, with collective responsibility shared by developers, businesses, policymakers, and civil society. The current AI ethical frameworks must be established under risk contexts characterized by uncertainty and by risks that remain undiscovered or unknown, guiding AI development toward a more legitimate and sustainable future.
4.2. Integrative Analysis and Trend Summary
4.3. Reliability Enhancement and Bias Mitigation via Cross-Model Validation
5. Discussion
5.1. Year 2022: Comparative Interpretation
5.2. Year 2023: Comparative Interpretation
5.3. Year 2024: Comparative Interpretation
5.4. Cross-Year Synthesis
6. Conclusions
7. Limitations
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
| AI | Artificial Intelligence |
| HCI | Human–Computer Interaction |
| LDA | Latent Dirichlet Allocation |
| LLM | Large Language Model |
| SDGs | Sustainable Development Goals |
Appendix A. KeyGraph Algorithm
1. Data preprocessing:
2. High-frequency keyword extraction:
3. Calculation of keyword network co-occurrence:
4. Co-occurrence between keywords and keyword clusters:
5. Calculating the co-occurrence potential of all keywords in a cluster:
6. Evaluation of keyword potential across clusters:
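Steps 5 and 6 can be sketched as follows. This is a rough reading, in our own notation, of Ohsawa et al.'s KeyGraph formulation (the based/neighbors/key scores); function names are ours, and details such as normalization and tie-breaking may differ from the original algorithm.

```python
# Rough sketch of KeyGraph steps 5-6: the co-occurrence potential of a
# word toward one cluster, and the key(w) score aggregated across
# clusters. High key(w) marks candidate chance (bridging) keywords.
def count(word, sentence):
    return sentence.count(word)

def cluster_count(cluster, word, sentence):
    # occurrences of cluster words in the sentence, excluding `word`
    return sum(count(w, sentence) for w in cluster if w != word)

def based(word, cluster, sentences):
    # step 5: co-occurrence potential of `word` toward one cluster
    return sum(count(word, s) * cluster_count(cluster, word, s)
               for s in sentences)

def neighbors(cluster, sentences):
    # total co-occurrence mass around the cluster (normalizer)
    return sum(count(w, s) * cluster_count(cluster, w, s)
               for s in sentences for w in set(s))

def key(word, clusters, sentences):
    # step 6: potential across all clusters
    prod = 1.0
    for g in clusters:
        n = neighbors(g, sentences)
        if n:
            prod *= 1.0 - based(word, g, sentences) / n
    return 1.0 - prod
```

On a toy corpus where a rare word co-occurs with two otherwise separate clusters, key(w) ranks that bridging word above words confined to a single cluster, which is exactly the behavior used to nominate red (chance) nodes.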
References
- Shetty, D.K.; Arjunan, R.V.; Cenitta, D.; Makkithaya, K.; Hegde, N.V.; Bhatta, B.S.R.; Salu, S.; Aishwarya, T.R.; Bhat, P.; Pullela, P.K. Analyzing AI Regulation through Literature and Current Trends. J. Open Innov. Technol. Mark. Complex. 2025, 11, 100508.
- Tallberg, J.; Lundgren, M.; Geith, J. AI Regulation in the European Union: Examining Non-State Actor Preferences. Bus. Politics 2024, 26, 218–239.
- Ong, J.C.L.; Chang, S.Y.; William, W.; Butte, A.J.; Shah, N.H.; Chew, L.S.T.; Liu, N.; Doshi-Velez, F.; Lu, W.; Savulescu, J.; et al. Ethical and Regulatory Challenges of Large Language Models in Medicine. Lancet Digit. Health 2024, 6, e428–e432.
- Huang, C.; Zhang, Z.; Mao, B.; Yao, X. An Overview of Artificial Intelligence Ethics. IEEE Trans. Artif. Intell. 2023, 4, 799–819.
- Tabassum, A.; Elmahjub, E.; Padela, A.I.; Zwitter, A.; Qadir, J. Generative AI and the Metaverse: A Scoping Review of Ethical and Legal Challenges. IEEE Open J. Comput. Soc. 2025, 6, 348–359.
- Taeihagh, A. Governance of Generative AI. Policy Soc. 2025, 44, 1–22.
- Morley, J.; Elhalal, A.; Garcia, F.; Kinsey, L.; Mökander, J.; Floridi, L. Ethics as a Service: A Pragmatic Operationalisation of AI Ethics. Minds Mach. 2021, 31, 239–256.
- Mittelstadt, B.D. Principles Alone Cannot Guarantee Ethical AI. Nat. Mach. Intell. 2019, 1, 501–507.
- Cath, C.; Wachter, S.; Mittelstadt, B.; Taddeo, M.; Floridi, L. Artificial Intelligence and the ‘Good Society’: The US, EU, and UK Approach. Sci. Eng. Ethics 2018, 24, 505–528.
- Ohsawa, Y.; McBurney, P. Chance Discovery; Springer: Berlin/Heidelberg, Germany, 2003.
- Blei, D.M.; Ng, A.Y.; Jordan, M.I. Latent Dirichlet Allocation. J. Mach. Learn. Res. 2003, 3, 993–1022.
- Vayansky, I.; Kumar, S.A.P. A Review of Topic Modeling Methods. Inf. Syst. 2020, 94, 101582.
- Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the NAACL-HLT 2019, Minneapolis, MN, USA, 2–7 June 2019; pp. 4171–4186.
- Sayyadi, H.; Raschid, L. A Graph Analytical Approach for Topic Detection. ACM Trans. Internet Technol. 2013, 13, 1–23.
- Hayashi, T.; Ohsawa, Y. Information Retrieval System and Knowledge Base on Diseases Using Variables and Contexts in the Texts. Procedia Comput. Sci. 2019, 159, 1662–1669.
- Wang, J.; Lai, J.Y.; Lin, Y.H. Social Media Analytics for Mining Customer Complaints to Explore Product Opportunities. Comput. Ind. Eng. 2023, 178, 109104.
- Guler, N.; Kirshner, S.N.; Vidgen, R. A Literature Review of Artificial Intelligence Research in Business and Management Using Machine Learning and ChatGPT. Data Inf. Manag. 2024, 8, 100076. [Google Scholar] [CrossRef]
- Nissen, H.E. Using Double Helix Relationships to Understand and Change Information Systems. Informing Sci. Int. J. Emerg. Transdiscip. J. 2007, 10, 21–62. [Google Scholar]
- Chechkin, A.; Pleshakova, E.; Gataullin, S. A Hybrid KAN-BiLSTM Transformer with Multi-Domain Dynamic Attention Model for Cybersecurity. Technologies 2025, 13, 223. [Google Scholar] [CrossRef]
- Fjeld, J.; Achten, N.; Hilligoss, H.; Nagy, A.; Srikumar, M. Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI. Berkman Klein Cent. Res. Publ. 2020, 2020, 39. [Google Scholar] [CrossRef]
- Khan, A.A.; Badshah, S.; Liang, P.; Waseem, M.; Khan, B.; Ahmad, A.; Fahmideh, M.; Niazi, M.; Akbar, M.A. Ethics of AI: A Systematic Literature Review of Principles and Challenges. In Proceedings of the 26th International Conference on Evaluation and Assessment in Software Engineering (EASE 2022), Gothenburg, Sweden, 13–15 June 2022; pp. 383–392. [Google Scholar]
- Kirova, V.D.; Ku, C.S.; Laracy, J.R.; Marlowe, T.J. The ethics of artificial intelligence in the era of generative AI. J. Syst. Cybern. Inform. 2023, 21, 42–50. [Google Scholar] [CrossRef]
- De Fine Licht, K. Resolving Value Conflicts in Public AI Governance: A Procedural Justice Framework. Gov. Inf. Q. 2025, 42, 102033. [Google Scholar] [CrossRef]
- Fruchter, R.; Ohsawa, Y.; Matsumura, N. Knowledge Reuse through Chance Discovery from an Enterprise Design-Build Enterprise Data Store. New Math. Nat. Comput. 2005, 1, 393–406. [Google Scholar] [CrossRef]
- Kieslich, K.; Keller, B.; Starke, C. Artificial intelligence ethics by design. Evaluating public perception on the importance of ethical design principles of artificial intelligence. Big Data Soc. 2022, 9, 20539517221092956. [Google Scholar] [CrossRef]
- Inglada Galiana, L.; Corral Gudino, L.; Miramontes González, P. Ethics and artificial intelligence. Rev. Clin. Esp. 2024, 224, 178–186. [Google Scholar] [CrossRef]
- Buolamwini, J.; Gebru, T. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency, New York, NY, USA, 23–24 February 2018; Friedler, S.A., Wilson, C., Eds.; PMLR: New York, NY, USA, 2018; Volume 81, pp. 77–91. [Google Scholar]
- Njiru, D.K.; Mugo, D.M.; Musyoka, F.M. Ethical considerations in AI-based user profiling for knowledge management: A critical review. Telemat. Inform. Rep. 2025, 18, 100205. [Google Scholar] [CrossRef]
- Heilinger, J.C. The Ethics of AI Ethics. A Constructive Critique. Philos. Technol. 2022, 35, 61. [Google Scholar] [CrossRef]
- Luomala, M.; Naarmala, J.; Tuomi, V. Technology-Assisted Literature Reviews with Technology of Artificial Intelligence: Ethical and Credibility Challenges. Procedia Comput. Sci. 2025, 256, 378–387. [Google Scholar] [CrossRef]
- Hermansyah, M.; Najib, A.; Farida, A.; Sacipto, R.; Rintyarna, B.S. Artificial intelligence and ethics: Building an artificial intelligence system that ensures privacy and social justice. Int. J. Sci. Soc. 2023, 5, 154–168. [Google Scholar] [CrossRef]
- Chen, F.; Zhou, J.; Holzinger, A.; Fleischmann, K.R.; Stumpf, S. Artificial Intelligence Ethics and Trust: From Principles to Practice. IEEE Intell. Syst. 2023, 38, 5–8. [Google Scholar] [CrossRef]
- Gupta, A.; Raj, A.; Puri, M.; Gangrade, J. Ethical Considerations in the Deployment of AI. J. Propul. Technol. 2024, 45, 1001–4055. [Google Scholar]
- Ohsawa, Y. Data Crystallization: A Project beyond Chance Discovery for Discovering Unobservable Events. In Proceedings of the 2005 IEEE International Conference on Granular Computing, Beijing, China, 25–27 July 2005; IEEE: Piscataway, NJ, USA, 2005; Volume 1, pp. 51–56. [Google Scholar]
- Holzinger, A. Human-Computer Interaction and Knowledge Discovery (HCI-KDD): What Is the Benefit of Bringing Those Two Fields to Work Together? In Availability, Reliability, and Security in Information Systems and HCI; Cuzzocrea, A., Kittl, C., Simos, D.E., Weippl, E., Xu, L., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8127. [Google Scholar]
- Ohsawa, Y.; Fukuda, H. Chance discovery by stimulated groups of people: Application to understanding consumption of rare food. J. Contingencies Crisis Manag. 2002, 10, 129–138. [Google Scholar] [CrossRef]
- Ko, N.; Jeong, B.; Choi, S.; Yoon, J. Identifying Product Opportunities Using Social Media Mining: Application of Topic Modeling and Chance Discovery Theory. IEEE Access 2018, 6, 1680–1693. [Google Scholar] [CrossRef]
- Ohsawa, Y. Chance Discoveries for Making Decisions in Complex Real World. New Gener. Comput. 2002, 20, 143–163. [Google Scholar] [CrossRef]
- Ohsawa, Y.; Nishihara, Y. Innovators’ Marketplace: Using Games to Activate and Train Innovators; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
- Ho, T.B.; Nguyen, D.D. Chance Discovery and Learning Minority Classes. New Gener. Comput. 2003, 21, 149–161. [Google Scholar] [CrossRef]
- Ohsawa, Y.; Tsumoto, S. Chance Discoveries in Real World Decision Making: Data-Based Interaction of Human Intelligence and Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 2006; Volume 30. [Google Scholar]
- Ohsawa, Y.; Nara, Y. Modeling the Process of Chance Discovery by Chance Discovery on Double Helix. In Proceedings of the AAAI Fall Symposium on Chance Discovery, North Falmouth, MA, USA, 15–17 November 2002; AAAI Press: Arlington, VA, USA, 2002; pp. 33–40. [Google Scholar]
- Wang, H.; Ohsawa, Y.; Nishihara, Y. Innovation Support System for Creative Product Design Based on Chance Discovery. Expert. Syst. Appl. 2012, 39, 4890–4897. [Google Scholar] [CrossRef]
- Wang, H.; Ohsawa, Y. Idea discovery: A Scenario-Based Systematic Approach for Decision Making in Market Innovation. Expert. Syst. Appl. 2013, 40, 429–438. [Google Scholar] [CrossRef]
- Yang, S.; Sun, Q.; Zhou, H.; Gong, Z.; Zhou, Y.; Huang, J. A Topic Detection Method Based on KeyGraph and Community Partition. In Proceedings of the 2018 International Conference on Computing and Artificial Intelligence (ICCAI 2018), Chengdu, China, 12–14 May 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 30–34. [Google Scholar]
- Ohsawa, Y. KeyGraph as Risk Explorer in Earthquake–Sequence. J. Contingencies Crisis Manag. 2002, 10, 119–128. [Google Scholar] [CrossRef]
- Ohsawa, Y.; Benson, N.E.; Yachida, M. KeyGraph: Automatic indexing by co-occurrence graph based on building construction metaphor. In Proceedings of the IEEE International Forum on Research and Technology Advances in Digital Libraries, Santa Barbara, CA, USA, 22–24 April 1998; pp. 12–18. [Google Scholar]
- Wanchia, K.; Yufei, J.; Hsinchun, Y. Discovering Emerging Financial Technological Chances of Investment Management in China via Patent Data. Int. J. Bus. Econ. Aff. 2020, 5, 1–8. [Google Scholar] [CrossRef]
- Geum, Y.; Kim, M. How to Identify Promising Chances for Technological Innovation: Keygraph-Based Patent Analysis. Adv. Eng. Inform. 2020, 46, 101155. [Google Scholar] [CrossRef]
- Sakakibara, T.; Ohsawa, Y. Gradual-Increase Extraction of Target Baskets as Preprocess for Visualizing Simplified Scenario Maps by KeyGraph. Soft Comput. 2007, 11, 783–790. [Google Scholar] [CrossRef]
- Kim, K.-J.; Jung, M.-C.; Cho, S.-B. KeyGraph-Based Chance Discovery for Mobile Contents Management System. Int. J. Knowl. Based Intell. Eng. Syst. 2007, 11, 313–320. [Google Scholar] [CrossRef]
- Perera, K.; Karunarathne, D. KeyGraph and WordNet Hypernyms for Topic Detection. In Proceedings of the 2015 12th International Joint Conference on Computer Science and Software Engineering (JCSSE), Chonburi, Thailand, 22–24 July 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 303–308. [Google Scholar]
- Beliga, S.; Meštrović, A.; Martinčić-Ipšić, S. An overview of graph-based keyword extraction methods and approaches. J. Inf. Organ. Sci. 2015, 39, 1–20. [Google Scholar]
- Pan, R.C.; Hong, C.F.; Huang, N.; Hsu, F.C.; Wang, L.H.; Chi, T.H. One-Scan KeyGraph Implementation. In Proceedings of the 3rd Conference on Evolutionary Computation Applications & 2005 International Workshop on Chance Discovery, Taichung, Taiwan, 3 December 2005. [Google Scholar]
- Nezu, Y.; Miura, Y. Extracting Keywords on SNS by Successive KeyGraph. In Proceedings of the 2020 IEEE 32nd International Conference on Tools with Artificial Intelligence (ICTAI), Baltimore, MD, USA, 9–11 November 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 997–1003. [Google Scholar]
- Manning, C.D.; Raghavan, P.; Schütze, H. Introduction to Information Retrieval; Cambridge University Press: Cambridge, UK, 2008. [Google Scholar]
- Liu, B. Sentiment Analysis and Opinion Mining. Synth. Lect. Hum. Lang. Technol. 2012, 5, 1–167. [Google Scholar] [CrossRef]
- Jobin, A.; Ienca, M.; Vayena, E. The Global Landscape of AI Ethics Guidelines. Nat. Mach. Intell. 2019, 1, 389–399. [Google Scholar] [CrossRef]
- Mittelstadt, B.; Allo, P.; Taddeo, M.; Wachter, S.; Floridi, L. The Ethics of Algorithms: Mapping the Debate. Big Data Soc. 2016, 3, 1–21. [Google Scholar] [CrossRef]
- Dignum, V. Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. In Artificial Intelligence: Foundations, Theory, and Algorithms; Springer: Cham, Switzerland, 2019. [Google Scholar]
- Okazaki, N.; Ohsawa, Y. Polaris: An Integrated Data Miner for Chance Discovery. In Proceedings of the Third International Workshop on Chance Discovery and Its Management, Crete, Greece, 22–27 June 2003. [Google Scholar]
- Sayyadi, H.; Hurst, M.; Maykov, A. Event Detection and Tracking in Social Streams. Proc. Int. AAAI Conf. Web Soc. Media 2009, 3, 311–314. [Google Scholar] [CrossRef]
- Jo, Y.; Lagoze, C.; Giles, C.L. Detecting Research Topics via the Correlation between Graphs and Texts. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’07), San Jose, CA, USA, 12–15 August 2007; ACM: New York, NY, USA, 2007; pp. 370–379. [Google Scholar]
- Lozano, S.; Calzada-Infante, L.; Adenso-Díaz, B.; García, S. Complex Network Analysis of Keywords Co-Occurrence in the Recent Efficiency Analysis Literature. Scientometrics 2019, 120, 609–629. [Google Scholar] [CrossRef]
- Zhou, Z.; Zou, X.; Lv, X.; Hu, J. Research on Weighted Complex Network Based Keywords Extraction. In Proceedings of the 7th International Conference on Advanced Data Mining and Applications (ADMA 2013), Wuhan, China, 14–16 May 2013; Lecture Notes in Computer Science. Springer: Berlin/Heidelberg, Germany, 2013; Volume 8229, pp. 442–452. [Google Scholar]
- Grimmer, J.; Stewart, B.M. Text as data: The promise and pitfalls of automatic content analysis methods for political texts. Polit. Anal. 2013, 21, 267–297. [Google Scholar] [CrossRef]
- Firoozeh, N.; Nazarenko, A.; Alizon, F.; Daille, B. Keyword extraction: Issues and methods. Nat. Lang. Eng. 2020, 26, 259–291. [Google Scholar] [CrossRef]
- De Graaf, R.; van der Vossen, R. Bits versus brains in content analysis. Comparing the advantages and disadvantages of manual and automated methods for content analysis. Communications 2013, 38, 433–443. [Google Scholar] [CrossRef]
- Lewis, S.C.; Zamith, R.; Hermida, A. Content analysis in an era of big data: A hybrid approach to computational and manual methods. J. Broadcast. Electron. Media 2013, 57, 34–52. [Google Scholar] [CrossRef]
- Feng, Y. Semantic Textual Similarity Analysis of Clinical Text in the Era of LLM. In Proceedings of the 2024 IEEE Conference on Artificial Intelligence, (CAI), Singapore, 22–24 May 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1284–1289. [Google Scholar]
- Papageorgiou, E.; Chronis, C.; Varlamis, I.; Himeur, Y. A Survey on the Use of Large Language Models (LLMs) in Fake News. Future Internet 2024, 16, 298. [Google Scholar] [CrossRef]
- Maktabdar Oghaz, M.; Babu Saheer, L.; Dhame, K.; Singaram, G. Detection and classification of ChatGPT-generated content using deep transformer models. Front. Artif. Intell. 2025, 8, 1458707. [Google Scholar] [CrossRef]
- Wu, J.; Yang, S.; Zhan, R.; Yuan, Y.; Chao, L.S.; Wong, D.F. A Survey on LLM-Generated Text Detection: Necessity, Methods, and Future Directions. Comput. Linguist. 2025, 51, 275–338. [Google Scholar] [CrossRef]
- Domínguez-Diaz, A.; Goyanes, M.; de-Marcos, L. Automating Content Analysis of Scientific Abstracts Using ChatGPT: A Methodological Protocol and Use Case. MethodsX 2025, 15, 103431. [Google Scholar] [CrossRef] [PubMed]
- Ma, X.; Zhang, Y.; Ding, K.; Yang, J.; Wu, J.; Fan, H. On Fake News Detection with LLM Enhanced Semantics Mining. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024), Miami, FL, USA, 12–16 November 2024; Al-Onaizan, Y., Bansal, M., Chen, Y.-N., Eds.; Association for Computational Linguistics: Miami, FL, USA, 2024; pp. 508–521. [Google Scholar]
- Yang, X.; Li, Y.; Zhang, X.; Chen, H.; Cheng, W. Exploring the Limits of ChatGPT for Query or Aspect-Based Text Summarization. arXiv 2023, arXiv:2302.08081. [Google Scholar]
- Bang, Y.; Cahyawijaya, S.; Lee, N.; Dai, W.; Su, D.; Wilie, B.; Lovenia, H.; Ji, Z.; Yu, T.; Chung, W.; et al. A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity. In Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (IJCNLP-AACL 2023), Nusa Dua, Indonesia, 1–4 November 2023; Park, J.C., Arase, Y., Hu, B., Lu, W., Wijaya, D., Purwarianti, A., Krisnadhi, A.A., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2023; pp. 675–718. [Google Scholar]
- Cer, D.; Yang, Y.; Kong, S.-Y.; Hua, N.; Limtiaco, N.; St. John, R.; Constant, N.; Guajardo-Céspedes, M.; Yuan, S.; Tar, C.; et al. Universal sentence encoder. arXiv 2018, arXiv:1803.11175. [Google Scholar]
- Wang, W.; Bao, H.; Huang, S.; Dong, L.; Wei, F. MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021; Association for Computational Linguistics: Stroudsburg, PA, USA, 2021; pp. 2140–2151. [Google Scholar]
- Singh, D.K. Unraveling Enterprise Large Language Model Platform—Cohere. Int. J. Sci. Res. Publ. 2025, 15, 219–223. [Google Scholar] [CrossRef]
- Cann, T.J.B.; Dennes, B.; Coan, T.; O’Neill, S.; Williams, H.T.P. Using Semantic Similarity to Measure the Echo of Strategic Communications. EPJ Data Sci. 2025, 14, 20. [Google Scholar] [CrossRef]
- Cer, D.; Yang, Y.; Kong, S.-Y.; Hua, N.; Limtiaco, N.; John, R.S.; Constant, N.; Guajardo-Céspedes, M.; Yuan, S.; Tar, C.; et al. Universal Sentence Encoder. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Brussels, Belgium, 31 October–4 November 2018; Association for Computational Linguistics: Stroudsburg, PA, USA, 2018; pp. 169–174. [Google Scholar]
No. | Original Title | Data Sources | Publication Date |
---|---|---|---|
1 | The 2022 AI Index: Industrialization of AI and Mounting Ethical Concerns | Stanford HAI | March 2022 |
2 | AI Ethics And AI Law Grappling With Overlapping And Conflicting Ethical Factors Within AI | Forbes | November 2022 |
3 | The 2022 AI Index: AI’s Ethical Growing Pains | Stanford HAI | March 2022 |
4 | Prioritising AI & Ethics: A perspective on change | Deloitte | May 2022 |
5 | Top Nine Ethical Issues In Artificial Intelligence | Forbes | October 2022 |
6 | AI Ethics And AI Law Are Moving Toward Standards That Explicitly Identify And Manage AI Biases | Forbes | October 2022 |
7 | Evaluating Ethical Challenges in AI and ML | ISACA Journal | July 2022 |
8 | We’re failing at the ethics of AI. Here’s how we make real impact | World Economic Forum, WEF | January 2022 |
No. | Original Title | Data Sources | Publication Date |
---|---|---|---|
1 | The Ethics Of AI: Navigating Bias, Manipulation And Beyond | Forbes | June 2023 |
2 | The Ethics Of AI: Balancing Innovation And Responsibility | Forbes | December 2023 |
3 | AI Ethics In The Age Of ChatGPT—What Businesses Need To Know | Forbes | July 2023 |
4 | 96% Of People Consider Ethical And Responsible AI To Be Important | Forbes | April 2023 |
5 | How Businesses Can Ethically Embrace Artificial Intelligence | Forbes | May 2023 |
6 | Experts call for more diversity to combat bias in artificial intelligence | CNN | December 2023 |
7 | 5 AI Ethics Concerns the Experts Are Debating | Georgia Tech | August 2023 |
8 | Ethical Concerns Are Playing Catch-Up in Companies’ AI Arms Race: Equality | Bloomberg | June 2023 |
No. | Original Title | Data Sources | Publication Date |
---|---|---|---|
1 | AI’s Trust Problem | Harvard Business Review | May 2024 |
2 | ‘Uncovered, unknown, and uncertain’: Guiding ethics in the age of AI | Yale News | February 2024 |
3 | AI Regulation Is Evolving Globally and Businesses Need to Keep Up | Bloomberg Law | December 2024 |
4 | AI is not ready for primetime | CNN Business | March 2024 |
5 | With AI warning, Nobel winner joins ranks of laureates who’ve cautioned about the risks of their own work | CNN | October 2024 |
6 | Navigating The Ethics Of AI: Is It Fair And Responsible Enough To Use? | Forbes | November 2024 |
7 | AI And Ethics: A Collective Responsibility For A Safer Future | Forbes | October 2024 |
8 | AI Started as a Dream to Save Humanity. Then, Big Tech Took Over. | Bloomberg | September 2024 |
Year | Cluster | Cross-Model Similarity (Mean ± SD) |
---|---|---|
2022 | A-1 | 0.840 ± 0.054 |
2022 | A-2 | 0.791 ± 0.022 |
2022 | A-3 | 0.829 ± 0.034 |
2022 | A-4 | 0.813 ± 0.014 |
2022 | A-2 + A-3 | 0.801 ± 0.019 |
2022 | A-2 + A-4 | 0.765 ± 0.078 |
2022 | A-2 + A-3 + A-4 | 0.819 ± 0.045 |
2023 | B-1 | 0.818 ± 0.014 |
2023 | B-2 | 0.829 ± 0.027 |
2023 | B-3 | 0.808 ± 0.037 |
2023 | B-1 + B-2 | 0.824 ± 0.069 |
2023 | B-2 + B-3 | 0.797 ± 0.050 |
2023 | B-1 + B-2 + B-3 | 0.796 ± 0.033 |
2024 | C-1 | 0.807 ± 0.006 |
2024 | C-2 | 0.843 ± 0.025 |
2024 | C-1 + C-2 | 0.833 ± 0.056 |
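The mean ± SD values in the table above can be reproduced from per-model similarity scores. The sketch below assumes one plausible reading of the cross-model validation: each embedding model (e.g., USE, MiniLM) yields one cosine similarity between the embedding of the generated topic description and the embedding of the cluster's source text, and the mean and sample standard deviation are then taken across models. Function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def cosine(u, v):
    # cosine similarity between two embedding vectors from the SAME model
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def cross_model_score(pairs):
    """pairs: per-model (topic_vec, cluster_vec) embedding pairs.

    Vectors within a pair come from one model (so dimensions match);
    dimensions may differ between models. Returns (mean, sd) of the
    per-model cosine similarities, with sd as the sample SD (ddof=1).
    """
    sims = [cosine(u, v) for u, v in pairs]
    return float(np.mean(sims)), float(np.std(sims, ddof=1))
```

Note that vectors from different models are never compared directly, since their embedding spaces are not aligned; only the resulting similarity scores are aggregated.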
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Li, W.-H.; Yu, H.-C. Using KeyGraph and ChatGPT to Detect and Track Topics Related to AI Ethics in Media Outlets. Mathematics 2025, 13, 2698. https://doi.org/10.3390/math13172698