This is an early access version; the complete PDF, HTML, and XML versions will be available soon.
Article

Challenges and Opportunities in Causality Analysis Using Large Language Models

by
Wlodek W. Zadrozny
Computer Science & Data Science, University of North Carolina Charlotte, Charlotte, NC 28223, USA
Entropy 2026, 28(1), 23; https://doi.org/10.3390/e28010023
Submission received: 18 July 2025 / Revised: 12 December 2025 / Accepted: 21 December 2025 / Published: 24 December 2025
(This article belongs to the Special Issue Complexity Characteristics of Natural Language)

Abstract

This article examines the challenges and opportunities in extracting causal information from text with Large Language Models (LLMs). It first establishes the importance of causality extraction and then explores different views on causality, including common-sense ideas informing different data annotation schemes, Aristotle’s Four Causes, and Pearl’s Ladder of Causation. The paper notes the relevance of this conceptual variety for the task. The text reviews datasets and work related to finding causal expressions, using both traditional machine learning methods and LLMs. Although the known limitations of LLMs (hallucinations and lack of common sense) affect the reliability of causal findings, GPT and Gemini models (GPT-5, Gemini 2.5 Pro, and others) show the ability to conduct causality analysis; moreover, they can even apply different perspectives, such as counterfactual and Aristotelian. They are also capable of explaining and critiquing causal analyses: we report an experiment showing that, in addition to largely flawless analyses, the newer models exhibit very high agreement of 88–91% on causal relationships between events, much higher than the typically reported inter-annotator agreement of 30–70%. The article concludes by discussing the lessons learned about these challenges and asking how LLMs might help address them in the future. For example, LLMs should help address the sparsity of annotated data. Moreover, LLMs point to a future where causality analysis in texts focuses not on annotations but on understanding, as causality is about semantics rather than word spans. The Appendices and shared data show examples of LLM outputs on tasks involving causal reasoning and causal information extraction, demonstrating the models’ current abilities and limits.
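
As an illustrative aside (not taken from the article), the short Python sketch below shows one conventional way such agreement figures can be computed: raw pairwise agreement and chance-corrected agreement (Cohen's kappa) between two sets of binary causal/not-causal judgments on event pairs. All labels, names, and numbers in the sketch are hypothetical placeholders, not data from the reported experiment.

    # Illustrative sketch: pairwise agreement between two annotators
    # (e.g., two LLMs, or an LLM and a human) on binary causal judgments.
    from collections import Counter

    def percent_agreement(labels_a, labels_b):
        """Raw agreement: fraction of items on which the two label lists match."""
        assert len(labels_a) == len(labels_b)
        matches = sum(a == b for a, b in zip(labels_a, labels_b))
        return matches / len(labels_a)

    def cohens_kappa(labels_a, labels_b):
        """Cohen's kappa: agreement corrected for chance, for categorical labels."""
        n = len(labels_a)
        observed = percent_agreement(labels_a, labels_b)
        counts_a, counts_b = Counter(labels_a), Counter(labels_b)
        # Expected chance agreement from each annotator's marginal label distribution.
        expected = sum((counts_a[c] / n) * (counts_b[c] / n)
                       for c in set(labels_a) | set(labels_b))
        return (observed - expected) / (1 - expected)

    if __name__ == "__main__":
        # Hypothetical judgments for ten event pairs: 1 = causal, 0 = not causal.
        model_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
        model_2 = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
        print(f"Raw agreement: {percent_agreement(model_1, model_2):.0%}")
        print(f"Cohen's kappa: {cohens_kappa(model_1, model_2):.2f}")

Raw percentage agreement (the quantity quoted as 88–91% vs. 30–70% in the abstract) ignores chance agreement, which is why chance-corrected measures such as Cohen's kappa are often reported alongside it.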
Keywords: causality extraction; large language models; LLM; NLP; causality; GPT; Gemini; causality datasets; CNC corpus; causality analysis

Share and Cite

MDPI and ACS Style

Zadrozny, W.W. Challenges and Opportunities in Causality Analysis Using Large Language Models. Entropy 2026, 28, 23. https://doi.org/10.3390/e28010023

AMA Style

Zadrozny WW. Challenges and Opportunities in Causality Analysis Using Large Language Models. Entropy. 2026; 28(1):23. https://doi.org/10.3390/e28010023

Chicago/Turabian Style

Zadrozny, Wlodek W. 2026. "Challenges and Opportunities in Causality Analysis Using Large Language Models" Entropy 28, no. 1: 23. https://doi.org/10.3390/e28010023

APA Style

Zadrozny, W. W. (2026). Challenges and Opportunities in Causality Analysis Using Large Language Models. Entropy, 28(1), 23. https://doi.org/10.3390/e28010023

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
