This is an early access version; the complete PDF, HTML, and XML versions will be available soon.
Article

Predictive Language Processing in Humans and Large Language Models: A Comparative Study of Contextual Dependencies

Yifan Zhang and Kuzma Strelnikov
1 Independent Researcher, 49100 Angers, France
2 Centre for Cognitive and Brain Sciences, University of Macau, Taipa, Macau SAR 999078, China
3 Department of Public Health and Medicinal Administration, Faculty of Health Sciences, University of Macau, Macau SAR 999078, China
* Author to whom correspondence should be addressed.
Informatics 2025, 12(3), 83; https://doi.org/10.3390/informatics12030083
Submission received: 3 May 2025 / Revised: 22 July 2025 / Accepted: 12 August 2025 / Published: 15 August 2025
(This article belongs to the Section Human-Computer Interaction)

Abstract

Human language comprehension relies on predictive processing, yet the computational mechanisms underlying this ability remain unclear. This study investigates these mechanisms using large language models (LLMs), specifically GPT-3.5-turbo and GPT-4. We compared LLM and human performance on a phrase-completion task under high, medium, and low levels of contextual cues, with the levels defined by human performance so that AI and humans could be compared directly. LLMs significantly outperformed humans, particularly in the medium- and low-context conditions. While success in the medium-context condition reflects efficient use of contextual information, performance in the low-context condition (approximately 25% accuracy for LLMs versus roughly 1% for humans) suggests that the models exploit deep linguistic structure beyond mere surface context. This finding implies that LLMs may reveal previously unrecognized aspects of language architecture. The ability of LLMs to exploit deep structural regularities and statistical patterns in medium- and low-predictability contexts offers a novel perspective on the computational architecture of the human language system.
Keywords: AI; LLM; GPT; language; prediction
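
The article's evaluation code is not included on this page, but the abstract's design (querying a GPT model for phrase completions and scoring accuracy per contextual-cue condition) can be illustrated in outline. The sketch below is a minimal illustration assuming the OpenAI chat-completions API; the stimuli, the exact-match grading rule, and all helper names are hypothetical, not the authors' materials.

```python
# Minimal sketch of a phrase-completion trial, assuming the OpenAI
# chat-completions API (openai>=1.0). Items and scoring are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical items: a truncated phrase and its expected final word,
# grouped by the human-defined predictability of the context.
ITEMS = {
    "high":   [("The opposite of black is", "white")],
    "medium": [("She unlocked the door with her", "key")],
    "low":    [("After the meeting he bought a", "newspaper")],
}

def complete_phrase(prompt: str, model: str = "gpt-4") -> str:
    """Ask the model for a single-word completion of the phrase."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "Complete the phrase with exactly one word."},
            {"role": "user", "content": prompt},
        ],
        max_tokens=5,
        temperature=0,  # near-deterministic output for scoring
    )
    return response.choices[0].message.content.strip().lower().rstrip(".")

def accuracy(condition: str, model: str = "gpt-4") -> float:
    """Exact-match accuracy for one contextual-cue condition."""
    items = ITEMS[condition]
    hits = sum(complete_phrase(p, model) == target for p, target in items)
    return hits / len(items)

if __name__ == "__main__":
    for condition in ("high", "medium", "low"):
        print(condition, accuracy(condition))
```

In a study of this kind, the same truncated phrases would be given to human participants, and per-condition accuracies would then be compared across humans and models.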

Share and Cite

MDPI and ACS Style

Zhang, Y.; Strelnikov, K. Predictive Language Processing in Humans and Large Language Models: A Comparative Study of Contextual Dependencies. Informatics 2025, 12, 83. https://doi.org/10.3390/informatics12030083

AMA Style

Zhang Y, Strelnikov K. Predictive Language Processing in Humans and Large Language Models: A Comparative Study of Contextual Dependencies. Informatics. 2025; 12(3):83. https://doi.org/10.3390/informatics12030083

Chicago/Turabian Style

Zhang, Yifan, and Kuzma Strelnikov. 2025. "Predictive Language Processing in Humans and Large Language Models: A Comparative Study of Contextual Dependencies" Informatics 12, no. 3: 83. https://doi.org/10.3390/informatics12030083

APA Style

Zhang, Y., & Strelnikov, K. (2025). Predictive Language Processing in Humans and Large Language Models: A Comparative Study of Contextual Dependencies. Informatics, 12(3), 83. https://doi.org/10.3390/informatics12030083

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
