
Search Results (2)

Search Parameters:
Keywords = Chinese Room Argument

19 pages, 1053 KB  
Article
Epistemology in the Age of Large Language Models
by Jennifer Mugleston, Vuong Hung Truong, Cindy Kuang, Lungile Sibiya and Jihwan Myung
Knowledge 2025, 5(1), 3; https://doi.org/10.3390/knowledge5010003 - 1 Feb 2025
Viewed by 4638
Abstract
Epistemology and technology have been working in synergy throughout history. This relationship has culminated in large language models (LLMs). LLMs are rapidly becoming integral parts of our daily lives through smartphones and personal computers, and we are coming to accept the functionality of LLMs as a given. As LLMs become more entrenched in societal functioning, questions have begun to emerge: Are LLMs capable of real understanding? What is knowledge in LLMs? Can knowledge exist independently of a conscious observer? While these questions cannot be answered definitively, we can argue that modern LLMs are more than mere symbol-manipulators and that LLMs in deep neural networks should be considered capable of a form of knowledge, though it may not qualify as justified true belief (JTB) under the traditional definition. This deep neural network design may have endowed LLMs with the capacity for internal representations, basic reasoning, and the performance of seemingly cognitive tasks, possible only through a compressive but generative form of representation that is best termed knowledge. In addition, the non-symbolic nature of LLMs renders them incompatible with the criticism posed by Searle’s “Chinese room” argument. These insights encourage us to revisit fundamental questions of epistemology in the age of LLMs, which we believe can advance the field.

16 pages, 268 KB  
Article
The Difficulties in Symbol Grounding Problem and the Direction for Solving It
by Jianhui Li and Haohao Mao
Philosophies 2022, 7(5), 108; https://doi.org/10.3390/philosophies7050108 - 27 Sep 2022
Cited by 3 | Viewed by 8716
Abstract
The symbol grounding problem (SGP), proposed by Stevan Harnad in 1990, originates from Searle’s “Chinese Room Argument” and refers to the problem of how a purely symbolic system acquires its meaning. While many solutions to this problem have been proposed, all of them have encountered inconsistencies to different extents. A recent approach to resolving the problem is to divide the SGP into hard and easy problems, echoing the distinction between the hard and easy problems of consciousness. This, however, turns out not to be an ideal strategy: everything related to consciousness that cannot be well explained by present theories can be categorized as a hard problem, which as a consequence would doom the SGP to irresolvability. We therefore argue that the SGP can be regarded as a general problem of how an AI system can have intentionality, and we develop a theoretical direction for its solution.