Proceeding Paper

Large Language Models Cannot Meet Artificial General Intelligence Expectations †

by Wolfgang Hofkirchner 1,2
1 The Institute for a Global Sustainable Information Society, 1220 Vienna, Austria
2 TU Wien, 1040 Vienna, Austria
Presented at the Workshop on AI and People, IS4SI Summit 2023, Beijing, China, 14–16 August 2023.
Comput. Sci. Math. Forum 2023, 8(1), 67; https://doi.org/10.3390/cmsf2023008067
Published: 11 August 2023

Abstract

Large language models (LLMs), in particular diverse versions of ChatGPT, have been setting the agenda for expectations of artificial general intelligence (AGI) once again. Here, it will be argued that such expectations will not be satisfied by LLMs. The argumentation will not focus on concrete technical specifics of LLMs that hinder the materialization of AGI; it is rather AGI itself that lacks the means to be realized. From a techno-social systems perspective, neither LLMs nor AGI can be called intelligent. Only (human) social systems, including techno-social systems, humans, living systems and other self-organizing physical systems can show the feature of intelligence, not man-made technological tools. The argumentation will cover praxiological, ontological and epistemological considerations.

1. Introduction

The roll-out of successive versions of ChatGPT has launched hype over artificial general intelligence (AGI) expectations. Is this more than a bestselling topic for another artificial intelligence (AI) summer, without any technological indication of realization?
In order to answer that question, arguments that touch on the practical, ontic and epistemic sides of the riddle will be aligned with the techno-social systems perspective. Techno-social systems are social systems in which technology is inserted. The interplay of technology with social systems follows a certain pattern that is clarified in the Critical Techno-Social Systems Design Theory (CTDT), which is based on the Critical Social Systems Theory (CSST) [1], which in turn is based on the Evolutionary Systems Theory (EST) [2]. According to the EST, the world consists of agents, that is, entities that manifest agency. If co-acting, those agents let relations emerge under which they organize themselves for the sake of synergy, such that a system emerges as a higher-order agent, of which the lower-order agents have become elements. Evolution shows physical, biotic and human/social systems, each with a history of complexification. According to the CSST, social systems are inhabited by social agents that are called actors. It is critical for them to strive for the fulfilment of a good life in a good society and to try to adapt their societal relations in a corresponding way.
According to the CTDT, actors also give birth to technique. In contradistinction to social systems, technique is not itself an evolutionary, self-organizing system. It is not causative itself; indeed, it has no self. It is not an agent; it is rather a patient [3]. Thus, it cannot be said to be intelligent. Therefore, AI is a misnomer. Like any technology, AI is part of social systems that can be viewed as techno-social systems. AI can improve the performance of techno-social systems.
Let us go into details.

2. The Praxiological Argument

Praxiology is the philosophical discipline that deals with human practice. Part of that practice is the production and use of scientific–technological innovations that enhance and augment the self-actuation of actors, with the aim of providing a good life in a good society. These innovations concern material and ideational ways and means of human activities, such as physical tools, procedures or plans, altogether called technology. They lay the ground for the technological infrastructure of society.
Since any technology shall support a social function by a technical function that helps bring about the social function more efficaciously and more efficiently, it shall manufacture a determinate result. The technical function functionalizes certain cause–effect relationships such that the result is an improved fulfilment of the social function. That is, technology is designed as a mechanism. Mechanisms shall work in a strictly determined manner; they are not complex and the possibility space of the cause–effect relationship is artificially restricted to one possibility.
Technology is always inserted into a social system. Such a social system is then called a techno-social system. This techno-social system is itself not a mechanism. It remains a social system made of actors working together, guided by social relations, even if it integrates a mechanism. By integrating mechanisms, a social system changes its quality to that of a technologically supported social system. As such, it has the capacity to perform social functions more smartly than without technological support.
In any case, success depends on the combination of the technical features of the design and the social features of the usage. Integration can fail if the social and the technical requests are not treated appropriately, that is, if they are not treated according to the different specifications that qualify them for the sake of the whole. Such failures arise from treating the social and the technical indiscriminately, either treating the social like the technical (technodeterminism) or the technical like the social (social constructivism), which assimilates both sides. Treating them in discriminative ways can also segregate them, by prioritizing the technical over the social (technocentrism), prioritizing the social over the technical (sociocentrism), or even treating both on equal terms (techno/social interactivism) [1].
Like any computerized digital technology for the support of social information processes, LLMs work with algorithms. Algorithms are representations of artificially produced strict determinacies of causation [4]. This is also the case when dealing with social information processes. Integration can fail if the nature of algorithmic work is not recognized for what it is: something that cannot bring about meaningful new information by itself. Any output of algorithmic work additionally needs social interpretation to become a meaningful fit within social information processes. Otherwise, instead of fostering the autonomy of individual actors as well as of social systems, algorithmic support can be allowed too much interference in social affairs, such that social autonomy is restricted to the detriment of social entities.

3. The Ontological Argument

Ontology is the subdiscipline of philosophy that deals with the basic functioning of the natural and (human) social world, as well as of technology. It has become increasingly important with models in informatics, since the distinction between what is (to be) denominated as artificial/technological and what is (to be) denominated as human/social has been put into question by an alleged blurring, in particular with regard to intelligence.
One should be aware that the term intelligence cannot signify the property of machine processes or of a machine itself, because machinic entities are not informational agents or self-organizing systems, and work along hetero-organized determinacies [4]. This idea has been taken up in a publication of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (A/IS) on ethically aligned design [5]. With reference to Hofkirchner [4] and to philosopher Rafael Capurro [3], whose assignment of the distinction between agents and patients is helpful for understanding the different roles, it states: “Of particular concern when understanding the relationship between human beings and A/IS is the uncritically applied anthropomorphic approach toward A/IS that many industry and policymakers are using today. This approach erroneously blurs the distinction between moral agents and moral patients, i.e., subjects, otherwise understood as a distinction between ‘natural’ self-organizing systems and artificial, non-self-organizing devices.” This is consequential for the issue of autonomy. “[…] A/IS cannot, by definition, become autonomous in the sense that humans or living beings are autonomous. With that said, autonomy in machines, when critically defined, designates how machines act and operate independently in certain contexts through a consideration of implemented order generated by laws and rules. In this sense, A/IS can, by definition, qualify as autonomous, especially in the case of genetic algorithms and evolutionary strategies. However, attempts to implant true morality and emotions, and thus accountability, i.e., autonomy, into A/IS […] may encourage anthropomorphic expectations of machines by human beings when designing and interacting with A/IS” [5].
Calling machines intelligent is a category mistake. Humans and their social systems might be intelligent (or not), which depends, and they might even give rise to digital(ly supported) intelligence if they are coupled in a dialectic combination. Both AI and human intelligence are products of evolution, but despite that identity, they differ. “Ontologically, humans and society are the product of physical, biotic and social evolution; the machine is a product of humans and society” [6]. The dialectic combination would fail if the identity meant that they share the same degree of complexity, that is, either a technomorphic monistic reduction of the social along the concatenated steps of merism, biologism, physicalism and mechanicism, or a sociomorphic monistic projection onto the technical along structuralism, anthropism, psychism and animism. The dialectic combination would also fail in the case of a dualism that hypostatizes the differences between the technical and the social, meaning that they are genuine entities of incomparable complexity, either promoting a technosingular model with machines superior to man, or a sociosingular one with man superior over and above any machine, or a techno/social indifferent model for which differences in complexity would not even make a difference. The dialectic combination states that the technical and the social differ in complexity, but constitute a united complex with asymmetrical roles. They complement each other according to their properties, namely the values, norms and interests regarding the social functions, and the affordances regarding the technical functions [1].
In the case of LLMs, one has to admit that these models are prone to errors, in that the output of the algorithms upon which they are based may be completely manufactured and describe things that do not exist at all, as Naomi Klein points out [7]. Architects and boosters of that technology call those errors “hallucinations”, by which they feed “the sector’s most cherished mythology” that they “are in the process of birthing an animate intelligence”. According to Klein, it is not the LLMs that are having hallucinations, “it’s the tech CEOs who unleashed them along with a phalanx of their fans”. It is a hallucination that AI would solve the climate crisis, because there are already sufficiently robust datasets; that AI would deliver wise governance, because politicians are dependent on lobbying campaigns; that tech giants could be trusted not to break the world, because they do what is best for their shareholders; and that AI would liberate workers from drudgery, because the people set free would not know how to earn their living.

4. The Epistemological Argument

Epistemology is the philosophical branch dealing with the methods of creating data, facts and figures, knowledge and wisdom, both in everyday thinking and in science. The problem of how to frame the process of acquiring what is essential from what is apparent needs a solution for the engineering sciences, in particular the so-called social informatics, information and communication technologies and society, and the like.
Moreover, to understand the relation between society and technology, the co-operation of different disciplines is required. Transdisciplinarity is the order of the day. True transdisciplinarity would be failed by cross-disciplinarity, that is, a techno-universalist reduction to engineering methods or a socio-universalist projection of social and human science methods onto engineering. True transdisciplinarity would also be failed by pluridisciplinarity, that is, the purism of either kind of monodiscipline, techno- or socio-particularism, or the multi- and interdisciplinary method mix of social and human sciences and engineering, boiling down to techno/social relativism. True transdisciplinarity combines social and human science methods as well as engineering methods to yield a single methodology, albeit on a meta-level [1].
Only then can thinking provide frames that do justice to both sides and allow us to hypothesize and theorize. The ontological distinction between the social and the technical is accessible through theorizing. A theoretical frame is necessary to put empirical evidence in context and to understand that the Turing test proves how easily human comprehension can be fooled. Applications of LLMs such as ChatGPT are reminiscent of ELIZA, the programme Joseph Weizenbaum developed in the 1960s that simulated a human psychotherapist. Weizenbaum was shocked that his program found favor with therapists. Would he not be just as shocked today, or perhaps even more shocked than he was then?

5. Conclusions

The techno-social systemism presented here opens up the space for an answer to the question of whether the current hype in AI is different from the hypes before. The answer is rather “No”. Practically, LLMs are built on algorithms that represent a strict determinacy (and AGI will not be an exception). Ontically, LLMs lack a self that could be called intelligent (and AGI will not be an exception). Epistemically, LLMs pass the Turing test by faking intelligence (and AGI will not be an exception).

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Hofkirchner, W. The Logic of the Third: A Paradigm Shift to a Shared Future for Humanity; World Scientific: Singapore, 2023. [Google Scholar] [CrossRef]
  2. Hofkirchner, W. Emergent Information: A Unified Theory of Information Framework; World Scientific: Singapore, 2013. [Google Scholar]
  3. Capurro, R. Toward a comparative theory of agents. AI Soc. 2012, 27, 479–488. [Google Scholar] [CrossRef]
  4. Hofkirchner, W. Does computing embrace self-organization? In Information and Computation; Dodig-Crnkovic, G., Burgin, M., Eds.; World Scientific: Singapore, 2011; pp. 185–202. [Google Scholar]
  5. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (Ed.) Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems, 1st ed.; IEEE: New York, NY, USA, 2019; Available online: https://standards.ieee.org/industry-connections/ec/ead1e-infographic.html (accessed on 31 July 2023).
  6. Hofkirchner, W. Blurring of the human and the artificial: A conceptual clarification. Proceedings 2020, 47, 7. [Google Scholar]
  7. Klein, N. AI Machines Aren’t “Hallucinating”. But Their Makers Are. The Guardian. 8 May 2023. Available online: https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein (accessed on 31 July 2023).