Abductive Cognition and Machine Learning: Philosophical Implications

A special issue of Philosophies (ISSN 2409-9287).

Deadline for manuscript submissions: closed (30 April 2022)

Special Issue Editor


Prof. Woosuk Park
Guest Editor
School of Humanities and Social Sciences, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, South Korea
Interests: abductive cognition; history and philosophy of logic and mathematics; problem of individuation; medieval philosophy; philosophy of games

Special Issue Information

Dear colleagues,

It is well known that we are living in the age of artificial intelligence. If Deep Blue was a mere distant signal, AlphaGo was a symbolic announcement of the beginning of a new era. Now, as AlphaFold 2 takes another huge step toward the goal of AGI, AI is changing all aspects of our lives. How should we understand this phenomenon and its far-reaching philosophical implications? While machine learning is known to be the key to enhancing AI algorithms, recent advances in cognitive neuroscience have been another crucial factor. Nevertheless, much is still unknown about how these advances in AI are possible. How are we to fathom super-intelligent machine minds in order to live well with them in the future? Turning to abductive cognition seems to be a natural strategy for answering this question, as it has elements of both intuition and inference. Pioneered by Charles S. Peirce in the late 19th century, abduction was studied extensively in logic, the philosophy of science, cognitive science, computer science, artificial intelligence, law, and semiotics during the 20th century, and several notable monographs on abduction have been published in the first two decades of the 21st century, uncovering the logical form and various patterns of abduction. Could abductive cognition be a clue to the remarkable recent success of machine learning? Could we accelerate the AI revolution by building abductive elements into machine scientists?
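
For reference, the classic logical schema of abduction that Peirce formulated in his 1903 Harvard lectures (CP 5.189) runs as follows:

    The surprising fact, C, is observed;
    But if A were true, C would be a matter of course;
    Hence, there is reason to suspect that A is true.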

Prof. Woosuk Park
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a double-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts is available on the Instructions for Authors page. Philosophies is an international peer-reviewed open access bimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Abductive Cognition
  • Machine Learning
  • Explainable Artificial Intelligence
  • Creativity
  • Intuition
  • Inference

Published Papers (6 papers)

Research

17 pages, 916 KiB  
Article
Abduction in Art Appreciation
by Akinori Abe
Philosophies 2022, 7(6), 132; https://doi.org/10.3390/philosophies7060132 - 20 Nov 2022
Cited by 1
Abstract
Individuals usually go to art museums to enjoy artworks. Generally, to help visitors appreciate the art, museums provide a brief summary of relevant information as a caption, and viewers read these descriptions to aid their understanding. To provide broader technical support for this activity, several researchers have proposed protocols for art appreciation. For instance, Leder et al. proposed a stage model for aesthetic processing, which combines aspects of understanding and cognitive mastering with affective and emotional processing. We have also conducted several experiments to determine the effect of information during art appreciation. In one experiment, information about a piece of art was offered gradually and incrementally, and the participants appeared to understand the artwork step by step according to the information they obtained. Our observations indicate that they tried to create stories for the artworks in order to explain the information obtained. In addition, for abstract artworks, once participants saw the title, they understood the artworks within their own explanations in the context of the title. Our framework suggests that this observed process can be considered a process of abduction, in which the incremental presentation of details about an artwork helps a viewer form a hypothesis about it. In this paper, we analyze artwork appreciation and understanding from the viewpoint of abduction.

36 pages, 472 KiB  
Article
Could a Computer Learn to Be an Appeals Court Judge? The Place of the Unspeakable and Unwriteable in All-Purpose Intelligent Systems
by John Woods
Philosophies 2022, 7(5), 95; https://doi.org/10.3390/philosophies7050095 - 26 Aug 2022
Abstract
I will take it that general intelligence is intelligence of the kind that a typical human being—Fred, say—manifests in his role as a cognitive agent, that is, as an acquirer, receiver, and circulator of knowledge in his cognitive economy. Framed in these terms, the word “general” underserves our ends. Hereafter, our questions will bear upon the all-purpose intelligence of beings like Fred. Frederika appears as Fred’s AI counterpart, not as a fully programmed and engineered being, but as a presently unrealized theoretical construct. Our basic question is whether it is in principle possible to equip Frederika to do what Fred does as an all-purpose participant in his own cognitive economy. Can she achieve a sufficiency of relevant similarity to him to allow us to say that she herself can do what Fred can do, perhaps even better? One of the things that Fred can do—or at least could learn from experience to do—is discharge the duties of an Appeals Court judge. As set down in the ancient doctrine of lex non scripta, Fred must be able to detect, understand, and correctly apply certain tacit and implicit rules of law which defy express propositional formulation and linguistic articulation. Fred has an even more widespread capacity for the epistemically tacit and implicit, clearly one of his most cost-saving kinds of intelligence. Indeed, by far the greater part of what Fred will ever know he will know tacitly and implicitly. So we must ask: how tightly bound to the peculiarities of Fred’s cognitive enablement conditions is the character of the intelligence that he manifests? And how far down Fred’s causal make-up does intelligence actually go?
16 pages, 825 KiB  
Article
How to Make AlphaGo’s Children Explainable
by Woosuk Park
Philosophies 2022, 7(3), 55; https://doi.org/10.3390/philosophies7030055 - 24 May 2022
Cited by 2
Abstract
Under the rubric of understanding the problem of the explainability of AI in terms of abductive cognition, I propose to review the lessons from AlphaGo and her more powerful successors. As AI players in Baduk (Go, Weiqi) have arrived at a superhuman level, there seems to be no hope of understanding the secret of their breathtakingly brilliant moves. Without making AI players explainable in some way, both human and AI players would be less-than-omniscient, if not ignorant, epistemic agents. Are we bound to have less explainable AI Baduk players as they make further progress? I shall show that the resolution of this apparent paradox depends on how we understand the crucial distinction between abduction and inference to the best explanation (IBE). Some further philosophical issues arising from explainable AI will also be discussed in connection with this distinction.

18 pages, 267 KiB  
Article
Backpropagation of Spirit: Hegelian Recollection and Human-A.I. Abductive Communities
by Rocco Gangle
Philosophies 2022, 7(2), 36; https://doi.org/10.3390/philosophies7020036 - 26 Mar 2022
Cited by 2
Abstract
This article examines types of abductive inference in Hegelian philosophy and machine learning from a formal comparative perspective and argues that Robert Brandom’s recent reconstruction of the logic of recollection in Hegel’s Phenomenology of Spirit may be fruitful for anticipating modes of collaborative abductive inference in human/A.I. interactions. Firstly, the argument consists of showing how Brandom’s reading of Hegelian recollection may be understood as a specific type of abductive inference, one in which the past interpretive failures and errors of a community are explained hypothetically by way of the construction of a narrative that rehabilitates those very errors as means for the ongoing successful development of the community, as in Brandom’s privileged jurisprudential example of Anglo-American case law. Next, this Hegelian abductive dynamic is contrasted with the error-reducing backpropagation algorithms characterizing many current versions of machine learning, which can be understood to perform abductions in a certain sense for various problems but not (yet) in the full self-constituting communitarian mode of creative recollection canvassed by Brandom. Finally, it is shown how the two modes of “error correction” may possibly coordinate successfully on certain types of abductive inference problems that are neither fully recollective in the Hegelian sense nor algorithmically optimizable.
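
As a point of orientation, and not drawn from the paper itself: the "error-reducing backpropagation" contrasted here with Hegelian recollection amounts to propagating a prediction error backward through a network's layers and adjusting every weight so that past error shrinks. A minimal self-contained Python sketch:

    import math
    import random

    # Toy two-layer network: the backward pass propagates the output error
    # through the hidden layer via the chain rule; every update aims only
    # at reducing past prediction error, never at reinterpreting it.
    random.seed(0)
    H = 8                                              # hidden units
    w1 = [random.uniform(-1, 1) for _ in range(H)]     # input -> hidden weights
    b1 = [0.0] * H
    w2 = [random.uniform(-1, 1) for _ in range(H)]     # hidden -> output weights
    b2 = 0.0
    lr = 0.05

    def forward(x):
        h = [math.tanh(w1[i] * x + b1[i]) for i in range(H)]
        return h, sum(w2[i] * h[i] for i in range(H)) + b2

    data = [(x / 10.0, (x / 10.0) ** 2) for x in range(-10, 11)]  # learn y = x^2

    for _ in range(2000):
        for x, target in data:
            h, y = forward(x)
            err = y - target                            # forward pass: measure the error
            for i in range(H):
                grad_h = err * w2[i]                    # error sent back to hidden unit i
                w2[i] -= lr * err * h[i]                # gradient step on output weights
                b1[i] -= lr * grad_h * (1 - h[i] ** 2)  # chain rule through tanh
                w1[i] -= lr * grad_h * (1 - h[i] ** 2) * x
            b2 -= lr * err

    mse = sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)
    print(f"final MSE: {mse:.5f}")                      # small once training has converged

The contrast the author draws is visible in the loop: the history of past failures is only ever minimized away, not narratively rehabilitated.
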
15 pages, 739 KiB  
Article
On Explainable AI and Abductive Inference
by Kyrylo Medianovskyi and Ahti-Veikko Pietarinen
Philosophies 2022, 7(2), 35; https://doi.org/10.3390/philosophies7020035 - 23 Mar 2022
Cited by 4
Abstract
Modern explainable AI (XAI) methods remain far from providing human-like answers to ‘why’ questions, let alone those that satisfactorily agree with human-level understanding. Instead, the results that such methods provide boil down to sets of causal attributions. Currently, the choice of accepted attributions rests largely, if not solely, on the explainee’s understanding of the quality of explanations. The paper argues that such decisions may be transferred from a human to an XAI agent, provided that its machine-learning (ML) algorithms perform genuinely abductive inferences. The paper outlines the key predicament in the current inductive paradigm of ML and the associated XAI techniques, and sketches the desiderata for a truly participatory, second-generation XAI, which is endowed with abduction.
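
For readers unfamiliar with what a "set of causal attributions" looks like in practice, here is a purely illustrative Python sketch (the model and feature names are hypothetical, not taken from the paper): a post hoc explainer scores each input feature by how much the output changes when that feature is ablated to a baseline.

    def black_box(features):
        # stand-in for an opaque ML model
        return 3.0 * features["age"] - 2.0 * features["debt"] + 0.5 * features["income"]

    def attributions(model, features, baseline=0.0):
        """Score each feature by the output change when it is ablated."""
        full = model(features)
        return {
            name: full - model(dict(features, **{name: baseline}))
            for name in features
        }

    x = {"age": 2.0, "debt": 4.0, "income": 6.0}
    for name, contribution in attributions(black_box, x).items():
        print(f"{name}: {contribution:+.1f}")

The explainer's entire output is that table of signed numbers; it supplies no hypothesis of the kind a genuinely abductive answer to a ‘why’ question would, which is the predicament the paper addresses.
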
13 pages, 263 KiB  
Article
Human Abductive Cognition Vindicated: Computational Locked Strategies, Dissipative Brains, and Eco-Cognitive Openness
by Lorenzo Magnani
Philosophies 2022, 7(1), 15; https://doi.org/10.3390/philosophies7010015 - 28 Jan 2022
Cited by 4
Abstract
Locked and unlocked strategies are illustrated in this article as concepts that deal with important cognitive aspects of deep learning systems. They indicate different inference routines, ranging from poor (locked) to rich (unlocked) cases of the creative production of cognition. I maintain that these differences lead to important consequences when we analyze computational deep learning programs, such as AlphaGo/AlphaZero, which are able to realize various types of abductive hypothetical reasoning. These programs embed what I call locked abductive strategies, so, even if they deliver spectacular performances, for example in games, they are characterized by poor types of hypothetical creative cognition insofar as they are constrained in what I call eco-cognitive openness. This openness instead characterizes unlocked human cognition, which pertains to higher kinds of abductive reasoning, in both the creative and diagnostic cases, in which cognitive strategies are unlocked. This special kind of “openness” is physically rooted in the fundamental character of the human brain as a system constantly coupled with the environment (that is, an “open” or “dissipative” system): its activity is the uninterrupted attempt to achieve equilibrium with the environment in which it is embedded, and this interplay can never be switched off without producing severe damage to the brain. The brain cannot be conceived of as deprived of its physical quintessence, which is its openness. In the brain, contrary to the computational case, ordering is not derived from the outside thanks to what I have called, in a recent book, the “computational domestication of ignorant entities”; rather, it is the direct product of an “internal” open dynamical process of the system.