Special Issue "Philosophy and Epistemology of Deep Learning"

A special issue of Philosophies (ISSN 2409-9287).

Deadline for manuscript submissions: closed (1 November 2019).

Special Issue Editors

Dr. Hector Zenil

Guest Editor
1. Algorithmic Dynamics Lab, Unit of Computational Medicine, Science for Life Laboratory, Center of Molecular Medicine, Karolinska Institute, Stockholm, Sweden
2. Algorithmic Nature Group, Laboratory of Scientific Research for the Natural and Digital Sciences, Paris, France
3. Oxford Immune Algorithmics, Oxford University Innovation, Oxford, U.K.
Interests: algorithmic learning; information theory; complex systems; philosophy of information; randomness; systems biology; digital philosophy; reprogrammability; cellular automata; measures of sophistication
Prof. Dr. Selmer Bringsjord
Guest Editor
Rensselaer AI & Reasoning Laboratory, Cognitive Science Department, School of HASS, Rensselaer Polytechnic Institute, New York, USA
Interests: cognitive science; robotics; computer science; logic & philosophy; technology; artificial intelligence

Special Issue Information

Dear Colleagues,

Current popular approaches to Machine Learning (ML), Deep Learning (DL) and Artificial Intelligence (AI) are mostly statistical in nature and are not well equipped to deal with abstraction and explanation. In particular, they cannot generate candidate models or make generalizations directly from data to discover possible causal mechanisms. One method researchers use to probe how deep learning algorithms work involves what are called ‘generative models’ (a possible misnomer): they train a learning algorithm and handicap it systematically whilst asking it to generate examples. By observing the resulting examples, they are able to make inferences about what may be happening in the algorithm at some level.
 
However, current trends and methods are widely considered black-box approaches: they have worked amazingly well in classification tasks, but provide little to no understanding of causation and are unable to deal with forms of symbolic computation such as logical inference and explanation. As a consequence, they also fail to scale to domains they have not been trained on, require vast amounts of training data before they can do anything interesting, and must be retrained every time they are presented with (even slightly) different data.
 
Furthermore, how other cognitive features, such as human consciousness, may relate to current and future directions in deep learning, and whether such features may prove advantageous or disadvantageous, remains an open question.
 
The aim of this Special Issue is thus to attempt to ask the right questions and shed some light on the achievements, limitations and future directions of reinforcement/deep learning approaches and differentiable programming. Its particular focus will be the interplay of data- and model-driven approaches that go beyond current ones, which are for the most part based on traditional statistics. It will attempt to ascertain whether a fundamental theory is needed or whether one already exists, and to explore the implications of current and future technologies based on deep learning and differentiable programming for science, technology and society.
 

Dr. Hector Zenil
Prof. Dr. Selmer Bringsjord
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Philosophies is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (4 papers)


Research

Open Access Article
Approximate and Situated Causality in Deep Learning
Philosophies 2020, 5(1), 2; https://doi.org/10.3390/philosophies5010002 - 06 Feb 2020
Abstract
Causality is the most important topic in the history of Western science, and since the beginning of the statistical paradigm its meaning has been reconceptualized many times. Causality entered the realm of multi-causal and statistical scenarios some centuries ago. Despite widespread criticism, today advances in deep learning and machine learning are not weakening causality but are creating a new way of finding correlations between indirect factors. This process makes it possible for us to talk about approximate causality, as well as about a situated causality.
(This article belongs to the Special Issue Philosophy and Epistemology of Deep Learning)

Open Access Article
On Theoretical Incomprehensibility
Philosophies 2019, 4(3), 49; https://doi.org/10.3390/philosophies4030049 - 15 Aug 2019
Abstract
This contribution tentatively outlines a presumed conceptual duality between the issues of incompleteness and incomprehensibility. The first is more formal in nature and can be declined in various ways, specified in the literature as theoretical incompleteness: incompleteness that is theoretical rather than temporary, unlike temporary incompleteness, which is admissible and whose completion can be pursued. As considered in the literature, theoretical incompleteness refers to uncertainty principles in physics, incompleteness in mathematics, oracles for the Turing machine, logical openness as the multiplicity of models focusing on coherence more than on optimum selections, fuzziness, and quasiness (e.g., quasi-crystals, quasi-systems, and quasi-periodicity), intended as the space of equivalences that allows for coherent processes of emergence. The issue of incomprehensibility cannot be considered without reference to an agent endowed with cognitive abilities. In this article, incomprehensibility is understood as that which is not generally scientifically explicable with the available knowledge; such incomprehensibility may be temporary, pending theoretical and technological advances, or deemed absolute, coinciding with definitive theoretical non-explicability. We consider theoretical incomprehensibility mainly in three ways: as the inexhaustibility of the multiplicity of constructivist reality, as the theoretically incomprehensible endless loop of the incomprehensible–comprehensible, and as given by existential questions. Moreover, theoretical incomprehensibility is intended as evidence of the logical openness both of the world and of understanding itself.
The role of theoretical incomprehensibility is intended as a source of theoretical research issues, such as paradoxes and paradigm shifts, where it is a matter of having cognitive strategies and approaches to look for, cohabit with, combine, and use comprehensibility and (theoretical) incomprehensibility. The usefulness of imaginary numbers comes to mind. Can we support such research into local, temporary, and theoretical incomprehensibility with suitable approaches, such as software tools that simulate the logical frameworks of incomprehensibility? Is this a step toward a kind of artificial creativity leading to paradigm shifts? The most significant novelty of the article lies in its focus on the concept of theoretical incomprehensibility, distinguishing it from incomprehensibility and considering different forms of understanding. It is a matter of identifying strategies to act on and coexist with the theoretically incomprehensible, to represent and use it, for example, when dealing with imaginary numbers and quantum contexts where classical comprehensibility is theoretically impossible. Can we think of forms of non-classical understanding? In this article, these topics are developed in conceptual and philosophical ways.
(This article belongs to the Special Issue Philosophy and Epistemology of Deep Learning)
Open Access Article
From Reflex to Reflection: Two Tricks AI Could Learn from Us
Philosophies 2019, 4(2), 27; https://doi.org/10.3390/philosophies4020027 - 24 May 2019
Abstract
Deep learning and other similar machine learning techniques have a huge advantage over other AI methods: they do function when applied to real-world data, ideally from scratch, without human intervention. However, they have several shortcomings that mere quantitative progress is unlikely to overcome. The paper analyses these shortcomings as resulting from the type of compression achieved by these techniques, which is limited to statistical compression. Two directions for qualitative improvement, inspired by comparison with cognitive processes, are proposed here, in the form of two mechanisms: complexity drop and contrast. These mechanisms are supposed to operate dynamically and not through pre-processing as in neural networks. Their introduction may bring the functioning of AI away from mere reflex and closer to reflection.
(This article belongs to the Special Issue Philosophy and Epistemology of Deep Learning)
Open Access Article
AlphaGo, Locked Strategies, and Eco-Cognitive Openness
Philosophies 2019, 4(1), 8; https://doi.org/10.3390/philosophies4010008 - 16 Feb 2019
Abstract
Locked and unlocked strategies are at the center of this article, as ways of shedding new light on the cognitive aspects of deep learning machines. The character and role of these cognitive strategies, which occur both in humans and in computational machines, are indeed strictly related to the generation of cognitive outputs, which range from weak to strong levels of knowledge creativity. I maintain that these differences lead to important consequences when we analyze computational AI programs, such as AlphaGo, which aim at performing various kinds of abductive hypothetical reasoning. In these cases, the programs are characterized by locked abductive strategies: they deal with weak (even if sometimes amazing) kinds of hypothetical creative reasoning, because they are limited in what I call eco-cognitive openness, which instead qualifies human cognizers who are performing higher kinds of abductive creative reasoning, where cognitive strategies are instead unlocked.
(This article belongs to the Special Issue Philosophy and Epistemology of Deep Learning)