Since some of the main lectures are not available as full papers, we post, and sometimes briefly discuss, the short abstracts of some of the crucial papers. We were permitted to make these public, even in a pre-conference publication, although no such publication took place.
6.1. Machine Consciousness
Anil Seth, The strength of weak artificial consciousness.
Abstract: There are at least two ways to think about the project of artificial (or machine) consciousness. On the strong view, the aim is to build an actually conscious machine. On the weak view, the aim is to build detailed models of properties of consciousness, while remaining at best agnostic about the conscious status of these models. I will make the case that the weak approach is the most realistic, and the most beneficial (and least dangerous) path to follow. I will suggest that the development of artificial intelligence does not lead inevitably to artificial consciousness, that attempts to build actually conscious machines are hamstrung by a lack of theoretical consensus about the sufficient conditions for consciousness, and that retaining strong artificial consciousness as a goal is ethically highly problematic. In contrast, the weak approach to artificial consciousness promises to enhance the scientific understanding of consciousness by providing explanatorily powerful bridges between physical/neural mechanisms and properties of consciousness, both functional and phenomenological. I will illustrate this with examples from a methodology which can be called ‘computational neurophenomenology’. Conversely, weak artificial consciousness also holds promise for artificial intelligence, through equipping the latter with some of the functional benefits associated with consciousness. Finally, I will address some of the risks posed by technologies that merely give the appearance of being conscious, and suggest some reasons why consciousness might be more tightly tied to being alive than being intelligent [6].
Joscha Bach.
Abstract: How is it possible that a physical system experiences a feeling of what it’s like? I suggest that this question is ill-posed: physical existence does not have an experiential aspect, and all experience is simulated, within a frame of reference that is entirely virtual. Understanding consciousness requires a conceptual analysis that explains the genesis of a cohesive dynamic model of the universe by perceptual processes (processing agent) in the service of control tasks (control agent), and the scanning and reflection of the perceptual model by an integrated analytical, attentional process (attention agent). I will discuss some of the necessary conditions for a system creating and acting on models of its own agency, volition, first-person perspective, and conscious phenomenology. The sense of agency, self and phenomenology are not realized in physics, but are virtual. Virtuality implies that the causal structure of a domain is not shaped by physics, but by the functional constraints of a representational task. Such representations are either simulations (models that reproduce the observable dynamics of a domain using a different causal structure) or simulacra (reproductions of observables without an underlying causal structure). Simulations allow interaction with the model, to explore possible branches and counterfactual states, while simulacra do not offer interaction. Virtualism is not a new perspective; it is a conceptual clarification at the point of convergence of various contemporary, functionalist approaches to understanding the functionality, implementation and phenomenology of consciousness, including Bernard Baars’ and Stanislas Dehaene’s Global Workspace Theory, Michael Graziano’s Attention Schema Theory, Keith Frankish’s Illusionism, Thomas Metzinger’s Self-Model Theory and Yoshua Bengio’s Consciousness Prior. This convergence is marked by the role of consciousness as a control model of attention, at the interface between perception and reasoning, in the service of integrating different mental representations into a coherent model of reality, including the observing system’s own agency.
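The simulation/simulacrum contrast is the technical heart of the abstract, and a small sketch may make it concrete. The Python fragment below is purely illustrative (the pendulum example and every name in it are our inventions, not Bach’s): a simulation carries its own causal rule, so it supports intervention and counterfactual branching; a simulacrum only replays recorded observables.

```python
# Toy contrast between a simulation and a simulacrum (illustrative only).

class PendulumSimulation:
    """Reproduces observable dynamics with a causal structure of its own."""
    def __init__(self, angle, velocity, dt=0.01):
        self.angle, self.velocity, self.dt = angle, velocity, dt

    def step(self, push=0.0):
        # Interaction point: an intervention changes how the future unfolds,
        # so counterfactual branches can be explored.
        self.velocity += (-9.81 * self.angle + push) * self.dt  # small-angle model
        self.angle += self.velocity * self.dt
        return self.angle

class PendulumSimulacrum:
    """Replays recorded observables; no causal structure underneath."""
    def __init__(self, recording):
        self.recording, self.t = recording, 0

    def step(self, push=0.0):
        self.t += 1
        return self.recording[self.t - 1]  # 'push' is ignored: no counterfactuals

sim = PendulumSimulation(angle=0.3, velocity=0.0)
trace = [sim.step() for _ in range(100)]           # record observable dynamics
replay = PendulumSimulacrum(trace)                 # same observables, no model
branch = PendulumSimulation(angle=0.3, velocity=0.0)
print(branch.step(push=5.0))  # only the simulation supports this intervention
```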
Thomas Metzinger.
Abstract: By far the most common misunderstanding in the ethics of machine consciousness is that people think that one first assigns a certain probability to the emergence of conscious systems, and then makes a proposal as to how to best optimize the risk/benefit ratio from an ethical perspective. The typical knee-jerk reaction then is: “We do not even have to think about this, because it is all wildly speculative, mere Science Fiction!” This is false, and the popular resentment it expresses blocks progress. There is no given probability in this domain, and the ethical challenge rather consists in discussing the ethics of risk-taking under conditions of epistemic indeterminacy. “Epistemic indeterminacy” means that it is not the case that either we know that artificial consciousness will inevitably emerge at some point or we know that artificial consciousness will never be instantiated on machines or other postbiotic systems [7] (p. 47). It is this neither-nor-ness that has to be dealt with in a rational, intellectually honest, and ethically sensitive way.
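The quoted definition admits a compact epistemic-logic rendering (our gloss, not Metzinger’s own formalism), writing K for “we know that” and E for “artificial consciousness will at some point be instantiated”:

```latex
% Epistemic indeterminacy as the failure of knowledge in both directions:
\neg K(E) \;\land\; \neg K(\neg E)
% Neither disjunct of certainty is available, so the ethical question is
% how to take risks rationally inside this gap.
```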
J. Kevin O’Regan, Making a machine that really feels.
Abstract: There is an aspect of consciousness that is often considered to be mysterious and perhaps not amenable to science, and therefore not implementable in machines: so-called “Phenomenal Consciousness”. Phenomenal Consciousness involves the experience of “qualia” like the raw feel of the redness of red, the smell of onion, or the prick of a pin. Facts like the fact that experiences differ among themselves in certain ways, and that globally they have “something it’s like”, seem not to be explicable by current science. Instead, current theories of consciousness just bluntly assert that somehow the special phenomenology of experiences “emerges” from certain forms of complex information processing. But most current theories have nothing to say about the mechanisms that allow Phenomenal Consciousness to emerge in this way. The “sensorimotor theory”, on the other hand, is an approach that is directly aimed at explaining Phenomenal Consciousness. It suggests that there is a way of thinking about what a sensory experience consists in that disperses the apparent mystery of qualia. The approach contends that experience should be considered a thing we do, not a thing that is generated by brains. Taking this view immediately allows the similarities and differences between different experiences to be explained in terms of similarities and differences in the sensorimotor laws that govern the interactions with the world that different experiences consist in. The additional fact that experiences globally have “something it’s like” is explained by the fact, first, that one has conscious access to the experience, and second, that the experience has the property of “sensory presence”. Sensory presence is a measure of the extent to which an experience imposes itself on our cognitive processes by virtue of having what I call “bodiliness”, “insubordinateness” and “grabbiness”.
If we accept the sensorimotor approach to Phenomenal Consciousness, there is no obstacle to machines having “feels” exactly in the same way humans do. As soon as machines are sufficiently intelligent to be able to develop selves and be aware of their actions and thoughts, then, when they interact with the world, they will automatically also “feel”.
Peter (Piotr) Boltuc, Non-Reductive Physicalism.
Abstract: Radical functionalism on consciousness holds that all conscious and intelligent functions are strictly physical, while non-reductive physicalism holds that conscious experience cannot be reduced to strictly mechanical/functional third-person experiences. We define non-reductive physicalism not in terms of advanced functionalities, but in terms of what psychology calls creature-consciousness, at the level of its bio-chemical specificity, not its content. These positions might be seen as irreconcilable. I try to show that they are not, by arguing that first-person consciousness is physical in the way that chemistry or biology are physical, creating non-reducible, emergent physical processes. Thus, I demonstrate that non-reductive physicalism represents a complementary fit with radical functionalism on consciousness. A link to this presentation is included in Section 3.
Ron Chrisley, Machine Consciousness, Meta-Knowledge, and Physical Omniscience.
Abstract: Several thinkers have argued that a capacity for certain kinds of meta-knowledge is central to being conscious, and that meta-knowledge will, in turn, be central to the design of at least some forms of machine consciousness. After a quick review of such work, I will present a novel objection to Frank Jackson’s Knowledge Argument (KA) against physicalism, one in which such meta-knowledge plays a central role. First I will show that the KA’s supposition of a person, Mary, who is physically omniscient, and yet who has not experienced seeing red, is logically inconsistent, due to the existence of epistemic blindspots for Mary. I will then show that even if one makes the KA consistent by supposing a more limited physical omniscience for Mary, this revised argument is invalid. This demonstration will be achieved via the construction of a physical fact (a recursive conditional epistemic blindspot) that Mary cannot know before she experiences seeing red for the first time, but which she can know afterward. After considering and refuting some counter-arguments, I will close with a discussion of the implications of this argument for machine consciousness, and vice versa.
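The blindspot machinery can be made precise. Chrisley’s recursive conditional construction is more elaborate, but the simplest (Moorean) blindspot already exhibits the mechanism; the derivation below is a standard epistemic-logic sketch under factivity and distribution over conjunction, not a reconstruction of his argument.

```latex
% A Moorean blindspot for Mary: let b := p \land \neg K_M p,
% with K_M read as ``Mary knows that''.
% Suppose, for reductio, that Mary knows b:
K_M(p \land \neg K_M p)
\;\Rightarrow\; K_M p \,\land\, K_M \neg K_M p  % K distributes over conjunction
\;\Rightarrow\; K_M p \,\land\, \neg K_M p      % factivity: K\varphi \to \varphi
% Contradiction; so b may be true, yet Mary cannot know it, even though
% b is a perfectly physical fact about Mary.
```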
6.2. Keynotes in AI
Ben Goertzel.
Abstract: The patternist philosophy of mind begins from the simple observation that key aspects of generally intelligent systems (in particular those aspects lying in Peirce’s Third metaphysical category) can be understood by viewing such systems as networks of patterns organized to recognize patterns in themselves and their environments. Among many other applications, this approach can be used to drive formalization of the concept of an “open-ended intelligence”, a generally intelligent system that is oriented toward ongoingly individuating itself while also driving itself through processes of radical growth and transformation. In this talk I will present a new formalization of open-ended intelligence leveraging paraconsistent logic and guided by patternist philosophy, and discuss its implications for practical technologies like AGI and brain-computer interfacing. Given the emphatically closed-ended nature of today’s prevailing AI and BCI technologies, it seems critical both pragmatically and conceptually to flesh out the applicability of broader conceptions of intelligence in these areas.
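For readers unfamiliar with the term, the point of a paraconsistent logic in this setting is that it tolerates local contradiction without collapse into triviality; a one-line contrast (our gloss, not Goertzel’s formalization) runs:

```latex
% Classical consequence validates explosion (ex contradictione quodlibet):
p,\ \neg p \ \vdash\ q \qquad \text{for arbitrary } q
% A paraconsistent consequence relation rejects explosion:
p,\ \neg p \ \nvdash\ q
% so locally inconsistent self-models need not trivialize the system's
% overall theory of itself.
```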
Summit Panel: Artificial Inventors, AI, Law and Institutional Economics. Stephen Thaler (Imagination Engines Inc.); Kate Gaudry (Kilpatrick Townsend & Stockton LLP). Commenting Panelist: Peter Boltuc (University of Illinois Springfield; Warsaw School of Economics).
Stephen Thaler, The Artificial Sentience Behind Artificial Inventors.
Abstract: Using a new artificial neural network paradigm called vast topological learning [4], a multitude of artificial neural networks bind themselves into chains that geometrically encode complex concepts along with their anticipated consequences. As certain nets called “hot buttons” become entangled with these chains, simulated volume neurotransmitter release takes place, selectively reinforcing the most advantageous of such topologically expressed ideas. In addition to providing important clues about the nature and role of sentience (i.e., feelings) within neurobiology, this model helps to explain how an artificial inventor called “DABUS” has autonomously generated at least two patentable inventions [8,9,10].
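The architecture is compressed into two sentences here, so a toy may help fix ideas. The sketch below is only our illustration of the reinforcement loop the abstract gestures at (the module count, numbers, and chaining scheme are invented; this is neither DABUS nor an implementation of vast topological learning): chains of modules are sampled, and any chain that becomes entangled with a designated “hot button” has its links strengthened by a simulated global reward.

```python
import random

# Illustrative toy only: not Thaler's code, not vast topological learning.
random.seed(1)
modules = list(range(12))
hot_buttons = {3, 7}   # modules standing in for consequence detectors
strength = {(a, b): 1.0 for a in modules for b in modules if a != b}

def sample_chain(length=4):
    """Build a chain of modules, preferring strongly bound links."""
    chain = [random.choice(modules)]
    while len(chain) < length:
        candidates = [m for m in modules if m != chain[-1]]
        weights = [strength[(chain[-1], m)] for m in candidates]
        chain.append(random.choices(candidates, weights=weights)[0])
    return chain

for _ in range(500):
    chain = sample_chain()
    if hot_buttons & set(chain):           # chain entangled with a hot button
        for link in zip(chain, chain[1:]):
            strength[link] += 0.5          # simulated neurotransmitter reward

print("most reinforced link:", max(strength, key=strength.get))
```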
Kate Gaudry, Potential Impacts of Various Inventorship Requirements.
Abstract: Though many entities are discussing A.I. and patents, this umbrella topic covers a vast diversity of situations. Not only can artificial intelligence be tied to inventions in multiple ways, but the involvement of various types of parties can shift potential outcomes and considerations. This presentation will walk through various potential scenarios that may arise (or arise more frequently) as A.I. advances and consider when and how patents may be available to protect the underlying innovation.
Peter Boltuc: as the session chair, I decided to desist from presenting my commentary since the session ran out of time. The intended remarks pertained to the technical interpretation of court decisions and to Kate Gaudry’s writings on the limits of machine personhood.
Oron Shagrir, with Philippos Papayannopoulos and Nir Fresco, spoke on ‘Two kinds of computational indeterminacy’.
Jun Tani, Exploring Robotic Minds Under the Framework of Predictive Coding and Active Inference.
Abstract: My research has investigated how cognitive agents acquire structural representation via iterative interaction with their environments, exercising agency and learning from the resultant perceptual experience. Over the past two decades, my group has tackled this problem by applying the framework of predictive coding and active inference to the development of cognitive constructs in robots. Under this framework, intense interaction occurs between top-down intention, which acts proactively on the outer world, and the resultant bottom-up perceptual reality, accompanied by prediction error. The system tries to minimize the error, or free energy, either by modifying the intention or by acting on the outer world to change it. I argue that the system should become “conscious” when some computational effort is required to minimize this error. Otherwise, everything just goes smoothly and automatically, and no space for consciousness remains. My talk highlights our ongoing cognitive neurorobotics studies, which examine (1) development of primary intersubjectivity in dyadic imitative interaction robots, and (2) emergent behavior observed in a goal-directed planning robot under real-time embodied constraints.
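Because the abstract’s central loop, minimizing prediction error either by revising the intention or by acting on the world, is easy to misread, here is a minimal numerical sketch under our own simplifying assumptions (scalar states, fixed learning rates; this is not Tani’s RNN-based model):

```python
import numpy as np

# Minimal active-inference loop (illustrative sketch only).
# Prediction error can shrink two ways, as the abstract describes:
#   1) perception: revise the top-down intention/prediction mu
#   2) action: change the outer world so observations match the prediction

rng = np.random.default_rng(0)
world = 0.0           # state of the environment
mu = 2.0              # agent's top-down prediction ("intention")
lr_perception = 0.1   # rate of revising the prediction
lr_action = 0.05      # strength of acting on the world

for step in range(200):
    obs = world + rng.normal(scale=0.01)  # bottom-up perceptual reality
    error = obs - mu                      # prediction error
    mu += lr_perception * error           # inference: pull prediction toward evidence
    world -= lr_action * error            # action: pull world toward prediction

print(f"residual error: {world - mu:.4f}")
```

In the abstract’s terms, a run in which the error vanishes almost immediately “goes smoothly and automatically”; prolonged, effortful minimization is where, on this argument, consciousness would arise.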
6.3. Panel: Gödel, Church, and Turing in Retrospect
Abstract: It is often said that when the founders of computability talked about computers, they referred to a human computer. My aim is to distinguish between different approaches to the concept of a human computer, and to argue that the founders of computability and their interpreters take a stand between them. I will then conclude by commenting on the relations between human computation and physical computation.
Nathan Salmón, The Decision Problem for Effective Procedures.
Abstract: It is proved that the notion of an effective procedure (such as the truth-table method for determining provability in the propositional calculus, or the effective procedure for bisecting an angle using only a compass and a straightedge) is not itself decidable. The proof does not invoke Gödel numbering, Church’s thesis, Turing’s thesis, or the Church-Turing thesis. It instead proceeds directly from the intuitive notion of an effective procedure. While the result itself is perhaps none too surprising, it has a potentially awkward consequence for the task of solving decision problems (e.g., for solving the decision problem for provability in the propositional calculus).
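As a concrete instance of the kind of effective procedure the abstract starts from, the truth-table method for propositional provability fits in a few lines of Python (our illustration: the enumeration is finite and always terminates with a verdict, which is exactly what makes the procedure effective; the paper’s point is that deciding effectiveness itself admits no such procedure):

```python
from itertools import product

def is_tautology(formula, variables):
    """Truth-table method: a formula is provable in classical propositional
    calculus iff it is true under every assignment; the search over 2**n
    rows is finite, so the procedure always terminates."""
    return all(
        formula(dict(zip(variables, values)))
        for values in product([False, True], repeat=len(variables))
    )

implies = lambda a, b: (not a) or b
# Peirce's law: ((p -> q) -> p) -> p
peirce = lambda v: implies(implies(implies(v["p"], v["q"]), v["p"]), v["p"])
print(is_tautology(peirce, ["p", "q"]))  # True
```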
The paper by Gary Mar, “Gödel on Creativity of Mathematics versus Turing’s Mechanistic View of the Mind: An Irreconcilable Dichotomy?”, is being published in this issue.
The longer papers related to the conference on philosophy and computing have been invited to Philosophy and Science [Filozofia i Nauka], a journal of the Polish Academy of Sciences, and to other publications. Closely related material may appear in a future issue of the Journal of Artificial Intelligence and Consciousness.