Article
Peer-Review Record

From Turing to Conscious Machines

Philosophies 2022, 7(3), 57; https://doi.org/10.3390/philosophies7030057
by Igor Aleksander
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 29 April 2022 / Revised: 24 May 2022 / Accepted: 27 May 2022 / Published: 29 May 2022
(This article belongs to the Special Issue Turing the Philosopher: Established Debates and New Developments)

Round 1

Reviewer 1 Report

As clearly outlined in the abstract, the scientific roots of this paper can be traced back to Turing’s seminal 1950 work “Computing Machinery and Intelligence”. In particular, the Author(s) offer an original, new hermeneutical reading of at least two foundational elements of Turing’s work: the question “can a machine think?”, framed in the context of the “Imitation game”, and the option of using networks of neuron-like units in what Turing called an ‘unorganised’ machine. This ‘unorganised’ machine is made to correspond, ante litteram, to Neural State Machines (NSMs), whose structure is held to be similar to that of the human brain. Such machines are considered for their ability to think.

The Author(s) argue that “can a machine think?” is epistemologically intertwined with “can a machine be conscious?” and that it may not be possible to discern consciousness from the behavior of the system alone: how this behavior is achieved is important, whereas in the context of the imitation game it was sufficient to appear conscious. In contrast with the general algorithmic approaches to machine consciousness, the notion of M-consciousness is axiomatically introduced: being M-conscious means being conscious in a machine way, and NSMs are argued to be M-conscious machines.

The epistemological interconnection between “can a machine think?” and “can a machine be conscious?” is originally established by considering a formal parallel between biological entities and NSMs. In particular, the “mind” of living entities is made to correspond to the “state structure” of artificial entities, just as “mental states” are made to correspond to the “internal states” of the NSM.

The Author(s), after solid argumentation, deliver the conclusion that “an M-conscious machine could produce some conscious behavior which, through being powered by a neural state machine, could share with living entities the properties that are responsible for thought (a state structure mind, with machine states as mental states)”.

The matter is very interesting and the arguments are clearly expressed. The paper’s appropriateness for the journal is good. The scholarship, novelty and quality of the paper are very high. The importance and correctness of the work are excellent. The soundness of the language is very good. I have no specific comments.

Author Response

I thank the reviewer for having appreciated the paper in some depth. There are no points that require a specific response. The suggestion to do an additional spell check will be carried out.

Reviewer 2 Report

There are minor typos on lines 76, 100, 107, 122, 127, 164, 253, several of which involve the placement of commas, something that varies in dialects of English. 

Author Response

I thank the reviewer for pointing out the necessary typographical corrections. These and others will be corrected.

Reviewer 3 Report

The paper addresses a key view in the literature. However, it presents a rather superficial account of how consciousness could be implemented in machines that is either only partly or completely irrelevant to both the Turing test and its interpretation by Copeland and Proudfoot. With respect to the former problem, the author(s) should consider that in the 1950 paper, Turing explicitly rejects consciousness as relevant for intelligence on the basis that it is solipsistic. The authors must at the very least address this problem.

With respect to the latter interpretative problem, the emotional aspects of intelligence can be interpreted in various ways, but these are independent of whether the machine is conscious or not. The author(s) mention Searle, who thought that consciousness was required for intelligence. Searle also argued that life was necessary for consciousness. This contradicts the author(s)’ argumentation, which again needs to be related to the interpretation of Copeland and Proudfoot. I suggest that the author(s) consult the relevant portions of Boden (2016, chapter 5) and the distinction, which challenges some aspects of Searle's account, by Haladjian and Montemayor (2016). The authors must address at least these two challenges, namely, a) that even neural network-based machines may not be conscious because they are not biological systems (see Boden), but b) that in spite of not being conscious they may still be intelligent (Haladjian and Montemayor). Without addressing these issues, the argumentation is not effective.

Author Response

Referee 3 makes three salient points, to each of which I provide a comment that will appear in the revised paper.

  1. “Turing explicitly rejects consciousness as relevant for intelligence on the basis that it is solipsistic”

Reply: Turing addresses the issue of consciousness in his rebuttal of Jefferson’s 1949 Lister Oration argument that a machine cannot have the same ‘feelings’ and perhaps write sonnets like humans. Turing points out that, in the extreme, the only way to know that a person thinks is to be that person. This is the solipsist point of view. Turing goes on to write “… instead of arguing continually over this point it is usual to have the polite convention that everyone thinks. …” I see this as the ‘attributional’ point of view, which appears in the paper at the end of section 7.

 

  2. “… neural network-based machines may not be conscious because they are not biological systems (see Boden)”. The referee is pointing to Boden’s 2016 book and mentions chapter 5.

Reply: Boden’s (2016) narrative (mainly in chapter 6, not 5) reviews the pros and cons of claims made for early work in machine consciousness, neural and not. I shall refer to this work, but her discussion does not exclude machine consciousness on the grounds of its not being biological. In the paper, the concept of M-consciousness is introduced to highlight that a consciousness-like property can occur usefully in machines and that its distinction from the biological throws light on both.

  3. “… in spite of not being conscious they may still be intelligent (Haladjian and Montemayor, 2016).”

Reply: In the machine world, most AI work attempts to achieve ‘intelligent’ tasks without encompassing ‘consciousness’. In the natural world, being conscious is necessary to develop intelligence. The above authors are concerned with deeper aspects of consciousness, such as the feeling of emotion, arguing that these are still a long way from being available to machines. I totally agree and can briefly stress this issue in the revision.


Round 2

Reviewer 3 Report

The paper should be revised for typos and style throughout the text, including the references. The authors addressed my main concerns, but should make an effort to make the paper a bit more rigorous and readable. 
