# Turing’s Conceptual Engineering

## Abstract


## 1. Introduction

## 2. The Turing Test as an Intuition Pump

I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words. ([4], p. 433)

- Answering questions such as “Can machines think?” involves providing lexical definitions of the meaning of component terms.
- Lexical definitions of the meaning of component terms reflect the normal use of the words.
- If lexical definitions reflect normal use, then answering questions such as “Can machines think?” should rely on statistical surveys such as Gallup polls.
- Therefore, answering questions such as “Can machines think?” should rely on statistical surveys.
- However, answering questions such as “Can machines think?” should not rely on statistical surveys.

The extent to which we regard something as behaving in an intelligent manner is determined as much by our own state of mind and training as by the properties of the object under consideration. If we are able to explain and predict its behaviour or if there seems to be little underlying plan, we have little temptation to imagine intelligence. With the same object therefore it is possible that one man would consider it as intelligent and another would not; the second man would have found out the rules of its behaviour. ([15], p. 431)

This is not a conceptual analysis, this is conceptual engineering of an intuitive notion into, first, a clarified notion and then into a sharpened one. Turing merged these two steps in his famous informal exposition … he fixed the use and the context of his explication of effective calculability, abstracting the notion of effective calculability from practical limitations, ruling out infinity and any kind of ingenuity, and focusing on the symbolic processes underneath any calculation process. This disambiguation and clarification of effective calculability belongs to the explication step of the clarification of the explicandum. At the same time, Turing sharpened this clarified explicandum by arguing that what is effectively calculable has to be computable by an abstract human calculator respecting in his actions the bounds that we are now all familiar with, thereby implicitly giving a semi-formal definition of the notion of computorability. This implicit semi-formal axiomatization of effective calculability in terms of actions of a computor belongs instead to the mid-level step of the sharpening of the clarified explicandum. ([29], p. 21)

- Can an artefact be made to show the behavioural characteristics of an organism?
- How closely in principle could the behaviour of such an artificial organism parallel that of a human mind? ([34], p. 105).

## 3. From Close to Distant Reading of Turing

- Correlation does not imply causality.
- Correlation does not imply causation.

Because of the complexity of all morphogenetic processes, relevant experimental data may be difficult to obtain, but the task should not be regarded as an impossible one. An evident primary test of the theory will consist in the closeness of its applicability to a wide range of biological materials. (…) Indications of the validity of the theory by the method of prediction have already been obtained by Turing, using the digital computer. ([43], p. 46)

(…) it has been shown that there are machines theoretically possible which will do something very close to thinking. They will, for instance, test the validity of a formal proof in the system of Principia Mathematica. ([15], p. 472)

## 4. Conclusions

## Funding

## Conflicts of Interest

## Notes

1. Some critics of ordinary language philosophy ascribe this fallacy to the research tradition as a whole [5]. However, this seems insufficiently charitable. For ordinary language philosophers, the mere frequency or typicality of a usage was not sufficient to establish its correctness. In fact, they were considerably more sophisticated in justifying their normative claims, for example, about category mistakes involved in typical usage patterns (see, e.g., [6]).


3. A number of critics, from Claude Shannon and John McCarthy ([15], p. 437), through Stanisław Lem [19], to Ned Block [20], objected that the process could be quite “nonintelligent” while still producing intelligent-seeming conversations. The plausibility of the idea that a simple “lookup table” could produce and understand a potentially infinite number of English expressions over 30 min of unconstrained conversation is debatable, in particular because such lookup tables are not supposed to track the history of the ongoing conversation. Our current deep neural networks with hundreds of billions of parameters display remarkably fluent linguistic behavior, but remain brittle on common-sense questions and still fail to track the history of the interaction.
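Block’s “lookup table” objection in the note above invites a back-of-the-envelope calculation. The sketch below uses assumed, purely illustrative numbers (a 10,000-word vocabulary and a 300-word conversation; neither figure comes from Block or Turing) to show why such a table is physically implausible:

```python
import math

# Illustrative assumptions (not from the original sources):
vocabulary = 10_000        # distinct word types available in the conversation
conversation_length = 300  # roughly the word budget of a 30-minute exchange

# A lookup table would need an entry for every possible word sequence,
# i.e., vocabulary ** conversation_length distinct conversations.
digits = conversation_length * math.log10(vocabulary)
print(f"about 10^{digits:.0f} possible conversations to store")
# For comparison, the observable universe contains roughly 10^80 atoms.
```

Even under these conservative assumptions, the table would dwarf any physically realizable storage, which is why the objection turns on theoretical possibility rather than feasibility.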

4. Turing speculated that a machine would pass his test about a hundred years after 1952 ([15], p. 452), so the situation may look different in thirty years’ time. Note that in his 1950 paper he mentioned the end of the 20th century as the time when “the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted” ([15], p. 449). This latter claim does not imply, however, that judges at the turn of the 21st century would consider any machine capable of imitating human conversation.

5. Unfortunately, the official website of the Cambridge Turing Archive, previously available at www.turingarchive.org (accessed on 10 June 2022), is down as of 17 January 2022, which makes crucial online Turing archives no longer accessible. These archives also did not include full transcripts of Turing’s correspondence (OCR for typewritten or handwritten text remains very noisy), such as the letters found in 2017. For this reason, the corpus does not cover all of Alan Turing’s surviving correspondence. Nor does it contain Turing’s writings in pure mathematics [44] or his unpublished work in logic from [45]. There are two reasons for this exclusion: (1) the optical character recognition (OCR) of mathematical notation is extremely noisy; (2) collocation analysis does not produce meaningful results for English documents that are predominantly mathematical. While the writings in [15] were already digitized and proofread, the other work had to go through OCR; for this purpose, the Tesseract OCR engine was used (in the default LSTM setting). A more detailed description of the corpus, along with all modeling results, is available in the RepOD repository at https://doi.org/10.18150/WSTA4E (accessed on 15 June 2022).

6. To recreate the diagrams, open the Turing corpus in SketchEngine, click the Thesaurus Table on the Basic search tab, enter the term of choice, and click GO. When the textual table appears, click Show Visualization. All output data produced this way are available in the public repository, including the logDice metric of collocation strength and the frequency of the related term.

7. In this respect, SketchEngine differs from typical collocation extraction methods, including word2vec models, which are oblivious to the grammatical form of the underlying expressions. The association score used to extract collocations in SketchEngine is relatively corpus-independent [47], in contrast to other association scores used in collocation research. For example, Alfano’s method does not involve any surface-grammar constructs and relies on term frequency and co-occurrence [48], but such scores are comparable only within a single corpus. In general, collocation research often relies on variants of the mutual information score [41]. This implies that analyses of semantic similarity in the corpus under study may yield somewhat divergent results if other association scores are assumed. It must be stressed that various language technologies serve different goals and provide different insights into the text. For example, a word embedding model built from the Turing corpus using the word2vec algorithm [38] does not provide the same results as SketchEngine, because its reliability requires much more data (e.g., the terms closest to “intelligence” in this model are “near”, “during”, “bury”, “irregularities”, “early”, “supply”, and “discrete”, which are far from informative). The word2vec model is available in the repository.

8. I owe this observation to Joanna Loeb.
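To make the contrast in note 7 concrete, the two kinds of association score can be sketched as follows. The logDice formula follows Rychlý [47]; the frequency counts below are hypothetical and serve only to illustrate why logDice values are comparable across corpora while (pointwise) mutual information values are not:

```python
import math

def logdice(f_xy: int, f_x: int, f_y: int) -> float:
    """logDice association score [47]: 14 plus the log of the Dice coefficient.

    The theoretical maximum is 14 (x and y always co-occur); corpus size does
    not appear in the formula, so scores are comparable across corpora."""
    return 14 + math.log2(2 * f_xy / (f_x + f_y))

def pmi(f_xy: int, f_x: int, f_y: int, n: int) -> float:
    """Pointwise mutual information; the corpus size n enters directly,
    so scores are comparable only within a single corpus."""
    return math.log2(f_xy * n / (f_x * f_y))

# Hypothetical counts for a collocation pair (x, y):
# f_xy = joint frequency, f_x and f_y = marginal frequencies, n = corpus size.
print(logdice(50, 200, 300))         # ~11.68, regardless of corpus size
print(pmi(50, 200, 300, 100_000))    # ~6.38
print(pmi(50, 200, 300, 1_000_000))  # ~9.70: same counts, larger corpus, higher score
```

SketchEngine reports logDice alongside raw frequencies in the diagrams mentioned above; the values here are only meant to show the shape of the two formulas.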

## References

1. Blackburn, S. Think: A Compelling Introduction to Philosophy; Oxford University Press: Oxford, UK; New York, NY, USA, 1999; ISBN 978-0-19-210024-5.
2. Cappelen, H. Fixing Language: An Essay on Conceptual Engineering; Oxford University Press: Oxford, UK; New York, NY, USA, 2018; ISBN 978-0-19-881471-9.
3. Turing, A. On Computable Numbers, with an Application to the Entscheidungsproblem. Proc. Lond. Math. Soc. **1937**, s2-42, 230–265.
4. Turing, A. Computing Machinery and Intelligence. Mind **1950**, LIX, 433–460.
5. Gellner, E. Words and Things: An Examination of, and an Attack on, Linguistic Philosophy; Gollancz: London, UK, 1959; ISBN 978-1-138-14236-7.
6. Ryle, G. The Concept of Mind; Hutchinson’s University Library: London, UK, 1949; ISBN 978-0-09-023892-7.
7. Pullum, G.K. The Land of the Free and The Elements of Style. Engl. Today **2010**, 26, 34–44.
8. Wittgenstein, L. The Blue and Brown Books; Blackwell: Oxford, UK, 1958.
9. Shanker, S. Wittgenstein’s Remarks on the Foundations of AI; Routledge: London, UK; New York, NY, USA, 1998; ISBN 978-0-415-09794-9.
10. Floyd, J. Turing on “Common Sense”: Cambridge Resonances. In Philosophical Explorations of the Legacy of Alan Turing; Floyd, J., Bokulich, A., Eds.; Boston Studies in the Philosophy and History of Science; Springer International Publishing: Cham, Switzerland, 2017; Volume 324, pp. 103–149; ISBN 978-3-319-53278-3.
11. French, R.M. The Turing Test: The First 50 Years. Trends Cogn. Sci. **2000**, 4, 115–122.
12. Block, N. The Mind as the Software of the Brain. In An Invitation to Cognitive Science; Osherson, D., Gleitman, L., Kosslyn, S., Eds.; MIT Press: Cambridge, MA, USA, 1995.
13. Proudfoot, D. A New Interpretation of the Turing Test. Rutherford J. N. Z. J. Hist. Philos. Sci. Technol. **2005**, 1.
14. Łupkowski, P. Test Turinga: Perspektywa Sędziego; Wydawnictwo Naukowe UAM: Poznan, Poland, 2010; ISBN 978-83-232-2208-8.
15. Turing, A. The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life Plus the Secrets of Enigma; Copeland, B.J., Ed.; Oxford University Press: Oxford, UK, 2004.
16. Copeland, B.J. The Turing Test. In The Turing Test: The Elusive Standard of Artificial Intelligence; Moor, J.H., Ed.; Studies in Cognitive Systems; Springer: Dordrecht, The Netherlands, 2003; pp. 1–21; ISBN 978-94-010-0105-2.
17. Castelfranchi, C. Alan Turing’s “Computing Machinery and Intelligence”. Topoi **2013**, 32, 293–299.
18. Descartes, R. The Philosophical Writings of Descartes; Cottingham, J., Stoothoff, R., Murdoch, D., Eds.; Cambridge University Press: Cambridge, UK, 1985; Volume 1; ISBN 9780521288071.
19. Lem, S. Summa Technologiae; Wydawn. Literackie: Kraków, Poland, 1974.
20. Block, N. Psychologism and Behaviorism. Philos. Rev. **1981**, 90, 5–43.
21. Fodor, J.A. Psychological Explanation: An Introduction to the Philosophy of Psychology; Random House: New York, NY, USA, 1968.
22. Dennett, D.C. Intuition Pumps and Other Tools for Thinking; Penguin Books: London, UK, 2013; ISBN 978-0-393-08206-7.
23. Colby, K.M. Artificial Paranoia: A Computer Simulation of Paranoid Processes; Pergamon General Psychology Series, 49; Pergamon Press: New York, NY, USA, 1975; ISBN 978-0-08-018162-2.
24. Aron, J. Software Tricks People into Thinking It Is Human. Available online: https://www.newscientist.com/article/dn20865-software-tricks-people-into-thinking-it-is-human/ (accessed on 13 January 2022).
25. Koehn, P. Statistical Machine Translation; Cambridge University Press: Cambridge, UK; New York, NY, USA, 2010; ISBN 978-0-521-87415-1.
26. McCorduck, P. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence, 25th Anniversary Update; A.K. Peters: Natick, MA, USA, 2004; ISBN 978-1-56881-205-2.
27. Hofstadter, D.R. Gödel, Escher, Bach: An Eternal Golden Braid; Basic Books: New York, NY, USA, 1979; ISBN 978-0-465-02685-2.
28. Quinon, P. Can Church’s Thesis Be Viewed as a Carnapian Explication? Synthese **2021**, 198, 1047–1074.
29. De Benedetto, M. Explication as a Three-Step Procedure: The Case of the Church-Turing Thesis. Eur. J. Philos. Sci. **2021**, 11, 21.
30. Carnap, R. Logical Foundations of Probability; Routledge and Kegan Paul: London, UK, 1950.
31. Gandy, R. The Confluence of Ideas in 1936. In The Universal Turing Machine: A Half-Century Survey; Herken, R., Ed.; Oxford University Press: Oxford, UK, 1988; pp. 55–111.
32. MacKay, D.M. Information, Mechanism and Meaning; M.I.T. Press: Cambridge, UK, 1969; ISBN 0-262-13055-6.
33. Walter, W.G. The Living Brain; Norton: New York, NY, USA, 1953.
34. MacKay, D.M. Mindlike Behaviour in Artefacts. Br. J. Philos. Sci. **1951**, II, 105–121.
35. Moretti, F. Conjectures on World Literature. New Left Rev. **2000**, 1, 54–68.
36. Piper, A. Can We Be Wrong? The Problem of Textual Evidence in a Time of Data; Cambridge University Press: Cambridge, UK, 2020.
37. Young, T.; Hazarika, D.; Poria, S.; Cambria, E. Recent Trends in Deep Learning Based Natural Language Processing. IEEE Comput. Intell. Mag. **2018**, 13, 55–75.
38. Mikolov, T.; Chen, K.; Corrado, G.S.; Dean, J. Efficient Estimation of Word Representations in Vector Space. arXiv **2013**, arXiv:1301.3781.
39. Pennington, J.; Socher, R.; Manning, C.D. GloVe: Global Vectors for Word Representation. In Proceedings of the Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 25–29 October 2014; pp. 1532–1543.
40. Firth, J.R.; Palmer, F.R. Selected Papers of J.R. Firth, 1952–1959; Indiana University Press: Bloomington, IN, USA, 1968.
41. Gablasova, D.; Brezina, V.; McEnery, T. Collocations in Corpus-Based Language Learning Research: Identifying, Comparing, and Interpreting the Evidence. Lang. Learn. **2017**, 67, 155–179.
42. Turing, A. Mechanical Intelligence; Ince, D., Ed.; Collected Works of A.M. Turing; North-Holland: Amsterdam, The Netherlands; New York, NY, USA, 1992; ISBN 978-0-444-88058-1.
43. Turing, A. Morphogenesis; Saunders, P.T., Ed.; Collected Works of A.M. Turing; North-Holland: Amsterdam, The Netherlands; New York, NY, USA, 1992; ISBN 978-0-444-88486-2.
44. Turing, A. Pure Mathematics; Britton, J.L., Good, I.J., Eds.; Collected Works of A.M. Turing; North-Holland: Amsterdam, The Netherlands; New York, NY, USA, 1992; ISBN 978-0-444-88059-8.
45. Turing, A. Mathematical Logic; Gandy, R.O., Yates, C.E.M., Eds.; Collected Works of A.M. Turing; Elsevier Science: Amsterdam, The Netherlands; New York, NY, USA, 2001; ISBN 978-0-444-50423-4.
46. Kilgarriff, A.; Baisa, V.; Bušta, J.; Jakubíček, M.; Kovář, V.; Michelfeit, J.; Rychlý, P.; Suchomel, V. The Sketch Engine: Ten Years On. Lexicography **2014**, 1, 7–36.
47. Rychlý, P. A Lexicographer-Friendly Association Score. In Proceedings of the Second Workshop on Recent Advances in Slavonic Natural Languages Processing, Jeseníky, Czech Republic, 5–7 December 2008; Masaryk University: Brno, Czechia, 2008; pp. 6–9.
48. Alfano, M. Nietzsche’s Moral Psychology; Cambridge University Press: London, UK, 2019; ISBN 978-1-108-66106-5.
49. Proudfoot, D.; Copeland, J. Turing and the First Electronic Brains: What the Papers Said. In Routledge Handbook of the Computational Mind; Colombo, M., Sprevak, M., Eds.; Routledge: London, UK; New York, NY, USA, 2019; pp. 23–37.
50. Newell, A. Artificial Intelligence and the Concept of Mind. In Computer Models of Thought and Language; Schank, R., Colby, K.M., Eds.; W.H. Freeman: San Francisco, CA, USA, 1973; pp. 1–60.

**Figure 1.** Terms semantically close to “intelligence”. Font and circle sizes correspond to term frequency in the corpus, while distances reflect semantic similarity.

**Figure 5.** Modifiers of the term “machine” in the Turing corpus. To create this diagram in SketchEngine, click “Word sketch”, enter the term “machine”, and click GO; then select the column showing modifiers of “machine” by clicking the icon “Only keep this column…”. Next, click Show Visualization at the top right-hand side. Circle sizes reflect frequency, while distances reflect collocation strength.

**Figure 6.** A comparison of adjectival predicates that accompany the terms “mind” and “brain”. To the left, we find terms associated mostly with “mind”; to the right, those associated with “brain”.

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

Miłkowski, M. Turing’s Conceptual Engineering. *Philosophies* **2022**, *7*, 69. https://doi.org/10.3390/philosophies7030069