Peer-Review Record

Embodied AI beyond Embodied Cognition and Enactivism

Philosophies 2019, 4(3), 39; https://doi.org/10.3390/philosophies4030039
Reviewer 1: Anonymous
Reviewer 2: Tom Froese
Reviewer 3: Anonymous
Received: 18 April 2019 / Revised: 10 July 2019 / Accepted: 11 July 2019 / Published: 16 July 2019

Round 1

Reviewer 1 Report


In my understanding, the author aims to put forward a radical view on embodied cognition (EC) and embodied artificial intelligence (EAI) that disqualifies embodiment as constitutive or even necessary for cognition. The bigger aim would be to suggest a perspective of cognition that is coherent with both living and artificial systems.


The aim stated in the text is to show that the notion of embodiment boils down to functionalism and, therefore, would be an idle wheel in explaining cognition. To accomplish these aims, the author proposes to address the conceptual and empirical issues arising from strong EAI, namely circularity, epiphenomenalism, and implicit mentalism.


Despite the significance of the content, I cannot recommend this text for publication, for the text doesn’t satisfy its aims, and it could benefit from a better organizational strategy.


I will provide below a few difficulties the reader finds in the text.


I would also recommend a grammatical revision.


On item 2, page 2:


“To allow EC to be more than a trivial thesis about the fact that all functional processes need to be instantiated by physical structures, EC needs a meaningful difference between the body and other physical systems, which seems infeasible unless one sneaks in mentalistic or circular definitions of the body.”


It would be good to explain why it seems infeasible.


On page 3: 


1- “Of course, any representational level would defeat the purpose of EC.”


This does not seem to be a matter of course, for the point is still under debate.

See, for example, your reference number 25.


2- “The crux of the matter, though, is whether there is any difference between a functional loop implemented by means of a computational unit and a functional loop implemented by means of rotating iron spheres. While the two cases are described using different terminologies, they implement exactly the same functional pattern. Why should they be any different? The body – and the network of interactions with the world – is just an extended brain. They offer the right circumstances to instantiate functional patterns.”


This (in italics) is really the question, and it is not an easy one: is there more to the nature of cognition than its functions? Will AI ever be conscious? Will they feel? Is ‘detecting’ the same as ‘feeling'?

These are very important issues to be determined. There are serious implications to determining them, like attributing intentionality to AI, giving them civil rights, etc.


On page 4:


There are important questions that are apparently used rhetorically. This doesn’t seem appropriate given the importance of the matter.

 
On page 5: 


The description of Figure 1 is made of questions. It would be good to have an actual description that explains why the picture is relevant. 


As it stands, it seems that you think your reader is a dummy. 


On item 3, page 5: 


There is an important point here: if you consider video-games, you are only providing another environment to the same body and, therefore, not showing that the body is not necessary. 


On page 6: 


There are a few problems of clarity here. 

1- Regarding the relevance of the body in functionalist views.

2- Regarding your understanding of Chalmers and Clark. 


On page 6, item 4:


It seems to me that your question of whether EC has any value for AI is the center of your concern. It would be good for the reader if you started from this point and then developed the arguments.


Pages 7 and 8:


The treatment of the points you mentioned in your abstract is quite short. 


Page 10:


Your proposal of en-world-ment should be developed a little further. It would be good to consider the implications of such a view.


Author Response

I thank the reviewer for his/her very helpful comments. I understand that the original version of the paper needed a deep revision to address an insufficient communication strategy. I followed all of his/her suggestions in order to improve the article. I add some specific comments here to highlight the changes I made in response to the reviewer’s comments. Of course, since I also had to take the other reviewers’ comments into consideration, the number of changes I made is much greater.

I attached a docx file where I list each comment and question.


Author Response File: Author Response.pdf

Reviewer 2 Report

The article highlights several philosophical shortcomings of embodied AI and concludes with an alternative proposal, namely that what we experience is identical to the world itself and that this experience shows up by virtue of our embodiment. At this general level the article makes a useful contribution to the literature, which closely aligns with certain tendencies in 4E cognition research, especially of the enactivist variety. Its novel contribution to these tendencies is to attempt to clarify the concepts of constitution and of identity. In particular, it limits the notion of identity to the external world, while the organism is relegated to a necessary condition of the experience. This is an interesting proposal, which is consistent with the absence of organismic physiology in our experience, and as such it deserves to be considered in more detail. 

 

However, unfortunately this contribution is overshadowed by the article’s uneven quality in terms of rigorous argumentation. The writing often borders on the colloquial, and even more problematically, the style of philosophical argumentation often tends towards a mixture of painting extremes, making overgeneralizations, and giving personal opinions that are sometimes even passed off as facts. Moreover, although in many respects the criticisms of embodied AI are actually justified, they are out of date because most of them were already raised a decade ago (see particularly the work by Di Paolo, Ziemke, Bishop, Froese, Vernon, etc.). In addition, the tradition that originally raised these criticisms in the most sustained manner, namely the enactive approach, is only given a superficial treatment. This is unfair to the decade of research in enactive philosophy of mind that has tried to address many of the same challenges that the author would like to address. It also means that there is a missed opportunity to take a closer critical look at how the enactive approach has tried to address the challenges, and how it has developed accordingly over the years.

 

In particular, while the enactive approach might have been quite close to other forms of embodied cognition in many respects over a decade ago, it is now a very distinct paradigm that has come to a similar conclusion as the author, namely that our embodiment is a condition of possibility for the direct perception of the world. Rather than superficially building up a strawman in order to brush away the enactive approach, it would be much more fruitful and mutually beneficial to make a deeper attempt to compare and contrast the most recent manifestations of this approach with the alternative approach that the author is defending. As it stands, it is not clear whether there is really any essential difference between the identity theory of the author and the identity theory and direct perception theory defended by the enactive approach. It seems like there could be some interesting differences, like an avoidance of reliance on the concept of constitution and a limitation of the identity relationship to the world to the exclusion of the organism’s body. However, unfortunately at the moment these differences and their potential theoretical and practical consequences remain unclear.

 

For these reasons I cannot recommend publication of the article, although it contains seeds for promising future contributions if the author spends some more time on developing them.

 

Below I offer some more specific feedback:

 

-      61: It is indeed a challenge to demonstrate how constitution differs from causation, but at least philosophically there are some profound differences. But even in explanatory terms it has been used. For instance, Noë has argued that since the world plays a constitutive role in perceptual experience, it better accounts for its open-ended richness (Noë, 2002). See also (Fish, 2009).

-      68: I don’t get why “epiphenomenalism makes constitution very bad for AI”

-      77: The enactive approach has been doing a lot of work on defining precisely in a better way what being or having a body amounts to, see e.g. for extensive treatments (Di Paolo, Buhrmann, & Barandiaran, 2017; Di Paolo, Cuffari, & De Jaegher, 2018).

-      88: Shaun Gallagher wrote what? Is the next paragraph a quote?

-      102: asking for evidence that no other ways of achieving something exist is not really the business of science. That’s not a fault of EC but a general impossibility.

-      103: asking for evidence of constitution is also somewhat odd. Why should empirical data be the final arbiter regarding the constitution-causation-identity debate? Is there no role for philosophical argumentation?

-      118: again, it seems odd to argue that applying mechanical overkill to solve a problem that could be solved much more simply with embodied design principles fails to show that EC plays a necessary functional role. To take another famous example from the history of science: it is not impossible to calculate the movements of the stars when taking the earth as the center of the solar system, but it’s much easier to do it when assuming that the sun is at the center. So most people assume that the latter is a better explanation. Why not apply the same standards to EC?

-      140: Yes, functionalism naturally leads to EC of a certain variety. That’s why the extended mind hypothesis is occasionally referred to as extended functionalism. But not all forms of embodied cognition accept functionalism. See e.g. the rejection of functionalism by enactive cognition authors (Di Paolo, 2009; Froese, 2017).

-      p. 4: most of this criticism of the vagueness of the concept of embodiment is correct, but not new. E.g. check out Gallagher’s concern about the “body snatchers”.

-      222: there is a debate about whether the implementation makes a difference in the current philosophy of computation. There is a growing trend to acknowledge that the materiality can make a difference. Intuitively, at a sufficiently abstract level functions can be the same when realized in different media, but as you go down to more and more fine-grained levels, differences may become apparent. Accordingly, Clark talks about “micro-functionalism”. By the way, the author’s thought experiment only works in principle if there is a bottom level to the universe; otherwise there will always be another, smaller level of detail that escapes the simulation and which can lead to different behavior under counterfactual circumstances. Moreover, the thought experiment also fails if, along with functionalism, we drop determinism.

-      227: It is not clear whether any body and its environment can be simulated. Many people assume so, but it’s hard to prove that this is the case. Moreover, there are good reasons to doubt that this is the case (Di Paolo, 2009).

-      232: many people, especially enactivists, agree that it is necessary to move beyond functionalism, which is why materiality, historicity, and organization are important topics for current research.

-      263: it’s quite strange to accuse the extended mind hypothesis of being “pretty obvious”. It wasn’t so obvious before it was finally proposed!

-      282: I don’t get why the notion of embodiment is supposed to be mostly cosmetic. There are plenty of examples from robotics in which it is shown that properties of the agent’s embodiment make an essential contribution to its behavior and problem-solving capacities. See especially all the work by Pfeifer and colleagues, e.g. (Pfeifer & Bongard, 2007). Moreover, that these advances in robotics are not just cosmetic is evidenced by the impressive theoretical changes they encouraged (Wheeler, 2005). Brooks’ robots are still discussed as useful thought experiments in recent philosophical debates (Hutto & Myin, 2013).

-      291: The enactive approach has asked many of these questions over the years and tried to provide rigorous responses. Regarding what counts as an action, see for example (McGann, 2007).

-      314: I quite like the author’s observation that “matter matters”. There has been growing resistance against multiple realizability.

-      332: why should science be the final arbiter regarding constitution? It seems strange to reduce these questions to scientific methods alone.

-      339: this swamp man scenario does not work according to the enactive approach; the same embodiment and organization would entail the same experience.

-      369: why does the author claim that body-world interactions are not any more like one’s experience than neural signals? And to the contrary, isn’t the author precisely appealing to the similarity between experience and world at the end of the article?

-      Section 4.4: the criticisms of sensorimotor contingency theory are largely valid, but they have been raised years ago. See especially (Thompson, 2005). This is why Noe embraced a more biologically grounded enactive approach in his subsequent work (Noë, 2009).

-      389: the enactive notion of body derives from autopoietic theory, which focuses on self-production of a systemic identity. So there seems to be no vicious circularity here.

-      402: why would the “only” reason to endorse embodiment be because it contains a brain? The body realizes many important processes underlying behavior.

-      409: some enactivists have criticized the body-centered view of their colleagues, see e.g. Hutto and Myin’s equal partners principle (Hutto & Myin, 2013). But there are also good arguments for assuming that the relationship between body and world is asymmetrically centered on the body of the organism (Di Paolo, 2010; Froese & Ziemke, 2009).

-      424: autopoiesis is not a matter of function. It is a matter of organization, which can be used to distinguish between different kinds of system. This is not simply an ad-hoc matter, but actually separates living and artificial systems as essentially different kinds of systems (Ziemke, 2007).

-      427: why does this not sound good? Does the author want to defend functionalism? It would be better to hear arguments rather than personal impressions.

-      448: the mind-object identity theory sounds close to recent developments in enactive theory, which also stress the identity of an organism in the world with its experience (Myin & Zahnoun, 2018), and which accept that the contents of perceptual experience are identical with the world itself, thus providing direct access to the world (Beaton, 2013, 2016).

-      458: suddenly the author accepts that the body is necessary? So why all of the dismissal of embodiment research? Does being necessary not mean it is special? It seems that the author owes the reader an account of what it is about the body that allows it to serve as a necessary condition of possibility of experience. And if this is unpacked a bit more, it is quite likely that the author will end up with the biological foundations of enactivism.

 

References

 

-               Beaton, M. (2013). Phenomenology and embodied action. Constructivist Foundations, 8(3), 298-313. 

-               Beaton, M. (2016). Sensorimotor direct realism: How we enact our world. Constructivist Foundations, 11(2), 265-297. 

-               Di Paolo, E. A. (2009). Extended life. Topoi, 28(1), 9-21. 

-               Di Paolo, E. A. (2010). Robotics inspired in the organism. Intellectica, 1-2(53-54), 129-162. 

-               Di Paolo, E. A., Buhrmann, T., & Barandiaran, X. (2017). Sensorimotor Life: An Enactive Proposal. Oxford, UK: Oxford University Press.

-               Di Paolo, E. A., Cuffari, E. C., & De Jaegher, H. (2018). Linguistic Bodies: The Continuity Between Life and Language. Cambridge, MA: MIT Press.

-               Fish, W. (2009). Perception, Hallucination, and Illusion. New York, NY: Oxford University Press.

-               Froese, T. (2017). Life is precious because it is precarious: Individuality, mortality, and the problem of meaning. In G. Dodig-Crnkovic & R. Giovagnoli (Eds.), Representation and Reality in Humans, Other Living Organisms and Intelligent Machines (pp. 30-55). Switzerland: Springer.

-               Froese, T., & Ziemke, T. (2009). Enactive artificial intelligence: Investigating the systemic organization of life and mind. Artificial Intelligence, 173(3-4), 366-500. 

-               Hutto, D. D., & Myin, E. (2013). Radicalizing Enactivism: Basic Minds without Content. Cambridge, MA: The MIT Press.

-               McGann, M. (2007). Enactive theorists do it on purpose: Toward an enactive account of goals and goal-directedness. Phenomenology and the Cognitive Sciences, 6(4), 463-483. 

-               Myin, E., & Zahnoun, F. (2018). Reincarnating the identity theory. Frontiers in Psychology, 9(2044). doi:10.3389/fpsyg.2018.02044

-               Noë, A. (2002). Is the visual world a grand illusion? Journal of Consciousness Studies, 9(5-6), 1-12. 

-               Noë, A. (2009). Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness. New York, NY: Hill and Wang.

-               Pfeifer, R., & Bongard, J. C. (2007). How the Body Shapes the Way We Think: A New View of Intelligence. Cambridge, MA: MIT Press.

-               Thompson, E. (2005). Sensorimotor subjectivity and the enactive approach to experience. Phenomenology and the Cognitive Sciences, 4(4), 407-427. 

-               Wheeler, M. (2005). Reconstructing the Cognitive World: The Next Step. Cambridge, MA: The MIT Press.

-               Ziemke, T. (2007). What's life got to do with it? In A. Chella & R. Manzotti (Eds.), Artificial Consciousness (pp. 48-66). Exeter, UK: Imprint Academic.

Author Response

I thank the reviewer for his/her very helpful comments. I understand that the original version of the paper needed a deep revision to address an insufficient communication strategy. I followed all of his/her suggestions in order to improve the article. I add some specific comments here to highlight the changes I made in response to the reviewer’s comments. Of course, since I also had to take the other reviewers’ comments into consideration, the number of changes I made is much greater.

I attached a docx file where I list each comment and question.


Author Response File: Author Response.pdf

Reviewer 3 Report

I read the present paper with great interest but found several serious flaws in its argumentation that manifest themselves in a variety of ways.

The paper construes "Embodied Cognition" as a straw man by making at times simply exaggerated, at times outright false claims, hence falling into the straw man fallacy.
More generally, the paper at many points makes overly strong claims that are not backed up by the literature, or states the author's opinion as fact.
Overall the paper requires a major rewrite and a far more nuanced discussion of EC than is currently present - less polemics and more scientifically backed qualifications.

In the current state I fail to see the merit of the publication other than acting as a pointer to the author's previous publications. But I would be willing to reconsider my position if an actual argument were presented without the present distortions and falsehoods.

Stylistically the writing became sloppier as the paper progressed. I spotted more grammatical, semantic, and lexical errors, hence the paper also requires a considerable "clean-up" operation in this respect.


I will in the following point out some of the wrongly stated "facts", misrepresentations, and distortions, and at the end give an incomplete list of lexical/grammatical/semantic errors. I am unable to provide a full list and explanation of falsehoods and misrepresentations of EC as there are simply too many. But the author should be able to apply the criticism as pointed out in the following list to the rest of the paper.



C1 (line 32-37): I don't share the author's opinion, and this is precisely what it is, that the improvements in AI and robotics have been merely a matter of an increase in computing power. While the advent of massively parallel graphics cards is fundamental to Deep Learning, there are areas in robotics whose progress was much more linked to changes on the conceptual level than on the level of mere computing power. As a counter-example to the alleged "fact" I point to reactive robotics as a robotic paradigm, by and large founded by Rodney Brooks. Later developments in this line of research are passive walkers, where the expensive central processing needed for generating the required gait has been replaced with material properties that are more similar to the human walking apparatus.
Furthermore, the recent advances in speech processing that are cited by the author still do not lead to anything like natural conversations.
Tellingly, as also pointed out by some of the leading researchers in the field, the shortfall might be linked to insufficient progress in the understanding and "modelling" of pragmatics - an area that has more to do with how speech is "coupled" to wider contexts than with "central processing" of linguistic entities [1]. Illustrative examples of such pragmatic failure can be found in [2]. So there really is no "proof of fact" here, just an expression of opinion.

C2 (line 42-43): "If cognition can occur outside the head ...". This is an example of far too general and oversimplistic reasoning. The devil here really is in the detail. The statement may be true for certain cognitive processes and tasks, but certainly not for all. Certain cognitive processes may involve a tight coupling between central processing, the sensor apparatus, and the physical environment. Take for example driving a car, or shifting gears, where part of the processing has been pushed into the sub-conscious, and relevant signals originating from the car itself are mediated by the nerves of the hands, legs, etc. This type of cognition cannot simply be "pushed into" the head. Obviously one could try to simulate the entire process in a computer, but this would require near-total knowledge of the process(es), and would constitute a mere duplication of the embodiment in silico. This would effectively only amount to a replacement of the original embodiment with some artificially created embodiment, rather than re-locating all involved processes "into the head".
Another example, where the environmental/social context cannot just be "pushed into the head", would be social cognitive processes that involve other agents and temporally tightly coupled interactions such as conversations. Here, the adherence to or violation of time constraints gives rise to certain interpretations of what has been said [3]. Such time constraints are genuinely inter-personal and can therefore not just be "pushed into the head" of any single one of the interlocutors. Processes that are genuinely inter-personal cannot be reduced to a single agent. Again, one could try to simulate "the other", but all that one would do in this case is to replicate the interlocutor in silico, rather than "pushing it back" and reducing it to the central processing of a single agent.

C3 (line 57): "On the other hand, EC entails a stronger thesis about the constitutive role of the body and the environment". This may be the case for certain branches of EC, but most probably not for all. As "embodied cognition" is not a single branch of research but rather an umbrella term for a number of approaches, it will be hard to find assertions and tenets that all researchers in these fields will subscribe to.
I do not agree that ALL researchers subscribe to a "stronger thesis" than the emphasis on the important role that the body and the physical and/or social environment plays in cognition.

C4 (line 64) [Strawman alarm]: "This is not a straightforward thesis ... ". I disagree strongly! It really is quite straightforward. As we know from many complex systems, some behaviours or regularities can only be observed on certain levels of the system but not on lower levels when looking at the single entities that make up the more complex level. While this may sound superficially mysterious, there really is nothing mysterious about such emergent processes. Take a flock of birds as an example: while you can observe certain patterns and regularities in the behaviour of the flock, these patterns cannot be observed in a single bird. However, if we know the rules, we can simply implement them and reproduce such behaviour. There is really nothing mysterious or epiphenomenal about such processes. The natural world is full of complex systems where higher-level processes are given rise to by lower-level processes. Most biological phenomena will only make sense on distinct levels of the organism, but not on lower levels. Searching for cancer on an atomic level would be futile. Nevertheless hardly anyone would call it epiphenomenal just because we cannot reduce it to quantum physics in a straightforward manner.


C5 (line 79ff.) "To allow EC to be more than a trivial thesis ...". This sounds like mere polemics to me. Functionalism and EC are somewhat orthogonal to each other in terms of what they try to explain (cf. [4]). Many researchers who subscribe to EC try to explain the details of some cognitive and sensorimotor processes, either just to understand them, or to reproduce them artificially. While these details may seem uninteresting, or "trivial", to the author in their grand sketch of the universe, it still is serious research.
If you take the example of energy-efficient walking, it may be uninteresting to the average philosopher, but how to get a machine to walk as efficiently as a human being has been an unsolved problem in robotics for quite some time. In a similar fashion, there are currently many research projects on grasping in robotics, which may be of great interest to EC researchers in that it may not be straightforward to disentangle the embodied part of the cognition (material properties etc.) from the (centrally processed) machine learning bit.
These approaches are at the same time functionalist, in that there is an attempt to replicate a genuinely human capability, and embodied. Yet "mere functionalists" would probably not care about this type of research.

C6 (lines 96-107): There is a contradiction within the same paragraph. First he cites Lakoff & Johnson to say that yes, parts of higher cognition such as language are rooted in and permeated by bodily metaphors and parameters. A few sentences later we find the claim that there is no evidence that EC would be constitutive of mental phenomena. If linguistic concepts such as spatial relationships are given rise to by particular properties of our body - distinguishing between "front" and "back" only makes sense in light of the fact that our eyes are attached to only one side of the body - I cannot see how this body-related parameter would not be "constitutive" of these very concepts. It would take a very odd definition of "constitutive" for the given set of assertions not to be internally contradictory.

C7 (lines 117-118): "... achieving efficient biology-like ...". How could a robot that has sensors and uses sensor data to modulate its actuators possibly be "without any embodiment"? I am flabbergasted ... .


C8 (lines 142 ff): This and the following sentence are quite polemical. But let me nevertheless give you an example of a robot that came out of the historical line of reactive robots, which abandoned "classic central processing" to a good degree: Roomba, the robotic vacuum cleaner. That is at least a little achievement, isn't it?

C9 (lines 146-147): Polemics again. The relevant question that some EC researchers ask is not whether one type of functional relation is "better" than another inside the brain. The questions typically aim at improving our understanding of the cognitive process under investigation, which may involve some sensori-motor processing that may or may not be reducible to central processing. Before one understands the process in question, such questions are purely polemical.

C10 (lines 210-211): "Actually, it is ...": many simulators are not "commercial programs" but open source. And many of them are frameworks that need custom programming in order to obtain an executable simulation. Please rewrite, as this gives a strong indication that the author has never worked with a simulator.

C11 (lines 229-230 & lines 271-277): This is a non-argument. Many EC researchers use simulations in order to advance their understanding of certain processes. I am not aware of any EC researcher who has claimed that selected processes could not be simulated, granted that we understand the constitutive parts of the process in question and their interaction sufficiently. But this is often not the case, and even then simulations might be used to explore the parameter space and compare the results to the non-simulated original process in question.

C12 (lines 234 ff.): "If one is a functionalist ... ". Polemics again! As pointed out above under C5 and by ref [4], functionalism and EC are not mutually exclusive, in the same sense that physics and biology are not mutually exclusive. While biology could potentially be reduced to physics somehow, there is nevertheless some merit in looking at phenomena on a biological level. Just because functionalists are not interested in the nitty-gritty details of sensorimotor loops, functionalism is not fundamentally opposed to EC.

C13 (line 251): "get away with it ..." Get away with what?


C14 (lines 281-283): EC, if reducible to functionalism, is only irrelevant in the same sense that biology is irrelevant as a field of research if reducible to physics, which under some perspective it presumably is.

C15 (lines 289-293). Polemics, once again. Nobody I am aware of claims that the body is "special" in any metaphysical sense. EC researchers often just emphasise that not all relevant parts of a cognitive process are located within the brain; they are spatially distributed.
That does not make the body "special" in any sense; it just points out where important elements of processing and interaction are located.

C16 (lines 299ff.). The notion of `agent' is fairly fundamental to Artificial Intelligence in general, and not specific to EC.

More falsehoods, misrepresentations of EC research, and polemics can be found in lines 315-317, 322-323, 331-332, 367-369, 377-380, 397, 403, 424-427 (misrepresentation of autopoiesis, which is functionally defined), and 474.





Grammatical, lexical, semantic errors:

line 61: "Constitution suffers ... . " I don't understand this sentence. Apart from containing lexical errors, I don't see anything mysterious or "epiphenomenal" in the assertions that involve the word constitution, or "consists of .. ". There is nothing epiphenomenal about the assertion "A table consists of, or is constituted by, four table legs and a table surface". So either this sentence is outright false or it is just formulated in a strange way that needs clarification.

line 73: "What does it count ... "

line 121: "Eventually ..." A non-sentence

line 189: "In alternative ..." non-English

line 263: opposers <-> opponents?

line 275: ".. where is the physical structure": rewrite

lines 350-351, 374, 458-459,


References:
[1] Hirschberg & Manning (2015). Advances in natural language processing. Science 349.
[2] Porcheron et al. (2018). Voice Interfaces in Everyday Life. In: Proceedings of CHI.
[3] Jefferson (1989). Preliminary notes on a possible metric which provides for a "standard maximum" silence of approximately one second in conversation. In: Conversation: An Interdisciplinary Perspective.
[4] Gallagher (2011). Interpretations of embodied cognition. In: Tschacher & Bergomi (eds.): The Implications of Embodiment.

Author Response

I thank the reviewer for his/her very helpful comments. I understand that the original version of the paper needed a deep revision to remedy an insufficient communication strategy. I followed all his/her suggestions in order to improve the article. I add here some specific comments to highlight the changes I made to address the reviewer’s comments. Of course, since I also had to take into consideration the other reviewers’ comments, the amount of changes I made is much greater.

I attached a docx file where I list each comment and question.


Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report


Dear Author,


Indeed, the second version of the manuscript is more comprehensive than the first.


Thanks for submitting that again. It gave me much food for thought!


Please consider the following questions, objections, and suggestions:


Questions and objections:


Lines 54 and 55: 

What is it for a machine to be embodied or enacted? Are you suggesting that it is a machine that writes its own code?


Lines 60 to 62: 

Different from what?


Line 105: 

What would be (serve as) an empirical confirmation of the thesis that cognition is embodied?


Lines 114 to 116:

In my understanding, the expression “playing a role in shaping cognition” means that the body is constitutive of the mind and not merely that it causes differences in one’s cognitive processes (I explain why in the next paragraph). Causation, in this context, would imply that cognition is something apart (different) from the body and, therefore, also from the subject, if we assume that the body is part of the subject. Naturally, we don’t want to conclude that cognition is something different from the subject. 


The body is not constitutive of the world, but it can be constitutive of our experiences of the world.


It seems to me that the body being constitutive of the mind doesn’t mean that it merely causes differences in one’s experiences. Considering a causal role would imply that the element is not constitutive, of course, since one thing cannot cause itself. In other words, for causation to take place, one has to consider that an element A causes an element B. This means that they are two different things and not one thing causing itself.

When I say, for example, that my pen is constituted by plastic and ink, these features are not in a causal relation between themselves. Nevertheless, the very fact that they both constitute the object is a requirement for the object to fulfill its function, namely to serve as a pen.


Lines 119 to 120:

Please explain why not.


Lines: 134 to 137:

This part is difficult to understand. 



General notes: 

It seems to me that you conflate three aspects of the enactive approach into two. Namely, mind, body, and world, into body and world. To say that the body is constitutive of the mind is not the same as to say that the world is also constitutive of the mind. Some theories argue that they both are, but some don’t commit to that. 


As far as I understand, embodiment in AI would be something like the software depending on the very characteristics of the hardware to ‘compute’ something in a way that a different hardware could not fulfill. It is not clear to me, whether this depending relation could be said to be constitutive or not. 

Let’s consider an example: 

Imagine a sponge. This sponge is full of micro-sensors (imagine nano-sensors) that are activated when the sponge ‘touches’ an object or surface. Now imagine that this sponge is connected, by something like an arm, to a central system that calculates the hardness and roughness of the surface, or even the shape of the object. Now imagine that this sponge is substituted by a harder one. In the case of a causal relation, the central system would then, with the new sponge and new information, miscalculate the hardness and roughness of the surfaces or even the shapes of the objects. In a constitutive relation, on the other hand, the central system would, perhaps, adapt itself to the new part and self-regulate (based on several sources of information) to calculate the hardness and roughness of surfaces and so on.

Now imagine it is not a sponge anymore. It’s another object, like a hammer. Would the central system adapt to use the hammer for different functions?

Now imagine that there is no object at the end of the arm. Would the central system miss it?

What would this ‘missing’ be? Would it be ‘not being able to perform any of its calculations’? If yes, let’s compare human cognition with artificial cognition: does the functionalist approach seem sufficient?


Lines 193 and 194:

These lines seem disconnected from the previous text. 


Lines 196 and 197:

You mentioned before that the structure of the body may influence the way we use words, so it would be good to say here that language and body affect mental states.


Lines 218 and 219:

I would suggest that you say something like this: ‘If it is not necessary, it doesn’t entail constitution.’ instead of ‘constitution does not obtain’, because ‘obtain’ is a verb that indicates actuality. The fact that EC is not shown to be necessary doesn’t mean (nor imply) that constitution doesn’t obtain.


Lines 237 and 238:

Why would it be? (the electronic activity be any less physical than the sphere)

Is the brain any less physical than the body?


Does your question presuppose that someone is defending the opposite?


General comment about the Cartesian dualism:

To substitute the mind for the brain is not a real step out of Cartesian dualism. It actually maintains the brain vs. body dualism and falls into the mereological fallacy, which consists in attributing to the parts functional roles that belong to the whole.

Check your reference number 69 for more details on that. 


Lines 252 to 254:

This doesn’t seem to be the conclusion of what has been said before in this paragraph. 


Lines 255 to 271:

This paragraph has too much going on at the same time, and it is not clear what your aim is.


Lines 271 to 282:

The examples you present can be classified as agents or not depending on your definition of agency. 

I think you might benefit from this reference:

Barandiaran, X., Di Paolo, E. & Rohde, M. (2009) Defining Agency: individuality, normativity, asymmetry and spatio-temporality in action.  Adaptive Behavior Journal. 

http://barandiaran.net/textos/defining_agency   

 

You are not providing a reference or argument in favor of reductive physicalism. To state in the description of the pictures that the human body is just a physical object seems a little neglectful of the whole discussion that tries to explain consciousness, phenomenology, and even the very concept of agency. Perhaps it would be better to suggest that you are highlighting the physical aspect and opting for a reductive physicalist approach.


Just a few observations about the images: they are pixelated (the resolution seems to be too low). If you want to keep them, I would suggest including an image of a chemical plant, since it belongs to your list of examples.


What is your aim with the images? It seems to me that you want them to be cases of physical objects. Is that correct? I would say that presenting a washing machine next to a body is not persuasive of the idea that they are similar in any aspect.


So, to better achieve your aims, if I understand them correctly, perhaps it would be better to simply replace them by a paragraph in which you present the reasons why you believe that the body is just a physical object. 



Lines 284 to 302:

This thought experiment indicates that “functionally equivalent embodied cognition” can be disembodied. It doesn’t show that cognition in general, or consciousness, can also be disembodied. The main point is whether we accept that the functional story is all there is in explaining cognition (and also consciousness), as you say from lines 308 to 314.


I would suggest that you start your paper stating that you adopt a functionalist view. 


Line 387:


Why only epiphenomenalism if you mentioned two problems before?


Perhaps: “Leaving aside the two aforementioned problems…” ?



Lines 405 to 407:


This example is not clear.


Lines 405 to 414:


It seems to me that you are assuming that the program has a mental process. 


Please make it explicit what the problem is with the action performed by the body being composed of elements that are not directly experienced by the subject, such as light rays, eye movements, etc. In addition, it would be good to consider whether theoretical explanations of cognition could be composed of such elements.


Lines 415 to 419:


Perhaps develop this idea a bit further. 

Yes, body-world interactions in living organisms are needed for real (or veridical) experience. In this sense, they are “closer” to one’s experiences, while neural signals are a neuro-biological condition that we don’t actually experience.


I believe that the idea here is that we perceive colors, hues, sounds, objects, faces, etc, due to physical processes that can be explained in terms of mathematical models that consider, for example, distances, light rays, eye-movements, etc. But this is a methodological choice. Not necessarily an ontological one. 


Wouldn’t you be conflating the methodological and ontological aspects?



Line 445:


Please make explicit what the problem is with the fact that “there is still a physical entity at the center of the agent”.


Are you referring to the circularity? Is this the same problem as the homunculus? How?

The problem of the homunculus seems to be one of lacking fundamental justification, not one of circularity.



Line 477:


I suggest making it explicit that they are dubbed by you.



Line 512:


What are colors as such?

There is a huge debate on what colors are and what it is that we perceive when we see colors. I suggest that you make this point more explicit or ground your view with some references.


Line 515:


remove ‘frankly’


Line 518 and 519:


This is way too short! Please explain. 


Line 534:


I suggest introducing the paragraph.



General questions about your en-world-ment perspective:


How does it explain memory and perceptual error, such as illusions, hallucinations, or false beliefs?


How can mind be one object (line 560) and many (line 469)?


I would suggest that you spend a paragraph making explicit the aspects that make this view different from widening the supervenience basis. For example: How could we interpret examples, such as Clark’s memory notebook, according to this identity theory?



General Considerations:


I strongly suggest that you develop your critique of EC as circular. It would be good to actually show the circularity, reconstructing the steps of the authors and then concluding that those steps lead to circularity, using references. It would also be good to point out whether the circularity is in the thesis or in the arguments, for these are two different kinds of circularity. I say this because your critique of EC is a very important point for you to dismiss EC and then suggest your view. It would be good for it to be more detailed and consistent.





Suggestions and corrections:


Line 37: Substitute ‘In fact’ for ‘Nevertheless’

Line 106: Substitute ‘Cognitive enactivism’ for ‘Radical Enactivism’

Line 120: Put the dot after the ‘[43]’?

Line 141: Remove ‘Eventually’

Line 161: Isn’t this reference year 2017?

Line 167: Substitute ‘form’ for ‘from’, substitute ‘notion’ for ‘notions’

Line 169: Substitute ‘processes’ for ‘process’

Line 285: Add capital letter ‘In’

Line 349: Substitute “In this section the main ontological shortcomings of current approaches to EC are listed. Of course, given the limited space, some approximation is accepted” for “In this section, I briefly list the main ontological shortcomings of current approaches to EC” 

Line 380: in “experiences are and yet one might” it seems that there is a verb missing

Line 399: Add number of reference

Line 408: Add ‘of’ after ‘experience’

Line 413: Remove ’s’ from ‘patterns’

Line 422: Add ’s’ in ‘sensor’ 

Line 426: Remove ’s’ from ‘become’

Line 481: Add ’s’ on ‘set’

Line 496: Substitute: “Mind are not embodied; they are…” for “Mind is not embodied; it is…”

Line 505: Correct ‘relative’

Line 511: Remove ’s’ from ‘seems’

Line 545: Remove ‘in the’

Line 547: remove ’s’ from ‘functionalists’


Author Response

Questions and objections:

 

 

Lines 54 and 55:

What is it for a machine to be embodied or enacted? Are you suggesting that it is a machine that writes its own code?

If the structure of the machine (fixed by its hardware and software) is defined a priori and is not the result of the actual coupling between the environment and the machine, why should it be embodied or embedded? It is only a contingent fact that that machine is in a given environment. There is no developmental/causal relation between the machine and the environment in which it operates.

Added this text: “If the structure of the machine (both hardware and software) is not the result of the developmental coupling with the environment, there is no true embodiment between the machine and its environment.”

 

Lines 60 to 62:

Different from what?

Changed to “another”

 

Line 105:

What would be (serve as) an empirical confirmation of the thesis that cognition is embodied?

Key question! In fact, I don’t think it is an empirical question, insofar as cognition is mostly a matter of how we describe what a system does. It is not a natural kind. However, this contention would require a lot more space and, moreover, that paragraph is mostly about constitution, so I’ll skip the issue, at least there.

 

Lines 114 to 116:

In my understanding, the expression “playing a role in shaping cognition” means that the body is constitutive of the mind and not merely that it causes differences in one’s cognitive processes (I explain why in the next paragraph). Causation, in this context, would imply that cognition is something apart (different) from the body and, therefore, also from the subject, if we assume that the body is part of the subject. Naturally, we don’t want to conclude that cognition is something different from the subject.

The body is not constitutive of the world, but it can be constitutive of our experiences of the world.

It seems to me that the body being constitutive of the mind doesn’t mean that it merely causes differences in one’s experiences. Considering a causal role would imply that the element is not constitutive, of course, since one thing cannot cause itself. In other words, for causation to take place, one has to consider that an element A causes an element B. This means that they are two different things and not one thing causing itself.

When I say, for example, that my pen is constituted by plastic and ink, these features are not in a causal relation between themselves. Nevertheless, the very fact that they both constitute the object is a requirement for the object to fulfill its function, namely to serve as a pen.

I completely agree with all of the above as a matter of fact. The problem is that constitution is epiphenomenal (which automatically follows from being different from causation). Take the case of the pen. Whether the plastic and ink constitute a pen is immaterial for what they do. Constitution is a weak ontological relation akin to supervenience. That’s why in the end I consider an identity thesis.

I modified the paragraph as such:

If the body and the environment had a constitutive role, a metaphysically necessary relation should obtain. However, constitution is a weak ontological relation akin to supervenience. It is epiphenomenal. It does not change what happens. Indeed, the fact that a function might be implemented more easily by means of embodiment than through computation does not imply that the constitution relation holds. Practical feasibility has no ontological relevance for constitution.

 

Lines: 134 to 137:

This part is difficult to understand.

I will argue that, without additional ontological premises, embodiment is not different from functionalism. In fact I will contend that many key notions in EC and EAI are not necessarily embodied.

Changed into

I will argue that embodiment is not different from functionalism unless additional hypotheses about the nature of the mind are put forward. In fact I will contend that many key notions in EC and EAI do not require embodiment.

 

General notes:

It seems to me that you conflate three aspects of the enactive approach into two. Namely, mind, body, and world, into body and world. To say that the body is constitutive of the mind is not the same as to say that the world is also constitutive of the mind. Some theories argue that they both are, but some don’t commit to that.

Yes, very good point.

As far as I understand, embodiment in AI would be something like the software depending on the very characteristics of the hardware to ‘compute’ something in a way that a different hardware could not fulfill. It is not clear to me, whether this depending relation could be said to be constitutive or not.

Let’s consider an example:

Imagine a sponge. This sponge is full of micro-sensors (imagine nano-sensors) that are activated when the sponge ‘touches’ an object or surface. Now imagine that this sponge is connected, by something like an arm, to a central system that calculates the hardness and roughness of the surface, or even the shape of the object. Now imagine that this sponge is substituted by a harder one. In the case of a causal relation, the central system would then, with the new sponge and new information, miscalculate the hardness and roughness of the surfaces or even the shapes of the objects. In a constitutive relation, on the other hand, the central system would, perhaps, adapt itself to the new part and self-regulate (based on several sources of information) to calculate the hardness and roughness of surfaces and so on.

This is not a constitutive relation because, clearly, there is a causal effect from the new part on the central system, which will reprogram or adapt itself to the harder gripper.

Now imagine it is not a sponge anymore. It’s another object, like a hammer. Would the central system adapt to use the hammer for different functions?

Now imagine that there is no object at the end of the arm. Would the central system miss it?

What would this ‘missing’ be? Would it be ‘not being able to perform any of its calculations’? If yes, let’s compare human cognition with artificial cognition: does the functionalist approach seem sufficient?

Well, it depends on what the central system does, I think.

 

Lines 193 and 194:

These lines seem disconnected from the previous text.

Removed.

 

Lines 196 and 197:

You mentioned before that the structure of the body may influence the way we use words, so it would be good to say here that language and body affect mental states.

I say it just one line below “It only shows that language affects mental states.”

 

Lines 218 and 219:

I would suggest that you say something like this: ‘If it is not necessary, it doesn’t entail constitution.’ instead of ‘constitution does not obtain’, because ‘obtain’ is a verb that indicates actuality. The fact that EC is not shown to be necessary doesn’t mean (nor imply) that constitution doesn’t obtain.

Changed!

Lines 237 and 238:

Why would it be? (the electronic activity be any less physical than the sphere)

Is the brain any less physical than the body?

Does your question presuppose that someone is defending the opposite?

Well, I guess that my target in that line was represented by computationalists who believe that the software is something over and above its implementation. I changed the line as follows

Yet, is the electronic activity inside the CPU, usually referred to as the software or the computational level, any less physical than the spheres and their momentum?

 

General comment about the Cartesian dualism:

To substitute the mind for the brain is not a real step out of Cartesian dualism. It actually maintains the brain vs. body dualism and falls into the mereological fallacy, which consists in attributing to the parts functional roles that belong to the whole.

Check your reference number 69 for more details on that.

I do agree!

 

Lines 252 to 254:

This doesn’t seem to be the conclusion of what has been said before in this paragraph.

Changed to

The idea of breaking free from the computational mind – often seen as a modern version of the immaterial Cartesian mind – has been perceived as an enlightened move to steer away from metaphysical nonsense.

Lines 255 to 271:

This paragraph has too much going on at the same time, and it is not clear what your aim is.

Changed into

From a distance, the notion of material engagement between the body and the world looks more physicalist than the notion of an internal computational mind of some sort. Yet, this inference is misleading. The head is as physical as the body. Postulating that processes engaging body and world are constitutively different from processes inside the head requires additional hypotheses. Philosophers and scientists – such as Chalmers, Clark, and many others – have inadvertently resurrected the world-soul distinction in terms of head-body or body-world, which is a form of Cartesian materialism or covert dualism [33,69]. In this regard, Murray Shanahan points out the lurking dualism of Chalmers’ view: “Chalmers’ distinction bears the same hallmark as Descartes’ reflection. In both cases, a wedge is driven between inner and outer. For Descartes, body and place (outer) are divided from thought (inner), whereas for Chalmers, the information processing taking place in the brain (outer) is divided from phenomenal experience (inner). Of course, there is a sense in which information processing occurring in the brain is ‘inner’ relative to the goings on in the ‘outer’ environment. But this is not the sense of ‘inner’ at stake here” [70]. Shanahan stresses that the inner/outer distinction between the head and the body (or between the body and the world) does not overlap with the distinction between the mind and the world. Likewise, if the body has no special status, the notion of embodiment will be empty.

 

Lines 271 to 282:

The examples you present can be classified as agents or not depending on your definition of agency.

I think you might benefit from this reference:

Barandiaran, X., Di Paolo, E. & Rohde, M. (2009) Defining Agency: individuality, normativity, asymmetry and spatio-temporality in action.  Adaptive Behavior Journal.

http://barandiaran.net/textos/defining_agency  

You are not providing a reference or argument in favor of reductive physicalism. To state in the description of the pictures that the human body is just a physical object seems a little neglectful of the whole discussion that tries to explain consciousness, phenomenology, and even the very concept of agency. Perhaps it would be better to suggest that you are highlighting the physical aspect and opting for a reductive physicalist approach.

Just a few observations about the images: they are pixelated (the resolution seems to be too low). If you want to keep them, I would suggest including an image of a chemical plant, since it belongs to your list of examples.

What is your aim with the images? It seems to me that you want them to be cases of physical objects. Is that correct? I would say that presenting a washing machine next to a body is not persuasive of the idea that they are similar in any aspect.

So, to better achieve your aims, if I understand them correctly, perhaps it would be better to simply replace them by a paragraph in which you present the reasons why you believe that the body is just a physical object.

OK DONE! I removed the images and I added the paragraph (instead of the images).

 

Line 387:

Why only epiphenomenalism if you mentioned two problems before?

Perhaps: “Leaving aside the two aforementioned problems…” ?

RIGHT! CHANGED AS REQUESTED

 

Lines 405 to 407:

This example is not clear.

Changed into

Suppose that the same functional pattern might be instantiated both by a biological eye perceiving an apple and by a simulation running inside a central unit. Suppose that the resulting mental process were different in the two cases. Why should there be any difference?

Lines 405 to 414:

It seems to me that you are assuming that the program has a mental process.

This couldn’t be farther from my view!

Please make it explicit what the problem is with the action performed by the body being composed of elements that are not directly experienced by the subject, such as light rays, eye movements, etc. In addition, it would be good to consider whether theoretical explanations of cognition could be composed of such elements.

OK ADDED A LINE TO THAT EFFECT

Lines 415 to 419:

Perhaps develop this idea a bit further.

Yes, body-world interactions in living organisms are needed for real (or veridical) experience. In this sense, they are “closer” to one’s experiences, while neural signals are a neuro-biological condition that we don’t actually experience.

I believe that the idea here is that we perceive colors, hues, sounds, objects, faces, etc, due to physical processes that can be explained in terms of mathematical models that consider, for example, distances, light rays, eye-movements, etc. But this is a methodological choice. Not necessarily an ontological one.

Wouldn’t you be conflating the methodological and ontological aspects?

This will be addressed later in the mind-object identity proposal

 

Line 445:

Please make explicit what the problem is with the fact that “there is still a physical entity at the center of the agent”.

Are you referring to the circularity? Is this the same problem as the homunculus? How?

The problem of the homunculus seems to be one of lacking fundamental justification, not one of circularity.

I changed the last paragraph as follows

Although the brain-centered view has been substituted by a body-centered view, the body is still characterized as the shell of the brain. The brain is no longer the container of the mind, but it remains at the center of its physical underpinning. Ontologically, the body plays a role not unlike that of the homunculus. A body-centered view is no less problematic than a brain-centered view.

 

Line 477:

I suggest making it explicit that they are dubbed by you.

Changed to

The details of this view have been dubbed elsewhere by the author as the mind-object identity [37,38,46].

 

Line 515:

remove ‘frankly’

Removed!

 

Line 518 and 519:

This is way too short! Please explain.

I added a short description of the main gist of that.

Of course, such a proposal is a radical form of realism and thus it must address the traditional arguments against all forms of realism: illusions and dreams/hallucinations. As to illusions, the proposal is to revisit them in terms of misbeliefs about what we perceive rather than in terms of misperceptions. For instance, a mirage is just as physical as anything else, and yet it yields erroneous beliefs rather than wrong perceptions. Yet, what we see when we see a mirage is just what we should see. As to hallucinations and dreams, they might be explained as forms of delayed and reshuffled perception once the notion of the present is reconsidered [38,46]. In other words, the hallucination of, say, a dagger might be explained as the delayed perception of a dagger one saw some years earlier. I am aware that I cannot even start to outline a brief reply to these two questions, but it will suffice to say that the strategy will be to address all known empirical cases of both illusions and hallucinations and to show that they can be reduced to cases of unusual perception [38].

General questions about your en-world-ment perspective:

 

How does it explain memory and perceptual error, such as illusions, hallucinations, or false beliefs?

This is the key objection, of course. The three issues admit different replies.

Illusions might be explained in terms of erroneous beliefs rather than in terms of misperception (e.g., a mirage is just as physical as anything else).

Hallucinations are explained in terms of a spacetime-extended present and the recombination (as in mirrors) of existing particulars.

False beliefs, well, they are just false beliefs!

However, I can’t even start to address these three aspects in the present paper.

How can mind be one object (line 560) and many (line 469)?

The reviewer is right that these statements are confusing in the context of the present paper, so I revised the text. By and large, the point is that any object is also a combination (a sum) of other objects.

I would suggest that you spend a paragraph making explicit the aspects that make this view different from widening the supervenience basis. For example: How could we interpret examples, such as Clark’s memory notebook, according to this identity theory?

Added this paragraph.

The proposed view is different from widening the supervenience basis insofar as it suggests that consciousness is one with the set of objects that take place relative to one’s body. The supervenience relation is not an ontological thesis. It does not tell us what the mind is, only its logical relation with its supervenience basis. The mind-object identity, thus, is compatible with the notion that the mind supervenes on the body; only, the mind is not the body. Famous examples, such as Clark’s memory notebook, are not problematic since they simply extend the supervenience basis of the mind.


General Considerations:

I strongly suggest that you develop your critique of EC as circular. It would be good to actually show the circularity, reconstructing the authors’ steps and then concluding that those steps lead to circularity, using references. It would also be good to point out whether the circularity is in the thesis or in the arguments, for these are two different kinds of circularity. I say this because your critique of EC is a very important point for dismissing EC and then suggesting your own view. It would be good for it to be more detailed and consistent.

I do agree that the critique might indeed be improved. I hope that the present changes will move in that direction.

Suggestions and corrections:

ALL APPLIED


Line 37: Substitute ‘In fact’ for ‘Nevertheless’

Line 106: Substitute ‘Cognitive enactivism’ for ‘Radical Enactivism’

Line 120: Put the dot after the ‘[43]’?

Line 141: Remove ‘Eventually’

Line 161: Isn’t this reference year 2017?

Line 167: Substitute ‘form’ for ‘from’, substitute ‘notion’ for ‘notions’

Line 169: Substitute ‘processes’ for ‘process’

Line 285: Add capital letter ‘In’

Line 349: Substitute “In this section the main ontological shortcomings of current approaches to EC are listed. Of course, given the limited space, some approximation is accepted” for “In this section, I briefly list the main ontological shortcomings of current approaches to EC”

Line 380: “experiences are and yet one might” — it seems that there is a verb missing

Line 399: Add number of reference

Line 408: Add ‘of’ after ‘experience’

Line 413: Remove ’s’ from ‘patterns’

Line 422: Add ’s’ in ‘sensor’

Line 426: Remove ’s’ from ‘become’

Line 481: Add ’s’ on ‘set’

Line 496: Substitute: “Mind are not embodied; they are…” for “Mind is not embodied; it is…”

Line 505: Correct ‘relative’

Line 511: Remove ’s’ from ‘seems’

Line 545: Remove ‘in the’

Line 547: remove ’s’ from ‘functionalists’


Reviewer 2 Report

The current manuscript reads much better and can be published. 

Personally, I still wish that the author would more deeply engage with the enactive literature, and especially its detailed arguments regarding the importance of considering biological embodiment for understanding the mind. Nevertheless, the author chose to make a broader critique of embodied AI, and so I guess that this more targeted exchange will have to wait for another time. 

One more point: after having gone through the revised manuscript, I am still not sure if I fully understand the difference between the concept of constitution and the concept of identity. Both are clearly distinct from causation, and both appeal to some kind of ontological relationship. Might this just be a terminological dispute?

Author Response

I thank the reviewer for his comments. 

I do agree that the difference between constitution and identity is a subtle one.

My short answer is that identity is ontologically flat and metaphysically necessary. Constitution is not metaphysically necessary.

Thanks again!


Riccardo

