Article
Peer-Review Record

Causal Deviance in Brain–Computer Interfaces (BCIs): A Challenge for the Philosophy of Action

Philosophies 2025, 10(2), 37; https://doi.org/10.3390/philosophies10020037
by Artem S. Yashin 1,2
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 10 November 2024 / Revised: 11 March 2025 / Accepted: 13 March 2025 / Published: 25 March 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

In this paper, Author states that the viability of the causal theory of action (CTA) depends on the ability of CTA to deal with cases of causal deviance.  (I will refer to the author or authors of this paper as “Author” throughout.)  Author argues that defenders of CTA have adopted various strategies for dealing with the problem of causal deviance, but that brain-computer interfaces (BCIs) allow for a distinct set of causal deviance challenges that have been insufficiently theorized.  Author further argues that BCIs help show that whether a potential impediment to an agent’s action counts as a causal deviance case depends on how the overall agential system is structured.  Author argues for these last two points by appealing to three related but distinct BCI cases which are instances of non-standard action: Case 1, when it leads to behavior, is a case of causal deviance which cannot be easily accounted for through the typical strategies; Case 2, when it leads to behavior, is not a case of causal deviance but is a case of ordinary action; and Case 3, when it leads to behavior, is not a case of causal deviance either.

 

In my judgment, this paper needs major revisions prior to publication.  The problem that Author addresses in this paper is potentially interesting; BCIs are interesting and are potentially significant for many topics in the philosophy of action, including the problem of deviant causal chains.  And Author shows a good awareness of the relevant literature, both the philosophy of action literature and the BCI literature.  These suggest to me that there is a valuable paper here.  However, there are many problems with this paper as it currently stands.

 

First, the paper is very frequently unclear or confusing about its claims and argumentative structure; it was very difficult for me to identify the thesis of the paper and how that thesis was argued for.

 

Second, the thesis (as I understand it) is a bit underwhelming and insufficiently supported.  The argument that BCIs present novel challenges for CTAs, an argument based on Case 1, is presented extremely quickly—in one paragraph (lines 412–421).  While I don’t find the arguments in that paragraph persuasive, they could be a worthwhile contribution to the literature if they were spelled out in more detail.  Moreover, the argument that the details of the agential system are relevant—an argument that depends on Case 2 and Case 3—is underwhelming.  It is not particularly noteworthy to just claim that whether there is a deviant causal chain depends on how the behavior is produced; that is true in all deviant causal chain cases.  And if Author wants to make a claim about deviant causal chains, it’s unclear why Author is spending time talking about cases that do not involve deviant causal chains at all.  (Author might use the differences between Case 1, Case 2, and Case 3 to more precisely identify those conditions under which deviant causal chains occur—but Author does not do that in this paper, saying instead “I do not propose a complete solution to the problem of deviant causal chains, as it requires further investigation” (ln. 559–560).  But Author does not need to give a complete solution for this to be a worthwhile paper—only to give a more precise analysis of when BCIs might present deviant causal chains and when they do not.)

 

Third, and importantly, I worry that Author’s account of deviant causal chains is underdescribed or inaccurate.  Author sometimes suggests that deviant causal chains occur when an agent’s intended action occurs through “coincidence” or “luck”, or when the causal connection between intention and action is “unstable”.  The BCI cases that Author analyzes seem to depend on this interpretation.  But on my reading, deviant causal chains require more: the content of the intention (or belief/desire pair) must be for the outcome or behavior produced, and the intention must in fact cause the outcome or behavior produced.  Author defends Case 1 as being a case of deviant causation as follows (lines 398–411):

 

“Now, suppose the BCI in Case 1 correctly translates this intention, but it succeeds because of artifacts in the neural data recording. For example, the EEG data can be corrupted by movements or blinks of the subject. Let us say corruption of the data had significant impact on translation of the intention. In this case, we could say that the correct operation was executed by coincidence. Have we stumbled upon a deviant causal chain? It certainly seems so. We initially assumed that the interface offered a reliable way to act, and it was this reliability that ensured the user had proper control over the computer. If the interface makes an error but still performs the intended operation, the result is a matter of luck: the intention happened to align with the correct outcome, but the causal connection between the two is unstable. Had the agent chosen a different command or acted at a different moment, the outcome might not have aligned with their intention. Since we doubt the presence of a basic action (icon selection) in this scenario, we are dealing with antecedental waywardness, and it takes place outside the agent’s body.”

 

I take it the argument is that (1) the outcome was caused by the intention, and (2) the outcome was what the agent intended, but (3) the causation did not happen in the right way and so it is a case of deviant causation.  Claim 2 seems indisputable, but I’m less clear on whether 1 and 3 are true.  What Author needs is for the intention to be the signal that the BCI responds to, for the output signal to result in the moving of the file, and for the causal connection between the input and the output to be deviant.  The case feels underdescribed to me, though; simply describing the deviance as “artifacts in the neural data recording” is insufficient.  Moreover, the emphasis Author places on the causal connection between input signal and output signal being unreliable worries me, as the reliability is not at issue when we talk about deviant causal chains.  (In Davidson’s mountain climber case, it could be that the climber’s anxiety and horror at their intention to drop their fellow climber always and reliably causes them to drop them; it would still be a case of deviant causation, because the intention is not causing the behavior/outcome in the right way.)  This “in the right way” language is notoriously tricky, and it’s not entirely clear what that would even mean with BCIs—but that is what Author needs to articulate in this paper.

 

Those are the three major concerns for this paper.  I have a number of additional concerns which are also significant; I will try to present them more quickly.

 

As mentioned already, I think that the account of deviant causal chains in 1.1 can be presented more cleanly; I do not have a good sense by the end of 1.1 just what Author thinks a deviant causal chain is or is not, outside of the laundry list of cases Author presents.  Author should give something that looks more like an account of deviant causation, and less like a list of cases.

 

Section 1.2 also needs to be restructured.  Author wants to present a series of strategies for dealing with deviant causal chains, in order to later show that Case 1 cannot be handled by any of them.  But these cases are not presented very clearly.  Author relies on Mayr’s account, which is fine, but Author also includes Mayr’s analysis of the cases including potential problems.  This is very confusing; Author should only say as much as is needed for the reader to understand what the strategies are.  The criticisms of the strategies should be omitted, as they are irrelevant for this paper (insofar as I understand the thesis).

 

Section 2 is generally okay, though I think more work needs to be done to explain the kind of BCIs Author is interested in from training through to behavior.  Author frequently equivocates between “intentions” and “actions”, and frequently talks of “controlling a BCI” when it seems the more appropriate language would be “acting through a BCI” or “performing a BCI-mediated action”.  After all, the agent is training to act, not control the BCI; the BCI is functioning well when it correctly pairs the input signal it receives from the agent with the output signal which produces the intended result.  The BCI becomes more finely attuned as the agent trains—but I don’t know that agents’ phenomenological experience is of controlling the BCI.  (If so, it might be useful to state.)

 

In Section 3, with Case 1, I worry that Author has not adequately defended the claim that the existing strategies for dealing with causal deviance fail here.  (I mentioned this worry above.)  Consider what Author calls the immediate causation strategy.  Author says “The immediate causation strategy also fails, as the BCI's complexity prevents a direct link between the user's intention and the computer's operation—complex computations separate the two” (lines 415–418).  First, I’m not sure what this means; why does the BCI’s complexity prevent a direct link between the user’s intention and the computer’s operation?  Our brains are quite complex, and there is a very complicated electrochemical process that leads from intention formation to bodily movement.  But second, I don’t see why this addresses Searle’s response, which is that intentions are self-referential: their content is that an action be performed by way of carrying out that very intention.  Whether Searle’s strategy is a viable one can be discussed further, but Case 1 seems to fail as a challenge here because it’s not clear that the action, when successfully performed, was performed by way of the accompanying intention in action.  I have similar worries for all of the strategies—Author simply does not go into nearly enough detail for me to know how Case 1 actually engages with the strategies.

 

Finally, as also mentioned above, there are simply too many confusing sentences in this paper.  This does not seem to be a problem with Author’s facility with English, which is fluent; the problem seems to be that Author is not sufficiently precise and careful with terminology throughout.

Author Response

Comment 1: First, the paper is very frequently unclear or confusing about its claims and argumentative structure; it was very difficult for me to identify the thesis of the paper and how that thesis was argued for.

Response 1: I have clarified the argumentative structure of the paper and refined my claims. Another reviewer also suggested revising the Introduction, where I now summarize the paper's structure. I have highlighted new or significantly reworked text in yellow throughout the manuscript.

Comment 2: Second, the thesis (as I understand it) is a bit underwhelming and insufficiently supported.  The argument that BCIs present novel challenges for CTAs, an argument based on Case 1, is presented extremely quickly—in one paragraph (lines 412–421).  While I don’t find the arguments in that paragraph persuasive, they could be a worthwhile contribution to the literature if they were spelled out in more detail.  Moreover, the argument that the details of the agential system are relevant—an argument that depends on Case 2 and Case 3—is underwhelming.  It is not particularly noteworthy to just claim that whether there is a deviant causal chain depends on how the behavior is produced; that is true in all deviant causal chain cases.  And if Author wants to make a claim about deviant causal chains, it’s unclear why Author is spending time talking about cases that do not involve deviant causal chains at all.  (Author might use the differences between Case 1, Case 2, and Case 3 to more precisely identify those conditions under which deviant causal chains occur—but Author does not do that in this paper, saying instead “I do not propose a complete solution to the problem of deviant causal chains, as it requires further investigation” (ln. 559–560).  But Author does not need to give a complete solution for this to be a worthwhile paper—only to give a more precise analysis of when BCIs might present deviant causal chains and when they do not.)

Response 2: In the revised version, I have made the thesis more explicit and provided more detailed argumentation in support of it.

Comment 3: Third, and importantly, I worry that Author’s account of deviant causal chains is underdescribed or inaccurate.  Author sometimes suggests that deviant causal chains occur when an agent’s intended action occurs through “coincidence” or “luck”, or when the causal connection between intention and action is “unstable”.  The BCI cases that Author analyzes seem to depend on this interpretation.  But on my reading, deviant causal chains require more: the content of the intention (or belief/desire pair) must be for the outcome or behavior produced, and the intention must in fact cause the outcome or behavior produced.  Author defends Case 1 as being a case of deviant causation as follows (lines 398–411): ... I take it the argument is that (1) the outcome was caused by the intention, and (2) the outcome was what the agent intended, but (3) the causation did not happen in the right way and so it is a case of deviant causation.  Claim 2 seems indisputable, but I’m less clear on whether 1 and 3 are true.  What Author needs is for the intention to be the signal that the BCI responds to, and the output signal results in the moving of the file, but the causal connection between the input and the output is deviant.  The case feels underdescribed to me, though; simply describing the deviance as “artifacts in the neural data recording” is insufficient.  Moreover, the emphasis Author places on the causal connection between input signal and output signal being unreliable worries me, as the reliability is not at issue when we talk about deviant causal chains.  (In Davidson’s mountain climber case, it could be that the climber’s anxiety and horror at their intention to drop their fellow climber always and reliably causes them to drop them; it would still be a case of deviant causation, because the intention is not causing the behavior/outcome in the right way.)  
This “in the right way” language is notoriously tricky, and it’s not entirely clear what that would even mean with BCIs—but that is what Author needs to articulate in this paper.

Response 3: I have revised my discussion to offer a more precise treatment of the problem and to clarify how BCIs can present instances of deviant causal chains.

Comment 4: As mentioned already, I think that the account of deviant causal chains in 1.1 can be presented more cleanly; I do not have a good sense by the end of 1.1 just what Author thinks a deviant causal chain is or is not, outside of the laundry list of cases Author presents.  Author should give something that looks more like an account of deviant causation, and less like a list of cases.

Response 4: I have restructured Section 1.1, now focusing on formulating the problem more clearly and reducing the number of examples.

Comment 5: Section 1.2 also needs to be restructured.  Author wants to present a series of strategies for dealing with deviant causal chains, in order to later show that Case 1 cannot be handled by any of them.  But these cases are not presented very clearly.  Author relies on Mayr’s account, which is fine, but Author also includes Mayr’s analysis of the cases including potential problems.  This is very confusing; Author should only say as much as is needed for the reader to understand what the strategies are.  The criticisms of the strategies should be omitted, as they are irrelevant for this paper (insofar as I understand the thesis).

Response 5: I have removed Mayr's analysis of the strategies from this section, as I agree that it was not directly relevant. Additionally, I have expanded my own analysis of how these strategies address the cases I formulated.

Comment 6: Section 2 is generally okay, though I think more work needs to be done to explain the kind of BCIs Author is interested in from training through to behavior.  Author frequently equivocates between “intentions” and “actions”, and frequently talks of “controlling a BCI” when it seems the more appropriate language would be “acting through a BCI” or “performing a BCI-mediated action”.  After all, the agent is training to act, not control the BCI; the BCI is functioning well when it correctly pairs the input signal it receives from the agent with the output signal which produces the intended result.  The BCI becomes more finely attuned as the agent trains—but I don’t know that agents’ phenomenological experience is of controlling the BCI.  (If so, it might be useful to state.)

Response 6: I have made the terminology more consistent, addressing instances where philosophical terms from action theory overlapped with technical terminology from BCI studies. For example, "translation of intention" is one such term. I have also more clearly distinguished between "action" and "control." Furthermore, I have added a paragraph discussing the agent’s experience when acting through a BCI.

Comment 7: In Section 3, with Case 1, I worry that Author has not adequately defended the claim that the existing strategies for dealing with causal deviance fail here.  (I mentioned this worry above.)  Consider what Author calls the immediate causation strategy.  Author says “The immediate causation strategy also fails, as the BCI's complexity prevents a direct link between the user's intention and the computer's operation—complex computations separate the two” (lines 415–418).  First, I’m not sure what this means; why does the BCI’s complexity prevent a direct link between the user’s intention and the computer’s operation?  Our brains are quite complex, and there is a very complicated electrochemical process that leads from intention formation to bodily movement.  But second, I don’t see why this addresses Searle’s response, which is that intentions are self-referential: their content is that an action be performed by way of carrying out that very intention.  Whether Searle’s strategy is a viable one can be discussed further, but Case 1 seems to fail as a challenge here because it’s not clear that the action, when successfully performed, was performed by way of the accompanying intention in action.  I have similar worries for all of the strategies—Author simply does not go into nearly enough detail for me to know how Case 1 actually engages with the strategies.

Response 7: As noted earlier, I have expanded the subsection that addresses how different strategies for handling deviant causal chains apply to the BCI cases.

I sincerely appreciate the thorough and valuable feedback. I hope these revisions have significantly improved the paper.

Reviewer 2 Report

Comments and Suggestions for Authors

 

Review of ‘Causal Deviance in BCI Control’ for Philosophies

 

 

In this paper, the author examines the problem of deviant causal chains by thinking about how it interacts with Brain–Computer Interfaces (BCIs), where agents can plausibly act on a computer by merely thinking about what they want to do. The BCI converts the neural signal into a command for the computer, which is then implemented. The paper’s goal is to show that thinking about BCIs will help enrich our discussion of the problem of deviant causal chains for the causal theory of action, because it involves some special features which make certain theories more or less probable. A secondary aim seems to be to understand the nature of an agent’s relation to the BCI itself, since, though it isn’t a human body, it does seem to be, in some ways, playing the role of the body in certain cases.

 

This was an interesting paper, with some ideas that I quite liked. I also liked the paper’s general aims and approach to a topic that is important. I think it should be accepted pending some quite major revisions. However, I should emphasise that the revisions I will suggest are almost all structural, to help bring out the paper’s main themes and arguments more clearly. I think the content of the paper is in most ways fine, although I will point out some things I think need improving there too. The main point of my comments is that I think the paper needs to be more clearly structured, needs to cut some material that is irrelevant, and focus on the arguments which are most important.

 

 

Structure

 

§1 – Introduction

The paper lacks a proper introduction, which would be very helpful (a) to orient the reader at the beginning, and (b) to make it clear what the main contribution of the paper is. The abstract is helpful but obviously short (as it should be). But the section labelled ‘Introduction’ is very long, and the vast majority of it is a literature review. Moreover, even by the end of §1, I was not sure what the official aims of the paper were. This became clearer later, but should be stated at the beginning. It would be easier for the reader if the author added an introduction of about one page before all this material which tells us what the problem of causal deviance is, how considering BCIs will help us answer it, and how considering the problem helps us understand agent-BCI interactions. If it is clear there, then the paper will be clearer, and I think it will also make it more obvious to readers why they should read it.

 

On a related point, I would suggest the author change the paper’s title. As it stands, it does not clearly indicate to prospective readers quite what it’s about, and the notion of a vanishing point is somewhat obscure. My reason for suggesting this is that the paper would reach a wider audience with a clearer title. But this is completely at the author’s discretion, and does not bear at all on the quality of the paper, or any decision I make about acceptance. It’s intended as a helpful suggestion.

 

§2 – Methodology

I thought that, while there were some interesting details in §2, on the whole the section did not add much to the paper. The discussion of modular epiphenomenalism and Friston’s free energy principle seemed quite irrelevant to the paper. And the more important points – the assumptions about BCIs from line 326 down – could easily be integrated into the main discussion of BCIs earlier on. I suggest integrating those important parts of §2 into other parts, and otherwise deleting the section.

 

§4 – Implications for Philosophy of Action

I think this section contains interesting material. To my mind, its most important contribution is to argue that your BCI thought experiments support some form of reliability approach to causal deviance over any other approaches. This seems to me a central point of the paper, but this is never signposted in the text. The author should make it very clear to the reader what the point of this discussion is. Lines 533-535 seem to contain one of the key points: BCI thought experiments suggest that causal deviance is a matter of the functional structure of the systems supporting agency, and this suggests a version of the reliability approach. If that’s the key point, then the author should make this clear to the reader and structure their discussion around it.

 

Arguments in the Thought Experiments in §3

The main contributions this paper makes are in showing how considering BCI thought experiments affects our thinking about causal deviance. But these arguments are too buried beneath all of the literature review and other material. The author would greatly improve the paper if they brought out these arguments and put the spotlight on them. For instance, lines 412-421 are an original and interesting passage of argument, but they take up relatively little of the paper. The paper should be oriented around these more important and original lines of argument, whereas at the moment the paper is dominated by literature reviews and more peripheral discussions.

 

§1.2

This subsection is a very extensive literature review of the various responses to the problem of causal deviance. However, it is very long and reads too much like a very good encyclopaedia entry. This detracts from the main argumentative line of the paper. I would suggest significantly cutting this down, and making it clearer to the reader why the author needs to talk about these theories. This is another case of the general point I would like to get across to the author: they should foreground their own arguments about causal deviance and BCI, and have much less literature review.

 

 

Content

 

1)     There is a typo on line 8: ‘not in a right way’ should read ‘not in the right way’.

2)     On p.2, the author uses an example of causal deviance which is supposed to be Davidson’s, but it is not correctly stated. The author’s version is different from Davidson’s, and less compelling. I would suggest changing this to Davidson’s original case.

3)     At line 245, the author claims that when a person uses a BCI to command a computer to do something, there is a mental action and an external event that is not an action. The external event is the algorithmic behaviour of the computer. But why does the author think this is not an action? The causal theory of action is roughly the view that what makes an event an action is that it is caused in the right way by a person’s intentions (or whatever mental states one chooses). If the computer’s behaviour is caused in the right way by a person’s intentions, why can’t it be one of their actions? Does the author have an argument to this effect? One thing I suspect is that they move from the claim that they aren’t bodily actions (line 243) to the claim that they aren’t actions at all. But surely part of the interest here is in the possibility of extra-bodily action. I would like some clarity from the author at this point in the paper.

4)     At line 305, the author suggests that surveys of intuitions might help arbitrate disagreements about intuitions. But how could this help? A survey just collects all the intuitions, and it might say that Philosopher A is in the minority, but that presumably shouldn’t make us disregard their thought. If the author thinks surveys could help, I would like them to explain how.

5)     I was puzzled by the claim on line 306 that the author will assume that the causal theory of action is “a broadly valid theory of action”. Isn’t this precisely what’s at issue? The problem of deviant causal chains is what leads many philosophers to reject the causal theory. The interest of the author’s discussion of BCI lies in enriching our sense of whether the causal theory can survive this objection. So this claim seems very out of place in this context.

6)     Around line 336, the author talks about modest conceivability and thought experiments. I think the general idea they express might be fine, but one worry I had was that, since CTA is a metaphysical theory of action, the causal connections between mental states and actions needs to hold in every possible world. This suggests we should look toward far out possibilities as well as close by ones. If this discussion stays in the paper (see my comments under Methodology, above), then I would like to see some consideration of this point in order to clarify the approach.

7)     The question posed on line 450 seemed interesting and important, but the author moves on to something else immediately. What does the author think the answer to that question is, and why? Saying more on this would improve the interest and argumentative quality of the paper.

8)     Lines 557-567 contain the seeds of a promising argument in favour of the reliability strategy, but the author somewhat backs off developing the point. But this seems like an important contribution which the author should develop in more detail. I am not asking them to develop a fully fledged reliability theory, but it would be good for them to develop the idea in some more detail. For instance: what sorts of reliability theory does the argument support; does it place constraints on what reliability theorists can say or appeal to?

 

Comments for author File: Comments.pdf

Author Response

Comment 1: §1 – Introduction. The paper lacks a proper introduction, which would be very helpful (a) to orient the reader at the beginning, and (b) to make it clear what the main contribution of the paper is. The abstract is helpful but obviously short (as it should be). But the section labelled ‘Introduction’ is very long, and the vast majority of it is a literature review. Moreover, even by the end of §1, I was not sure what the official aims of the paper were. This became clearer later, but should be stated at the beginning. It would be easier for the reader if the author added an introduction of about one page before all this material which tells us what the problem of causal deviance is, how considering BCIs will help us answer it, and how considering the problem helps us understand agent-BCI interactions. If it is clear there, then the paper will be clearer, and I think it will also make it more obvious to readers why they should read it.

Response 1: I have restructured the Introduction section to include a concise account of the problem and the aims of the paper. I have highlighted new or significantly reworked text in yellow throughout the manuscript.

Comment 2: On a related point, I would suggest the author change the paper’s title. As it stands, it does not clearly indicate to prospective readers quite what it’s about, and the notion of a vanishing point is somewhat obscure. My reason for suggesting this is that the paper would reach a wider audience with a clearer title. But this is completely at the author’s discretion, and does not bear at all on the quality of the paper, or any decision I make about acceptance. It’s intended as a helpful suggestion.

Response 2: I have changed the title as suggested; the editor also recommended changing the title. While the "vanishing point" was intended to attract readers, I agree that if it appears obscure, it is better to omit it for clarity.

Comment 3: §2 – Methodology. I thought that, while there were some interesting details in §2, on the whole the section did not add much to the paper. The discussion of modular epiphenomenalism and Friston’s free energy principle seemed quite irrelevant to the paper. And the more important points – the assumptions about BCIs from line 326 down – could easily be integrated into the main discussion of BCIs earlier on. I suggest integrating those important parts of §2 into other parts, and otherwise deleting the section.

Response 3: The Methodology section is required by the journal's guidelines, and I believe it may be useful for readers from non-philosophical backgrounds working with BCIs. However, I agree that part of its content was more relevant to Section 1.3, so I have moved it there. I also removed the passage on modular epiphenomenalism, as the arguments in this paper do not depend on proving that CTA is the correct theory of action. Regarding Friston’s free energy principle, its mention was requested by the editors, so I have retained it. It may be expected by the journal’s audience.

Comment 4: §4 – Implications for Philosophy of Action. I think this section contains interesting material. To my mind, its most important contribution is to argue that your BCI thought experiments support some form of reliability approach to causal deviance over any other approaches. This seems to me a central point of the paper, but this is never signposted in the text. The author should make it very clear to the reader what the point of this discussion is. Lines 533-535 seem to contain one of the key points: BCI thought experiments suggest that causal deviance is a matter of the functional structure of the systems supporting agency, and this suggests a version of the reliability approach. If that's the key point, then the author should make this clear to the reader and structure their discussion around it.

Response 4: I have significantly restructured Section 4 and expanded the discussion to emphasize the key point. I have made it more explicit and structured the discussion accordingly.

Comment 5: Arguments in the Thought Experiments in §3. The main contributions this paper makes are in showing how considering BCI thought experiments affects our thinking about causal deviance. But these arguments are too buried beneath all of the literature review and other material. The author would greatly improve the paper if they brought out these arguments and put the spotlight on them. For instance, lines 412-421 are an original and interesting passage of argument, but they take up relatively little of the paper. The paper should be oriented around these more important and original lines of argument, whereas at the moment the paper is dominated by literature reviews and more peripheral discussions.

Response 5: I have streamlined the literature review and given greater emphasis to the discussion of the thought experiments, bringing the arguments more to the foreground.

Comment 6: §1.2. This subsection is a very extensive literature review of the various responses to the problem of causal deviance. However, it is very long and reads too much like a very good encyclopaedia entry. This detracts from the main argumentative line of the paper. I would suggest significantly cutting this down, and making it clearer to the reader why the author needs to talk about these theories. This is another case of the general point I would like to get over to the author: they should foreground their own arguments about causal deviance and BCI, and have much less literature review.

Response 6: As noted above, I have shortened Section 1.2, reducing unnecessary literature review.

Comment 7: On p.2, the author uses an example of causal deviance which is supposed to be Davidson’s but it is not correctly stated. The author’s version is different from Davidson’s, and less compelling. I would suggest changing this to Davidson’s original case.

Response 7: In the revised version, I have carefully reformulated Davidson’s example to align more accurately with his original case.

Comment 8:   At line 245, the author claims that when a person uses a BCI to command a computer to do something, there is a mental action and an external event that is not an action. The external event is the algorithmic behaviour of the computer. But why does the author think this is not an action? The causal theory of action is roughly the view that what makes an event an action is that it is caused in the right way by a person’s intentions (or whatever mental states one chooses). If the computer’s behaviour is caused in the right way by a person’s intentions, why can’t it be one of their actions? Does the author have an argument to this effect? One thing I suspect is that they move from the claim that they aren’t bodily actions (line 243) to the claim that they aren’t actions at all. But surely part of the interest here is in the possibility of extra-bodily action. I would like some clarity from the author at this point in the paper.

Response 8: I have clarified this discussion. In that subsection, I was specifically addressing basic actions, but I had not been precise enough with terminology. I have now made the distinction clearer.

Comment 9: At line 305, the author suggests that surveys of intuitions might help arbitrate disagreements about intuitions. But how could this help? A survey just collects all the intuitions, and it might say that Philosopher A is in the minority, but that presumably shouldn’t make us disregard their thought. If the author thinks surveys could help, I would like them to explain how.

Response 9: I have removed this passage, as it would require further discussion to justify its inclusion, and omitting it does not affect the overall argument.

Comment 10:  I was puzzled by the claim on line 306 that the author will assume that the causal theory of action is “a broadly valid theory of action”. Isn’t this precisely what’s at issue? The problem of deviant causal chains is what leads many philosophers to reject the causal theory. The interest of the author’s discussion of BCI lies in enriching our sense of whether the causal theory can survive this objection. So this claim seems very out of place in this context.

Response 10: I have omitted this claim, as assuming CTA is broadly valid is not necessary to evaluate the impact of BCIs on CTA.


Comment 11:  Around line 336, the author talks about modest conceivability and thought experiments. I think the general idea they express might be fine, but one worry I had was that, since CTA is a metaphysical theory of action, the causal connections between mental states and actions needs to hold in every possible world. This suggests we should look toward far out possibilities as well as close by ones. If this discussion stays in the paper (see my comments under Methodology, above), then I would like to see some consideration of this point in order to clarify the approach.

Response 11: I have clarified this section. My intention was to emphasize that the cases introduced are physically possible and do not require imagining physically impossible worlds.

Comment 12:  The question posed on line 450 seemed interesting and important, but the author moves on to something else immediately. What does the author think the answer to that question is, and why? Saying more on this would improve the interest and argumentative quality of the paper.

Response 12: In the revised version, this line has been removed as the argument was restructured.

Comment 13:  Lines 557-567 contain the seeds of a promising argument in favour the reliability strategy, but the author somewhat backs off of developing the point. But this seems like an important contribution which the author should develop in more detail. I am not asking them to develop a fully fledged reliability theory, but it would be good for them to develop the idea in some more detail. For instance: what sorts of reliability theory does the argument support; does it place constraints on what reliability theorists can say or appeal to?

Response 13: I agree that this argument should have been developed in greater detail, and I have expanded on it in the revised version. Specifically, I have clarified what type of reliability theory the argument supports and whether it imposes constraints on existing reliability-based accounts of causal deviance.

I sincerely appreciate the thoughtful and constructive feedback. I have made substantial revisions to improve clarity, argumentation, and focus, and I hope the paper is now significantly stronger.

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

Dear Author,

This manuscript has been very well reworked in light of my comments. I think you have done a very good job, and the paper looks much better balanced and more clearly foregrounds the argument. I think it should be published, pending a few very minor changes I detail below. (One small suggestion for future revisions is that I found it a little too difficult to work through all of the different colours, highlights, and track-changes. Perhaps you could simplify it for future reviewers.)

  • L62 should read “according to”
  • L135-148 – It would be helpful to describe the paper’s structure in terms of its sections, so the reader can map out what is coming.
  • L676 – There is a typo: change ‘a wring way’ to ‘the wrong way’.
  • L826-827 – “Specifically, a theory must support discrete actions without requiring high-precision intentions or continuous guidance. It must also account for empirical data on which causal pathways are normative for executing a given action.” I found this puzzling. What does ‘support discrete actions’ mean? What does it mean for causal pathways to be normative for executing an action?

Author Response

Comment 1: L62 should read “according to”

Response 1: Indeed; corrected as suggested.

Comment 2: L135-148 – It would be helpful to describe the paper’s structure in terms of its sections, so the reader can map out what is coming.

Response 2: I have added section numbers to better outline the paper’s structure.

Comment 3: L676 – There is a typo: change ‘a wring way’ to ‘the wrong way’.

Response 3: The typo has been corrected.

Comment 4: L826-827 – “Specifically, a theory must support discrete actions without requiring high-precision intentions or continuous guidance. It must also account for empirical data on which causal pathways are normative for executing a given action.” I found this puzzling. What does ‘support discrete actions’ mean? What does it mean for causal pathways to be normative for executing an action?

Response 4: I have clarified this passage as follows: "Specifically, a theory must accommodate discrete actions – those that are initiated by a single, overarching intention and carried out as a unified whole – without requiring high-precision intentions or continuous guidance. A theory must also account for empirical data on which causal pathways are normal for a given action, whether the data pertain to biological systems or machines."

Thank you for your kind words! I did my best to improve and clarify the manuscript, and I’m glad that it was successful. I apologize if the changes were difficult to track – there were quite a few, and finding a clear way to present them was not easy. I also appreciate your final comments.
