Computational Architecture Mediating Inhibitory Control of Coordinated Eye-Hand Movements

Significant progress has been made in understanding the computational and neural architecture that mediates eye and hand movements made in isolation. However, less is known about the mechanisms that control these movements when they are coordinated. Here, we outline our computational approaches using accumulation-to-threshold and race-to-threshold models to elucidate the mechanisms that initiate and inhibit these movements. We suggest that, depending on the behavioral context, the initiation and inhibition of coordinated eye-hand movements can operate in two modes: coupled and decoupled. The coupled mode operates when the task context requires tight coupling between the effectors; a common command initiates both effectors, and a unitary inhibitory process is responsible for stopping them. Conversely, the decoupled mode operates when the task context demands weaker coupling between the effectors; separate commands initiate the eye and hand, and separate inhibitory processes are responsible for stopping them. We hypothesize that higher-order control processes assess the behavioral context and choose the most appropriate mode. This computational architecture can explain the heterogeneous results observed across many studies that have investigated the control of coordinated eye-hand movements and may also serve as a general framework for understanding the control of complex multi-effector movements.


Introduction
To lead productive lives, we need to rapidly stop contextually inappropriate movements, such as stopping ourselves from stepping into the street when we see a car approaching or from speaking when a lecture starts. Such forms of inhibitory control, i.e., executive control processes that mediate behavioral action-stopping, have usually been studied using simple movements such as eye movements and button presses [reviewed in 1,2]. However, the majority of our daily activities involve complex multi-effector movements. Scant attention has been paid to the neural and computational mechanisms that mediate the stopping of such complex movements. In this review, we summarize our series of studies on one such complex two-effector system: coordinated eye-hand movements.
Many of our everyday actions, such as reaching for a cup on the table or playing tennis, require coordinated eye-hand movements. Typically, this involves an initial rapid eye movement called a saccade, which brings the target onto the fovea, followed by a hand movement directed to the same location 80-100 ms later. This tight spatial and temporal coupling of the eye and hand movements is the hallmark of coordination [3]. Despite this, distinct neural pathways are thought to control the initiation and stopping of these effectors [4,5]. The neural pathways involved in generating eye and hand movements are thought to diverge in the parietal cortex after common early processing in visual areas. In the macaque brain, saccade-related signals are observed in the lateral intraparietal area and the frontal and supplementary eye fields, which converge onto the superior colliculus and brain stem, which in turn innervate the eye muscles [reviewed in 6,7]. Hand movement-related signals are detected in the parietal reach region, the dorsal premotor cortex, and the motor cortex, which project to the spinal cord and then to the hand muscles [reviewed in 8,9]. This suggests that distinct anatomical networks initiate eye and hand movements. In addition to this anatomical separability, the eye and hand systems can also function independently of each other.
However, other studies have suggested that some regions in the brain respond to both eye and hand movements, such as the premotor cortex [10,11], frontal eye fields [12,13], posterior parietal cortex [14-17] [but see 18,19], and superior colliculus [20-22]. These regions possibly contribute to the neural mechanism mediating eye-hand coordination and appear to be distributed across the brain [reviewed in 23]. A similar issue arises when addressing how coordinated eye-hand movements are stopped. Prior research suggests that the stopping of eye and hand movements is mediated by anatomically separate regions. Eye movements are thought to be stopped by distinct populations of neurons in the frontal eye field and superior colliculus [24,25], while cortical regions such as the right inferior frontal cortex, pre-supplementary motor area, and premotor and primary motor cortex are important for stopping manual movements [26-29]. Additionally, Leung and Cai (2007) [30] reported both overlapping and non-overlapping activations in the prefrontal cortex during the stopping of saccades and button presses, suggesting that either common or separate nodes might mediate the stopping of coordinated eye-hand movements. While these studies are noteworthy, we argue that they are largely phenomenological in nature and fall short of providing a mechanistic basis for understanding the control of eye-hand coordination.
In this review we outline our approach, based on behavior, to study the computational architecture that mediates the initiation and inhibition of coordinated eye-hand movements. We used stochastic accumulation-to-threshold models to fit eye and hand reaction time (RT) distributions and to predict the correlation between the RTs. To understand the mechanisms inhibiting coordinated eye-hand movements, we used behavioral measures such as 1) compensation functions (the probability of an erroneous response as a function of time, analogous to inhibition functions [31]), and 2) the Target Step Reaction Time (a measure of the stopping latency, analogous to the Stop Signal Reaction Time [32]). We used the framework of the independent race model [33] to model the stopping behavior. Under the race model framework, whether or not a prepotent response is stopped depends on the outcome of a race-to-threshold between a GO and a STOP process. The GO process initiates the movement while the STOP process inhibits it. If the GO process reaches the threshold first, the movement is initiated; if the STOP process reaches the threshold first, the movement is inhibited. Thus, using behavior and modeling, we were able to establish the computational mechanisms that mediate the stopping of coordinated eye-hand movements in different behavioral contexts.
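The race logic described above can be sketched in a few lines of simulation. This is a minimal illustration rather than the fitting procedure used in the studies; the GO finishing-time distribution and the STOP latency are hypothetical values, chosen only to show how the probability of escaping inhibition grows with the stop-signal delay.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 20000

# Hypothetical GO finishing times (ms) and STOP latency (ms)
go_rt = rng.normal(300, 40, n_trials)
ssrt = 100

def p_respond(ssd):
    """Probability that the movement escapes inhibition at a given
    stop-signal delay: under the independent race model, a response is
    produced whenever the GO process finishes before the STOP process,
    which starts at SSD and takes SSRT to reach threshold."""
    return np.mean(go_rt < ssd + ssrt)

# Inhibition function: P(respond) rises monotonically with SSD
ssds = [50, 100, 150, 200, 250]
inhibition_fn = [p_respond(s) for s in ssds]
```

The later the stop signal arrives, the more likely the GO process has already won the race, which is the canonical inhibition function of the countermanding literature.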
We suggest that the initiation of eye and hand movements operates in two modes: coupled and decoupled. The coupled mode operates when the behavioral context requires strong coupling between the two effectors (e.g., when reaching for a cup). In this case, the movements are generated by a 'common command'. The decoupled mode operates when the behavioral context requires weaker coupling between the two effectors (e.g., when playing drums). In this case, the movements are generated by separate, independent commands. More importantly, we propose that the architecture that governs the stopping of coordinated eye-hand movements also depends on the mode that initiates the movements. In the coupled mode, when the coordinated movements are initiated by a common command, a unitary effector-independent STOP process attempts to inhibit the movements. Conversely, in the decoupled mode, when separate commands are sent to initiate coordinated eye-hand movements, separate effector-specific STOP processes are recruited to inhibit the movements. We also propose that the brain flexibly chooses which architecture to use depending on the behavioral context, and indeed can switch between architectures on a trial-by-trial basis. We start by describing the coupled mode, then the decoupled mode, and finally discuss how such computational architectures can be validated using neuroscience techniques.

Computational architecture that mediates the initiation of eye-hand movements
Coordinated eye-hand movements to a peripherally presented target are characterized by three salient features: 1) the mean eye RT is typically 80-100 ms less than the mean hand RT [34,35], 2) the correlation between eye and hand RTs is high [36,37], and 3) the variability of the eye and hand RT distributions (quantified using the standard deviation, SD) is similar. While the first two findings are ubiquitous in the field, the last finding has hardly been considered, yet it turned out to be a crucial diagnostic in unraveling the architecture of eye-hand coordination. This is because the SD of any RT distribution under stochastic accumulation scales linearly with its mean [38,39]; but in the case of coordinated eye-hand movements, this linear scaling was not observed [40]. This led us to the hypothesis that a "common command" [35,41-43] underlies the initiation of coordinated eye-hand movements.
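The diagnostic linear scaling between the SD and the mean of an RT distribution can be reproduced with a toy rise-to-threshold (LATER-style) accumulator. The rates and thresholds below are arbitrary illustrative numbers, not fitted parameters; the point is only that scaling the threshold scales the mean and SD together, leaving the coefficient of variation constant.

```python
import numpy as np

rng = np.random.default_rng(1)

# Trial-to-trial variability in the rate of rise (arbitrary units per ms)
rate = rng.normal(5.0, 0.8, 50000)
rate = rate[rate > 1.0]            # drop implausibly slow (or negative) rates

means, sds = [], []
for theta in (1000, 1500, 2000):   # three hypothetical thresholds
    rt = theta / rate              # LATER-style: RT = threshold / rate
    means.append(rt.mean())
    sds.append(rt.std())

# SD scales linearly with the mean: the coefficient of variation is constant
cvs = [sd / m for sd, m in zip(sds, means)]
```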
We tested the ability of three different architectures to explain the behavior when participants made coordinated saccade and reaching movements to a target [40,44]. The motor plan for each effector was modeled as a stochastic rise-to-threshold accumulator in which the time of threshold crossing represented the RT on a trial. The simplest architecture of coordination was the "independent model", in which eye and hand movements are generated by independent systems operating in parallel, driven by a common visual target (Figure 1A). This passive model could not predict any of the three salient features of eye-hand behavior: the means and SDs of the eye and hand RT distributions and the high RT correlation. We also tested an architecture called the "interactive model", which incorporated an active mechanism of coordination (Figure 1B). This was based on the experimental observation that RTs differed between the coordinated and alone conditions. In the alone condition, the same eye and hand movements were executed separately, without the other accompanying effector. Notably, eye RTs were delayed by 50 ms while hand RTs were faster by 100 ms in the coordinated condition compared to the alone conditions [40]. We incorporated this RT modulation in our eye-hand model as interactions: an inhibitory interaction from the hand that delays saccade onset and an excitatory interaction from the eye that speeds up the hand movement. Despite incorporating an active mechanism of coordination, this model failed to predict two of the salient features of eye-hand behavior, i.e., the high RT correlations and the similarity in the SDs of the eye and hand RT distributions. A "common command model" was also tested (Figure 1C). This model suggests that the eye and hand effectors are controlled by a common command, modeled as a single stochastic accumulator.
When this accumulator crosses the threshold, the eye movement starts and is followed by a hand movement after a "delay", which represents the time taken to activate the larger hand muscles to initiate the movement. This architecture could account for all three salient features: the difference in the means of the eye and hand RTs, the similarity of the SDs of the eye and hand RT distributions, and the high RT correlations. Furthermore, we validated this model by measuring the EMG activation of the arm deltoid muscles. The muscle activation preceded saccade onset by 50 ms and was strongly correlated with saccade onset. More importantly, the biomechanical delay measured from EMG onset to hand movement onset was strongly correlated with the hand delay estimated from the common command model for each subject. This evidence suggests that the common command model is a biologically plausible mechanism for generating coordinated eye-hand movements.
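A minimal sketch of the common command architecture makes its three signatures explicit. The accumulator and delay parameters below are hypothetical (roughly matched to the ~90 ms eye-hand lag discussed in this review), not the values fitted in the original studies.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000

# One stochastic accumulator triggers both effectors (hypothetical parameters)
common_finish = 250 + rng.gamma(shape=9.0, scale=10.0, size=n)  # ms
eye_rt = common_finish                            # saccade at threshold crossing
hand_rt = common_finish + rng.normal(90, 5, n)    # plus biomechanical delay (ms)

mean_diff = hand_rt.mean() - eye_rt.mean()        # ~90 ms, set by the delay
sd_eye, sd_hand = eye_rt.std(), hand_rt.std()     # nearly identical SDs
corr = np.corrcoef(eye_rt, hand_rt)[0, 1]         # high RT correlation
```

Because a single source of variability drives both effectors, the mean difference equals the delay while the SDs stay matched and the RTs stay strongly correlated.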

Control of coordinated eye-hand movements
Current evidence regarding the control of eye-hand movements is derived largely from the countermanding task, in which the inhibitory control of eye or hand movements has been studied in isolation by assessing the time taken to inhibit a movement, called the stop-signal reaction time (SSRT) [32]. These studies showed that it takes longer to inhibit a hand movement [~200 ms; 33,44,45] than an eye movement [~100 ms; 24,46,47]. Similar results have been obtained by other studies involving coordinated eye-hand movements, indicating that the ability to stop eye movements is different from the ability to stop hand movements [49-51].
These studies, however, have not considered the architecture that initiates eye-hand movements. This may be critical since the nature of the control employed to stop these movements may vary depending on the architecture of coordination used to initiate them. Following from the above, if a common dedicated circuit initiates coordinated eye-hand movements, then it stands to reason that stopping such movements may also entail a common inhibitory mechanism. We tested this in our subsequent experiments using a variant of the countermanding task called the redirect task. The redirect task had two types of trials: no-step trials (60% of trials), in which the subject made eye and hand movements to a peripheral target, and step trials (40% of trials), in which, after a delay called the target step delay (TSD), a second target appeared at a different location. In these trials, participants tried to withhold their response to the initial target and redirect their response to the final target. Thus, in both the countermanding and redirect tasks, participants try to stop their response to the initial target in a minority of trials. Additionally, in the redirect task, they have to redirect their response to the second target. The behavioral responses in redirect tasks parallel those seen in countermanding tasks: 1) with increasing TSD, erroneous responses (movements made to the first target, i.e., unsuccessful redirection) increase, producing a psychometric function called the compensation function, which measures the probability of an erroneous response as a function of TSD (analogous to the inhibition function in the countermanding task); 2) redirected movements are well described by the race model [44,52,53]; 3) the race model can be used to estimate the Target Step Reaction Time (TSRT), the average latency to redirect movements (analogous to the Stop Signal Reaction Time in the countermanding task).
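Under the race model, the compensation function and the TSRT can be recovered from simulated data much as they are from behavior. The GO distribution and redirecting latency below are hypothetical; the TSRT is estimated here with an integration-style method that inverts the no-step RT distribution at each observed error rate.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000

go_rt = rng.normal(320, 50, n)   # hypothetical no-step RT distribution (ms)
tsrt_true = 120                  # hypothetical redirecting latency (ms)

# Compensation function: P(erroneous response to the first target) per TSD.
# An error occurs when the GO process beats the STOP process (TSD + TSRT).
tsds = np.arange(50, 251, 50)
comp = np.array([np.mean(go_rt < tsd + tsrt_true) for tsd in tsds])

# Integration-style estimate: invert the no-step RT distribution at each
# observed error rate, then subtract the TSD to recover the stopping latency
tsrt_est = np.mean([np.quantile(go_rt, p) - tsd
                    for p, tsd in zip(comp, tsds) if 0 < p < 1])
```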
In our study, participants performed the redirect task in three conditions: 1) an eye-alone condition, where only saccades were made, 2) a hand-alone condition, where only reaches were made, and 3) an eye-hand condition, where both saccades and reaches were made. This allowed us to compare how stopping performance changed depending on whether the effector was executed by itself (alone condition) or along with the other effector (coordinated condition). In the alone conditions, the TSRT for the eye was significantly less than that for the hand (Figure 2F). Additionally, the compensation functions for the eye and hand were distinct from each other (Figure 2B). Taken together, this suggests that in the alone conditions, the two effectors are inhibited by separate effector-specific STOP processes. However, when we compared the behavior in the coordinated condition, we observed an interesting result: the eye TSRT was less than that of the hand (Figure 2F), but the eye and hand compensation functions were comparable to each other (Figure 2D).

Figure 2 | (E) A bar graph showing the symmetrical shifts (alone) and non-symmetrical shifts (coordinated). The blue bar shows the average difference between the mean RTs of the eye and hand, while the magenta bar shows the average difference between the means of the eye and hand compensation functions. (F) A bar graph showing the TSRT calculated for the eye (blue) and hand (magenta) during the alone and coordinated conditions after incorporating the ballistic stage.
To understand this conundrum of similar compensation functions despite dissimilar TSRTs, we considered the effect that RT distributions have on compensation functions. The nature of the compensation function depends on the outcome of the race between the GO and STOP processes. If the GO process, characterized by the no-step RT distribution, is slower, it is expected to shift the compensation function to the right. We tested this expectation across the three conditions. In the alone conditions, the difference between the means of the eye and hand RT distributions (Figure 2A) was comparable to the difference between the means of their compensation functions (Figure 2B). This suggests that the difference seen in the compensation functions during the isolated execution of the effectors is entirely driven by the RT differences (Figure 2E). However, in the coordinated condition, the difference between the means of the eye and hand RTs (~100 ms; Figure 2C) was significantly larger than the difference between the means of their compensation functions (~35 ms; Figure 2D). This peculiar shift of the no-step RT without a corresponding shift of the compensation function (i.e., a non-symmetric shift) was specific to the coordinated condition (Figure 2E). This suggests that the nature of the control during the stopping of eye and hand movements is distinct between the alone and coordinated conditions. Previous studies that have reported such changes in RT without corresponding changes in compensation functions have attributed them to a ballistic stage that is immune to inhibitory control and reflects a "point of no return" during movement initiation [54,55]. We therefore tested whether a ballistic stage could be detected in subjects during the control of coordinated eye-hand movements.
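The expectation tested here, that a slower GO process shifts the compensation function rightward by the same amount when the STOP process is unchanged (a symmetric shift), can be checked numerically. All parameters are illustrative, and the midpoint of the compensation function stands in for its mean.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 40000
tsrt = 120                       # shared hypothetical STOP latency (ms)
tsds = np.arange(0, 401, 10)

def comp_midpoint(rt):
    """TSD at which half of the responses escape inhibition (the midpoint
    of the compensation function), found by linear interpolation."""
    p = np.array([np.mean(rt < t + tsrt) for t in tsds])
    return np.interp(0.5, p, tsds)

fast_rt = rng.normal(250, 40, n)    # e.g. an eye-alone GO process
slow_rt = fast_rt + 80              # a GO process slowed by exactly 80 ms

rt_shift = slow_rt.mean() - fast_rt.mean()                    # 80 ms
comp_shift = comp_midpoint(slow_rt) - comp_midpoint(fast_rt)  # also ~80 ms
```

With a common STOP latency, the whole compensation function tracks the GO distribution, which is exactly the symmetric-shift pattern seen in the alone conditions.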

A ballistic stage explains redirect behavior for coordinated eye-hand movements
In our study, we estimated the ballistic stage using numerical simulations [56], which involved fitting the observed compensation function of the hand during coordinated eye-hand movements. Using this method, we estimated the last 45 ms of hand motor preparation to be ballistic in nature. Since the electromechanical delay, or the time between EMG onset and hand movement, is ~100 ms [42,57], we conclude that the "point of no return" for the hand movement may occur after EMG onset. The ballistic stage might result from the interplay between the common command (GO process) that initiates the eye-hand movement and the inhibitory process. When the GO process wins the race to the threshold, resulting in an eye movement, the inhibitory process decays or is actively inhibited. This inevitably results in the hand movement following the eye to the erroneous target due to the absence of any active inhibition. As a consequence, the compensation functions of the eye and hand are comparable despite significant differences in their GO RTs in the coordinated condition. Recent studies have also substantiated the existence of a ballistic stage in countermanding tasks. Muscle activity decreases ~60 ms prior to the behavioral measure of stopping latency (SSRT), suggesting that this time interval reflects a ballistic stage during which the inhibitory process cannot intervene and stop the response [58-60]. Such an architecture also resolves the paradox of different TSRTs for the eye and hand in the coordinated condition. When this ballistic stage of 60 ms is incorporated, the hand TSRT in the coordinated condition becomes comparable to the eye TSRT. The similarity of the eye and hand TSRTs and the comparable compensation functions of the eye and hand suggest that a unitary, effector-independent inhibitory mechanism explains the control of coordinated eye-hand movements.
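The effect of a ballistic stage can be sketched by letting the STOP process race not against movement onset but against a point of no return that precedes it. With the hypothetical parameters below (a fixed 90 ms eye-hand delay and a 60 ms ballistic stage on the hand), the hand's compensation function lands much closer to the eye's than the RT difference alone would predict, reproducing the non-symmetric shift.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 40000
tsrt, delay, ballistic = 110, 90, 60    # ms; all hypothetical values
tsds = np.arange(0, 401, 10)

eye_rt = rng.normal(260, 35, n)         # common command finishing times (ms)
hand_rt = eye_rt + delay                # hand follows after a fixed delay

def comp_midpoint(rt, pnr=0):
    """Midpoint of the compensation function when the STOP process must
    beat a point of no return `pnr` ms before movement onset."""
    p = np.array([np.mean(rt - pnr < t + tsrt) for t in tsds])
    return np.interp(0.5, p, tsds)

rt_gap = hand_rt.mean() - eye_rt.mean()                    # 90 ms
comp_gap = comp_midpoint(hand_rt, pnr=ballistic) - comp_midpoint(eye_rt)
# comp_gap ~30 ms: the ballistic stage absorbs most of the RT difference
```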

A unitary STOP model: Salient features and validation
The idea that a unitary STOP process inhibits coordinated eye-hand movements contradicts previous studies that suggested distinct inhibitory processes for eye and hand movements [49,50]. To validate the unitary STOP model, we simulated race models with a unitary STOP process and a ballistic stage in the hand motor plan. This model was able to account for all the observed behavioral measures in the eye-hand redirect task, such as the compensation functions and the proportions of correct and incorrect trials. The performance of the unitary STOP model was then compared to that of an alternative model, which postulated that effector-specific STOP processes inhibit the eye and hand movements. Both models were able to predict all features of the eye-hand redirect behavior to the same extent. However, using a statistical model selection criterion (the Akaike Information Criterion), the unitary STOP model was found to be the better one, as it predicted the redirect behavior with fewer free parameters.
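The model selection step rests on the standard AIC formula, AIC = 2k - 2 ln L, which penalizes free parameters: when two models achieve essentially the same likelihood, the one with fewer parameters wins. The log-likelihoods and parameter counts below are invented for illustration, not the values from the actual fits.

```python
def aic(k, log_likelihood):
    """Akaike Information Criterion: 2k - 2 ln L (lower is better)."""
    return 2 * k - 2 * log_likelihood

# Invented fits: both models reach essentially the same likelihood,
# but the unitary-STOP model needs fewer free parameters
ll_unitary, k_unitary = -1523.4, 5     # one shared STOP accumulator
ll_separate, k_separate = -1522.9, 8   # two effector-specific STOP accumulators

aic_unitary = aic(k_unitary, ll_unitary)      # 3056.8
aic_separate = aic(k_separate, ll_separate)   # 3061.8: penalized for parameters
```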
Further evidence beyond this statistical criterion also validated the existence of a unitary STOP process for controlling coordinated eye-hand movements.

Figure 3 | Schematics depicting the ballistic stage for coordinated eye-hand movements. The GO process (green) is activated after a visual delay following the first target, and the STOP process (red) is activated in a similar way following the onset of the second target. These two processes race to a threshold. Eye movements (red dots) are initiated when the GO process crosses the threshold, and hand movements follow after a delay (violet). After the GO process wins the race, the STOP process is actively inhibited or decays. The lack of inhibition results in a ballistic stage (grey), leading to the inevitable execution of hand movements (blue dot).
The GO processes for coordinated eye-hand movements were previously modeled as a common accumulator because the variability in the eye and hand RTs was comparable. Analogously, the unitary STOP model makes a specific prediction that the variability in the redirect behavior of the eye and hand should be comparable. To test this, we estimated the variability of the STOP process by replotting the compensation functions using ZRFT (z-score relative finishing time) normalization instead of TSDs. We found that the STOP variability of the eye and hand was comparable in the coordinated condition, validating the prediction of the unitary STOP model. Interestingly, when the eye and hand movements were executed in isolation, the variability of the eye and hand STOP processes was significantly different. This is consistent with our previous results, which suggested separate, effector-specific STOP processes for inhibiting eye and hand movements when executed in isolation.
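ZRFT normalization re-expresses each target step delay in units of the GO process's variability, so that any residual spread in the replotted compensation functions can be attributed to the STOP process. A common formulation (assumed here, with made-up numbers) is ZRFT = (mean RT - TSD - TSRT) / SD(RT):

```python
import numpy as np

def zrft(tsds, mean_rt, sd_rt, tsrt):
    """Z-score of relative finishing time: how far ahead of the GO process
    (in GO-SD units) the STOP process finishes at each target step delay."""
    return (mean_rt - np.asarray(tsds, dtype=float) - tsrt) / sd_rt

# Made-up values: plotting compensation functions against ZRFT instead of
# TSD removes GO-side differences; any leftover spread between the eye and
# hand curves then reflects STOP-side variability
z = zrft([100, 150, 200], mean_rt=320.0, sd_rt=50.0, tsrt=120.0)
```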
To test the unitary STOP architecture experimentally, a variant of the redirect task (the selective redirect task) was used. Subjects were asked to perform a coordinated eye-hand movement in the 60% of trials that were no-step trials. During the remaining 40% of trials, in separate blocks, subjects were instructed to inhibit either the eye (Eye-Stop), the hand (Hand-Stop), or both effectors (Eye-Hand Stop). We hypothesized that a unitary STOP process, if engaged to inhibit a coordinated eye-hand movement, would not be able to selectively stop a single effector. Consistent with this idea, the hand was inhibited in the Eye-Stop condition and the eye was inhibited in the Hand-Stop condition, even though the behavioral context did not require it. The redirect performance across the three conditions was not significantly different. This experiment provided strong behavioral evidence that, when coordinated, the eye and hand effectors are controlled by a unitary STOP process that inhibits both.
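The logic behind the selective redirect prediction is simple enough to state in code: if a single STOP accumulator races the common GO command, its outcome applies to both effectors, so trials in which exactly one effector is stopped cannot occur. The parameters below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 20000
tsd, tsrt = 230, 110                 # hypothetical delays (ms)

go = 250 + rng.gamma(9.0, 10.0, n)   # common command finishing times (ms)
stop_wins = go > tsd + tsrt          # the unitary STOP beats the GO process

# A unitary STOP has no way to cancel only one effector: whenever it wins,
# BOTH the eye and the hand are inhibited, even in Eye-Stop or Hand-Stop
# blocks that ask for selective stopping
eye_stopped = stop_wins
hand_stopped = stop_wins
p_selective = np.mean(eye_stopped != hand_stopped)   # 0 by construction
```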
Finally, taking together all the evidence observed in the coordinated condition, 1) the similarity of the compensation functions, 2) the similarity of the eye and hand TSRTs after incorporating a ballistic stage, 3) the superiority of the unitary STOP model over the multiple STOP model as assessed by the Akaike Information Criterion, 4) the similarity in STOP variability, and 5) participants' inability to selectively stop one effector in the selective redirect task, we conclude that a unitary, effector-independent STOP process is recruited when coordinated movements are to be inhibited.

Computational architecture mediating flexible initiation of coordinated eye-hand movements
The majority of our day-to-day eye and hand movements are temporally coupled. However, in certain behavioral contexts, it might be advantageous not to couple the two movements. Consistent with this, studies have reported heterogeneous results regarding the temporal coupling between the two effectors. Some studies have reported weak correlations ranging between 0.1 and 0.4 [34,61-63], while others have reported moderate to high correlations ranging between 0.6 and 0.9 [37,40,64,65]. These findings suggest a flexible coupling between the two effectors based on task context. While the common command model can account for the tight coupling between the effectors seen in certain behavioral contexts, it cannot account for the varying, low levels of coupling seen in other behavioral contexts.
We studied the computational mechanism that allows such flexible coupling [66]. We hypothesized that coordinated eye-hand movements operate in two modes: coupled and decoupled. Each of these modes of coordination has specific behavioral signatures. In the coupled mode, a single accumulator is responsible for initiating both movements; hence there is a single source of variability underlying both effectors. This predicts a high eye-hand RT correlation and similar SDs of the two RT distributions. Conversely, in the decoupled mode, separate accumulators are responsible for initiating the two movements; hence there are distinct sources of variability underlying the effectors. This predicts low eye-hand RT correlations and dissimilar SDs of the RT distributions. We tested this hypothesis by asking participants to perform two tasks specifically designed to induce them to operate in either the coupled or the decoupled mode.
Participants performed Go tasks in two contexts: a search task (coupled mode; Figure 4A) and a dual-task (decoupled mode; Figure 4B). We reasoned that the initiation of the two effectors would be coupled or decoupled depending on whether the Go cues for the two movements were common or not, respectively. On every trial of the search task, participants had to make eye-hand movements to a common target presented among distractors, i.e., the Go cue for both effectors was common. In contrast, in the dual-task, the Go cue for the eye was the appearance of a peripheral target, while the Go cue for the hand was a tone. Further, the tone was presented only in a minority of trials, ensuring that the target appearance was not predictive of the Go cue for the hand. We hypothesized that the behavior in the search task would be consistent with the predictions of the common command model (similar SDs of the eye and hand RT distributions and a high RT correlation), while the behavior in the dual-task would be consistent with the predictions of the separate commands model (dissimilar SDs of the eye and hand RT distributions and a low RT correlation). The results were consistent with these predictions. In the search task, RT correlations were high (~0.8), and the SDs of the eye and hand RT distributions were not significantly different from each other despite a ~90 ms difference in their means. Further, this behavior was fit well by the common command model but not by the separate commands model. Conversely, in the dual-task, RT correlations were low (~0.3), and both the mean and SD of the eye RT distribution were significantly less than the mean and SD of the hand RT distribution, respectively. Further, the behavior in the dual-task could be explained by the separate commands model but not by the common command model.
Thus, these results suggest that there are two architectures mediating the initiation of eye-hand movements and that, depending on the task context, the brain is biased toward one of them.
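The two modes and their behavioral signatures can be contrasted directly in simulation. The accumulator parameters below are invented for illustration; what matters is the qualitative pattern, a high correlation with matched SDs in the coupled mode versus a near-zero correlation with mismatched SDs in the decoupled mode.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20000

# Coupled mode (search task): one accumulator drives both effectors
common = 230 + rng.gamma(9.0, 10.0, n)
eye_c = common
hand_c = common + rng.normal(90, 5, n)      # biomechanical delay only

# Decoupled mode (dual-task): independent accumulators for each effector
eye_d = 230 + rng.gamma(9.0, 10.0, n)
hand_d = 320 + rng.gamma(16.0, 10.0, n)     # separate, more variable hand GO

corr_c = np.corrcoef(eye_c, hand_c)[0, 1]   # high in the coupled mode
corr_d = np.corrcoef(eye_d, hand_d)[0, 1]   # near zero in the decoupled mode
sd_ratio_c = hand_c.std() / eye_c.std()     # ~1: matched SDs
sd_ratio_d = hand_d.std() / eye_d.std()     # clearly > 1: mismatched SDs
```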

Stopping of flexibly initiated eye-hand movements
As mentioned before, there is debate as to whether the stopping of eye-hand movements is mediated by a unitary STOP process or by separate STOP processes [49-51,68]. These heterogeneous results might arise because the movements were generated in task contexts that biased the brain toward a coupled or a decoupled mode. Thus, we hypothesized that depending on the mode in which coordinated eye-hand movements are generated, the brain can flexibly employ a unitary STOP process or separate STOP processes to inhibit the movements [69]. To test this, we added a redirect component to the coupled and decoupled Go tasks to study stopping behavior in these contexts. The extension of the search task was called the search redirect task (Figure 5A). Here, 60% of trials were no-step trials, in which eye-hand movements had to be made to the target among distractors, while 40% were step trials, in which the target jumped to another location after a target step delay. In these trials, movements had to be made to the final target location and not the initial target location. Thus, the search redirect task represented the task context where, presumably, a common command and a unitary STOP were at work. Conversely, the extension of the dual-task was the dual redirect task (Figure 5B), again with 60% no-step and 40% step trials. Thus, the dual redirect task represented the task context where, presumably, separate commands and separate STOPs were at work.

Figure 5 | Summary of tasks, observed behavior, and simulation results.
We initially tested the no-step RT distributions and correlations and found convincing evidence that responses in the no-step search task were generated by a common command in the coupled mode, while responses in the no-step dual-task were generated by separate commands in the decoupled mode. Next, we compared the eye and hand compensation functions from the step trials between the two task contexts. We reasoned that in the coupled mode (search redirect task), the movements would be inhibited by a unitary STOP process, resulting in similar eye and hand compensation functions (Figure 5C). In the decoupled mode (dual redirect task), the movements would be inhibited by separate STOP processes, resulting in dissimilar compensation functions (Figure 5E). The results were consistent with this prediction (Figure 5D). We also estimated the variability of the STOP process(es) by replotting the compensation functions using ZRFT (z-score relative finishing time) normalization and observed that the variability was larger in the decoupled mode than in the coupled mode. This again suggested that unitary STOP and separate STOP processes are responsible for inhibiting the eye and hand movements in the coupled and decoupled conditions, respectively.
Further support for this conclusion came from simulation results comparing the unitary and separate STOPs models. The unitary STOP model (Figure 5F), but not the separate STOPs model (Figure 5G), was able to predict the behavior in the search redirect task. Conversely, the separate STOPs model (Figure 5H), but not the unitary STOP model (Figure 5I), was able to explain the behavior in the dual redirect task. Taken together, this highlights the flexibility that exists in the brain, where the task context biases the architecture toward a coupled mode (common command and unitary STOP) or a decoupled mode (separate commands and separate STOPs).

Parallels between common and separate stops and global and selective stopping
Thus far, we have suggested that there are two neural architectures that mediate the stopping of coordinated eye-hand movements: 1) a unitary STOP used in the coupled mode, and 2) separate STOPs used in the decoupled mode. How do these computational architectures relate to neural mechanisms? Interestingly, our conclusions largely parallel the research on global vs. selective stopping, i.e., whether stopping has a broad impact on motor cortical areas or is selective to the motor area that needs to be inhibited.
Global stopping is thought to be recruited when there is a rapid, reactive need to stop a movement and is thought to be implemented by rapid inhibition of all motor cortical areas. This rapid stopping is thought to be mediated by the hyperdirect pathway that connects prefrontal cortical areas to the subthalamic nucleus (STN) of the basal ganglia [70,71]. Activation of the STN, via the output of the basal ganglia, rapidly cuts off the thalamocortical drive, leading to global inhibition of the motor cortex within a few hundred milliseconds [59,72]. This global inhibition of the motor cortex can be measured using transcranial magnetic stimulation (TMS). Numerous TMS studies have demonstrated that in trials where a response is successfully stopped, there is a decrease in TMS-evoked muscle responses (motor evoked potentials, MEPs) in muscles that are not involved in the task, e.g., decreased MEPs of the leg when stopping the hands [73], decreased MEPs of the hand when stopping the eyes [74], decreased MEPs of the hand when stopping speech [75], and decreased MEPs of the left hand when stopping the right hand [76].
On the other hand, selective stopping is thought to be recruited when there is a need to stop one effector without affecting the others. Such stopping is slower and more effortful. It is thought to be mediated by the slower indirect pathway of the basal ganglia [77], and in this case a decreased MEP is seen only in the muscle that has to be inhibited [73,78] [but see 59].
Taken together, these neural mechanisms fit our conclusions. We propose that the stopping of coordinated eye-hand movements mediated by a computationally unitary STOP is neurally implemented by the global stopping mechanism, while the stopping mediated by computationally separate STOPs is implemented by the selective stopping mechanism. This yields clear predictions that can be tested in future studies.

Conclusions
Using a combination of behavior, computational modeling, and electromyography, we propose a computational architecture that mediates the initiation and inhibition of coordinated eye-hand movements. We suggest that, depending on the task context, eye-hand movements may be initiated and inhibited in one of two modes: a coupled mode, where a common command initiates and a unitary STOP inhibits both movements, and a decoupled mode, where separate commands initiate, and separate STOPs inhibit, the two movements. Such an architecture can explain eye-hand behavior in numerous contexts and reconciles the heterogeneous results reported by previous studies. We hope that the computational framework we have developed and rigorously tested will aid neurophysiologists in discerning the neural mechanisms of eye-hand initiation and inhibition.