Article

Auditory Coding of Reaching Space

by Ursula Fehse, Gerd Schmitz, Daniela Hartwig, Shashank Ghai, Heike Brock and Alfred O. Effenberg

1 Faculty of Humanities, Institute of Sports Science, Leibniz University Hannover, 30167 Hanover, Germany
2 School of Physical & Occupational Therapy (SPOT), McGill University, Montreal, QC H3G 1Y5, Canada
3 Honda Research Institute Japan Co., Ltd., Wako-shi, Saitama 351-0188, Japan
* Authors to whom correspondence should be addressed.
Appl. Sci. 2020, 10(2), 429; https://doi.org/10.3390/app10020429
Submission received: 19 November 2019 / Revised: 26 December 2019 / Accepted: 27 December 2019 / Published: 7 January 2020
(This article belongs to the Special Issue Movement Biomechanics and Motor Control)

Abstract

Reaching movements are usually initiated by visual events and controlled visually and kinesthetically. Recently, studies have focused on the possible benefit of auditory information for localization tasks and for movement control. This explorative study aimed to investigate whether reaching space can be coded purely by auditory information. To this end, the precision of reaching movements to merely acoustically coded target positions was analyzed. We studied the efficacy of acoustically effect-based and of additional acoustically performance-based instruction and feedback, as well as the role of visual movement control. Twenty-four participants executed reaching movements to merely acoustically presented, invisible target positions in three mutually perpendicular planes in front of them. Effector-endpoint trajectories were tracked using inertial sensors. Kinematic data regarding the three spatial dimensions and the movement velocity were sonified. Thus, acoustic instruction and real-time feedback of the movement trajectories and the target position of the hand were provided. The subjects were able to align their reaching movements to the merely acoustically instructed targets. Reaching space can be coded merely acoustically, and additional visual movement control does not enhance reaching performance. On the basis of these results, a remarkable benefit of kinematic movement acoustics for the neuromotor rehabilitation of everyday motor skills can be assumed.

1. Introduction

Reaching movements are everyday actions that are essential for coping with daily life. Information about the position of the hand is transmitted via the visual and the proprioceptive sense [1]. During a reaching movement, the movement of the arm is continuously adjusted [2]. Information about the target position and the location of the hand is continuously monitored [2]. Apparently, at least when the moving limb is not visible but the target point is, proprioceptive information about the location of the arm is continuously compared with visual information about the target position, and the ongoing movement is thus smoothly corrected [3].
Not only visual and proprioceptive but also auditory information can be used to guide arm movements [4,5]. Boyer et al. [6] conducted a reaching study with merely acoustically coded artificial targets in the transverse plane with blindfolded participants. Directional information was given via stereo headphones reconstructing the sound pressure of a sound source at a given position in the transverse plane. The average deviation from the target was lower when the target sound was presented for two seconds than when it was presented for 0.25 s. There was also a feedback condition in which continuous auditory real-time feedback of the hand position was given, using the same mapping as the target presentation. In this condition, the target was likewise presented for 0.25 s as instruction. Reaching performance in this condition did not differ from that of the two conditions without any feedback. When this feedback was shifted 18.5° to the left of the real hand position, reaching performance was worse than in the condition without any feedback and with a target presentation of two seconds [6].
Sound and motion are closely linked: sound is vibration, and vibration is movement; there is no sound without movement. Auditory perception has a high temporal resolution, can also convey spatial information, and does not require focused attention. Most movements, however, do not produce notable acoustic effects. Additional artificial auditory movement information is therefore needed to enhance the amount of auditory information about usually almost silent movements.
The idea of providing movement information via the auditory channel has been pursued for several decades [7]. Research has focused on the benefits of additional auditory movement information for motor perception, motor control, motor learning, and cooperative motion [8,9,10,11,12,13,14,15]. As underlying neurophysiological functions, multisensory integration [16,17,18] and intermodal sensorimotor movement representations [19,20,21,22,23] are discussed. The human ability to form supramodal action representations might enable additional acoustic information to support proprioception.
Acoustic movement information can be used to augment the perception of another modality, but movement acoustics alone also provides information about a movement pattern for the support of motor perception and motor performance. Research in this area has focused on natural [24,25,26] as well as artificial movement sounds [27,28,29]. Levy-Tzedek et al. [30] even developed a visual-to-auditory sensory substitution device to enable blind people to guide reaching movements.
Movement acoustics has been used in neurorehabilitation dedicated to sensory-motor deficits, for example in the rehabilitation of target-orientated arm movements [12,31,32,33,34]. Additional work has been carried out with guiding and feedback acoustics. Thaut et al. succeeded in the rhythmic facilitation of gait for patients with Parkinson’s disease [35] and stroke [36], and also in the rhythmic cuing of reaching movements for stroke patients [37]. Ghai et al. reviewed further benefits of auditory cuing for gait impairments caused by aging [38], cerebral palsy [39], and Parkinson’s disease [40]. Thaut provided an overview of therapeutically usable effects of music in neuromotor rehabilitation [41]. The findings of Schauer and Mauritz suggest that walking in time with music adjusted to the individual’s actual step frequency can improve not only step length, and therefore gait velocity, but also gait symmetry in stroke patients more than traditional gait therapy does [42]. Young et al. differentiated the benefit of different kinds of non-sonification auditory instruction for various gait parameters in Parkinson’s disease [43].
There is already some evidence that real-time movement acoustics can support neuromotor rehabilitation, although most studies use movement data to create auditory error feedback [44], target-orientated feedback [45], or more musically oriented movement acoustics [46]. These kinds of acoustic information usually need processing on a conscious level of perception.
Initial approaches to the use of kinematic movement acoustics in neuromotor rehabilitation have also been developed [12,47]. Here, movement sonification is configured as continuous real-time kinematic auditory feedback intended to initiate audio-motor couplings and to become directly integrated into multimodal perception as described above. Based on this “lower-level” efficiency, movement sonification should be appropriate for substituting reduced proprioception in stroke patients and should thereby, for example, enhance the effectiveness of the rehabilitation process.
Such sensory enhancement should increase neuronal plasticity and support the functional reorganization of the sensory-motor system within the central nervous system. A first step is intended to focus on simple everyday actions of the upper extremities, such as goal-directed arm movements in three-dimensional action space. As a prerequisite, an effective approach to represent this reaching space acoustically and the development of an appropriate sonification device are required.
This explorative study analyzes whether it is possible, on the basis of the sonification device developed in these first steps, to perform reaching movements to merely acoustically instructed, invisible targets and to use acoustic feedback for movement control. The aim was to study the implicit effectiveness of the available acoustic information about the target position and the movement. Therefore, the recruited subjects were not experienced in the use of movement sonification and had no explicit knowledge of the sound composition used. Additionally, the impact of visual control of the movement on the precision of merely acoustically guided reaching movements in three planes was analyzed in healthy subjects. In this way, we expected to evoke an activation of audio-motor or audio-visuo-motor processes, respectively, and to address sensory and motor control mechanisms.
Human movements continuously undergo alterations in direction, velocity, and force. For this reason, a continuous mapping is required to depict them adequately. We decided to use continuous kinematic real-time auditory movement information to achieve a high level of auditory-proprioceptive congruence. Since the processing of error-related auditory information requires conscious cognitive processes such as attention, and we wanted to address implicit processes instead, we chose kinematic rather than error-related movement sonification. These considerations are supported, for example, by Rosati et al. [34], who showed the superiority of task-related audio feedback over error-related audio feedback in a visual tracking task.
Based on the considerations of Graziano [2], we decided to use data of the right metacarpophalangeal joints to generate acoustic information about the target position and the reaching movement. For the acoustic representation of the target position of the hand, three-dimensional spatial coordinates were sonified. For the continuous representation of the reaching movement, acoustic information about movement velocity was added. The systematics of the sonification was adopted from Vinken et al. [27], who found a high efficacy of such a four-dimensional sonification of a continuous kinematic trajectory in the auditory discrimination of arm movements. The coding of the vertical spatial component by pitch was also in accordance with Melara and O’Brien [48] and Scholz et al. [33,46]. The results of Küssner et al. [49], who studied the relations between musical parameters and intuitive gestures and found a positive correlation between pitch and height, also supported the choice of this mapping. The horizontal coding by stereo characteristics conformed to that applied by Boyer et al. [6]. The spatial depth was mapped onto the spectral composition. In this way, we intended to create an auditory space enabling the localization of reaching targets. Following the thoughts of Boyer et al. [6] and the findings of Vinken et al. [27], we added a velocity component to the spatial information for the movement sonification. In light of the above-mentioned results of Boyer et al. [6], suggesting that an auditory position stimulus presentation of 0.25 s is too short for the generation of a precise and reliable target representation, our instruction sound was longer than 0.25 s in all conditions. In order to control for the influence of the extent of the given information, we worked with two different durations of target presentation.
In contrast to Boyer et al. [6], and to facilitate the task, we chose to design the instruction and feedback in the same manner. That means that the subjects were prompted to produce, with their reaching movement, the same sound that they had heard as instruction. To realize a clear temporal separation between instruction and feedback and to avoid confusion, the participants’ movement did not start before the instruction had finished. We therefore accepted the reduced reaching performance that other authors observed when the target presentation is removed before the reaching movement is completed [3,6].
Since we think that it is possible to instruct target positions for reaching movements merely acoustically and that our kind of movement sonification is highly effective, we formulated the hypothesis that the blindfolded as well as the sighted experimental group would be able to hit the target points with a precision clearly above chance. In light of the described important role of vision in reaching movements and the beneficial effects of audio-visual integration and audio-visuo-motor processing associated with movement sonification, we expected the sighted experimental group to be superior to the blindfolded group in reaching precision. In spite of the merely acoustic target instruction, we supposed that the visibility of the moving arm would enhance the precision of the reaching action. Furthermore, we hypothesized that additional continuous auditory instruction and real-time feedback about the movement would be superior to a mere discrete sonification of the target point. Considering the learning effect demonstrated by Effenberg et al. [14] and the improvement of performance over time observed by Vinken et al. [27] even without the availability of feedback, we assumed an improvement of the reaching performance over time in the present study.

2. Materials and Methods

2.1. Participants

Twenty-four subjects (14 females, 10 males), aged 19 to 30 years (mean age: 22.2 ± 3.0 years), participated in this study. All participants had normal or corrected-to-normal vision (standard vision test), normal hearing (hearing test: HTTS, Version 2.10, 00115.04711, SAX GmbH, Berlin, Germany) and were right-handed (Edinburgh Inventory, 10-item version). To ensure musical sense, the online version of the MBEA (Montreal Battery of Evaluation of Amusia) [50] was administered. Following Peretz et al. [51], subjects who scored less than 70% were excluded. The investigations were conducted in accordance with the Declaration of Helsinki. All experimental procedures were approved by the “Central Ethics Committee” ZEK-LUH at the Leibniz University Hannover, Hanover, Germany. All participants gave their informed consent for inclusion prior to their participation. After the completion of the experiment, the subjects received a modest monetary compensation.

2.2. Experimental Setup

The participants were seated on a height-adjustable chair in front of an experimental apparatus resting on a table and wore circumaural headphones (beyerdynamic DT 100, 30–20,000 Hz) as well as inertial sensors.
A digitizing tablet (WACOM Intuos4 XL PTK-1240, active surface 487.7 mm × 304.8 mm) was positioned successively in the xy-plane (with the long tablet side along the y-axis), yz-plane (with the long tablet side along the z-axis), or xz-plane (with the long tablet side along the x-axis) as shown in Figure 1.
The digitizing pen was positioned next to the tablet. A chin rest, positioning the chin at a height of 30 cm above the table surface, in combination with the height-adjustable chair ensured a firm and standardized head position throughout the whole experiment. An arm rest producing approximately 90° of abduction in the right shoulder joint was used during the calibration of the inertial sensors. The subjects were instructed to execute reaching actions with a digitizing pen in their right hand towards invisible, merely acoustically coded target positions on the tablet (see Figure 2).

2.3. Stimulus Material

A human model was equipped with four inertial sensors (MTx miniature inertial 3DOF orientation tracker; Xsens Technologies BV, Enschede, The Netherlands) on the right arm. The sensors (size: 38 mm × 53 mm × 21 mm, weight: 30 g) were attached to the middle of the right shoulder, the right upper arm, the right lower arm, and the back of the right hand. The inertial sensors provided 3D acceleration data (up to eighteen times the acceleration of gravity), 3D rate of turn (up to 1200°/s), three-degrees-of-freedom orientation, and a sampling rate of up to 100 Hz, depending on the number of sensors used [52,53]. Data of the inertial sensors were recorded while the model executed reaching actions with the digitizing pen towards one of nine different target positions on the tablet (see Figure 3).
In the next step, a forward kinematic model of the right arm was used to determine trajectory and velocity information of the right metacarpophalangeal joints [54]. Four specific kinematic parameters were chosen to be mapped onto four different parameters of a sound (patch 100 “jupiter lead” from “Preset D Group”, SonicCell, Roland) [54]: (a) The absolute velocity of the right metacarpophalangeal joints was mapped onto the volume of the sound (the faster the movement, the louder the sound). (b) The position of the right metacarpophalangeal joints on the y-axis was mapped onto the pitch (the higher the metacarpophalangeal joints in space, the higher the sound). The pitch was modulated over two octaves from G2 to G4. (c) The position of the right metacarpophalangeal joints on the z-axis was mapped onto the spectral composition (the higher the value, the fewer overtones the sound has; standard MIDI Continuous Controller No. 74, which controls the cutoff frequency of the low-pass filter for an overall brightness control). (d) The position of the right metacarpophalangeal joints on the x-axis was mapped onto the stereo characteristics (the further the metacarpophalangeal joints were to the left from the acting subject’s perspective, the louder the sound was on the left audio channel and the quieter it was on the right audio channel, and vice versa). In this way, sonification recordings (1.0–1.65 s) of different reaching actions towards target positions were generated. Additionally, the position of the right metacarpophalangeal joints upon reaching the target point with the digitizing pen was transformed into sound by using the three above-mentioned positional mappings (b, c, and d). In this way, the instruction sounds were formed. The participants’ real-time sonification feedback was created in the same way.
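To make this four-dimensional mapping concrete, the sketch below (Python) shows how a single kinematic sample could be converted into MIDI-style control values. The workspace limits, the maximum speed, the helper function, and the controller assignments for volume and pan are illustrative assumptions; only the pitch range (G2–G4) and the use of Continuous Controller No. 74 for brightness are stated in the text, so this is a sketch of the mapping principle, not the authors' implementation.

```python
# Illustrative sketch of the kinematic-to-sound mapping (velocity -> volume, y -> pitch,
# z -> brightness via MIDI CC 74, x -> stereo pan). Ranges below are assumed values.
import numpy as np

G2, G4 = 43, 67            # MIDI note numbers spanning the two-octave pitch range
X_MIN, X_MAX = -0.3, 0.3   # assumed workspace limits of the hand along each axis (m)
Y_MIN, Y_MAX = 0.0, 0.6
Z_MIN, Z_MAX = 0.2, 0.8
V_MAX = 1.5                # assumed maximum hand speed (m/s)

def normalize(value, lo, hi):
    """Clamp and scale a value to the range [0, 1]."""
    return float(np.clip((value - lo) / (hi - lo), 0.0, 1.0))

def sonify_sample(pos_xyz, velocity):
    """Map one sample of the metacarpophalangeal joint kinematics to sound parameters."""
    x, y, z = pos_xyz
    return {
        # (a) absolute velocity -> volume (here MIDI CC 7): faster movement, louder sound
        "volume_cc7": int(127 * normalize(velocity, 0.0, V_MAX)),
        # (b) height (y-axis) -> pitch between G2 and G4: higher hand, higher pitch
        "pitch_note": int(round(G2 + (G4 - G2) * normalize(y, Y_MIN, Y_MAX))),
        # (c) depth (z-axis) -> brightness (MIDI CC 74): larger z, fewer overtones
        "cutoff_cc74": int(127 * (1.0 - normalize(z, Z_MIN, Z_MAX))),
        # (d) left-right position (x-axis) -> stereo pan (here MIDI CC 10): 0 = hard left
        "pan_cc10": int(127 * normalize(x, X_MIN, X_MAX)),
    }

# Example: a sample 10 cm left of the midline, 40 cm high, 50 cm deep, moving at 0.8 m/s
print(sonify_sample((-0.1, 0.4, 0.5), 0.8))
```

For the discrete instruction sounds, only the three positional mappings (b, c, d) of the final hand position would be used, held for the instructed duration.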

2.4. Experimental Conditions

The subjects, parallelized for sex, were randomized into two groups. While executing identical reaching tasks to invisible targets, 12 subjects were blindfolded and 12 subjects had their eyes open during the experiment. The experimental task was to hit the instructed target positions as exactly as possible. All participants received auditory instruction and feedback. There were three different conditions:
  • Continuous condition:
    • Instruction: Sonification of the model’s reaching movement towards the target positions followed by the sonification of the model’s final position for 1 s.
    • Feedback: Real-time sonification of the participant’s reaching movement followed by the sonification of the participant’s final position for 1 s.
  • Discrete short condition:
    • Instruction: Sonification of the model’s final position for 1 s.
    • Feedback: Real-time sonification of the participant’s final position for 1 s.
  • Discrete long condition:
    • Instruction: Sonification of the model’s final position for the duration of the model’s reaching movement and one more second (in total 2.0–2.65 s).
    • Feedback: Real-time sonification of the participant’s final position for 1 s.

2.5. Procedure

After a standardized familiarization protocol, the test session began without any further practice trials. The experimental protocol took approximately 60 min (not including the familiarization). A total of 108 trials were given, consisting of nine trials (nine different target positions on the tablet, see Table S1) multiplied by four conditions (one discrete short block, one discrete long block, and two continuous blocks) multiplied by three tablet positions (xy-plane, yz-plane, xz-plane). The stimuli were presented in blocks of nine trials (each target position was presented once per block in random order). Four blocks (one set of blocks) were repeated in all three tablet positions, in the same block order (balanced and randomized among the subjects) but with a varied trial order. The order of the tablet positions was also balanced and randomized among the subjects. Between the three sets of blocks there were breaks of 5 min. The trials were presented to the participants in a standardized setting. Each trial consisted of (1) reaching for and grasping the pen, (2) the sonification sequence (see Instruction-Audios S1–81) presented once via headphones, (3) the execution of the reaching movement towards the digitizing tablet, (4) putting the pen down, and (5) putting the arm on the arm rest. The overall duration of a single trial was about 15 s.
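As an illustration of this design, the following sketch generates one subject's 108-trial sequence. The labels and the simple per-subject shuffling are assumptions; the actual balancing scheme across subjects is not reproduced here.

```python
# Sketch of the randomized block design: 3 planes x 4 blocks x 9 targets = 108 trials.
# Block-order balancing across subjects is simplified to a per-subject shuffle.
import random

PLANES = ["xy", "yz", "xz"]
BLOCKS = ["discrete_short", "discrete_long", "continuous_1", "continuous_2"]
TARGETS = list(range(1, 10))  # nine target positions on the tablet

def build_session(seed):
    rng = random.Random(seed)
    plane_order = rng.sample(PLANES, k=len(PLANES))   # randomized plane order
    block_order = rng.sample(BLOCKS, k=len(BLOCKS))   # same block order in every plane
    trials = []
    for plane in plane_order:
        for block in block_order:
            # each target appears once per block, in a freshly randomized order
            for target in rng.sample(TARGETS, k=len(TARGETS)):
                trials.append((plane, block, target))
    return trials

session = build_session(seed=42)
assert len(session) == 108
```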

2.6. Data Acquisition and Analysis

The two-dimensional Cartesian coordinates of the reaching point on the digitizing tablet were measured for each trial. The tablet sampled the x and y coordinates of the stylus hit with a spatial accuracy of 0.005 mm. The collected data were stored for later analyses. The reaching error was represented by the vector from the target to the reaching position. The absolute values of the single vector coordinates describe the magnitudes of the absolute deviations along the three directions in space (termed ‘Absolute differences’). The magnitude of the vector represents the distance between the target and the reaching position (termed ‘Absolute distance’). t-tests and repeated-measures analyses of variance were used to compute the significance of the deviations from the target point. For the post hoc tests, the Bonferroni test was used. A significance criterion of α = 5% was established for all results reported.
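A minimal sketch of these two error measures for a single trial is given below; the coordinates are hypothetical, and the handling of the tablet plane is simplified to generic 3D vectors.

```python
# Sketch of the reported error measures: the error vector from target to reaching
# position, its per-axis absolute differences, and its magnitude (absolute distance).
import numpy as np

def reaching_error(target_xyz, reach_xyz):
    error = np.asarray(reach_xyz, dtype=float) - np.asarray(target_xyz, dtype=float)
    absolute_differences = np.abs(error)        # deviation along each spatial direction
    absolute_distance = np.linalg.norm(error)   # Euclidean distance to the target
    return absolute_differences, absolute_distance

diffs, dist = reaching_error(target_xyz=(10.0, 5.0, 0.0), reach_xyz=(14.0, 2.0, 0.0))
print(diffs, round(dist, 2))  # [4. 3. 0.] 5.0
```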

3. Results

Figure 4 exemplifies the reaching positions of all participants for one target point. The endpoints of the reaching movements spread around the target point.
The mean absolute distance from the target point was 11.68 cm (SD: 2.46 cm) over all subjects, planes, and conditions and was therefore significantly lower than 18.60 cm, the absolute distance expected for random reaching (t(23) = −13.49, p < 0.001). Accordingly, the averaged absolute difference between reaching and target point coordinates in the three directions in space over all subjects, axes, and conditions was 7.25 cm (SD: 1.53 cm). This value is significantly lower than 11.75 cm, the absolute axial difference expected for random reaching (t(23) = −14.13, p < 0.001).
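The paper does not spell out how the chance-level reference values were derived, so the sketch below is only one plausible reconstruction: it simulates uniformly random reach endpoints on the active tablet surface relative to a hypothetical 3 × 3 target grid (the real target coordinates are listed in Table S1) and then compares hypothetical per-subject means against the reported reference with a one-sample t-test.

```python
# Sketch: Monte Carlo estimate of a chance-level absolute distance on the tablet's
# active surface (48.77 cm x 30.48 cm), plus a one-sample t-test against that reference.
# The uniform-random assumption and the placeholder target grid are ours, not the paper's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
W, H = 48.77, 30.48                                   # active tablet surface (cm)
grid = np.array([(x, y) for x in (W * 0.25, W * 0.5, W * 0.75)
                         for y in (H * 0.25, H * 0.5, H * 0.75)])  # placeholder 3x3 targets
reaches = rng.uniform([0.0, 0.0], [W, H], size=(100_000, 2))
chance = np.mean([np.linalg.norm(reaches - tgt, axis=1).mean() for tgt in grid])
print(f"simulated chance-level absolute distance ~ {chance:.2f} cm")

# Hypothetical per-subject mean distances standing in for the real data
subject_means = rng.normal(loc=11.68, scale=2.46, size=24)
t_val, p_val = stats.ttest_1samp(subject_means, popmean=18.60)
print(round(float(t_val), 2), float(p_val))
```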
This error remained stable over time: a repeated-measures ANOVA (3 sequences × 4 blocks × 9 trials, sorted chronologically) revealed no significant main effect or interaction for the absolute distance to the target point. Figure 5 shows the time course of the average absolute distance.
In the following, the absolute differences between reaching and target point coordinates in the three directions in space are considered.
In the repeated-measures ANOVA (3 axes × 2 tablet sides × 2 sonification classes × 2 blocks, sorted by condition), the main effect of “treatment” did not reach significance (F(1, 22) = 0.02, p = 0.887, ηp2 < 0.01). The blindfolded group (M = 7.30 ± 1.46 cm) did not reach significantly less accurately than the sighted group (M = 7.21 ± 1.59 cm). None of the interactions with the factor “treatment” reached significance.
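For readers who want to reproduce this type of analysis, the sketch below shows a heavily simplified version of the mixed design (between-subject factor “treatment”, a single within-subject factor “sonification class”) on synthetic data. It assumes the pingouin package is available and does not reproduce the full 3 × 2 × 2 × 2 within-subject structure or the original data.

```python
# Simplified mixed-design sketch: "treatment" (blindfolded vs. sighted) between subjects,
# "sonification" within subjects. The data frame is synthetic and only shows the layout.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
rows = []
for subject in range(24):
    treatment = "blindfolded" if subject < 12 else "sighted"
    for sonification in ("discrete", "continuous"):
        rows.append({
            "subject": subject,
            "treatment": treatment,
            "sonification": sonification,
            "abs_difference_cm": rng.normal(7.25, 1.5),  # placeholder values
        })
df = pd.DataFrame(rows)

aov = pg.mixed_anova(data=df, dv="abs_difference_cm", within="sonification",
                     subject="subject", between="treatment")
print(aov)
```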
There was a significant main effect of “axis” (F(2, 44) = 8.92, p = 0.001, ηp2 = 0.29). The differences between the longitudinal axis and the transverse axis (p = 0.022) as well as between the longitudinal axis and the sagittal axis (p < 0.001) were significant, whereas the difference in reaching precision between the transverse axis and the sagittal axis was not (p = 0.580) (see Figure 6). As the three axes were acoustically represented by three different sound parameters, this result cannot be interpreted purely in terms of spatial orientation.
There was also a significant main effect of “sonification class” (F(1, 22) = 6.81, p = 0.016, ηp2 = 0.24), with better reaching performance in conditions with merely discrete instruction and feedback than in conditions with additional continuous instruction and feedback, as depicted in Figure 7.
Regarding interactions with the factor “sonification class”, a significant “axis” × “sonification class” interaction (F(2, 44) = 5.47, p = 0.008, ηp2 = 0.20) was revealed. The difference in reaching precision between discrete and additional continuous auditory feedback was significant only for the longitudinal axis (p < 0.001). Figure 8 shows the interaction pattern.
The main effect of “block” (F(1, 22) < 0.01, p = 0.955, ηp2 < 0.01) did not reach significance. The reaching performance in the discrete blocks in which both instruction and feedback took 1 s did not differ from that in the discrete blocks in which instruction and feedback took as long as the continuous acoustic information did in the blocks with additional continuous information about the reaching movement.

4. Discussion

The present work investigated the efficacy of merely acoustic instruction and feedback for reaching movements of healthy subjects and the role of sight. We tested whether the sonification device developed by our work group is appropriate to represent reaching space merely acoustically. The participants were instructed to execute reaching movements to invisible, solely acoustically coded targets positioned in the frontal, the transverse, and the sagittal plane in front of them. Differences in the detail of the acoustic information shaped the experimental conditions. One of the two experimental groups was blindfolded.
Reaching is originally a visual action, but in this study, it was transformed into an auditory action. The mean absolute distance from the target point over all participants and measuring points was significantly lower than expected for random reaching. It can be concluded that it is possible to instruct action areas merely acoustically. Although the participants were not informed about how the sound was composed, they were able to orient themselves by the sonification. The developed sonification device thus enables the merely acoustic representation of reaching space. Given the invisibility of the target, the reaching performance, with an average absolute distance of 11.68 cm from the target point on an active surface of 48.77 cm × 30.48 cm, was surprisingly high. Kinematic movement sonification, even though it may be perceived as musical, can provide humans with information for perception and action. This result corresponds to other studies with movement sonification [8,14,27,55].
The participants did not improve over time. No temporal effect became evident, even when the blocks with continuous and discrete instruction and feedback were considered separately from those with only discrete sounds. The feedback did not seem to have any effect on the subsequent reaching performance. The reasons for the absence of a temporal effect need further investigation.
There was no difference between the two treatment groups with vs. without the possibility of visual movement control, indicating that visual perception is not required for the control of solely acoustically instructed reaching movements. Studies emphasizing the importance of sight for the precision of reaching movements work with visual targets [1,3,56,57]. In that case, vision is used to adjust the movement to the target position. However, if the target cannot be seen, vision apparently cannot be applied to improve reaching accuracy. This result indicates that we achieved a close proximity between the proprioceptive and the auditory perception of the reaching position with the chosen sound design. For an application of kinematic movement acoustics in neuromotor rehabilitation, this result could mean that the sonification device is effective by itself and that additional visual focusing is redundant.
The continuous acoustic presentation of the hand position in the condition with discrete and continuous instruction and real-time feedback was supposed to function as an auditory counterpart to the precluded continuous visual adjustment of the reaching movement, but it did not. The reaching accuracy in the experimental condition with additional continuous sonification as instruction and feedback was even impaired. A related effect was reported by Rosati et al. [34], who observed worse performance in a visual tracking task with two kinds of visual feedback than in the same task with only one kind of visual feedback. The authors explain this effect by a saturation of the visual channel, so that the additional feedback acts as a distraction rather than providing useful information [34]. Boyer et al. [6] observed a similar effect of continuous auditory real-time feedback of the hand position in reaching movements. In their above-described study, the addition of feedback in the form of movement acoustics to an otherwise feedback-free, acoustically instructed target did not improve reaching accuracy. The authors discuss whether auditory feedback about the limb position provides beneficial information about a motor action. They consider that the participants might have been confused because the auditory modality was overloaded with both the sonification of the target position and the positional movement sonification [6].
Possibly, complex acoustic sound sequences are not suitable for marking movements to discrete target points. For reaching movements, in which the target points are the focus, the movement itself might lose relevance. Another possibility is that the combination of a discrete and a continuous sonification component was responsible for the negative effect of the movement sonification. Either simply the additional cognitive load or the difference in the characteristics of the two sonification sounds might have led to an informational overload. Possibly, the combination of two different sonification characteristics (discrete vs. continuous, or “knowledge of results” vs. “knowledge of performance”) prevented the subjects from forming an audio-proprioceptive action representation. In this way, the performance-based sonification might have functioned as an interference factor that disturbed the memory of the effect-/result-based sonification. Boyer et al. [6] assume that in this case the subjects do not use the acoustic feedback information but rely on the familiar and reliable proprioceptive information. Since the difference in the absolute differences in reaching precision between the two conditions considered here was only 0.81 cm, the negative impact is not that serious.
The reported superiority of merely discrete instruction and feedback compared with the combination of a discrete and a continuous component for instruction and feedback became significant only for the longitudinal axis, but not for the transverse or the sagittal axis. Since the effects of axis alignment and sound parameter are confounded here, it cannot be decided which factor is responsible for the differences in reaching precision. We think it is more plausible that it is the coding by pitch, rather than the longitudinal alignment of the axis, that is associated with the negative impact of the movement sonification on the reaching precision. Coincidentally, the reaching precision on the longitudinal axis was significantly better than on the transverse and sagittal axes. The fact that the pitch (coding the position on the longitudinal axis) apparently transmits more information than the other two sound parameters might be the reason for the higher susceptibility to disturbance by an additional movement sonification for target positions coded by pitch.
The acoustic presentation of the invisible target positions provided appropriate information for the alignment of the reaching movements. Acoustic instruction and feedback about the course of the reaching movement was not useful. However, for an application of continuous movement acoustics to patients with sensorimotor disabilities, the prerequisites differ fundamentally. For example, stroke patients partially show dramatically reduced proprioceptive control. Here, the application of continuous movement acoustics should provide a possibility to substitute proprioceptive control in a rudimentary way and thus support recovery in neuromotor rehabilitation. The implicit effectiveness of the acoustic information and the fact that focused attention is not required should be considerable advantages over visual movement control. First steps towards the use of continuous kinematic movement sonification in stroke rehabilitation have already been realized [58,59]. Also, for patients with hip arthroplasty, kinematic movement sonification has the potential to accelerate recovery, as shown by Reh et al. [60]. These are, however, only initial steps towards the use of real-time kinematic sonification in neuromotor rehabilitation, and further research is clearly needed.
It remains to be seen whether a prospective sonification device for the neuromotor rehabilitation of proprioceptively impaired patients will remain permanently necessary, like a prosthesis, or will become redundant after the treatment. It could be assumed that after a period of being exposed to an attendant auditory perception of one’s own movements, proprioception is resensitized. Auditory information might become redundant, and internal feedback might be sufficient to sustain the acquired movement technique. The persisting learning effects following exposure to real-time movement sonification [14,61] are consistent with this assumption. Danna and Velay [12] studied this issue in two deafferented subjects but could not confirm the assumption. They discussed the reasons for the absence of a learning effect: motor learning might be impossible per se without proprioceptive feedback, and sonification might then only serve as a prosthesis on which the patients depend for the rest of their lives [12]. They counter that another explanation might be that the duration of the intervention was too short to prove a learning effect and that motor learning simply takes more time in deafferented patients, possibly because most of the brain capacity is oriented towards coping with the task [12]. That would mean that the prosthesis might become dispensable one day because the patients might have regained their motor performance with the aid of movement sonification. This would offer the prospect of a sonification-assisted recovery from proprioceptive deficits. A brain study by Ripollés et al. [62] supports the latter explanation: music-supported therapy in stroke rehabilitation results in a recovery of activation and connectivity between auditory and motor regions, accompanied by an improvement in motor function [62].
In the present experiment, the hand position was coded by pitch, stereo characteristics, and spectral composition. Except for the stereo characteristics, which have a clear zero point with a clear spatial allocation, these parameters did not provide any natural indicator of spatial position. For these parameters, meaning is only generated by the comparison of two target sounds. Moreover, because of the rather arbitrary choice of the limits of the parameter ranges, even the comparison between the target sounds at best provides information about the direction of the deviation, but not about its extent. This difficulty was compensated to some degree by the standardized familiarization protocol, in which the participants gained experience with the proportions of the action area and got to know the sonification by drawing sinuous lines. Nevertheless, it remains remarkable that the participants were able to draw information from the sound. The participants were not experienced in using movement sonification and they were not informed about the systematics of the sonification. Still, they were able to use it to align target movements in space.
The reaching precision to acoustically coded targets could possibly be enhanced further. For example, the pitch range used to generate the stimuli was chosen arbitrarily, and it must be considered that this choice might have been suboptimal. Moreover, there is no evidence that spectral composition is particularly appropriate for mapping spatial depth; possibly, a more informative sound characteristic can be found for this purpose. A follow-up study might focus on an expedient choice of the coding sound parameters and the optimal ranges of the sound parameters used in order to optimize the informational content of the alteration of the coding sound parameter.
In this study, we provide evidence that reaching space can be coded solely by artificial acoustics. The sonification device developed in these first steps seems appropriate to represent target positions acoustically. The sonification of the invisible target position of the hand as instruction enabled reaching movements with respectable reaching precision. Although the participants were not informed in detail about how the sound was composed and had no previous experience with the use of movement sonification, they were able to use the kinematic acoustics to align their reaching movements. This implicit informational effect became evident even when the possibility of additional visual control of the reaching movement was excluded. Based on these results, a considerable benefit of a future application of an improved model of the developed sonification device in the neuromotor rehabilitation of proprioceptively impaired patients can be assumed.

Supplementary Materials

The following are available online at https://www.mdpi.com/2076-3417/10/2/429/s1, Table S1: Localization of the target points, Instruction-Audios S1–81: plane, condition, point.

Author Contributions

Conceptualization, U.F., A.O.E., G.S. and D.H.; methodology, U.F. and A.O.E.; software, H.B.; validation, U.F.; formal analysis, U.F., G.S., A.O.E., D.H., and S.G.; investigation, U.F. and D.H.; resources, A.O.E.; data curation, U.F., H.B., and D.H.; writing—original draft preparation, U.F.; writing—review and editing, U.F., G.S., D.H., S.G., H.B., and A.O.E.; visualization, U.F. and S.G.; supervision, A.O.E.; project administration, A.O.E.; funding acquisition, A.O.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the European Regional Development Fund (ERDF), project number W2-80118660.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rossetti, Y.; Desmurget, M.; Prablanc, C. Vectorial coding of movement: Vision, proprioception, or both? J. Neurophysiol. 1995, 74, 457–463. [Google Scholar] [CrossRef] [PubMed]
  2. Graziano, M.S. Is reaching eye-centered, body-centered, hand-centered, or a combination? Rev. Neurosci. 2001, 12, 175–186. [Google Scholar] [CrossRef] [PubMed]
  3. Prablanc, C.; Pelisson, D.; Goodale, M.A. Visual control of reaching movements without vision of the limb. Exp. Brain Res. 1986, 62, 293–302. [Google Scholar] [CrossRef] [PubMed]
  4. Oscari, F.; Secoli, R.; Avanzini, F.; Rosati, G.; Reinkensmeyer, D.J. Substituting auditory for visual feedback to adapt to altered dynamic and kinematic environments during reaching. Exp. Brain Res. 2012, 221, 33–41. [Google Scholar] [CrossRef]
  5. Schmitz, G.; Bock, O. A Comparison of Sensorimotor Adaptation in the Visual and in the Auditory Modality. PLoS ONE 2014, 9, e107834. [Google Scholar] [CrossRef] [Green Version]
  6. Boyer, E.O.; Babayan, B.M.; Bevilacqua, F.; Noisternig, M.; Warusfel, O.; Roby-Brami, A.; Hanneton, S.; Viaud-Delmon, I. From ear to hand: The role of the auditory-motor loop in pointing to an auditory source. Front. Comput. Neurosci. 2013, 7, 26. [Google Scholar] [CrossRef] [Green Version]
  7. Schaffert, N.; Janzen, T.B.; Mattes, K.; Thaut, M.H. A review on the relationship between sound and movement in sports and rehabilitation. Front. Psychol. 2019, 10, 244. [Google Scholar] [CrossRef] [Green Version]
  8. Effenberg, A.O. Movement sonification: Effects on perception and action. IEEE Multimed. 2005, 12, 53–59. [Google Scholar] [CrossRef]
  9. Hwang, T.H.; Schmitz, G.; Klemmt, K.; Brinkop, L.; Ghai, S.; Stoica, M.; Maye, A.; Blume, H.; Effenberg, A.O. Effect-and performance-based auditory feedback on interpersonal coordination. Front. Psychol. 2018, 9, 404. [Google Scholar] [CrossRef] [Green Version]
  10. Schaffert, N.; Mattes, K.; Effenberg, A.O. An investigation of online acoustic information for elite rowers in on-water training conditions. J. Hum. Sport Exerc. 2011, 6, 392–405. [Google Scholar] [CrossRef] [Green Version]
  11. Schaffert, N.; Mattes, K. Effects of acoustic feedback training in elite-standard Para-Rowing. J. Sports Sci. 2015, 33, 411–418. [Google Scholar] [CrossRef]
  12. Danna, J.; Velay, J.L. On the auditory-proprioception substitution hypothesis: Movement sonification in two deafferented subjects learning to write new characters. Front. Neurosci. 2017, 11, 137. [Google Scholar] [CrossRef] [PubMed]
  13. Effenberg, A.O.; Schmitz, G.; Baumann, F.; Rosenhahn, B.; Kroeger, D. SoundScript-Supporting the acquisition of character writing by multisensory integration. Open Psychol. J. 2015, 8, 230–237. [Google Scholar] [CrossRef] [Green Version]
  14. Effenberg, A.O.; Fehse, U.; Schmitz, G.; Krueger, B.; Mechling, H. Movement sonification: Effects on motor learning beyond rhythmic adjustments. Front. Neurosci. 2016, 10, 219. [Google Scholar] [CrossRef] [PubMed]
  15. Sigrist, R.; Rauter, G.; Riener, R.; Wolf, P. Augmented visual, auditory, haptic, and multimodal feedback in motor learning: A review. Psychon. Bull. Rev. 2013, 20, 21–53. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Beauchamp, M.S. See me, hear me, touch me: Multisensory integration in lateral occipital-temporal cortex. Curr. Opin. Neurobiol. 2005, 15, 145–153. [Google Scholar] [CrossRef]
  17. Bidet-Caulet, A.; Voisin, J.; Bertrand, O.; Fonlupt, P. Listening to a walking human activates the temporal biological motion area. Neuroimage 2005, 28, 132–139. [Google Scholar] [CrossRef]
  18. Scheef, L.; Boecker, H.; Daamen, M.; Fehse, U.; Landsberg, M.W.; Granath, D.O.; Mechling, H.; Effenberg, A.O. Multimodal motion processing in area V5/MT: Evidence from an artificial class of audio-visual events. Brain Res. 2009, 1252, 94–104. [Google Scholar] [CrossRef]
  19. Bangert, M.; Altenmüller, E.O. Mapping perception to action in piano practice: A longitudinal DC-EEG study. BMC Neurosci. 2003, 4, 26. [Google Scholar] [CrossRef] [Green Version]
  20. Gazzola, V.; Aziz-Zadeh, L.; Keysers, C. Empathy and the somatotopic auditory mirror system in humans. Curr. Biol. 2006, 16, 1824–1829. [Google Scholar] [CrossRef] [Green Version]
  21. Kaplan, J.T.; Iacoboni, M. Multimodal action representation in human left ventral premotor cortex. Cogn. Process. 2007, 8, 103–113. [Google Scholar] [CrossRef] [PubMed]
  22. Lahav, A.; Saltzman, E.; Schlaug, G. Action representation of sound: Audiomotor recognition network while listening to newly acquired actions. J. Neurosci. 2007, 27, 308–314. [Google Scholar] [CrossRef] [PubMed]
  23. Schmitz, G.; Mohammadi, B.; Hammer, A.; Heldmann, M.; Samii, A.; Münte, T.F.; Effenberg, A.O. Observation of sonified movements engages a basal ganglia frontocortical network. BMC Neurosci. 2013, 14, 32. [Google Scholar] [CrossRef] [Green Version]
  24. Murgia, M.; Hohmann, T.; Galmonte, A.; Raab, M.; Agostini, T. Recognising one’s own motor actions through sound: The role of temporal factors. Perception 2012, 41, 976–987. [Google Scholar] [CrossRef]
  25. Young, W.; Rodger, M.; Craig, C.M. Perceiving and reenacting spatiotemporal characteristics of walking sounds. J. Exp. Psychol. Hum. Percept. Perform. 2013, 39, 464–476. [Google Scholar] [CrossRef]
  26. Agostini, T.; Righi, G.; Galmonte, A.; Bruno, P. The relevance of auditory information in optimizing hammer throwers performance. In Biomechanics and Sports; Pascolo, P.B., Ed.; Springer: Vienna, Austria, 2004; pp. 67–74. [Google Scholar]
  27. Vinken, P.M.; Kröger, D.; Fehse, U.; Schmitz, G.; Brock, H.; Effenberg, A.O. Auditory coding of human movement kinematics. Multisens. Res. 2013, 26, 533–552. [Google Scholar] [CrossRef] [Green Version]
  28. Fehse, U.; Weber, A.; Krüger, B.; Baumann, J.; Effenberg, A.O. Intermodal Action Identification. Presented at the International Conference on Multisensory Motor Behavior: Impact of Sound [Online], Hanover, Germany, 30 September–1 October 2013; p. 4. Available online: http://www.sonification-online.com/wp-content/uploads/2013/12/Postersession_neu.pdf (accessed on 12 October 2019).
  29. Auvray, M.; Hanneton, S.; O’Regan, J.K. Learning to perceive with a visuo-auditory substitution system: Localisation and object recognition with ‘The Voice’. Perception 2007, 36, 416–430. [Google Scholar] [CrossRef] [PubMed]
  30. Levy-Tzedek, S.; Hanassy, S.; Abboud, S.; Maidenbaum, S.; Amedi, A. Fast, accurate reaching movements with a visual-to-auditory sensory substitution device. Restor. Neurol. Neurosci. 2012, 30, 313–323. [Google Scholar] [CrossRef] [Green Version]
  31. Làdavas, E. Multisensory-based Approach to the recovery of unisensory deficit. Ann. N. Y. Acad. Sci. 2008, 1124, 98–110. [Google Scholar] [CrossRef]
  32. Bevilacqua, F.; Boyer, E.O.; Françoise, J.; Houix, O.; Susini, P.; Roby-Brami, A.; Hanneton, S. Sensori-motor learning with movement sonification: Perspectives from recent interdisciplinary studies. Front. Neurosci. 2016, 10, 385. [Google Scholar] [CrossRef] [Green Version]
  33. Scholz, D.S.; Wu, L.; Pirzer, J.; Schneider, J.; Rollnik, J.D.; Großbach, M.; Altenmüller, E.O. Sonification as a possible stroke rehabilitation strategy. Front. Neurosci. 2014, 8, 332. [Google Scholar] [CrossRef] [PubMed]
  34. Rosati, G.; Oscari, F.; Spagnol, S.; Avanzini, F.; Masiero, S. Effect of task-related continuous auditory feedback during learning of tracking motion exercises. J. Neuroeng. Rehabil. 2012, 9, 79. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  35. Thaut, M.H.; McIntosh, G.C.; Rice, R.R.; Miller, R.A.; Rathbun, J.; Brault, J.M. Rhythmic auditory stimulation in gait training for Parkinson’s disease patients. Mov. Disord. Off. J. Mov. Disord. Soc. 1996, 11, 193–200. [Google Scholar] [CrossRef] [PubMed]
  36. Thaut, M.H.; McIntosh, G.C.; Rice, R.R. Rhythmic facilitation of gait training in hemiparetic stroke rehabilitation. J. Neurol. Sci. 1997, 151, 207–212. [Google Scholar] [CrossRef]
  37. Thaut, M.H.; Kenyon, G.P.; Hurt, C.P.; McIntosh, G.C.; Hoemberg, V. Kinematic optimization of spatiotemporal patterns in paretic arm training with stroke patients. Neuropsychologia 2002, 40, 1073–1081. [Google Scholar] [CrossRef]
  38. Ghai, S.; Ghai, I.; Effenberg, A.O. Effect of rhythmic auditory cueing on aging gait: A systematic review and meta-analysis. Aging Dis. 2018, 9, 901–923. [Google Scholar] [CrossRef] [Green Version]
  39. Ghai, S.; Ghai, I.; Effenberg, A.O. Effect of rhythmic auditory cueing on gait in cerebral palsy: A systematic review and meta-analysis. Neuropsychiatr. Dis. Treat. 2018, 14, 43–59. [Google Scholar] [CrossRef] [Green Version]
  40. Ghai, S.; Ghai, I.; Schmitz, G.; Effenberg, A.O. Effect of rhythmic auditory cueing on parkinsonian gait: A systematic review and meta-analysis. Sci. Rep. 2018, 8, 506. [Google Scholar] [CrossRef]
  41. Thaut, M.H. The discovery of human auditory-motor entrainment and its role in the development of neurologic music therapy. Prog. Brain Res. 2015, 217, 253–266. [Google Scholar] [CrossRef]
  42. Schauer, M.; Mauritz, K.H. Musical motor feedback (MMF) in walking hemiparetic stroke patients: Randomized trials of gait improvement. Clin. Rehabil. 2003, 17, 713–722. [Google Scholar] [CrossRef]
  43. Young, W.R.; Shreve, L.; Quinn, E.J.; Craig, C.; Bronte-Stewart, H. Auditory cueing in Parkinson’s patients with freezing of gait. What matters most: Action-relevance or cue-continuity? Neuropsychologia 2016, 87, 54–62. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  44. Maulucci, R.A.; Eckhouse, R.H. A real-time auditory feedback system for retraining gait. In Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Boston, MA, USA, 30 August–3 September 2011; pp. 5199–5202. [Google Scholar] [CrossRef]
  45. Robertson, J.V.; Hoellinger, T.; Lindberg, P.; Bensmail, D.; Hanneton, S.; Roby-Brami, A. Effect of auditory feedback differs according to side of hemiparesis: A comparative pilot study. J. Neuroeng. Rehabil. 2009, 6, 45. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  46. Scholz, D.S.; Rohde, S.; Nikmaram, N.; Brückner, H.P.; Großbach, M.; Rollnik, J.D.; Altenmüller, E.O. Sonification of arm movements in stroke rehabilitation—A novel approach in neurologic music therapy. Front. Neurol. 2016, 7, 106. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  47. Rodger, M.W.; Young, W.R.; Craig, C.M. Synthesis of walking sounds for alleviating gait disturbances in Parkinson’s disease. IEEE Trans. Neural Syst. Rehabil. Eng. 2014, 22, 543–548. [Google Scholar] [CrossRef] [PubMed]
  48. Melara, R.D.; O’Brien, T.P. Interaction between synesthetically corresponding dimensions. J. Exp. Psychol. Gen. 1987, 116, 323–336. [Google Scholar] [CrossRef]
  49. Küssner, M.B.; Tidhar, D.; Prior, H.M.; Leech-Wilkinson, D. Musicians are more consistent: Gestural cross-modal mappings of pitch, loudness and tempo in real-time. Front. Psychol. 2014, 5, 789. [Google Scholar] [CrossRef] [Green Version]
  50. Online Evaluation of Amusia—Public. Available online: http://www.brams.org/amusia-public (accessed on 12 October 2019).
  51. Peretz, I.; Gosselin, N.; Tillmann, B.; Cuddy, L.L.; Gagnon, B.; Trimmer, C.G.; Paquette, S.; Bouchard, B. On-line identification of congenital amusia. Music Percept. 2008, 25, 331–343. [Google Scholar] [CrossRef] [Green Version]
  52. Fong, D.T.P.; Chan, Y.Y. The use of wearable inertial motion sensors in human lower limb biomechanics studies: A systematic review. Sensors 2010, 10, 11556–11565. [Google Scholar] [CrossRef] [Green Version]
  53. Helten, T.; Brock, H.; Müller, M.; Seidel, H.P. Classification of trampoline jumps using inertial sensors. Sports Eng. 2011, 14, 155–164. [Google Scholar] [CrossRef]
  54. Brock, H.; Schmitz, G.; Baumann, J.; Effenberg, A.O. If motion sounds: Movement sonification based on inertial sensor data. In Procedia Engineering; Drahne, P., Sherwood, J., Eds.; Elsevier: Amsterdam, The Netherlands, 2012; Volume 34, pp. 556–561. [Google Scholar] [CrossRef] [Green Version]
  55. Effenberg, A.O.; Schmitz, G. Acceleration and deceleration at constant speed: Systematic modulation of motion perception by kinematic sonification. Ann. N. Y. Acad. Sci. 2018, 1425, 52–69. [Google Scholar] [CrossRef]
  56. Ghez, C.; Gordon, J.; Ghilardi, M.F. Impairments of reaching movements in patients without proprioception. II. Effects of visual information on accuracy. J. Neurophysiol. 1995, 73, 361–372. [Google Scholar] [CrossRef] [PubMed]
  57. Graziano, M.S. Where is my arm? The relative role of vision and proprioception in the neuronal representation of limb position. Proc. Natl. Acad. Sci. USA 1999, 96, 10418–10421. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  58. Schmitz, G.; Kroeger, D.; Effenberg, A.O. A mobile sonification system for stroke rehabilitation. In Proceedings of the Presented at the 20th International Conference on Auditory Display, New York, NY, USA, 22–25 June 2014. [Google Scholar] [CrossRef]
  59. Schmitz, G.; Bergmann, J.; Effenberg, A.O.; Krewer, C.; Hwang, T.H.; Müller, F. Movement sonification in stroke rehabilitation. Front. Neurol. 2018, 9, 389. [Google Scholar] [CrossRef] [PubMed]
  60. Reh, J.; Hwang, T.H.; Michalke, V.; Effenberg, A.O. Instruction and real-time sonification for gait rehabilitation after unilateral hip arthroplasty. In Proceedings of the Human Movement and Technology, Book of Abstracts—11th Joint Conference on Motor Control and Learning, Biomechanics and Training, Darmstadt, Germany, 28–30 September 2016; Wiemeyer, J., Seyfarth, A., Kollegger, G., Tokur, D., Schumacher, C., Hoffmann, K., Schöberl, D., Eds.; Shaker: Aachen, Germany, 2016. [Google Scholar]
  61. Baudry, L.; Leroy, D.; Thouvarecq, R.; Chollet, D. Auditory concurrent feedback benefits on the circle performed in gymnastics. J. Sports Sci. 2006, 24, 149–156. [Google Scholar] [CrossRef]
  62. Ripollés, P.; Rojo, N.; Grau-Sánchez, J.; Amengual, J.L.; Càmara, E.; Marco-Pallarés, J.; Juncadella, M.; Vaquero, L.; Rubio, F.; Duarte, E.; et al. Music supported therapy promotes motor plasticity in individuals with chronic stroke. Brain Imaging Behav. 2016, 10, 1289–1307. [Google Scholar] [CrossRef]
Figure 1. The tablet positions in the three used planes. X-axis is later called “transverse axis”, y-axis is later called “longitudinal axis” and z-axis is later called “sagittal axis”. The subject sits at the front edge of the table, looking in the direction of the z-axis.
Figure 2. The experimental arrangement for the measurements in the yz-plane.
Figure 3. The originally invisible target point positions on the digitizing tablet. The lengths of the axes correspond to the dimensions of the digitizing tablet.
Figure 4. Reaching results for one target point in the xy-plane. Target point: black; the participants’ pointing results: grey.
Figure 5. Time course of average absolute distance for the 12 blocks.
Figure 6. Means and standard deviations of absolute differences for the three axes. The longitudinal axis corresponds to the y-axis, the transverse axis to the x-axis, and the sagittal axis to the z-axis in Figure 1.
Figure 7. Means and standard deviations of absolute differences for the two sonification classes for instruction and feedback.
Figure 8. Means of absolute differences for the two sonification classes considered separately for the three axes.
