Article

Manual 3D Control of an Assistive Robotic Manipulator Using Alpha Rhythms and an Auditory Menu: A Proof-of-Concept

by Ana S. Santos Cardoso, Rasmus L. Kæseler, Mads Jochumsen * and Lotte N. S. Andreasen Struijk
Center for Rehabilitation Robotics, Department of Health Science and Technology, Aalborg University, 9220 Aalborg, Denmark
* Author to whom correspondence should be addressed.
Signals 2022, 3(2), 396-409; https://doi.org/10.3390/signals3020024
Submission received: 31 March 2022 / Revised: 12 May 2022 / Accepted: 8 June 2022 / Published: 16 June 2022
(This article belongs to the Special Issue Advancing Signal Processing and Analytics of EEG Signals)

Abstract
Brain–Computer Interfaces (BCIs) have been regarded as potential tools for individuals with severe motor disabilities, such as those caused by amyotrophic lateral sclerosis, which render movement-reliant interfaces unusable. This study aims to develop a dependent BCI system for manual end-point control of a robotic arm. A proof-of-concept system was devised using parieto-occipital alpha wave modulation and a cyclic menu with auditory cues. Users choose a movement to be executed and asynchronously stop said action when necessary. Tolerance intervals allowed users to cancel or confirm actions. Eight able-bodied subjects used the system to perform a pick-and-place task. To investigate potential learning effects, the experiment was conducted twice over the course of two consecutive days. Subjects obtained satisfactory completion rates (84.0 ± 15.0% and 74.4 ± 34.5% for the first and second day, respectively) and high path efficiency (88.9 ± 11.7% and 92.2 ± 9.6%). Subjects took on average 439.7 ± 203.3 s to complete each task, but the robot was only in motion 10% of the time. There was no significant difference in performance between the two days. The developed control scheme provided users with intuitive control, but a considerable amount of time is spent waiting for the right target (auditory cue). Implementing other brain signals may increase its speed.

1. Introduction

Individuals with tetraplegia resulting from, e.g., amyotrophic lateral sclerosis (ALS), experience progressive muscle weakness and loss of motor function. As a result, individuals with ALS may suffer extensive physical [1,2] and emotional distress [3,4] and must rely on caregivers for simple daily tasks. Assistive technologies (ATs) therefore seek to enhance the quality of life of individuals with severe physical disabilities by improving or substituting motor functions, consequently boosting their autonomy [5,6,7]. However, as ALS progresses to its late stages, most commercially available interfaces, such as joystick controllers [8,9], tongue-based systems [10,11], and eye-gaze-operated devices [12,13], become unusable. Brain–Computer Interfaces (BCIs) bypass the lack of motor function, as they translate brain activity directly into control commands for the assistive device [14]. Electroencephalography (EEG) is a non-invasive method for recording brain activity, commonly used in BCI development given its high temporal resolution and relatively low cost compared with other techniques.
Within the scope of AT, BCI systems have been employed in devices for assisted communication, computer cursor control and web browsing, mobility, and environmental control [15,16,17,18,19]. State-of-the-art BCI-based directional control systems, such as those used to control robotic devices, often require the user to split their attention between the task and the BCI feedback on, e.g., a computer screen. This is critical in instances of environmental control, such as operating a robotic arm to pick up an object, where distractions from both the task and the surrounding environment should be minimal for safety reasons. Dual tasking also increases the user’s mental workload [20]. Systems that rely on visual stimulation (steady-state visually evoked potentials, SSVEP, and P300) often exhibit this problem, since they make the user avert their eyes from the task and require extended periods of sustained high mental workload. Visual displays can also encumber the user by blocking their field of view, and continuous strong visual stimuli may lead to fatigue and discomfort [21,22,23], reducing signal quality and impacting system performance [24]. While Motor Imagery (MI)-based systems do not share these bottlenecks, they require extensive training and are limited in the number of available targets, since their accuracy decreases as the number of classes increases [25,26]. For that reason, many MI-based systems use only a few classes [27,28,29] or implement MI as a fast brain switch to select commands from a cyclic menu [30,31].
Albeit to a lesser extent, parieto-occipital alpha waves (or the alpha rhythm) have been considered for asynchronous BCI system applications [32], given their high signal-to-noise ratio and unresponsiveness to environmental conditions. The alpha rhythm is a rhythmic pattern in EEG observed at frequencies between 8 and 12 Hz and is typically found over the occipital cortex [17]. Previous findings have demonstrated that the alpha rhythm can be suppressed by visual attention; conversely, periods of mental inactivity with closed eyes trigger an increase in alpha power [33]. Alpha band activity has been studied both in hybrid systems [34,35] and on its own for simple switch controls and directional controls, such as in Korovesis et al. [36].
Numerous studies have proposed EEG-based or hybrid BCIs for controlling assistive robotic manipulators (ARMs). Many implement manual end-point control strategies, in which the user is in direct control of the movement of the robot’s end-effector [23,28,37,38,39,40]. These control strategies are flexible and intuitive but are often unable to provide fast and precise continuous control without fatiguing the user. For that reason, other approaches using goal selection or shared control strategies have been proposed. For example, Lillo et al. [41] proposed a P300-based BCI capable of performing drinking and object manipulation tasks using commands composed of several automated actions. Indeed, many state-of-the-art approaches combine BCIs with computer vision-based object recognition to obtain fast autonomous control during target manipulation [29,42,43,44]. While these automated strategies are often faster and less fatiguing than manual end-point control, they cannot be easily implemented in unknown environments. Additionally, autonomous systems have been shown to frustrate users and reduce their sense of agency [45,46].
In this paper, we report a proof-of-concept dependent BCI using parieto-occipital alpha wave modulation. We propose a direct cartesian robotic arm control system for performing functional tasks, such as handling objects. This exploratory study seeks to overcome the distraction and visual fatigue bottlenecks by implementing auditory cues during target selection. By closing their eyes, the user may select the desired motion and switch into asynchronous continuous control. The robotic device then performs the motion continuously while the user keeps their eyes open, allowing them to focus on the movement and environment without distractions. In Ron-Angevin et al. [31], the authors propose a similar approach for controlling a wheelchair, which provided users with four navigation commands that could be selected using two mental tasks. However, very limited evidence exists about this paradigm when used in combination with continuous control, which, when used to manually control an assistive robotic device in 3D space, requires a high degree of responsiveness for fine position adjustments. This proof-of-concept study sought to investigate whether operating a cyclic auditory interface using a two-class dependent BCI provides good continuous manual end-point control of an assistive robotic arm and if there is an effect of learning across days. Moreover, the proposed interface introduces tolerance intervals, which allow users to cancel incorrect selections. The system’s robustness was tested with able-bodied volunteers, who used an ARM to pick up and displace an item. To investigate potential learning effects, experiments were conducted over the course of two consecutive days.

2. Materials and Methods

2.1. Subjects

Eight able-bodied volunteers (one woman and seven men, average age: 29.1 ± 3.4 years) participated in this study. Six of them were naïve BCI users. In total, 12 subjects were screened for the experiment. However, one chose to withdraw from the experiment, while three were deemed unable to participate as they obtained offline classification accuracies under 80%. All subjects gave their written informed consent. All procedures were accepted by the local ethical committee (N-20130081).

2.2. Data Acquisition

An OpenBCI Cyton board was used for EEG acquisition. Eight passive Ag/AgCl electrodes were placed at locations POz, PO3, PO4, PO7, PO8, O1, O2, and Oz according to the standard international 10–20 system. The reference and ground electrodes were placed on the mastoid bony surfaces behind the right and left ear, respectively. The signals were sampled at 250 Hz. The electrode–skin impedance was kept below 10 kΩ during all experiments.

2.3. System Implementation

The system was developed using the Robot Operating System (ROS Kinetic) and written in Python 3. An overview of the proposed system is illustrated in Figure 1. First, the EEG signals are measured and sent to a PC via WiFi. The system processes the acquired signal, extracting the features used for classification. A controller node operates a cyclic menu with the available commands. The menu is presented to the user primarily through audio prompts, but icons are also displayed on a monitor. All other feedback is provided via audio cues. The controller translates the classification results into the intended action. Commands are then sent to the ARM, which carries them out. The Kinova JACO (Gen2), a six-DOF ARM equipped with a three-fingered one-DOF end effector, was used in this experiment. The manipulator was set to move in cartesian space at a linear speed of 50 mm/s.
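To make the command flow concrete, the following sketch shows how a controller node could forward a selected menu command to the manipulator as a cartesian velocity. This is an illustrative sketch only, not the authors' implementation: the topic name, the use of geometry_msgs/Twist, and the command-to-velocity mapping are assumptions, and the actual Kinova driver interface may differ.

```python
# Minimal sketch (not the authors' code) of a ROS node that forwards a
# classified menu command to the manipulator as a cartesian velocity.
import rospy
from geometry_msgs.msg import Twist

SPEED = 0.05  # linear speed in m/s (50 mm/s, as stated in the paper)

# Assumed mapping from menu commands to (x, y, z) velocity directions.
COMMANDS = {
    "up":    (0.0, 0.0,  SPEED),
    "down":  (0.0, 0.0, -SPEED),
    "left":  (0.0,  SPEED, 0.0),
    "right": (0.0, -SPEED, 0.0),
    "front": ( SPEED, 0.0, 0.0),
    "back":  (-SPEED, 0.0, 0.0),
}

def publish_motion(pub, command, rate_hz=10):
    """Publish the velocity for `command` repeatedly; in the real system
    the loop would stop when a high-alpha (stop) event is detected."""
    twist = Twist()
    twist.linear.x, twist.linear.y, twist.linear.z = COMMANDS[command]
    rate = rospy.Rate(rate_hz)
    while not rospy.is_shutdown():
        pub.publish(twist)
        rate.sleep()

if __name__ == "__main__":
    rospy.init_node("arm_controller")
    # Hypothetical topic name; the Kinova driver topic may differ.
    pub = rospy.Publisher("/arm/cartesian_velocity", Twist, queue_size=1)
    publish_motion(pub, "up")  # hypothetical usage
```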

2.3.1. Signal Preprocessing

EEG data were bandpass filtered from 0.5 to 35 Hz using a 4th-order Butterworth filter. Common average referencing was applied. Alpha rhythms show high amplitudes at occipital and parietal areas; thus, only channels PO8 and PO7 are used in the rest of the pipeline. The training data were divided into non-overlapping 1-second epochs, and 80% were used for training the classifier, while 20% were used for testing (the collection of training data is outlined in Section 2.4).
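A minimal offline sketch of this preprocessing stage, assuming NumPy/SciPy, is given below. Note that the zero-phase filtering shown here is an offline convenience (an online system would use a causal filter), and the PO7/PO8 channel indices depend on the montage.

```python
# Sketch of the preprocessing described above, assuming NumPy/SciPy.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 250  # sampling rate in Hz

def preprocess(eeg, fs=FS, po7_idx=3, po8_idx=4):
    """Bandpass 0.5-35 Hz (4th-order Butterworth), common average
    reference, and keep only PO7/PO8. `eeg` is (channels, samples);
    the channel indices are montage-dependent assumptions."""
    sos = butter(4, [0.5, 35], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, eeg, axis=1)  # zero-phase (offline only)
    car = filtered - filtered.mean(axis=0, keepdims=True)  # common average
    return car[[po7_idx, po8_idx], :]

def epoch(signal, fs=FS, length_s=1.0):
    """Split a (channels, samples) array into non-overlapping epochs,
    returning shape (channels, n_epochs, samples_per_epoch)."""
    n = int(fs * length_s)
    n_epochs = signal.shape[1] // n
    return signal[:, : n_epochs * n].reshape(signal.shape[0], n_epochs, n)
```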

2.3.2. Feature Extraction

The discrete wavelet transform was used to decompose the EEG signals into multiple levels. A Daubechies 10 mother wavelet was used to perform a 4-level wavelet decomposition. Each time window was split into four details (D1–D4) and one approximation (A4). Wavelet coefficients were extracted from component D4, which encompasses the frequency range of 7.813–15.625 Hz and includes the alpha rhythm frequency band (8–12 Hz). We then used the wavelet coefficients to calculate three statistical features: the mean of the absolute value, the standard deviation, and the average power of the wavelet coefficients. Since two channels were considered for this study, a total of 6 statistical features were used. This feature extraction method was introduced by Xu and Song [47] and previously used by León et al. [48] to classify alpha activity. Features were normalized using min–max scaling.
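The feature computation could be sketched as follows using PyWavelets; interpreting the "average power" as the mean squared wavelet coefficient is our assumption.

```python
# Sketch of the wavelet features described above, using PyWavelets.
import numpy as np
import pywt

def alpha_features(epoch_2ch):
    """Compute the three statistical features from the D4 detail
    coefficients ('db10', 4-level DWT) for each of the two channels.
    `epoch_2ch` is (2, samples) at 250 Hz, so D4 spans ~7.8-15.6 Hz."""
    feats = []
    for ch in epoch_2ch:
        coeffs = pywt.wavedec(ch, "db10", level=4)  # [A4, D4, D3, D2, D1]
        d4 = coeffs[1]
        feats += [
            np.mean(np.abs(d4)),  # mean of the absolute value
            np.std(d4),           # standard deviation
            np.mean(d4 ** 2),     # average power (assumed: mean square)
        ]
    return np.array(feats)  # 6 features in total (3 per channel)
```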

2.3.3. Classification

A linear discriminant analysis classifier was trained using eight-second-long epochs of both regular/idle brain activity and increased alpha rhythms. The system records the user’s brain activity continuously in 1-second-long overlapping epochs (with a 24-millisecond overlap), checking for high alpha activity whenever necessary.
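A sketch of this classification stage, assuming scikit-learn, is shown below; the random arrays are placeholders for the features of the recorded calibration epochs, and the min-max scaling of Section 2.3.2 is folded into the pipeline.

```python
# Sketch of training and online use of the LDA classifier (scikit-learn).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 6))      # placeholder 6-feature rows
y_train = rng.integers(0, 2, size=200)   # 0 = idle, 1 = high alpha

# Min-max scaling (Section 2.3.2) followed by LDA.
clf = make_pipeline(MinMaxScaler(), LinearDiscriminantAnalysis())
clf.fit(X_train, y_train)

def high_alpha(features):
    """Classify the 6-feature vector of the most recent 1 s epoch."""
    return clf.predict(features.reshape(1, -1))[0] == 1
```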

2.3.4. Online Control Loop

The online BCI for controlling the ARM was designed to have a selection mode, relying on a discrete control command, and a self-paced control mode, offering continuous control of the robot’s motion, as seen in Figure 2.
During selection, the system cycles through the targets sequentially in 3-second intervals. Each target is highlighted on the monitor and announced through its unique audio cue. The targets correspond to the cartesian direction commands for the end effector (movement along the x-, y-, and z-axes) and a gripper state switch (open or close). Accordingly, a target’s unique audio cue is a word describing the command it pertains to (“up”, “down”, “left”, etc.). Auditory cues were included so subjects could focus on the task without needing to check the screen.
To select a command, the user must wait for its cue. Selections are triggered by high alpha activity, meaning the subject must close their eyes in time with the auditory cue. Upon selecting a target, a confirmation tone plays, and the system enters a tolerance period, which lasts 4 s. The user must open their eyes to proceed with the current selection and can henceforth attend to the robot’s movement. If the system detects the occurrence of high alpha waves during the last 2-second epoch, it will return to the selection menu. Consequently, to interrupt selections, the user must keep their eyes closed. A different tone plays when cancelling the selection. Returning to the menu at this point will not reset the cycle, leaving the user at the current target.
Upon entering asynchronous control, the robotic arm performs the chosen motion continuously for as long as the subject exhibits low alpha waves. The only exception is the gripper command, which closes or opens the fingers immediately upon selection and returns to the menu. When high-intensity alpha is detected, a stop command is issued, and the system enters the tolerance period again, during which users can decide whether to continue the motion or return to the selection menu. High alpha waves return the system to the selection menu, while low alpha waves reissue the command, continuing the current motion. Returning to the menu after entering continuous control always resets the selection menu cycle. The grip command does not include this tolerance phase.
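The selection, tolerance, and continuous-control logic described above can be condensed into a small state machine. The sketch below is a simplified rendering of the published control loop, not the authors' code: `detect_alpha`, `robot`, and `play_cue` are hypothetical stand-ins for the classifier output, the ARM interface, and the audio feedback, and the second tolerance period after a stop command is abbreviated to a comment.

```python
# Simplified sketch of the selection/tolerance/continuous-control loop.
import time

TARGETS = ["up", "down", "left", "right", "front", "back", "grip"]

def wait_for_alpha(detect_alpha, timeout):
    """Poll the classifier until high alpha is seen or timeout expires;
    timeout=None waits indefinitely. Returns True if alpha was detected."""
    start = time.time()
    while timeout is None or time.time() - start < timeout:
        if detect_alpha():
            return True
        time.sleep(0.05)
    return False

def control_loop(detect_alpha, robot, play_cue):
    """Selection mode, tolerance period, and asynchronous control."""
    i = 0
    while True:
        play_cue(TARGETS[i])                    # announce current target
        if not wait_for_alpha(detect_alpha, 3.0):
            i = (i + 1) % len(TARGETS)          # no selection: next target
            continue
        play_cue("confirm")
        if wait_for_alpha(detect_alpha, 4.0):   # tolerance period
            play_cue("cancel")
            continue                            # cancelled: stay on target
        if TARGETS[i] == "grip":
            robot.toggle_gripper()              # grip executes immediately
        else:
            robot.move(TARGETS[i])              # continuous motion until
            wait_for_alpha(detect_alpha, None)  # high alpha is detected
            robot.stop()
            # a second tolerance period (omitted here) lets the user
            # resume the motion instead of returning to the menu
        i = 0                                   # the menu cycle resets
```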

2.4. Experimental Procedure

The experimental procedure was divided into training and testing phases and lasted approximately 2 h 30 min, including rest periods. Experiments were conducted over the course of two consecutive days to investigate potential learning effects. During training, the subject’s alpha activity was first recorded to calibrate the classifier. Afterwards, the subject performed the task successfully three times, using a keyboard controller during the first two trials and the BCI system during the last trial. This acquainted them with the control loop and the ARM’s range of movement. Using the keyboard control, users could simulate high alpha detection by keeping the spacebar pressed. During the experiment, subjects were seated in a comfortable chair, facing the table where the task was to be performed. A monitor displaying the selection menu was set up to the left of the subject; to their right, the robotic arm was mounted on top of a stand at the same level as the table (see Figure 3).

2.4.1. Training Session

The training session’s purpose was to record both increased alpha activity (eyes closed) and idle brain activity (eyes open). Throughout it, the subject faces the monitor and remains still. After a 5-second preparation phase, an audio cue signals the beginning of the recording, instructing the user to fix their sight on the middle of the screen. In each trial, 10 s of brain activity is recorded for each condition. After 25 repetitions, a different audio cue plays, signaling the end of the recording. Data are extracted two seconds after each acoustic stimulus to avoid ambiguous signals derived from the subject’s reaction period (see Figure 4). The eight-second-long epochs were separated into regular brain activity and high alpha rhythms.
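Extracting the calibration epochs as described, skipping the 2-second reaction period after each cue and keeping the remaining 8 s, could look like the following sketch; `cue_samples` is a hypothetical array of cue onsets expressed in samples.

```python
# Sketch: each 10 s condition block yields one 8 s training epoch
# starting 2 s after the acoustic cue (the assumed reaction period).
import numpy as np

FS = 250

def extract_training_epochs(recording, cue_samples, fs=FS):
    """`recording` is (channels, samples); `cue_samples` holds the sample
    index of each acoustic cue. Returns (trials, channels, 8 * fs)."""
    start_offset = 2 * fs  # skip the 2 s reaction period
    length = 8 * fs        # keep the remaining 8 s of the 10 s block
    return np.stack([
        recording[:, c + start_offset : c + start_offset + length]
        for c in cue_samples
    ])
```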

2.4.2. Testing Sessions

To test the system, subjects commanded the robotic arm to pick up an empty cup from a fixed location, move it across the table, and place it at a designated spot. They could choose the path they deemed most efficient but were instructed to take the same path on every trial.
Two testing sessions were conducted each day. During a session, subjects had to perform the task as many times as possible within 30 min. Every time the arm was homed and the cup repositioned, the countdown was paused. Subjects could start a new attempt if there was still time left, even if that attempt was expected to take longer than the remaining time. A trial was successful if the subject managed to pick up the cup at the pickup point without tipping it over and place it at the drop-off marking. Failing to grasp the cup or pushing it away from the pickup point constituted a Type 1 failure, while failing to place it at the drop-off marking constituted a Type 2 failure.
The table’s surface was marked at the cup’s pickup and drop-off placements, as seen in Figure 3. The cup’s pickup point was positioned approximately 40 cm away from the ARM’s base. The drop-off point was placed approximately 40 cm to the left of the pickup point.

2.5. Performance Metrics

The system’s performance during target selection was evaluated for each subject using the Youden Index [49]. The following variables were also tallied:
- the number of voluntary selections confirmed during tolerance, i.e., a true positive followed by a true negative;
- the number of involuntary selections canceled during tolerance, i.e., a false positive followed by a true positive;
- the number of involuntary selections not canceled, i.e., a false positive followed by a false negative;
- the number of voluntary selections canceled involuntarily, i.e., a true positive followed by a false positive.
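The Youden Index combines the selection-phase sensitivity and specificity; a minimal sketch computed from the confusion counts:

```python
# Youden Index J = sensitivity + specificity - 1 [49];
# J = 1 is a perfect detector, J = 0 is chance level.
def youden_index(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1
```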
The overall system performance was also evaluated using the following criteria: the number of trials completed in two 30-min sessions, the success rate, the completion time, the path efficiency, and the average number of selections per trial. The success rate was the percentage of successful trials, while the completion time was the amount of time a subject took to complete a trial. The path efficiency was defined as the ratio between the length of the ideal path and the length of the path the subject actually took in a trial; the ideal path was taken to be the best path the subject achieved during the second day’s training session using the direction commands available through the manual controller. The average number of selections per trial can be interpreted as the average number of commands sent intentionally to the robotic arm, i.e., the number of actions performed voluntarily, per trial. Only successful trials were considered for evaluation.
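Path efficiency, as defined above, can be computed from the sampled end-effector trajectory; a sketch under that assumption is shown below. Values above 1 (above 100%), as later observed for subject 7, indicate that the executed path was shorter than the reference path.

```python
# Sketch of the path-efficiency metric from sampled xyz positions.
import numpy as np

def path_length(positions):
    """`positions` is (samples, 3): end-effector xyz over a trial."""
    return np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))

def path_efficiency(trial_positions, ideal_positions):
    """Ratio of the ideal path length to the actually traversed length."""
    return path_length(ideal_positions) / path_length(trial_positions)
```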

3. Results

As shown in Table 1, subjects took on average 7 min to complete each trial, with a standard deviation of more than 3 min. Subject 2 was the fastest, taking 4 min and 36 s per trial, while subject 8 took the longest, with an average completion time of approximately 14 min. A Mann–Whitney test indicated that there was no statistically significant difference, U(CT1 = 70, CT2 = 67) = 2479, p = 0.57, between the two days’ trial completion times. On average, the robot was in motion only 10% of the time spent completing each trial. Path efficiency remained above 80% for all subjects. Subject 7’s path efficiency score (which exceeds 100%) shows that the subject performed shorter paths during BCI control than during manual control, most likely by cutting corners, for example by grasping the top of the cup or dragging it across the table to minimize the number of selections. No significant difference between path efficiency during the first (88.9% ± 11.7%) and second (92.2% ± 9.6%) day was observed, t(135) = −1.8, p = 0.08. Subject performance for the first day is compared to that for the second day in Figure 5. Note that subject 4 was excluded from both of the preceding statistical analyses, as the subject was not able to complete any trial successfully on the second day.
Table 2 shows the number of trials completed, along with failures, and the completion rate for each subject on both days. Ideally, subjects would be able to perform 14 trials in two 30-minute sessions, considering the time taken with manual control. There was no statistically significant difference, t(7) = 0.43, p = 0.68, between the average number of successful trials on the first (8.8 ± 3.6) and second day (8.4 ± 4.9). Subjects 4 and 7 had a performance dip on the second day. Type 2 failures occurred more frequently on the first day, owing to the subjects’ inexperience with the robotic arm. Subjects obtained an overall completion rate of 83.5%. No statistically significant difference was found between the average completion rates obtained on the first (84.0% ± 15.0%) and second day (74.4% ± 34.5%), t(7) = 1.29, p = 0.24.
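For reference, the reported comparisons map onto standard tests; the sketch below uses scipy.stats with placeholder arrays rather than the actual data.

```python
# Sketch of the statistical comparisons reported above (scipy.stats).
import numpy as np
from scipy.stats import mannwhitneyu, ttest_ind, ttest_rel

# Placeholder values, not the recorded data.
times_day1 = np.array([312.5, 278.8, 324.4, 309.1])  # trial times (s)
times_day2 = np.array([305.0, 290.1, 310.2, 301.7])

# Completion times: Mann-Whitney U test over all trials.
u_stat, p_u = mannwhitneyu(times_day1, times_day2, alternative="two-sided")

# Path efficiency: unpaired t-test over trials.
pe_day1 = np.array([0.88, 0.91, 0.85, 0.90])
pe_day2 = np.array([0.93, 0.90, 0.92, 0.94])
t_pe, p_pe = ttest_ind(pe_day1, pe_day2)

# Completion rates: paired t-test over the per-subject values.
rates_day1 = np.array([0.91, 0.86, 1.00, 0.50])
rates_day2 = np.array([1.00, 0.93, 0.92, 0.00])
t_cr, p_cr = ttest_rel(rates_day1, rates_day2)
```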
The system’s performance during target selection is presented in Table 3. Only subjects 4 and 8 obtained Youden Indexes lower than 0.50. Both subjects also obtained the lowest percentages of voluntary selections. Five out of eight subjects were able to obtain a voluntary selection percentage above 70%. For subjects 4, 6, and 8, the results suggest a high false-positive rate, triggering a cascade of involuntary target selections followed by cancellations. During continuous control, erroneous stop commands were also sent often, increasing trial duration times.

4. Discussion

A simple BCI system for direct cartesian control of an ARM was developed using alpha wave modulation. All users were able to quickly learn to operate the system, and five of the eight included subjects attained reliable control, with very few mistakes during selection and fast response times during continuous control. The latter provided users with tight control of the ARM, avoiding excessive overshooting. Users relied heavily on the audible cues and checked the monitor mainly during idle periods, focusing entirely on the position of the ARM during motion.

4.1. Task Performance

Five subjects managed to obtain fast, uninterrupted continuous control. However, for subjects 4, 6, 7 (for the second day), and 8, continuous control was stilted, given frequent occurrences of false positives that interrupted movement. Subjects 4 and 8 were not able to obtain reliable control of the robotic arm due to slow continuous control and frequent erroneous target selections. During the second day, subject 4 was unable to complete a single trial. This, along with subject 7’s dip in performance during the second day, could be tied to the subjects’ mental state during the experiment, as multiple studies have linked mental fatigue, frustration, and prolonged workload to an increase in power in the alpha band, e.g., [24,50].
Success rates were high for most subjects, except for subjects 4 and 8. Type 2 failures occurred mostly during the first day because subjects would risk dropping the cup from too high up, causing it to topple over. The inability to rotate the end-effector when gripping the cup also contributed to this trend. By the second day, subjects would adapt to these hurdles by dropping the cup from a lower height or grasping it at a lower point to avoid tilting it when gripping.
It stands to reason that, with long-term usage, participants would familiarize themselves with both the robotic arm and the interface. However, we observed no statistically significant differences between the performance metrics for the first and second days. Some subjects may need more sessions to achieve better performance. Furthermore, it is important to note that any such improvement could stem either from users becoming more familiar with the robot arm or from enhanced BCI performance.
Users could perform approximately five selections per minute on average, with rates ranging from two to eight selections per minute across subjects. This speed is comparable to that of P300-based BCIs, such as those in Lillo et al. [41] and Johnson et al. [51], both capable of issuing approximately three commands per minute (about 21 s per choice). However, while in Lillo et al. [41] users are given only five unique targets, the system proposed by Johnson et al. [51] provides 16 targets. In contrast, the MI-based BCI proposed by Xu et al. [30] for continuous 2D control is remarkably faster, since its cyclic menu uses shorter times between cues. Nevertheless, all of them fall short of the performance of contemporary SSVEP-based BCIs. The SSVEP system described in Chen et al. [39] allows users to select a command from 15 possible targets in four seconds, while the one developed in Han et al. [40] provides 11 commands and takes two seconds per selection.
The BCI proposed in this study achieved an average trial completion time of 439.7 ± 203.3 s. Several past studies investigating the use of BCI for manual end-point control of robotic arms have obtained faster trial completion times. Zhu et al. [23] used a 15-target SSVEP-BCI with an EOG-based switch to control a robotic arm and complete a pick-and-place task. Subjects performed the task three times; an average completion time of 387.33 ± 112.43 s was obtained, and all subjects were able to complete the task. In Peng et al. [38], subjects were asked to complete two different paths in 3D space, including several checkpoints, using an SSVEP-controlled robotic arm with six targets, and obtained an average completion time of 174 ± 11.4 s. However, this type of approach presents a different set of limitations that our interface does not. For instance, Zhu et al. [23] noted that the flickering stimuli fatigued the users. Furthermore, both interfaces can only perform movements in increments, meaning more commands are needed to perform a task, and movement precision depends on the size of the increment used. Shorter increments allow users to perform fine adjustments but require them to issue more commands. Indeed, in Zhu et al. [23], subjects issued 68.73 ± 19.38 commands on average, far more than the 9.4 ± 1.2 reported in our study. Meng et al. [37] proposed an MI-based BCI system and performed a variety of tasks using a robotic arm, such as moving fixed targets to a shelf. Subjects, who could theoretically grasp a maximum of six targets per run, managed to grasp an average of 4.6 ± 0.9 targets. Moving a block to the shelf took on average 63.8 ± 5.1 s. While direct comparisons between the two systems are difficult, it is clear that, compared to our BCI, theirs allowed users to complete the tasks much more quickly. However, their interface only allowed movement in a two-dimensional plane, meaning the reach-and-grasp tasks were divided into four sequential steps. The robot also performed some of the movements automatically, such as returning to the starting position after grasping the object on the table. In Sharma et al. [52], a similar study with an EOG-controlled robotic arm, subjects obtained slightly faster average trial completion times (357.5 s). However, the system proposed in Sharma et al. [52] utilized rapid blinking as a stop command, which could be more fatiguing for individuals with ALS who have difficulty blinking.
The proposed BCI provides users with flexible manual end-point control of an ARM; yet, compared to interfaces with shared control [29,43] or goal selection control [39,41,44], its task throughput is much lower. Depending on the specificity of the actions provided, goal selection strategies will offer vastly different experiences to the user. Goal selection strategies that present users with specific actions, such as drinking from a glass of water, are the least flexible, as they require every operation to be set previously. In contrast, by giving users control over gross movements, while automation is used for fine adjustments, shared control interfaces provide users with a more engaging experience.

4.2. System Improvements

Tolerance periods, while allowing users to cancel involuntary selections with ease, introduced additional waiting time into the control loop, which, combined with the time spent waiting for the right target cue, slowed down command selection. To overcome this hurdle, idle times, i.e., the time spent on each target and the tolerance periods, could be reduced according to the user’s performance. Another approach could be to introduce another control class, e.g., modulated frontal alpha activity from mental arithmetic.
Three subjects became stuck in target selection loops, where the system would perform a selection erroneously, only for the subject to cancel it right after. The number of involuntary selections canceled in Table 3 shows that subjects 4, 6, and 8 experienced these loops often. This flaw can be fixed by implementing a loop limit: after a predefined number of cancelations, instead of presenting the same target again, the system would move on to the next.
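A sketch of this suggested fix follows; the threshold of two consecutive cancelations is a hypothetical choice.

```python
MAX_CANCELATIONS = 2  # hypothetical threshold

def next_target_index(current, cancel_count, n_targets=7):
    """After too many consecutive cancelations on the same target,
    advance the menu instead of re-presenting that target."""
    if cancel_count >= MAX_CANCELATIONS:
        return (current + 1) % n_targets
    return current
```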

4.3. Limitations and Future Perspectives

In the present study, we used a single BCI class, the detection of elevated alpha activity, and implemented a cyclic menu to increase the number of available commands. This approach, while simple and intuitive, has a few limitations. Adding more items to the menu greatly increases waiting times, reducing the system’s responsiveness. A possible solution would be to introduce a second control signal that could, e.g., be used to swap between different menus, increasing the number of commands without extending the time spent waiting. Another option would be to have that control signal cycle through the available targets instead, reducing idle times altogether. Future work could involve introducing EEG rhythm modulation, e.g., frontal theta, frontocentral alpha, and gamma power [53,54,55], during cognitive tasks such as mental arithmetic [56,57,58], to control the robot. Since this system uses only two electrodes, it is possible to replace the EEG cap with a headband, an inconspicuous alternative that is faster and easier for caretakers and relatives to mount. Additionally, frontally modulated alpha could still be introduced.
As mentioned in Section 2.1, three individuals who were initially screened for the study were excluded since the BCI was unable to detect adequate alpha. BCI-illiterate users who fail to achieve efficient BCI control are not uncommon; it is estimated that they make up about 15–30% of the population [59]. Further, it is well documented that both the amplitude and peak frequency of alpha rhythms vary greatly between individuals [60]. Future BCIs driven by alpha wave modulation could account for an individual’s neurophysiology by, for example, selecting subject-specific frequency bands, a method commonly used in MI-BCI [61,62,63] to optimize classification. Ultimately, users may also benefit from neurofeedback training protocols, such as the one proposed in Zhou et al. [64], which have been shown to successfully help increase alpha power.
The proposed system is contingent on the subject’s ability to blink. According to the telephone survey of people with ALS (n = 61) by Huggins et al. [65], 20% of respondents reported having some difficulty blinking, while 7% reported having no control over eye blinking movements. For these users, self-regulated slow cortical potentials or sensorimotor rhythms could be used as an alternative to alpha waves [66]. However, user training may be needed to achieve adequate BCI control; thus, alpha waves may be better suited to earlier stages of ALS, where blinking is still possible, as they allow users to familiarize themselves with the BCI and robot with an expected higher success rate. Nevertheless, tests with end-users are also needed to assess the robustness of the system.

5. Conclusions

This study entailed the development and implementation of a proof-of-concept BCI-based end-point control system for robotic arm control using auditory cues. Five of the eight subjects who participated in this study were able to obtain reliable control of the robotic arm across days while relying on auditory cues. The proposed system is intuitive, requires minimal training, and provides users with reliable continuous movement control of an ARM in 3D space. The introduction of tolerance periods allowed users to cancel involuntary selections with ease. However, considerable time is spent waiting for the menu to cycle through the commands, which detracts from its overall responsiveness, and not all subjects could produce adequate alpha rhythm modulation to operate the BCI satisfactorily. Further work must be carried out to reduce idle times during target selection, including testing the system with shorter cue and tolerance periods. Moreover, testing should be conducted with individuals with severe motor impairments—the intended end-users—and alternative control signals should be evaluated for users who cannot produce adequate alpha rhythm modulation.

Author Contributions

Conceptualization, A.S.S.C., R.L.K., M.J. and L.N.S.A.S.; methodology, A.S.S.C.; software, A.S.S.C. and R.L.K.; investigation, A.S.S.C.; resources, M.J. and L.N.S.A.S.; data curation, A.S.S.C.; writing—original draft preparation, A.S.S.C.; writing—review and editing, A.S.S.C., R.L.K., M.J. and L.N.S.A.S.; visualization, A.S.S.C.; supervision, L.N.S.A.S. and M.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the North Denmark Region Committee on Health Research Ethics (protocol number N-20130081, approved on 15 January 2019).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are openly available in FigShare at https://doi.org/10.6084/m9.figshare.20060267.v1.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BCI: Brain–Computer Interface
ALS: Amyotrophic lateral sclerosis
AT: Assistive technologies
EEG: Electroencephalography
SSVEP: Steady-state visual evoked potentials
MI: Motor Imagery
ARM: Assistive robotic manipulator
EOG: Electrooculography

References

  1. Abraham, A.; Drory, V.E. Fatigue in motor neuron diseases. Neuromuscul. Disord. 2012, 22, S198–S202. [Google Scholar] [CrossRef] [PubMed]
  2. Ramirez, C.; Piemonte, M.E.P.; Callegaro, D.; Da Silva, H.C.A. Fatigue in amyotrophic lateral sclerosis: Frequency and associated factors. Amyotroph. Lateral Scler. Off. Publ. World Fed. Neurol. Res. Group Mot. Neuron Dis. 2008, 9, 75–80. [Google Scholar] [CrossRef] [PubMed]
  3. Silva-Moraes, M.H.; Bispo-Torres, A.C.; Barouh, J.L.; Lucena, P.H.; Armani-Franceschi, G.; Dórea-Bandeira, I.; Vieira, F.; Miranda-Scippa, Â.; Quarantini, L.C.; Lucena, R.; et al. Suicidal behavior in individuals with Amyotrophic Lateral Sclerosis: A systematic review. J. Affect. Disord. 2020, 277, 688–696. [Google Scholar] [CrossRef] [PubMed]
  4. Paganoni, S.; McDonnell, E.; Schoenfeld, D.; Yu, H.; Deng, J.; Atassi, H.; Sherman, A.; Yerramilli-Rao, P.; Cudkowicz, M.; Atassi, N. Functional Decline is Associated with Hopelessness in Amyotrophic Lateral Sclerosis (ALS). J. Neurol. Neurophysiol. 2017, 8, 423. [Google Scholar] [CrossRef] [Green Version]
  5. Kübler, A. The history of BCI: From a vision for the future to real support for personhood in people with locked-in syndrome. Neuroethics 2020, 13, 163–180. [Google Scholar] [CrossRef]
  6. Eicher, C.; Kiselev, J.; Brukamp, K.; Kiemel, D.; Spittel, S.; Maier, A.; Meyer, T.; Oleimeulen, U.; Greuèl, M. Experiences with assistive technologies and devices (ATD) in patients with amyotrophic lateral sclerosis (ALS) and their caregivers. Technol. Disabil. 2019, 31, 203–215. [Google Scholar] [CrossRef]
  7. Ward, A.L.; Hammond, S.; Holsten, S.; Bravver, E.; Brooks, B.R. Power Wheelchair Use in Persons with Amyotrophic Lateral Sclerosis: Changes Over Time. Assist. Technol. 2015, 27, 238–245. [Google Scholar] [CrossRef]
  8. Basha, S.G.; Venkatesan, M. Design of joystick controlled electrical wheelchair. J. Adv. Res. Dyn. Control Syst. 2018, 10, 1990–1994. [Google Scholar]
  9. Zhang, H.; Agrawal, S.K. An Active Neck Brace Controlled by a Joystick to Assist Head Motion. IEEE Robot. Autom. Lett. 2018, 3, 37–43. [Google Scholar] [CrossRef]
  10. Andreasen Struijk, L.N.S.; Egsgaard, L.L.; Lontis, R.; Gaihede, M.; Bentsen, B. Wireless intraoral tongue control of an assistive robotic arm for individuals with tetraplegia. J. NeuroEng. Rehabil. 2017, 14, 110. [Google Scholar] [CrossRef] [Green Version]
  11. Andreasen Struijk, L.N.S.; Lontis, E.R.; Gaihede, M.; Caltenco, H.A.; Lund, M.E.; Schioeler, H.; Bentsen, B. Development and functional demonstration of a wireless intraoral inductive tongue computer interface for severely disabled persons. Disabil. Rehabil. Assist. Technol. 2017, 12, 631–640. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Rotariu, C.; Costin, H.; Bozomitu, R.G.; Petroiu-Andruseac, G.; Ursache, T.I.; Doina Cojocaru, C. New assistive technology for communicating with disabled people based on gaze interaction. In Proceedings of the 2019 E-Health and Bioengineering Conference (EHB), Iasi, Romania, 21–23 November 2019; pp. 1–4. [Google Scholar] [CrossRef]
  13. Saha, D.; Sayyed, A.Q.M.S.; Saif, A.F.M.; Shahnaz, C.; Fattah, S.A. Eye Gaze Controlled Immersive Video Navigation System for Disabled People. In Proceedings of the 2019 IEEE R10 Humanitarian Technology Conference (R10-HTC)(47129), Depok, Indonesia, 12–14 November 2019; pp. 30–35. [Google Scholar] [CrossRef]
  14. Shih, J.J.; Krusienski, D.J.; Wolpaw, J.R. Brain-computer interfaces in medicine. Mayo Clin. Proc. 2012, 87, 268–279. [Google Scholar] [CrossRef] [Green Version]
  15. Millán, J.D.R.; Rupp, R.; Mueller-Putz, G.; Murray-Smith, R.; Giugliemma, C.; Tangermann, M.; Vidaurre, C.; Cincotti, F.; Kubler, A.; Leeb, R.; et al. Combining Brain–Computer Interfaces and Assistive Technologies: State-of-the-Art and Challenges. Front. Neurosci. 2010, 4, 161. [Google Scholar] [CrossRef] [PubMed]
  16. Chaudhary, U.; Birbaumer, N.; Ramos-Murguialday, A. Brain–Computer interfaces for communication and rehabilitation. Nat. Rev. Neurol. 2016, 12, 513–525. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Nicolas-Alonso, L.F.; Gomez-Gil, J. Brain computer interfaces, a review. Sensors 2012, 12, 1211–1279. [Google Scholar] [CrossRef] [PubMed]
  18. Choi, B.; Jo, S. A Low-Cost EEG System-Based Hybrid Brain-Computer Interface for Humanoid Robot Navigation and Recognition. PLoS ONE 2013, 8, e74583. [Google Scholar] [CrossRef]
  19. Spataro, R.; Chella, A.; Allison, B.; Giardina, M.; Sorbello, R.; Tramonte, S.; Guger, C.; La Bella, V. Reaching and grasping a glass of water by locked-In ALS patients through a BCI-controlled humanoid robot. Front. Hum. Neurosci. 2017, 11, 68. [Google Scholar] [CrossRef] [Green Version]
  20. Leeb, R.; Perdikis, S.; Tonin, L.; Biasiucci, A.; Tavella, M.; Creatura, M.; Molina, A.; Al-Khodairy, A.; Carlson, T.; Millan, J.D. Transferring brain-computer interfaces beyond the laboratory: Successful application control for motor-disabled users. Artif. Intell. Med. 2013, 59, 121–132. [Google Scholar] [CrossRef] [Green Version]
  21. Dreyer, A.M.; Herrmann, C.S. Frequency-modulated steady-state visual evoked potentials: A new stimulation method for brain–computer interfaces. J. Neurosci. Methods 2015, 241, 1–9. [Google Scholar] [CrossRef]
  22. Stawicki, P.; Gembler, F.; Rezeika, A.; Volosyak, I. A Novel Hybrid Mental Spelling Application Based on Eye Tracking and SSVEP-Based BCI. Brain Sci. 2017, 7, 35. [Google Scholar] [CrossRef]
  23. Zhu, Y.; Li, Y.; Lu, J.; Li, P. A Hybrid BCI Based on SSVEP and EOG for Robotic Arm Control. Front. Neurorobot. 2020, 14, 95. [Google Scholar] [CrossRef] [PubMed]
  24. Cao, T.; Wan, F.; Wong, C.M.; da Cruz, J.N.; Hu, Y. Objective evaluation of fatigue by EEG spectral analysis in steady-state visual evoked potential-based brain-computer interfaces. Biomed. Eng. Online 2014, 13, 28. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Padfield, N.; Zabalza, J.; Zhao, H.; Masero, V.; Ren, J. EEG-Based Brain-Computer Interfaces Using Motor-Imagery: Techniques and Challenges. Sensors 2019, 19, 1423. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Scherer, R.; Vidaurre, C. Chapter 8—Motor imagery based brain–computer interfaces. In Smart Wheelchairs and Brain-Computer Interfaces; Diez, P., Ed.; Academic Press: Cambridge, MA, USA, 2018; pp. 171–195. [Google Scholar] [CrossRef]
  27. Zeng, H.; Wang, Y.; Wu, C.; Song, A.; Liu, J.; Ji, P.; Xu, B.; Zhu, L.; Li, H.; Wen, P. Closed-Loop Hybrid Gaze Brain-Machine Interface Based Robotic Arm Control with Augmented Reality Feedback. Front. Neurorobot. 2017, 11, 60. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Bousseta, R.; El Ouakouak, I.; Gharbi, M.; Regragui, F. EEG Based Brain Computer Interface for Controlling a Robot Arm Movement through Thought. IRBM 2018, 39, 129–135. [Google Scholar] [CrossRef]
  29. Xu, Y.; Ding, C.; Shu, X.; Gui, K.; Bezsudnova, Y.; Sheng, X.; Zhang, D. Shared control of a robotic arm using non-invasive brain–computer interface and computer vision guidance. Robot. Auton. Syst. 2019, 115, 121–129. [Google Scholar] [CrossRef]
  30. Xu, R.; Dosen, S.; Jiang, N.; Yao, L.; Farooq, A.; Jochumsen, M.; Mrachacz-Kersting, N.; Dremstrup, K.; Farina, D. Continuous 2D control via state-machine triggered by endogenous sensory discrimination and a fast brain switch. J. Neural Eng. 2019, 16, 056001. [Google Scholar] [CrossRef]
  31. Ron-Angevin, R.; Velasco-Álvarez, F.; Fernández-Rodríguez, A.; Díaz-Estrella, A.; Blanca-Mena, M.J.; Vizcaíno-Martín, F.J. Brain-Computer Interface application: Auditory serial interface to control a two-class motor-imagery-based wheelchair. J. NeuroEng. Rehabil. 2017, 14, 49. [Google Scholar] [CrossRef]
  32. Garrison, H.; McCullough, A.; Yu, Y.C.; Gabel, L.A. Feasibility study of EEG signals for asynchronous BCI system applications. In Proceedings of the 2015 41st Annual Northeast Biomedical Engineering Conference (NEBEC), Troy, NY, USA, 17–19 April 2015; pp. 1–2. [Google Scholar] [CrossRef]
  33. Geller, A.S.; Burke, J.F.; Sperling, M.R.; Sharan, A.D.; Litt, B.; Baltuch, G.H.; Lucas, T.H.; Kahana, M.J. Eye closure causes widespread low-frequency power increase and focal gamma attenuation in the human electrocorticogram. Clin. Neurophysiol. Off. J. Int. Fed. Clin. Neurophysiol. 2014, 125, 1764–1773. [Google Scholar] [CrossRef] [Green Version]
  34. Zhang, L.; Wu, X.; Guo, X.; Liu, J.; Zhou, B. Design and Implementation of an Asynchronous BCI System with Alpha Rhythm and SSVEP. IEEE Access 2019, 7, 146123–146143. [Google Scholar] [CrossRef]
  35. Bi, L.; He, T.; Fan, X. A driver-vehicle interface based on ERD/ERS potentials and alpha rhythm. In Proceedings of the 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), San Diego, CA, USA, 5–8 October 2014; pp. 1058–1062. [Google Scholar] [CrossRef]
  36. Korovesis, N.; Kandris, D.; Koulouras, G.; Alexandridis, A. Robot Motion Control via an EEG-Based Brain–Computer Interface by Using Neural Networks and Alpha Brainwaves. Electronics 2019, 8, 1387. [Google Scholar] [CrossRef] [Green Version]
  37. Meng, J.; Zhang, S.; Bekyo, A.; Olsoe, J.; Baxter, B.; He, B. Noninvasive Electroencephalogram Based Control of a Robotic Arm for Reach and Grasp Tasks. Sci. Rep. 2016, 6, 38565. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  38. Peng, F.; Li, M.; Zhao, S.N.; Xu, Q.; Xu, J.; Wu, H. Control of a Robotic Arm with an Optimized Common Template-Based CCA Method for SSVEP-Based BCI. Front. Neurorobot. 2022, 16, 855825. [Google Scholar] [CrossRef] [PubMed]
  39. Chen, X.; Zhao, B.; Wang, Y.; Xu, S.; Gao, X. Control of a 7-DOF Robotic Arm System With an SSVEP-Based BCI. Int. J. Neural Syst. 2018, 28, 1850018. [Google Scholar] [CrossRef]
  40. Han, X.; Lin, K.; Gao, S.; Gao, X. A novel system of SSVEP-based human–robot coordination. J. Neural Eng. 2018, 16, 016006. [Google Scholar] [CrossRef] [PubMed]
  41. Lillo, P.D.; Arrichiello, F.; Vito, D.D.; Antonelli, G. BCI-controlled assistive manipulator: Developed architecture and experimental results. IEEE Trans. Cogn. Dev. Syst. 2020, 13, 91–104. [Google Scholar] [CrossRef]
  42. Chen, X.; Zhao, B.; Wang, Y.; Gao, X. Combination of high-frequency SSVEP-based BCI and computer vision for controlling a robotic arm. J. Neural Eng. 2019, 16, 026012. [Google Scholar] [CrossRef]
  43. Xu, B.; Li, W.; Liu, D.; Zhang, K.; Miao, M.; Xu, G.; Song, A. Continuous Hybrid BCI Control for Robotic Arm Using Noninvasive Electroencephalogram, Computer Vision, and Eye Tracking. Mathematics 2022, 10, 618. [Google Scholar] [CrossRef]
  44. Ying, R.; Weisz, J.; Allen, P.K. Grasping with Your Brain: A Brain-Computer Interface for Fast Grasp Selection. In Robotics Research: Volume 1; Bicchi, A., Burgard, W., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 325–340. [Google Scholar] [CrossRef]
  45. Kim, D.; Hazlett-Knudsen, R.; Culver-Godfrey, H.; Rucks, G.; Cunningham, T.; Portee, D.; Bricout, J.; Wang, Z.; Behal, A. How Autonomy Impacts Performance and Satisfaction: Results from a Study with Spinal Cord Injured Subjects Using an Assistive Robot. IEEE Trans. Syst. Man Cybern.—Part A Syst. Hum. 2012, 42, 2–14. [Google Scholar] [CrossRef]
  46. Muelling, K.; Venkatraman, A.; Valois, J.S.; Downey, J.E.; Weiss, J.; Javdani, S.; Hebert, M.; Schwartz, A.B.; Collinger, J.L.; Bagnell, J.A. Autonomy infused teleoperation with application to brain computer interface controlled manipulation. Auton. Robot. 2017, 41, 1401–1422. [Google Scholar] [CrossRef]
  47. Xu, B.; Song, A. Pattern Recognition of Motor Imagery EEG using Wavelet Transform. J. Biomed. Sci. Eng. 2008, 1, 64–67. [Google Scholar] [CrossRef] [Green Version]
  48. León, M.; Orellana, D.; Chuquimarca, L.; Acaro, X. Study of Feature Extraction Methods for BCI Applications. In Advances in Emerging Trends and Technologies; Advances in Intelligent Systems and Computing; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; Chapter 2; pp. 13–23. [Google Scholar] [CrossRef]
  49. Youden, W.J. Index for rating diagnostic tests. Cancer 1950, 3, 32–35. [Google Scholar] [CrossRef]
  50. Käthner, I.; Wriessnegger, S.C.; Müller-Putz, G.R.; Kübler, A.; Halder, S. Effects of mental workload and fatigue on the P300, alpha and theta band power during operation of an ERP (P300) brain–computer interface. Biol. Psychol. 2014, 102, 118–129. [Google Scholar] [CrossRef] [PubMed]
  51. Johnson, G.D.; Waytowich, N.R.; Cox, D.J.; Krusienski, D.J. Extending the discrete selection capabilities of the P300 speller to goal-oriented robotic arm control. In Proceedings of the 2010 3rd IEEE RAS EMBS International Conference on Biomedical Robotics and Biomechatronics, Tokyo, Japan, 26–29 September 2010; pp. 572–575. [Google Scholar] [CrossRef]
  52. Sharma, K.; Jain, N.; Pal, P.K. Detection of eye closing/opening from EOG and its application in robotic arm control. Biocybern. Biomed. Eng. 2020, 40, 173–186. [Google Scholar] [CrossRef]
  53. Ishii, R.; Canuet, L.; Ishihara, T.; Aoki, Y.; Ikeda, S.; Hata, M.; Katsimichas, T.; Gunji, A.; Takahashi, H.; Nakahachi, T.; et al. Frontal midline theta rhythm and gamma power changes during focused attention on mental calculation: An MEG beamformer analysis. Front. Hum. Neurosci. 2014, 8, 406. [Google Scholar] [CrossRef] [Green Version]
  54. Magosso, E.; De Crescenzio, F.; Ricci, G.; Piastra, S.; Ursino, M. EEG Alpha Power Is Modulated by Attentional Changes during Cognitive Tasks and Virtual Reality Immersion. Comput. Intell. Neurosci. 2019, 2019, 7051079. [Google Scholar] [CrossRef] [Green Version]
  55. Katahira, K.; Yamazaki, Y.; Yamaoka, C.; Ozaki, H.; Nakagawa, S.; Nagata, N. EEG Correlates of the Flow State: A Combination of Increased Frontal Theta and Moderate Frontocentral Alpha Rhythm in the Mental Arithmetic Task. Front. Psychol. 2018, 9, 300. [Google Scholar] [CrossRef] [Green Version]
  56. Fatimah, B.; Javali, A.; Ansar, H.; Harshitha, B.G.; Kumar, H. Mental Arithmetic Task Classification using Fourier Decomposition Method. In Proceedings of the 2020 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 28–30 July 2020; pp. 0046–0050. [Google Scholar] [CrossRef]
  57. So, W.K.Y.; Wong, S.W.H.; Mak, J.N.; Chan, R.H.M. An evaluation of mental workload with frontal EEG. PLoS ONE 2017, 12, e0174949. [Google Scholar] [CrossRef]
  58. Nuamah, J.K.; Seong, Y.; Yi, S. Electroencephalography (EEG) classification of cognitive tasks based on task engagement index. In Proceedings of the 2017 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA), Savannah, GA, USA, 27–31 March 2017; pp. 1–6. [Google Scholar] [CrossRef]
  59. Becker, S.; Dhindsa, K.; Mousapour, L.; Al Dabagh, Y. BCI Illiteracy: It’s Us, Not Them. Optimizing BCIs for Individual Brains. In Proceedings of the 2022 10th International Winter Conference on Brain-Computer Interface (BCI), Gangwon-do, Korea, 21–23 February 2022; pp. 1–3. [Google Scholar] [CrossRef]
  60. Mierau, A.; Klimesch, W.; Lefebvre, J. State-dependent alpha peak frequency shifts: Experimental evidence, potential mechanisms and functional implications. Neuroscience 2017, 360, 146–154. [Google Scholar] [CrossRef]
  61. Kumar, S.; Sharma, A.; Tsunoda, T. Subject-Specific-Frequency-Band for Motor Imagery EEG Signal Recognition Based on Common Spatial Spectral Pattern. In Proceedings of the PRICAI 2019: Trends in Artificial Intelligence; Nayak, A.C., Sharma, A., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 712–722. [Google Scholar]
  62. Delisle-Rodriguez, D.; Cardoso, V.; Gurve, D.; Loterio, F.; Romero-Laiseca, M.A.; Krishnan, S.; Bastos-Filho, T. System based on subject-specific bands to recognize pedaling motor imagery: Towards a BCI for lower-limb rehabilitation. J. Neural Eng. 2019, 16, 056005. [Google Scholar] [CrossRef] [Green Version]
  63. Lazurenko, D.; Shepelev, I.; Shaposhnikov, D.; Saevskiy, A.; Kiroy, V. Discriminative Frequencies and Temporal EEG Segmentation in the Motor Imagery Classification Approach. Appl. Sci. 2022, 12, 2736. [Google Scholar] [CrossRef]
  64. Zhou, Q.; Cheng, R.; Yao, L.; Ye, X.; Xu, K. Neurofeedback Training of Alpha Relative Power Improves the Performance of Motor Imagery Brain-Computer Interface. Front. Hum. Neurosci. 2022, 16, 831995. [Google Scholar] [CrossRef] [PubMed]
  65. Huggins, J.E.; Wren, P.A.; Gruis, K.L. What would brain-computer interface users want? Opinions and priorities of potential users with amyotrophic lateral sclerosis. Amyotroph. Lateral Scler. 2011, 12, 318–324. [Google Scholar] [CrossRef] [PubMed]
  66. Ramadan, R.A.; Vasilakos, A.V. Brain computer interface: Control signals review. Neurocomputing 2017, 223, 26–44. [Google Scholar] [CrossRef]
Figure 1. Schematic of the proposed system. EEG is recorded, followed by feature extraction. The classifier detects high power alpha waves and sends the selected commands to the robotic arm while providing feedback primarily through auditory cues.
Figure 2. Control Loop Schematic. The user selects a target command from the cyclic menu. Commands are presented in the following sequence: up, down, left, right, front, back, and grip. The robot executes said command (except for “grip”) until a stop order is issued by the user.
Figure 3. Experimental Setup. (a) Illustration of ARM performing the pick-and-drop task. The end-effector moves from the starting position, P0, to grasp the cup at P1, to drop it at P2. (b) Illustration of the experimental setup’s layout.
Figure 4. Alpha activity recorded during training by electrodes in positions PO7 and PO8. The shaded epoch corresponds to the timespan the subject had their eyes open; the clear epoch relates to the moment the subject closed their eyes after the auditory cue at the 10 s mark. An increase in amplitude can be seen approximately two seconds after the cue.
Figure 5. Average trial completion time (a) and average path efficiency (b) over two days. The whiskers correspond to the standard deviation. Subject 4 was excluded as the subject was unable to complete any trial on the second day.
Table 1. Overall performance for successful trials.
| SUB | TD (s) | RMT (s) | PE (%) | AST |
|---|---|---|---|---|
| 1 | 312.5 ± 76.9 | 40.6 ± 8.3 | 84.2 ± 13.5 | 10 ± 2 |
| 2 | 278.8 ± 38.8 | 36.6 ± 2.4 | 96.1 ± 4.9 | 8 ± 0 |
| 3 | 324.4 ± 119.8 | 35.4 ± 7.4 | 86.9 ± 11.2 | 10 ± 3 |
| 4 | 677.0 ± 289.1 | 33.7 ± 3.1 | 93.0 ± 8.6 | 12 ± 4 |
| 5 | 309.1 ± 42.7 | 36.8 ± 2.9 | 87.0 ± 6.7 | 9 ± 2 |
| 6 | 415.1 ± 120.6 | 37.4 ± 2.4 | 86.2 ± 5.6 | 10 ± 2 |
| 7 | 367.7 ± 118.8 | 34.5 ± 3.1 | 101.6 ± 8.1 | 8 ± 1 |
| 8 | 833.0 ± 214.2 | 36.8 ± 4.6 | 86.2 ± 8.3 | 12 ± 3 |
| Mean ± SD | 439.7 ± 203.3 | 36.5 ± 2.1 | 90.2 ± 6.1 | 9.4 ± 1.2 |

TD = Total Duration; RMT = Robot Motion Time; PE = Path Efficiency; AST = Average Selections per Trial; SD = standard deviation.
Table 2. Number of successful and failed trials for two consecutive days.
| SUB | Day 1 Success ¹ | Day 1 Type 1 Failures | Day 1 Type 2 Failures | Day 2 Success ¹ | Day 2 Type 1 Failures | Day 2 Type 2 Failures |
|---|---|---|---|---|---|---|
| 1 | 10 (90.9) | 0 | 1 | 13 (100.0) | 0 | 0 |
| 2 | 12 (85.7) | 1 | 1 | 13 (92.9) | 1 | 0 |
| 3 | 10 (100.0) | 0 | 0 | 12 (92.3) | 0 | 1 |
| 4 | 3 (50.0) | 1 | 2 | 0 (0.0) | 7 | 0 |
| 5 | 10 (83.3) | 1 | 1 | 10 (71.4) | 4 | 0 |
| 6 | 8 (88.9) | 1 | 0 | 8 (88.9) | 0 | 1 |
| 7 | 13 (92.9) | 0 | 1 | 9 (100.0) | 0 | 0 |
| 8 | 4 (80.0) | 0 | 1 | 2 (50.0) | 2 | 0 |

¹ Values are n (%).
Table 3. System performance during target selection.
| SUB | YI | ACC | VS ¹ | VSCI ¹ | ISC ¹ | ISNC ¹ |
|---|---|---|---|---|---|---|
| 1 | 0.95 | 0.98 | 247 (94) | 0 (0) | 14 (5) | 1 (0) |
| 2 | 0.97 | 0.98 | 224 (93) | 0 (0) | 18 (7) | 0 (0) |
| 3 | 0.89 | 0.93 | 222 (74) | 5 (2) | 71 (24) | 0 (0) |
| 4 | 0.45 | 0.69 | 112 (26) | 57 (13) | 242 (55) | 28 (6) |
| 5 | 0.91 | 0.96 | 235 (91) | 1 (0) | 23 (9) | 0 (0) |
| 6 | 0.77 | 0.87 | 175 (57) | 28 (9) | 99 (32) | 3 (1) |
| 7 | 0.89 | 0.95 | 192 (82) | 2 (1) | 40 (17) | 0 (0) |
| 8 | 0.42 | 0.58 | 104 (23) | 14 (3) | 331 (72) | 8 (2) |

YI = Youden Index; ACC = Accuracy; VS = Voluntary Selections; VSCI = Voluntary Selections Canceled Involuntarily; ISC = Involuntary Selections Canceled; ISNC = Involuntary Selections Not Canceled. ¹ Values are n (%).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
