
Towards the Recognition of the Emotions of People with Visual Disabilities through Brain–Computer Interfaces

Computer Science Department, Universidad Carlos III de Madrid, Av. Universidad 30, 28911 Leganés, Madrid, Spain
Author to whom correspondence should be addressed.
Sensors 2019, 19(11), 2620;
Received: 11 April 2019 / Revised: 22 May 2019 / Accepted: 7 June 2019 / Published: 9 June 2019
(This article belongs to the Section Intelligent Sensors)


A brain–computer interface is an alternative channel for communication between people and computers, based on the acquisition and analysis of brain signals. Research in this field has focused on serving people with different types of motor, visual or auditory disabilities. Affective computing, on the other hand, studies and extracts information about the emotional state of a person in certain situations, an important aspect of human–computer interaction. In particular, this manuscript considers people with visual disabilities and their need for personalized systems that take into account their disability and the degree to which it affects them. A review of the state of the art is presented, discussing the importance of studying the emotions of people with visual disabilities and the possibility of representing those emotions through a brain–computer interface and affective computing. Finally, the authors propose a framework to study and evaluate the possibility of representing and interpreting the emotions of people with visual disabilities, in order to improve their experience with the use of technology and their integration into today’s society.

1. Introduction

In essence, a brain–computer interface (BCI) is a system that aims to read the activity of the human brain, thought to be the most complex biological system in the world [1]. Through a BCI, an individual with a disability can gain effective control over devices and computers, speech synthesizers, assistance applications and neural prostheses [2]. Current research treats brain signals as a central resource for assisting people with disabilities, analyzing them either to convert them into instructions executed by external devices or to interpret people’s emotions [3]. Assistive technologies for people with disabilities are of great interest, and there are numerous studies aimed at improving their quality of life. However, access to assistive technologies remains difficult, because they are usually focused on a single type of disability [4,5]. Most BCI research relies on mental tasks and visual-stimulation paradigms whose resulting brain signals are then analyzed [6].
This work evaluates research from the last decade (2009–2018) on the application of BCIs for people with disabilities. The main search included BCIs for people with visual or motor disabilities and the detection of affective states or emotions in people with visual disabilities. The research identifies an area that can still be explored, which is discussed throughout this manuscript.
To carry out this work, the authors followed the recommendations and guidelines for a systematic literature review described by Brereton et al. in [7] and proposed by Kitchenham et al. in [8].
In Figure 1, the different areas of interest for this research are presented: BCI, affective computing (AC) and visual disability. As shown in Figure 1, all three areas together are relevant for acquiring brain signals through a BCI in order to recognize the affective states of people with a visual disability using AC.
The following questions are a fundamental part of this research:
  • Why is it important to study the emotions of a person with a visual disability?
  • Can artificial intelligence through affective computing obtain information of interest to represent the emotions of a person with a visual disability?
The answer to these questions is addressed in the discussion section.
The most common way to identify an emotion is through facial expressions and speech, although these cues are not available in all situations; in some cases, bio-signals are required to examine the emotional state [9]. Accordingly, analyzing the emotions of a person with a visual disability through a BCI and current technological tools could improve their quality of life and integration into society.
In [10], a pilot system for communication between the brain and computer was proposed, based on evoked potentials (EP), which served as the basis for the BCI. In recent years, the study of BCI has grown exponentially [11], where the main objective was to provide a channel of communication between an individual and a computer through the analysis of brain signals.
The literature shows that considerable effort has gone into implementing BCI systems that seek to improve people’s quality of life. In [2,12], BCIs are classified into seven groups according to the neural mechanisms and recording technology used. The continuous advancement of technology and its inclusion in people’s lives is improving access to technologies and creating new forms of communication between people and things [13].
The goal of this work was defined as follows: firstly, the evaluation of BCI systems and their impact on the lives of people with a disability, and secondly, the integration of BCI and AC in the detection of emotions in people with visual disabilities. The results of the analysis are detailed in the discussion section.
The text is organized as follows: Section 2 shows a review of the technologies for the implementation of BCI systems and those related to AC and visual disabilities. Section 3 presents an analysis of related research with this work. Section 4 compares the different studies presented in Section 3, and in the Section 5, possible challenges in the research of systems that integrate BCI and AC for people with visual disabilities are presented. Finally, in Section 6, conclusions and future work are presented.

2. Perspective

In this section a general description of brain–computer interfaces, affective computing and visual disability is provided.

2.1. Brain–Computer Interface

A BCI couples the brain with a device to enable a communication channel between the brain and an externally controlled object, as described by Prasat et al. [14]. Their study implements the classification of left- and right-hand movement through a BCI.
Lebedev et al. in [15] proposed a two-fold classification of BCIs: invasive and non-invasive. The first type is implanted intracranially, at the level of the brain, with the goal of obtaining signals of the highest quality. In contrast, non-invasive BCIs, placed on the scalp, are based on electroencephalogram (EEG) recordings from the surface of the head.
In [16], Minguillon et al. determined that EEG recordings are generally contaminated with noise generated during the acquisition of the signals, which can be caused by endogenous reasons (physiological sources such as eye, muscle and/or cardiac activity) or exogenous ones (non-physiological sources, such as impedance mismatch, coupling of power lines, etc.).
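As a minimal illustration of the kind of pre-processing this line of work motivates, the sketch below applies a zero-phase band-pass filter to a synthetic EEG trace. The 1–40 Hz band and the filter order are illustrative assumptions, not the specific pipeline of [16]; real systems additionally need dedicated artifact handling (e.g., ICA for ocular activity).

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_eeg(signal, fs, low=1.0, high=40.0, order=4):
    """Zero-phase band-pass filter to suppress slow drift and
    high-frequency noise such as power-line interference.

    signal: 1-D EEG trace; fs: sampling rate in Hz.
    The 1-40 Hz band is a common but illustrative choice.
    """
    nyq = 0.5 * fs
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, signal)  # filtfilt avoids phase distortion

# Synthetic example: 10 Hz alpha-like rhythm plus 50 Hz line noise
fs = 256
t = np.arange(0, 2.0, 1.0 / fs)
clean = np.sin(2 * np.pi * 10 * t)
noisy = clean + 0.8 * np.sin(2 * np.pi * 50 * t)
filtered = bandpass_eeg(noisy, fs)
```

After filtering, the 50 Hz component is strongly attenuated while the 10 Hz rhythm in the passband is preserved.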
A method for the extraction of characteristics without noise is proposed in [17]. The results obtained by Jafarifarmand et al. demonstrate the effectiveness in the extraction of the desired EEG characteristics.
In [18], Arvaneh et al. proposed an algorithm for EEG channel selection. The proposed algorithm is formulated as an optimization problem to select the least number of channels within a constraint of classification accuracy. As such, the proposed approach can be customized to yield the best classification accuracy by removing the noisy and irrelevant channels.
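To make the idea of channel selection concrete, the following sketch keeps only the channels whose features best separate two classes, scored with a Fisher criterion. This simple filter approach is a stand-in for illustration only; Arvaneh et al. formulate selection as a constrained optimization over classification accuracy, which is more powerful than the per-channel ranking shown here.

```python
import numpy as np

def fisher_score(x, y):
    """Fisher discriminant score of one feature across two classes."""
    x0, x1 = x[y == 0], x[y == 1]
    num = (x0.mean() - x1.mean()) ** 2
    den = x0.var() + x1.var() + 1e-12
    return num / den

def select_channels(features, labels, k):
    """Keep the k channels whose features best separate the classes.

    features: (n_trials, n_channels) array, e.g., per-channel band power.
    Illustrative greedy filter, not the optimization of Arvaneh et al.
    """
    scores = np.array([fisher_score(features[:, c], labels)
                       for c in range(features.shape[1])])
    return np.argsort(scores)[::-1][:k]

# Toy data: channel 0 is discriminative, channels 1-3 are pure noise
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 4))
X[:, 0] += 2.0 * y          # inject class information into channel 0
picked = select_channels(X, y, k=2)
```

On the toy data, the informative channel 0 is always among the selected channels, mirroring the goal of discarding noisy and irrelevant channels.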
In [19], Kübler et al. exemplified the objective of a BCI, providing people a tool to interact with a computer without the need for any muscular participation.
Evoked potential stimuli are related to electrophysiological measurements of processes involved in certain cognitive functions of the brain, such as the attention given when performing an activity [20].
Evoked potentials (EP) are identified by fluctuations of the electrical potentials of the brain in a cognitive process, caused either by the occurrence of an event or by the presentation of a stimulus [21]. With reference to polarity, the components of an EP can be of two types: P (positive polarity) and N (negative polarity). The P300 is a positive EP that appears about 300 ms after the presentation of a stimulus or event, and it has proven to be one of the main BCI approaches for providing an effective communication channel [22].
The P300 evoked potential occurs after the onset of the stimulus, which can be physical, visual or auditory. EEG and EP techniques have been used to evaluate brain activity (brain functions) and sensory function; however, event-related potentials have not been used as regularly [23].
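The stimulus-locked averaging that underlies P300 analysis can be sketched in a few lines. The 250–450 ms window around the nominal 300 ms latency is an illustrative assumption; practical P300 spellers compare target versus non-target epochs with a trained classifier rather than reading off a raw peak.

```python
import numpy as np

def p300_amplitude(eeg, events, fs, window=(0.25, 0.45)):
    """Average stimulus-locked epochs and return the peak amplitude
    in the nominal P300 window.

    eeg: 1-D signal; events: sample indices of stimulus onsets; fs: Hz.
    Averaging across epochs suppresses activity not time-locked
    to the stimulus, which is what makes the P300 visible.
    """
    lo, hi = int(window[0] * fs), int(window[1] * fs)
    epochs = np.stack([eeg[e + lo:e + hi] for e in events])
    return epochs.mean(axis=0).max()

# Synthetic check: a positive deflection 300 ms after each event
fs = 200
eeg = np.zeros(fs * 10, dtype=float)
events = [fs * 1, fs * 4, fs * 7]
for e in events:
    eeg[e + int(0.3 * fs)] = 5.0   # simulated P300 peak
amp = p300_amplitude(eeg, events, fs)
```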
In [24], electrodes that require neither gel nor even direct contact with the scalp were considered for BCI practice. The study compares wet electrodes with dry and non-contact electrodes within a visual evoked potential BCI paradigm, and presents the development of a new capacitive, non-contact electrode that uses a custom integrated high-impedance analog front-end. The non-contact electrodes, which operate on top of the hair, showed 100% accuracy compared with wet electrodes.

2.2. Affective Computing

Part of human interaction involves expressing emotions, whether through speech or facial expressions [25]. AC is considered a discipline of artificial intelligence that seeks to develop computational methods for recognizing human emotions, in addition to generating artificial emotions using computers. Emotions are a psychophysical response to an external stimulus, and people express them in communication with other people [26]. In an attempt to capture the emotions of a person through a computer, and driven by the need to improve human–computer interaction, Picard [27] established the main concepts of affective computing and its relationship with people with disabilities.

2.3. Visual Disability

A visual disability is a condition that partially or totally affects the perception of images. Vision is a global sense that allows us to identify objects at a distance. A visual disability is related to visual acuity and visual field: the term is used when there is a significant decrease in visual acuity even with the use of glasses, or a significant reduction of the visual field. People with some degree of visual disability must make a greater effort to interact with the world around them and thereby achieve social inclusion [28].

3. Related Work

This section considers different research related to the implementation of BCIs for people with disabilities. The study divides the research into works focused on people with visual or motor disabilities and works that integrate BCI and AC for the detection of emotions in people with visual disabilities. Although not all of these technologies were identified together in a single investigation, they were considered separately as part of this review, because they include criteria related to the main search—BCI and disability, or BCI and affective computing.

3.1. BCI for People with a Visual Disability

In an effort to make a BCI usable for people with a visual disability, the authors of [19] adapted a BCI to auditory stimulation. The proposal consisted of presenting letters of the alphabet in a 5 × 6 matrix, where the individual must first choose the row and then the column. The results showed that the subjects performed above chance level, although spelling accuracy was significantly lower than with a visual-stimulation BCI.
A non-invasive EEG-based BCI for people with a visual disability proposes capturing moving images through a camera and converting them into visual signals for the optic nerve, as a concept for a non-invasive artificial vision system [29].
BCIs based on the visual modality have been shown to be highly effective and are widely used. However, for patients who have vision problems or lose control of their eye movements, the possibility of interacting with a vision-based BCI is limited.
Guo et al. investigated an auditory brain–computer interface based on mental responses [30]. The research proposed auditory stimuli allowing a person to mentally select a target from a random sequence of spoken digits. The reported results indicated an average accuracy of 85% with five trials.
In [31], Hinterberger et al. proposed a BCI called the “Thought Translation Device”, which operates on the voluntary response to auditory stimuli (auditory instructions) and feedback. One of its cited objectives was to provide a new tool for people with visual disabilities. For the experiment, three groups of people were trained under visual, auditory and combined visual/auditory stimulation. The results showed an average of 67% correct responses in the visual condition, 59% in the auditory condition and 57% in the combined condition. Although the results indicated that visual stimulation performed slightly better, the research concluded that a BCI with auditory stimulation could be used for communication between the brain and a person with a visual disability.
An exploratory study of the viability of an auditory BCI was presented in [32] by Nijboer et al. Sixteen healthy volunteers participated in training that consisted of thirty sessions lasting two to three minutes, in which an increase or decrease of sensorimotor rhythms was achieved. Half of the participants received visual stimuli and the other half auditory stimuli, and the affective state and motivation of the participants were evaluated in each session. The results showed that, although the performance of the visually stimulated participants was better, with enough training time an auditory BCI could be as efficient as a visual one. The viability of an auditory BCI has also been investigated in a few studies using different EEG input signals, for example P300 evoked potentials [32]. The evaluations considered here were aimed at helping people with a visual disability.
Klobassa et al. in [33] indicated that people with severe disabilities or visual limitations require auditory BCI. This research studied whether six environmental sounds were useful to operate a P300 speller. The results of the analysis showed that the participants of the experiment achieved a precision score between 50% and 75%.

3.2. BCI for People with Disabilities

The implementation of BCI systems to analyze P300 visual evoked potentials in people with motor disabilities (progressive muscular problems that produce physical disability) is still being studied.
In [22], the authors worked with a group of people and their possible interaction with a BCI based on P300. The results indicated specific technical data on the EEG channels and the frequencies for obtaining and analyzing the P300.
The case study implemented in [34] showed that a person with a severe motor disability was able to use a non-invasive BCI system to communicate messages with his family. The system is based on visual evoked potentials, taking the P300 as a reference, for the evaluation of a spelling module. The results indicate that the use of a BCI system can benefit people with severe motor disabilities.
A research study revealed the effectiveness of a BCI based on P300 for a group of people (eight individuals with severe motor disability problems and eight healthy individuals without disabilities). BCI was operated successfully by both groups of individuals and the results indicated a non-significant difference in terms of the operation of the BCI [35].
In [36], a portable non-invasive BCI was presented to move a mobile robot in a home environment and operate a virtual keyboard. The results showed two participants successfully handling a robot between several rooms, while other participants managed to write messages with a virtual keyboard. They also observed that one of the volunteers was a person with physical disabilities who suffered from spinal muscular atrophy (severe motor disability).
An experiment involving an EEG-based BCI for supporting people with disabilities is described in [37]. The BCI implements the concept of EP through the P300 and N2PC waves (the latter an evoked potential with a negative deflection in the waveform that occurs approximately 200 ms after the stimulus is presented). The authors developed three applications: the first was an internet browser, the second controlled a robotic arm and the third allowed people with severe disabilities to use basic commands related to their emotions and needs.
Motivated by the specific problems experienced by people who are paralyzed (severe motor disability), Hill et al. in [38] described an auditory-stimulation BCI tested with a group of people. The results indicated that the users modulated their brain signals in a single trial, which allowed the conscious direction of attention reliably enough to be useful in a BCI system.
The development and testing of a BCI based on the study of an EEG that was intended for use by completely paralyzed people was reported in [39]. The participants were stimulated in an auditory way. The group consisted of 13 individuals, of which the results showed a score between 76% and 96% for the task of choosing left and right. Hill et al. considered auditory EP to be a competent technique for the development of communication systems in people with disabilities.
Suwa et al. in [40] presented a new BCI paradigm that uses the P300 and P100 responses, which occur in the frontal lobe and the temporal lobe, respectively; they elicited these responses with audio stimulation in a single task. The main advantage of the designed paradigm is obtaining two different types of responses in a single EEG test task.
To improve BCI performance, Yin et al. in [41] proposed a bimodal BCI approach that simultaneously uses auditory and tactile stimuli. The proposed BCI was vision-independent, because no visual interaction from the user was required.
An invasive BCI was developed for the neurological control of a high-performance prosthesis. As reported by Collinger et al. in [42], two 96-channel intracortical microelectrodes were implanted in the motor cortex of a 52-year-old individual with tetraplegia. The participant was able to move the prosthetic limb freely in the three-dimensional workspace.
Moving a BCI from the laboratory to real-life applications still presents challenges. The objective of [43] was to integrate a mobile and wireless EEG and a signal processing platform based on a cellular phone in a portable and wireless BCI. The results of this study showed that the performance of the proposed cell phone-based platform was comparable, in terms of the rate of information transfer, with other BCI systems.

3.3. BCI for Detection of Emotions

A system of music generation according to the state of affectivity of a person was presented by Daly et al. in [44]. This proposal contemplated a BCI for acquiring an EEG for visualization and analysis of brain signals, a module for detecting the affective state of a person and a set of rules that allowed the system to generate music. Together with the BCI that detects the emotions of a person, in [45] the authors developed a system for generating music that served as a support for musical stimulation with short pieces.
In [46], the evaluation of emotions was presented using electroencephalogram (EEG) signals. Linear classifiers were used to classify discrete emotions (happiness, surprise, fear, disgust and a neutral state), with audiovisual stimulation used to evoke the emotions. The results indicate that it is possible to determine emotional changes of the human mind through EEG signals.
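The classification step of such studies can be sketched with a minimal linear classifier. The nearest-centroid rule below is a simplified stand-in under stated assumptions: it is not the specific classifier of [46], and the toy features merely mimic well-separated emotion clusters.

```python
import numpy as np

class NearestCentroid:
    """Minimal linear classifier: assign each sample to the closest
    class mean. A simplified stand-in for the linear classifiers
    (e.g., LDA) used in EEG emotion-classification studies.
    """
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.stack([X[y == c].mean(axis=0)
                                for c in self.classes_])
        return self

    def predict(self, X):
        # Squared Euclidean distance to every class centroid
        d = ((X[:, None, :] - self.means_[None, :, :]) ** 2).sum(axis=2)
        return self.classes_[np.argmin(d, axis=1)]

# Toy features: two well-separated "emotion" clusters in 3 dimensions
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (40, 3)), rng.normal(2, 0.3, (40, 3))])
y = np.repeat([0, 1], 40)
clf = NearestCentroid().fit(X, y)
acc = (clf.predict(X) == y).mean()
```

With clearly separated clusters the training accuracy is essentially perfect; real EEG features overlap far more, which is why classifier choice and feature extraction dominate reported accuracies.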
Miranda et al. [47] presented a new type of BCI: the brain–computer musical interface (BCMI). The study mentions three principal problems: extracting meaningful control information from signals emanating directly from the brain, designing generative musical techniques that respond to that information, and training subjects to use the system. A BCMI test system was implemented that used electroencephalogram information to generate music online. The authors also note that research based on a better understanding of the brain activity associated with music cognition, and the development of new tools and techniques for implementing brain-controlled generative music systems, points to a bright future for the development of BCMIs.
In [9] Hamdi et al. implemented a BCI system and a sensor that measures heart rate, to identify the six basic emotions proposed by Ekman (anger, disgust, fear, joy, sadness and surprise). The results revealed that it was possible to identify the emotional state of the person.
Khosrowabadi et al. in [48] presented an EEG-based system for the detection of emotions. The system uses a self-organized map to detect the boundaries of emotions. The characteristics of the EEG signals are classified considering the emotional responses of the subjects, using the SAM (self-assessment manikin) study and their scores. Using audiovisual stimuli, the proposed method improved accuracy to 84.5%.
An affective BCI was described in [49], based on an exploratory study of the modality of the affective response. The neurophysiological responses of 24 subjects during visual, auditory and audiovisual stimulation were analyzed. The results showed that parietal alpha power decreases during visual stimulation, while it increases during auditory stimulation.
In [50], the recognition of emotions through an EEG is described as a field of computation with problems related to the induction of emotions, feature extraction and classification. In addition, the authors present a feature extraction technique based on the concept of the mirror neuron system, adapted for the induction of emotions through the process of imitation.
In order to find the relationships between EEG signals and human emotions, in [51] Nie et al. studied brain signals, which were used to classify two types of positive and negative emotions. Results with an average test accuracy of 87.53% were obtained.
In [52], the emotional recognition of objects is mentioned as a research topic for continued work, and the authors observe that the recognition and classification of musical emotions is still difficult. They used an EEG acquired through a non-invasive BCI to analyze brain signals, and finally proposed a personalized, evidence-based model for the recognition of the emotion of music.
Studies on the relationship between emotions and musical stimuli that use an EEG are increasing. Byun et al. investigated the characteristics for the EEG pattern classifiers, related to musical stimuli [53]. Feature extraction methods were applied with a database for the analysis of emotions. For future work the authors mentioned classifying the emotional state according to the music listened to.
In [54,55], Liu and Sourina used electroencephalograms to make more intuitive interfaces. Their research included the development of different affective applications, emotional games and emotional avatars. The authors implemented a real-time emotion recognition algorithm, and the results indicated that it was able to recognize eight emotions with good precision.
In a study by Tseng et al. [56], a brain computer interface-based smart multimedia controller was proposed to select music in different situations according to the user’s physiological state.
The study conducted by Xu et al. in [56] analyzed whether the performance of an auditory BCI can be further improved by increasing the mental efforts associated with the execution of the attention-related task.
Koelstra et al. presented a database for the analysis of emotions using physiological signals, with a data set for the analysis of the affective states of a human [57]. Classification was performed on the scales of arousal, valence and liking, using features extracted from the EEG and other modalities. The results were significantly better than random classification.
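A common building block in such pipelines is the extraction of spectral band-power features per channel. The sketch below uses conventional band edges as an illustrative assumption; it is not the exact feature set of [57], where these features would be computed per channel and fed to an arousal/valence classifier.

```python
import numpy as np
from scipy.signal import welch

# Conventional (illustrative) EEG frequency bands in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs):
    """Per-band spectral power, a common EEG feature for
    valence/arousal classification.

    eeg: 1-D signal for one channel; fs: sampling rate in Hz.
    Welch's method estimates the power spectral density, which is
    then summed inside each band.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), fs * 2))
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# A pure 10 Hz oscillation should dominate the alpha band
fs = 128
t = np.arange(0, 4.0, 1.0 / fs)
powers = band_powers(np.sin(2 * np.pi * 10 * t), fs)
```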
The main search performed in this work included different studies specifically related to BCIs for the detection of emotions in people with visual disabilities. Although not all of the found technologies were integrated in a single study, they were considered as part of this review. To the best of our knowledge, the results show no evidence of the integration of BCI and AC for the detection of emotions in people with visual disabilities. For this reason, the development of new research integrating these topics of interest is proposed as an area of opportunity.

4. Results

In this section, a comparative analysis of the reviewed works is presented. A trend towards the creation of EEG-based BCI systems as a support technology for people with disabilities is observed. The combination of BCI and AC systems can also be seen, and the results of this analysis indicate that this combination is feasible. The authors also analyze general-purpose studies, that is, BCIs for experimental research and their behavior with other technologies.
The results of the analysis of the integration of support technologies for people with visual disabilities are shown in Table 1. The features considered show that the basis of these systems is a BCI and an EEG. The type of evoked-potential stimulus, visual or auditory, depends on the disability in question. Finally, AC was considered as a field that allows the identification of the affective state of a person.
Figure 2 summarizes the reviewed works by field of investigation. The results show that implementing BCI systems that detect the affective state of a person with a visual disability still requires further work.

5. Discussion

The answer to the first proposed question is discussed below: Why is it important to study the emotions of a person with a visual disability? Emotions are the way in which a person expresses their feelings—joy, anger, sadness, pleasure, etc.—before a certain situation or stimulus. However, this is difficult for individuals with a disability because they are not able to interact naturally. People with visual disabilities commonly require an intermediary that allows them to recognize and interact in the environment around them.
Affective computing has been shown to be applicable in the treatment of disorders such as autism, Asperger syndrome or depression, as well as in the recognition of stress and its mitigation. The study of affective states of a person with visual disabilities could be useful as a virtual assistant, which allows this type of person to express, recognize and interpret their emotions to improve their interaction with the environment, without the need to depend on someone else.
Although there are several ways of recognizing a person’s emotions, either through facial expressions, speech or bio-sensors, the study of brain signals by means of affective computation and a BCI is the main object of investigation of this work.
As indicated by Pantic et al. in [59], human–computer interaction should include the ability to recognize the affective states of users, to make systems more human, effective and efficient.
Regarding the second question stated in this paper: “Can affective computing obtain information of interest to represent the emotions of a person with a visual disability?”; to the best of our knowledge, the results do not show evidence of the integration of BCI and AC for detection of emotions in people with visual disabilities. However, to improve the efficiency in the interaction between the human and the computer, affective computation plays an important role; it could provide people with visual disabilities a new experience with the use of technology, through the detection of their emotions. Therefore, the authors identify that there is still a motivation to continue exploring areas that integrate affective computing, BCI systems and visual disabilities.
Based on the related research and on the results reported and analyzed, our manuscript shows that a BCI gives the opportunity for people with or without disabilities to communicate and interact with their environment through the interpretation of their brain signals. Under this approach, a BCI system widely used in the interpretation of brain signals can be based on a visual stimulus as a trigger; however, in people with visual disabilities, a BCI based on visual information is not entirely useful, which makes it necessary to move from visual stimuli to auditory stimuli in order to adjust the system to the needs of these people.
Emotions represent the affective state of a person and are expressions of mental states, given as a response to the stimuli produced in the environment. In addition, emotions influence the perception, communication and decision-making of people with or without disabilities. People with visual disabilities require auditory stimuli due to their condition. There are works related to the field of AC and BCI, which have positively reported the possibility of recognizing the affective states of a person who has been stimulated in an auditory way.
Based on the observed lack of related works and trends on the integration of BCI systems for the detection of the affective states of a person with a visual disability, the authors propose a framework to cover this gap. Our proposal is shown in Figure 3.
The modules that compose our proposal are: (1) auditory stimulation of a person with visual disabilities; (2) the use of a BCI to obtain the brain signals given by the evoked potentials; (3) an offline module to analyze the brain-signal data set; (4) the application of techniques for the extraction and classification of emotions; and finally (5) the identification of the affective state of the person with a visual disability.
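The five modules above can be sketched as a composable pipeline. All stage functions below are placeholders that a concrete system would supply: the names, signatures and trivial stand-ins are illustrative assumptions, not a published API of the proposed framework.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AffectivePipeline:
    """Skeleton of the five-module framework: stimulation, acquisition,
    offline analysis, emotion classification and interpretation.
    Every field is a user-supplied callable (hypothetical names).
    """
    stimulate: Callable   # (1) deliver an auditory stimulus
    acquire: Callable     # (2) read evoked potentials via the BCI
    analyze: Callable     # (3) offline analysis of the signal set
    classify: Callable    # (4) feature extraction + emotion classifier
    interpret: Callable   # (5) map the class label to an affective state

    def run(self, stimulus_id):
        signals = self.acquire(self.stimulate(stimulus_id))
        return self.interpret(self.classify(self.analyze(signals)))

# Wiring with trivial stand-ins, only to show the data flow
pipe = AffectivePipeline(
    stimulate=lambda s: f"tone-{s}",
    acquire=lambda stim: [0.1, 0.5, 0.2],      # fake EP samples
    analyze=lambda sig: max(sig),
    classify=lambda feat: "positive" if feat > 0.3 else "negative",
    interpret=lambda label: {"affective_state": label},
)
state = pipe.run(7)
```

Keeping each module behind a plain callable makes it straightforward to swap, for example, the offline analysis for a real-time one without touching the rest of the chain.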
There is also some research that identifies the emotions of people through physiological signals, as proposed by Healey et al. [60]. In this case, the authors mention that it is possible to recognize emotions using heart rate or muscle activity.
In [61], Hamdi et al. evaluated the emotional states of a person using human–machine interfaces and biofeedback sensors. Their work evaluated the data in real time, using a behavioral engine to allow a realistic multimodal dialogue between an embodied conversational agent and the person.
Kousarrizi et al. mentioned that detecting artifacts produced in electroencephalography data by muscle activity, eye blinks and electrical noise is a common and important problem in electroencephalography research [62]. Also, researchers and scientists must consider the needs of users during the design and testing of BCI systems [63].
In BCI systems, users explicitly manipulate their brain activity instead of using motor movements to produce signals that can be used to control computers or communication devices [64].
A BCI offers people the opportunity to increase their capacities by providing a new channel of interaction with the outside world, and is especially relevant as an aid for paralyzed people [65]. A BCI system can give people with visual disabilities or severe motor disabilities basic communication abilities, enabling them to express their wishes or emotions, to communicate and even to operate external devices [66].

6. Conclusions

The results of this review show that efforts in this area have implemented BCIs based on auditory stimulation for people with visual disabilities. Affective computing, for its part, has been used to detect emotional states in people who do not have a visual disability; however, the combination of a BCI using auditory stimuli with affective computing, for the detection of emotions in people with a visual disability, has not yet been demonstrated. Therefore, the authors consider that the use of a BCI and AC for such individuals should be evaluated, and they propose a framework architecture for integrating these areas.
A BCI provides another method of communication for those who have difficulty communicating with the outside world [67]. Researchers have used BCI technology to build systems that enable communication between the brain and the computer through brain signals.
The construction of robust and useful BCI models from accumulated biological knowledge and available data is a major challenge, as are the associated technical problems [68]. The needs of people with visual disabilities grow every day. Technology will continue to make an impact on the lives of people with visual disabilities in ways that were not possible before [69]. In this sense, future research is needed in several areas, including the development of high-performance BCI systems that allow people with special needs to perform activities of daily living [70].
Therefore, research that is functional, and not just experimental, is a priority for people with visual disabilities, as it could enable them to live a new experience with the use of technology. The priority goal is to improve their experience with technology and to promote their integration into today's society.
In future work, the authors aim to implement the proposed framework in order to test its impact on people with visual disabilities. Other future lines of research should focus on the effect of audiovisual stimulation in healthy people and of auditory stimulation in people with visual disabilities, in order to offer both groups a similar experience with the use of technology. Likewise, further research is required on the evoked-potential responses of a person with visual disabilities to auditory stimulation. It is necessary to continue evaluating the effects that occur during these types of experiments in people with visual disabilities, and to compare the results with those obtained from other types of stimuli presented to people without disabilities, with the goal of building systems that adapt to the degree of stimulation required by people with visual disabilities. Another aspect that could be evaluated is the generation of adaptive recommendation systems for people with visual disabilities, which would allow them to select audio according to their emotional state in real time. Finally, another future line is the creation of an emotional virtual assistant for people with visual disabilities that identifies their emotions according to the environment in which they interact, and offers them alternative means of communication and ways to improve their affective state.

Author Contributions

Writing—original draft preparation, J.L.L.-H., I.G.-C. and J.L.L.-C.; writing—review and editing, J.L.L.-H., I.G.-C., J.L.L.-C. and B.R.-M.; supervision and funding acquisition, I.G.-C. and B.R.-M.


Funding

This work was supported by the Consejo Nacional de Ciencia y Tecnología (CONACyT) through grant number 709656, and by the Research Program of the Ministry of Economy and Competitiveness, Government of Spain (DeepEMR project TIN2017-87548-C2-1-R).

Conflicts of Interest

The authors declare no conflicts of interest.

References
  1. Gao, S.; Wang, Y.; Gao, X.; Hong, B. Visual and auditory brain-computer interfaces. IEEE Trans. Biomed. Eng. 2014, 61, 1436–1447. [Google Scholar] [PubMed]
  2. Bashashati, A.; Fatourechi, M.; Ward, R.K.; Birch, G.E. A survey of signal processing algorithms in brain-computer interfaces based on electrical brain signals. J. Neural Eng. 2007, 4. [Google Scholar] [CrossRef] [PubMed]
  3. Domingo, M.C. An overview of the Internet of Things for people with disabilities. J. Netw. Comput. Appl. 2012, 35, 584–596. [Google Scholar] [CrossRef]
  4. Millán, J.D.R.; Rupp, R.; Müller-Putz, G.R.; Murray-Smith, R.; Giugliemma, C.; Tangermann, M.; Vidaurre, C.; Cincotti, F.; Kübler, A.; Leeb, R.; et al. Combining brain-computer interfaces and assistive technologies: State-of-the-art and challenges. Front. Neurosci. 2010, 4, 1–15. [Google Scholar] [CrossRef] [PubMed]
  5. Deng, J.; Yao, J.; Dewald, J.P.A. Classification of the intention to generate a shoulder versus elbow torque by means of a time-frequency synthesized spatial patterns BCI algorithm. J. Neural Eng. 2005, 2, 131–138. [Google Scholar] [CrossRef] [PubMed]
  6. Riccio, A.; Mattia, D.; Simione, L.; Olivetti, M.; Cincotti, F. Eye-gaze independent EEG-based brain-computer interfaces for communication. J. Neural Eng. 2012, 9. [Google Scholar] [CrossRef] [PubMed]
  7. Brereton, P.; Kitchenham, B.A.; Budgen, D.; Turner, M.; Khalil, M. Lessons from applying the systematic literature review process within the software engineering domain. J. Syst. Softw. 2007, 80, 571–583. [Google Scholar] [CrossRef]
  8. Kitchenham, B.; Charters, S. Procedures for Performing Systematic Literature Reviews in Software Engineering; Technical Report; Durham University: Durham, UK, 2007. [Google Scholar]
  9. Hamdi, H.; Richard, P.; Suteau, A.; Allain, P. Emotion assessment for affective computing based on physiological responses. In Proceedings of the 2012 IEEE International Conference on Fuzzy Systems, Brisbane, QLD, Australia, 10–15 June 2012. [Google Scholar]
  10. Vidal, J.J. Toward Direct Brain-Computer Communication. Annu. Rev. Biophys. Bioeng. 1973, 2, 157–180. [Google Scholar] [CrossRef] [PubMed]
  11. Jeanmonod, D.J.; Suzuki, K. Control of a Proportional Hydraulic System. IntechOpen 2018, 2, 64. [Google Scholar]
  12. Blankertz, B.; Curio, G.; Vaughan, T.M.; Schalk, G.; Wolpaw, J.R.; Neuper, C.; Pfurtscheller, G.; Hinterberger, T.; Birbaumer, N. The BCI Competition 2003: Progress and Perspectives in Detection and Discrimination of EEG Single Trials. IEEE Trans. Biomed. Eng. 2004, 51, 1044–1051. [Google Scholar] [CrossRef]
  13. Khan, R. Future Internet: The Internet of Things Architecture, Possible Applications and Key Challenges. In Proceedings of the 2012 10th International Conference on Frontiers of Information Technology, Islamabad, Pakistan, 17–19 December 2012; pp. 257–260. [Google Scholar]
  14. Pattnaik, P.K.; Sarraf, J. Brain Computer Interface issues on hand movement. J. King Saud. Univ. Comput. Inf. Sci. 2018, 30, 18–24. [Google Scholar] [CrossRef][Green Version]
  15. Lebedev, M.A.; Nicolelis, M.A.L. Brain-machine interfaces: past, present and future. Trends Neurosci. 2006, 29, 536–546. [Google Scholar] [CrossRef] [PubMed]
  16. Minguillon, J.; Lopez-Gordo, M.A.; Pelayo, F. Trends in EEG-BCI for daily-life: Requirements for artifact removal. Biomed. Signal Process. Control 2017, 31, 407–418. [Google Scholar] [CrossRef]
  17. Jafarifarmand, A.; Badamchizadeh, M.A. Artifacts removal in EEG signal using a new neural network enhanced adaptive filter. Neurocomputing 2013, 103, 222–231. [Google Scholar] [CrossRef]
  18. Arvaneh, M.; Guan, C.; Ang, K.K.; Quek, C. Optimizing the channel selection and classification accuracy in EEG-based BCI. IEEE Trans. Biomed. Eng. 2011, 58, 1865–1873. [Google Scholar] [CrossRef]
  19. Kübler, A.; Furdea, A.; Halder, S.; Hammer, E.M.; Nijboer, F.; Kotchoubey, B. A brain-computer interface controlled auditory event-related potential (p300) spelling system for locked-in patients. Ann. N. Y. Acad. Sci. 2009, 1157, 90–100. [Google Scholar] [CrossRef]
  20. Shukla, R.; Trivedi, J.K.; Singh, R.; Singh, Y.; Chakravorty, P. P300 event related potential in normal healthy controls of different age groups. Indian J. Psychiatry 2000, 42, 397–401. [Google Scholar]
  21. Núñez-Peña, M.I.; Corral, M.J.; Escera, C. Potenciales evocados cerebrales en el contexto de la investigación psicológica: Una actualización. Anu. Psicol. 2004, 35, 3–21. [Google Scholar]
  22. Utsumi, K.; Takano, K.; Okahara, Y.; Komori, T.; Onodera, O.; Kansaku, K. Operation of a P300-based brain-computer interface in patients with Duchenne muscular dystrophy. Sci. Rep. 2018, 8, 1753. [Google Scholar] [CrossRef]
  23. Polich, J. Clinical application of the P300 event-related brain potential. Phys. Med. Rehabil. Clin. N. Am. 2004, 15, 133–161. [Google Scholar] [CrossRef]
  24. Chi, Y.M.; Wang, Y.T.; Wang, Y.; Maier, C.; Jung, T.P.; Cauwenberghs, G. Dry and noncontact EEG sensors for mobile brain-computer interfaces. IEEE Trans. Neural Syst. Rehabil. Eng. 2012, 20, 228–235. [Google Scholar] [CrossRef] [PubMed]
  25. Terven, J.R.; Salas, J.; Raducanu, B. New Opportunities for computer vision-based assistive technology systems for the visually impaired. Computer (Long. Beach. Calif.) 2014, 47, 52–58. [Google Scholar]
  26. Tsai, T.-W.; Lo, H.Y.; Chen, K.-S. An affective computing approach to develop the game-based adaptive learning material for the elementary students. In Proceedings of the 2012 Joint International Conference on Human-Centered Computer Environments (HCCE ’12), New York, NY, USA, 8–13 March 2012. [Google Scholar]
  27. Picard, R.W. Affective Computing; MIT Press: Cambridge, MA, USA, 1997. [Google Scholar]
  28. Lobera, J.; Mondragón, V.; Contreras, B. Guía Didáctica para la Inclusión en Educación inicial y Básica; Technical Report; Secretaria de Educación Pública: México city, México, 2010.
  29. Sarwar, S.Z.; Aslam, M.S.; Manarvi, I.; Ishaque, A.; Azeem, M. Noninvasive imaging system for visually impaired people. In Proceedings of the 2010 3rd International Conference on Computer Science and Information Technology (ICCSIT 2010), Chengdu, China, 9–11 July 2010. [Google Scholar]
  30. Guo, J.; Gao, S.; Hong, B. An Auditory Brain–Computer Interface. IEEE Trans. Neural Syst. Rehabil. Eng. 2010, 18, 230–235. [Google Scholar] [PubMed]
  31. Hinterberger, T.; Hi, J.; Birbaume, N. Auditory brain-computer communication device. In Proceedings of the IEEE International Workshop on Biomedical Circuits and Systems, Singapore, 1–3 December 2004. [Google Scholar]
  32. Nijboer, F.; Furdea, A.; Gunst, I.; Mellinger, J.; McFarland, D.J.; Birbaumer, N.; Kübler, A. An auditory brain-computer interface (BCI). J. Neurosci. Methods 2008, 167, 43–50. [Google Scholar] [CrossRef] [PubMed]
  33. Klobassa, D.S.; Vaughan, T.M.; Brunner, P.; Schwartz, N.E.; Wolpaw, J.R.; Neuper, C.; Sellers, E.W. Toward a high-throughput auditory P300-based brain-computer interface. Clin. Neurophysiol. 2009, 120, 1252–1261. [Google Scholar] [CrossRef] [PubMed]
  34. Sellers, E.W.; Ryan, D.B.; Hauser, C.K. Noninvasive brain-computer interface enables communication after brainstem stroke. Sci. Transl. Med. 2014, 6. [Google Scholar] [CrossRef] [PubMed]
  35. Okahara, Y.; Takano, K.; Komori, T.; Nagao, M.; Iwadate, Y.; Kansaku, K. Operation of a P300-based brain-computer interface by patients with spinocerebellar ataxia. Clin. Neurophysiol. Pract. 2017, 2, 147–153. [Google Scholar] [CrossRef] [PubMed]
  36. Millán, J.d.R.; Renkens, F.; Mouriño, J.; Gerstner, W. Brain-actuated interaction. Artif. Intell. 2004, 159, 241–259. [Google Scholar] [CrossRef][Green Version]
  37. Sirvent Blasco, J.L.; Iáñez, E.; Úbeda, A.; Azorín, J.M. Visual evoked potential-based brain-machine interface applications to assist disabled people. Expert Syst. Appl. 2012, 39, 7908–7918. [Google Scholar] [CrossRef]
  38. Hill, N.J.; Lal, T.N.; Bierig, K.; Birbaumer, N.; Scholkopf, B. Attentional modulation of auditory event-related potentials in a brain-computer interface. In Proceedings of the IEEE International Workshop on Biomedical Circuits and Systems, Singapore, 1–3 December 2004. [Google Scholar]
  39. Hill, N.J.; Schölkopf, B. An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli. J. Neural Eng. 2012, 9, 1–23. [Google Scholar] [CrossRef]
  40. Suwa, S.; Yin, Y.; Cui, G.; Tanaka, T.; Cao, J.; Algorithm, A.E.M.D. A design method of an auditory P300 with P100 brain computer interface system. In Proceedings of the 2012 IEEE 11th International Conference on Signal Processing, Beijing, China, 21–25 October 2012. [Google Scholar]
  41. Yin, E.; Zeyl, T.; Saab, R.; Hu, D.; Zhou, Z.; Chau, T. An Auditory-Tactile Visual Saccade-Independent P300 Brain–Computer Interface. Int. J. Neural Syst. 2015, 26, 1650001. [Google Scholar] [CrossRef] [PubMed]
  42. Collinger, J.L.; Wodlinger, B.; Downey, J.E.; Wang, W.; Tyler-Kabara, E.C.; Weber, D.J.; McMorland, A.J.C.; Velliste, M.; Boninger, M.L.; Schwartz, A.B. High-performance neuroprosthetic control by an individual with tetraplegia. Lancet 2013, 381, 557–564. [Google Scholar] [CrossRef][Green Version]
  43. Wang, Y.T.; Wang, Y.; Jung, T.P. A cell-phone based brain-computer interface for communication in daily life. J. Neural Eng. 2011, 8, 025018. [Google Scholar] [CrossRef] [PubMed]
  44. Daly, I.; Williams, D.; Malik, A.; Weaver, J.; Kirke, A.; Hwang, F.; Miranda, E.; Nasuto, S.J. Personalised, Multi-modal, Affective State Detection for Hybrid Brain-Computer Music Interfacing. IEEE Trans. Affect. Comput. 2018, 3045, 1–14. [Google Scholar] [CrossRef]
  45. Williams, D.; Kirke, A.; Miranda, E.; Daly, I.; Hwang, F.; Weaver, J.; Nasuto, S. Affective calibration of musical feature sets in an emotionally intelligent music composition system. ACM Trans. Appl. Percept. 2017, 14, 1–13. [Google Scholar] [CrossRef]
  46. Murugappan, M.; Nagarajan, R.; Yaacob, S. Combining spatial filtering and wavelet transform for classifying human emotions using EEG Signals. J. Med. Biol. Eng. 2011, 31, 45–51. [Google Scholar] [CrossRef]
  47. Miranda, E.R.; Durrant, S.; Anders, T. Towards brain-computer music interfaces: Progress and challenges. In Proceedings of the 2008 First International Symposium on Applied Sciences on Biomedical and Communication Technologies (ISABEL 2008), Aalborg, Denmark, 25–28 October 2008. [Google Scholar]
  48. Khosrowabadi, R.; Quek, H.C.; Wahab, A.; Ang, K.K. EEG-based emotion recognition using self-organizing map for boundary detection. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010. [Google Scholar]
  49. Mühl, C.; Brouwer, A.M.; van Wouwe, N.; van den Broek, E.; Nijboer, F.; Heylen, D.K.J. Modality-specific affective responses and their implications for affective BCI. In Proceedings of the Fifth International Brain-Computer Interface Conference 2011, Graz, Austria, 22–24 September 2011. [Google Scholar]
  50. Petrantonakis, P.C.; Hadjileontiadis, L.J. Emotion Recognition From EEG Using Higher Order Crossings. IEEE Trans. Inf. Technol. Biomed. 2010, 14, 186–197. [Google Scholar] [CrossRef]
  51. Nie, D.; Wang, X.W.; Shi, L.C.; Lu, B.L. EEG-based emotion recognition during watching movies. In Proceedings of the 2011 5th International IEEE/EMBS Conference on Neural Engineering, Cancun, Mexico, 27 April–1 May 2011. [Google Scholar]
  52. Hsu, J.L.; Zhen, Y.L.; Lin, T.C.; Chiu, Y.S. Personalized music emotion recognition using electroencephalography (EEG). In Proceedings of the 2014 IEEE International Symposium on Multimedia, Taichung, Taiwan, 10–12 December 2014. [Google Scholar]
  53. Byun, S.W.; Lee, S.P.; Han, H.S. Feature selection and comparison for the emotion recognition according to music listening. In Proceedings of the 2017 International Conference on Robotics and Automation Sciences (ICRAS), Hong Kong, China, 26–29 August 2017. [Google Scholar]
  54. Sourina, O.; Liu, Y. EEG-enabled affective applications. In Proceedings of the 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, Geneva, Switzerland, 2–5 September 2013. [Google Scholar]
  55. Diesner, J.; Evans, C.S. Little Bad Concerns: Using Sentiment Analysis to Assess Structural Balance in Communication Networks. In Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), Paris, France, 25–28 August 2015. [Google Scholar]
  56. Tseng, K.C.; Lin, B.S.; Wong, A.M.K.; Lin, B.S. Design of a mobile brain computer interface-based smart multimedia controller. Sensors 2015, 15, 5518–5530. [Google Scholar] [CrossRef]
  57. Xu, H.; Zhang, D.; Ouyang, M.; Hong, B. Employing an active mental task to enhance the performance of auditory attention-based brain-computer interfaces. Clin. Neurophysiol. 2013, 124, 83–90. [Google Scholar] [CrossRef]
  58. Koelstra, S.; Mühl, C.; Soleymani, M.; Lee, J.S.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; Patras, I. DEAP: A database for emotion analysis; Using physiological signals. IEEE Trans. Affect. Comput. 2012, 3, 18–31. [Google Scholar] [CrossRef]
  59. Pantic, M.; Rothkrantz, L.J.M. Toward an affect-sensitive multimodal human-computer interaction. Proc. IEEE 2003, 91, 1370. [Google Scholar] [CrossRef]
  60. Healey, J.; Picard, R. SmartCar: Detecting driver stress. In Proceedings of the 15th International Conference on Pattern Recognition (ICPR-2000), Barcelona, Spain, 3–7 September 2000. [Google Scholar]
  61. Hamdi, H.; Richard, P.; Suteau, A.; Saleh, M. A Multi-Modal Virtual Environment To Train for Job Interview. In Proceedings of the 1st International Conference on Pervasive and Embedded Computing and Communication Systems (PECCS 2011), Algarve, Portugal, 5–7 March 2011. [Google Scholar]
  62. Kousarrizi, M.R.N.; Ghanbari, A.A.; Teshnehlab, M.; Aliyari, M.; Gharaviri, A. Feature extraction and classification of EEG signals using wavelet transform, SVM and artificial neural networks for brain computer interfaces. In Proceedings of the 2009 International Joint Conference on Bioinformatics, Systems Biology and Intelligent Computing, Shanghai, China, 3–5 August 2009. [Google Scholar]
  63. Padfield, N.; Zabalza, J.; Zhao, H.; Masero, V.; Ren, J. EEG-Based Brain-Computer Interfaces Using Motor-Imagery: Techniques and Challenges. Sensors 2019, 19, 1423. [Google Scholar] [CrossRef] [PubMed]
  64. Allison, B.Z.; Neuper, C. Could Anyone Use a BCI? In Brain-Computer Interfaces; Tan, D., Nijholt, A., Eds.; Springer: London, UK, 2010. [Google Scholar]
  65. De Negueruela, C.; Broschart, M.; Menon, C.; Del R. Millán, J. Brain-computer interfaces for space applications. Pers. Ubiquitous Comput. 2011, 15, 527–537. [Google Scholar] [CrossRef]
  66. Wolpaw, J.R.; Birbaumer, N.; McFarland, D.J.; Pfurtscheller, G.; Vaughan, T.M. Brain-computer interfaces for communication and control. Clin. Neurophysiol. 2002, 113, 767–791. [Google Scholar] [CrossRef]
  67. Suefusa, K.; Tanaka, T. A comparison study of visually stimulated brain-computer and eye-tracking interfaces. J. Neural Eng. 2017, 14. [Google Scholar] [CrossRef] [PubMed]
  68. Makeig, S.; Kothe, C.; Mullen, T.; Bigdely-Shamlo, N.; Zhang, Z.; Kreutz-Delgado, K. Evolving Signal Processing for Brain–Computer Interfaces. Proc. IEEE 2012, 100, 1567–1584. [Google Scholar] [CrossRef]
  69. Bhowmick, A.; Hazarika, S.M. An insight into assistive technology for the visually impaired and blind people: state-of-the-art and future trends. J. Multimodal User Interfaces 2017, 11, 149–172. [Google Scholar] [CrossRef]
  70. Yuan, H.; He, B. Brain–Computer Interfaces Using Sensorimotor Rhythms: Current State and Future Perspectives. IEEE Trans. Biomed. Eng. 2014, 61, 1425–1435. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Topics evaluated in this work.
Figure 2. Trends in BCI and AC for people with disabilities.
Figure 3. Integration of a BCI and AC for the detection of emotions in people with a visual disability.
Table 1. Brain–computer interface and affective computing for people with a visual disability.
| Ref. | Authors | Year | Keywords | Description | Stimulus | Mode | Result | Technique |
|---|---|---|---|---|---|---|---|---|
| [9] | Hamdi et al. | 2012 | BCI, EEG, AC | Recognition of emotions through a BCI and a heart rate sensor | Visual | Online | Positive | Analysis of variance (ANOVA) |
| [14] | Pattnaik et al. | 2018 | BCI, EEG | BCI for the classification of left-hand and right-hand movements | Visual | Online | Positive | Discrete wavelet transform (DWT) |
| [16] | Minguillon et al. | 2017 | BCI, EEG | Identification of EEG noise produced by endogenous and exogenous causes | -- | Offline | -- | -- |
| [17] | Jafarifarmand et al. | 2013 | BCI, EEG | Extraction of noise-free features from previously recorded EEG | -- | Offline | Positive | Functional-link neural network (FLN), adaptive radial basis function networks (RBFN) |
| [18] | Arvaneh et al. | 2011 | BCI, EEG | Algorithm for EEG channel selection | Auditory/Visual | Offline | +10% | Sparse common spatial pattern (SCSP) |
| [19] | Kübler et al. | 2009 | BCI, EEG, EP | BCI-controlled auditory event-related potential | Auditory | Online | -- | Stepwise linear discriminant analysis (SWLDA), Fisher's linear discriminant (FLD) |
| [22] | Utsumi et al. | 2018 | BCI, EEG | BCI for patients with Duchenne muscular dystrophy (DMD) based on the P300 | Visual | Offline | 71.6%–80.6% | Fisher's linear discriminant analysis |
| [24] | Chi et al. | 2012 | BCI, EEG | Analysis of dry and non-contact electrodes for a BCI | Auditory/Visual | Online | Positive | Canonical correlation analysis (CCA) |
| [29] | Sarwar et al. | 2010 | BCI, EEG | Non-invasive BCI to convert images into signals for the optic nerve | Visual | Online | Positive | -- |
| [30] | Guo et al. | 2010 | BCI, EEG | An auditory brain–computer interface using the mental response | Auditory | Offline | 85% | Fisher discriminant analysis (FLD), support vector machine (SVM) |
| [33] | Klobassa et al. | 2009 | BCI, EEG, EP | BCI based on the P300 | Auditory | Offline | 50%–75% | Stepwise linear discriminant analysis (SWLDA) |
| [34] | Sellers et al. | 2014 | BCI, EEG | Non-invasive BCI for communication of messages from people with motor disabilities | Visual | Online | Positive | Stepwise linear discriminant analysis (SWLDA) |
| [35] | Okahara et al. | 2017 | BCI, EEG | BCI based on the P300 for patients with spinocerebellar ataxia (SCA) | Visual | Offline | 82.9%–83.2% | Fisher's linear discriminant analysis |
| [37] | Blasco et al. | 2012 | BCI, EEG, AC | BCI based on EEG, for people with disabilities | Visual | Online | Positive | Stepwise linear discriminant analysis (SWLDA) |
| [39] | Hill et al. | 2012 | BCI, EEG | BCI for completely paralyzed people, based on auditory stimuli | Auditory | Online | 76%–96% | Contrast between stimuli |
| [40] | Suwa et al. | 2012 | BCI, EEG, EP | BCI that uses the P300 and P100 responses | Auditory | Online | 78% | Support vector machine (SVM) |
| [41] | Yin et al. | 2015 | BCI, EEG, EP | Bimodal brain–computer interface | Auditory/Tactile | Online | +45.43%–+51.05% | Bayesian linear discriminant analysis (BLDA) |
| [42] | Collinger et al. | 2013 | BCI, EEG | Invasive brain–computer interface for neurological control | Visual | Online | Positive | -- |
| [43] | Wang et al. | 2010 | BCI, EEG | Portable and wireless brain–computer interface | Visual | Online | 95.9% | Fast Fourier transform (FFT) |
| [44] | Daly et al. | 2018 | BCI, EEG, AC | Analysis of brain signals for the detection of a person's affective state | Auditory | Online | Positive | Support vector machine (SVM) |
| [45] | Williams et al. | 2017 | BCI, EEG, AC | System for the generation of music dependent on the affective state of a person | Auditory | Online | Positive | -- |
| [46] | Murugappan et al. | 2011 | BCI, EEG, AC | Evaluation of the emotions of a person, using an EEG and auditory stimuli | Auditory/Visual | Offline | 79.17%–83.04% | Surface Laplacian filtering, wavelet transform (WT), linear classifiers |
| [48] | Khosrowabadi et al. | 2010 | BCI, EEG, AC | System for the detection of emotions based on EEG | Auditory/Visual | Offline | 84.5% | K-nearest neighbor classifier (KNN) |
| [49] | Mühl et al. | 2011 | BCI, EEG, AC | Affective BCI using a person's affective responses | Auditory/Visual | Online | -- | Gaussian naive Bayes classifier |
| [50] | Petrantonakis et al. | 2010 | BCI, EEG, AC | Recognition of emotions through the study of EEG | Visual | Offline | 62.3%–83.33% | K-nearest neighbor (KNN), quadratic discriminant analysis, support vector machine (SVM) |
| [51] | Nie et al. | 2011 | BCI, EEG, AC | Classification of positive or negative emotions, studying an EEG | Visual | Offline | 87.53% | Support vector machine (SVM) |
| [52] | Hsu et al. | 2015 | BCI, EEG, AC | Non-invasive BCI for the recognition of the emotions produced by music | Visual | Online | Positive | Artificial neural network (ANN) |
| [53] | Byun et al. | 2017 | BCI, EEG, AC | Classification of a person's emotions using an EEG | Auditory | Offline | Positive | Band-pass filter |
| [54,55] | Sourina & Liu | 2013 | BCI, EEG, AC | Algorithm for real-time emotion recognition, for sensitive interfaces | Visual | Online | Positive | Support vector machine (SVM) |
| [56] | Tseng et al. | 2015 | BCI, EEG | Intelligent multimedia controller based on a BCI | Auditory | Online | Positive | Fast Fourier transform (FFT) |
| [57] | Xu et al. | 2013 | BCI, EEG | Performance of an auditory BCI based on related evoked potentials | Auditory | Online | +4%–+6% | Support vector machine (SVM) |
| [58] | Koelstra et al. | 2012 | BCI, EEG | A database for the analysis of emotions | Visual | Offline | -- | High-pass filter, analysis of variance (ANOVA) |

Terms referred to in Table 1: BCI (brain–computer interface), EEG (electroencephalogram), AC (affective computing), EP (evoked potentials).
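Several of the surveyed works rely on Fisher's linear discriminant analysis; a two-class FLD can be written directly from its definition. The synthetic two-dimensional "EEG features" below are invented for illustration only and do not reproduce any of the cited experiments.

```python
import numpy as np

def fit_fld(X0, X1):
    """Fisher's LD: w = Sw^-1 (mu1 - mu0), bias at the class midpoint."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
    w = np.linalg.solve(Sw, mu1 - mu0)
    b = -w @ (mu0 + mu1) / 2.0
    return w, b

def predict(X, w, b):
    """Project onto the discriminant axis; 1 = class 1, 0 = class 0."""
    return (X @ w + b > 0).astype(int)

# Two well-separated synthetic clusters standing in for EEG feature
# vectors (e.g., band powers for "non-target" vs. "target" trials).
rng = np.random.default_rng(2)
X0 = rng.normal([0.0, 0.0], 0.5, size=(50, 2))
X1 = rng.normal([2.0, 2.0], 0.5, size=(50, 2))
w, b = fit_fld(X0, X1)
acc = np.r_[predict(X0, w, b) == 0, predict(X1, w, b) == 1].mean()
print(acc)
```

SWLDA, also common in Table 1, extends this idea by stepwise selection of the features entering the discriminant.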

López-Hernández, J.L.; González-Carrasco, I.; López-Cuadrado, J.L.; Ruiz-Mezcua, B. Towards the Recognition of the Emotions of People with Visual Disabilities through Brain–Computer Interfaces. Sensors 2019, 19, 2620.