Article

iMouse: Augmentative Communication with Patients Having Neuro-Locomotor Disabilities Using Simplified Morse Code

Department of Electrical Engineering, Soonchunhyang University, Asan 31538, Republic of Korea
* Author to whom correspondence should be addressed.
Electronics 2023, 12(13), 2782; https://doi.org/10.3390/electronics12132782
Submission received: 4 June 2023 / Revised: 21 June 2023 / Accepted: 21 June 2023 / Published: 23 June 2023

Abstract

Patients with amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig’s disease, an incurable condition in which motor neurons are selectively destroyed, gradually lose mobility as organ dysfunction begins; eventually, even minor movements and simple communication become challenging. To communicate with patients with quadriplegia, researchers have focused on the eyes, the only organs that patients with ALS can still move, and have investigated detecting eyeblinks via brainwaves or cameras, as well as selecting letters on a screen via eyeball movements tracked by eye-tracking cameras. However, brainwave-based techniques, which infer a patient’s intentions from the electrical signals accompanying eye movements, are sensitive to noise and often identify intent inaccurately. Alternatively, camera-based letter-selection methods detect the movement of eye feature points and make it easy to identify a patient’s intentions using a predefined decision-making process. However, these methods have long processing times and are prone to inaccuracy due to errors in either the Morse code assigned to all 26 letters or the sequential selection method. Therefore, we have proposed iMouse-sMc, a simplified Morse code-based user interface model using an eye mouse for faster and easier communication with such patients. Furthermore, we improved the detection performance of the eye mouse by applying image contrast enhancement to enable communication with patients even at night. To verify the performance of the proposed eye mouse as a user interface, we conducted comparative experiments with existing camera-based communication models using various words. The results revealed that the communication time was reduced by up to 83 s and the intention recognition accuracy was improved by up to 48.05%. Additionally, in low-light environments, where existing models cannot communicate with patients because eye detection fails, the proposed model still detected the eyes, proving that it can be used universally for communication with patients during the day and at night.

1. Introduction

Amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig’s disease, is a progressive paralysis of the skeletal muscles throughout the body caused by the gradual death of the motor neurons that control them, making independent life impossible. As the motor neurons in the brain and spinal cord are damaged, movement of the jaw, lips, and tongue becomes progressively slower, eventually reaching a point where communication is impossible [1,2,3]. For such patients, not being able to communicate can be more painful than not being able to move around. The frustration of being unable to express one’s needs or speak at all can grow into a fear of being unable to call for help even in an emergency. Thus, immediate communication with patients with ALS is one of the most necessary and challenging tasks in daily life [4,5,6].
Efforts to communicate with patients suffering from ALS, whose muscles slowly stiffen, have focused on eye movements, the only system of movement still available, and various studies have been conducted on this basis. Chambayil et al. proposed a brain–computer interface (BCI) system that detected signals from eyeblinks using electroencephalography (EEG) and selected blocks and letters on a virtual keyboard [7]. The participants typed letters by blinking to sequentially select 1 of the 26 letters organized into three main blocks and sub-blocks. The user experience was enhanced by dividing the selected block into sub-blocks for further selection; however, the average processing time of more than one minute to print a single letter made communication time-consuming. Similarly, Rusanu et al. proposed a communication scheme using EEG [8]. It was designed to speed up communication by having users blink to select one of 32 predetermined emoticons rather than letters, but it was limited to expressing only those predetermined emoticons. Bandara and Nanayakkara proposed a BCI system based on electromyography (EMG) and an electrooculogram (EOG) for partially paralyzed patients who could still move some of their muscles and their eyes, using a combination of eyeblink and lip-movement signals to control electronic devices [9]. They used EMG and EOG rather than EEG to improve the accuracy of blink detection by reducing noise. However, this method had some disadvantages: it was limited to controlling electronic devices such as lights and televisions through mouse control rather than communicating with the patients, making it difficult for patients to express specific opinions. Furthermore, it took a long time to interpret and identify the signals because the EMG and EOG signals had to be processed in real time.
These methods share a common problem in implementing user interface technology based on biometric signals: slow processing speed and poor accuracy due to the sensitivity of biosignals to external noise. They also require multiple electrodes to be attached to the scalp or muscles at all times, causing great inconvenience and exposing immunocompromised patients to the possibility of invasive infections [10,11]. Hence, instead of using biosignal-based technologies, researchers have been investigating the use of an eye mouse—based on simple eye movements—that uses cameras to identify patient intentions. Špakov and Majaranta suggested a scrollable QWERTY virtual keyboard system using eyeball tracking [12]. The space occupied by the existing three-row virtual keyboard was reduced by incorporating a scroll function. As a result, the speed of word input improved, and the system had the advantage of requiring only simple gaze processing to input words. However, its eyeball position detection could be sensitive even to small head movements, and in low-light environments the eyeball itself could be difficult to detect, making eye tracking impossible. Attiah and Khairullah proposed a method of typing words via a virtual keyboard by blinking when the letter the patient wanted to select appeared on the screen [13]. While it had the advantage that a patient could type words with a simple blink of an eye, it also had a long output time because the patient had to wait for the desired letter to appear. Sushmitha et al. proposed an eye-blinking scheme based on Morse code, where the Morse code was designed as a combination of short and long blinks [14]. However, with a different Morse code defined for each letter, the combinations of short and long blinks were complex, which could lead to less accurate word output and longer processing times.
In this way, although various studies have been conducted to help paralyzed patients communicate, biosignal-based eye-blinking communication methods suffer from low accuracy due to noise, their invasiveness for the patient, and the cost of the equipment required to receive and interpret the signals. Existing camera-based methods also have the drawbacks of low accuracy and slow processing speed.
Therefore, this paper proposes an eye mouse that uses simplified Morse code (iMouse-sMc) to improve the processing speed of a camera-based letter entry system, allowing ALS patients to communicate with high accuracy without having to wear separate equipment. An eyeblink detector was chosen as the switch to convey a patient’s intentions, and only seven simplified Morse code combinations were used to represent the letters—roughly a quarter of the 26 codes in international Morse code—to increase word processing speed and accuracy. We assigned the letters equally to the four quadrants of the monitor and let the user select the area containing the desired letter via quadrant navigation. Quadrant navigation is a selection process on a virtual keyboard in which the mouse cursor moves clockwise every second across the quadrants, with only seven simplified eyeblink combinations representing the corresponding letters in the selected area. In addition, to enable eyeblink detection in low-light environments, an image enhancement technique (histogram equalization) was introduced to make the system universal for day and night communication with patients [15,16]. To confirm the performance of the proposed iMouse-sMc, comparative experiments with existing similar models were conducted. The results showed that the average accuracy for short and long words increased by up to 28.33% and 48.05%, respectively, and the processing times were reduced by up to 20 s and 83 s, respectively. Furthermore, we compared the performance of the iMouse-sMc in low-light environments and found that even when the image brightness was reduced to 30%, the proposed model maintained a high detection rate of 79.96%, while the existing similar models were unable to detect eye movements at all. The main contributions of this research can be summarized as follows:
  • By determining the level of ambient brightness and introducing histogram equalization in low-light environments where eye detection is difficult, the image was preprocessed to detect a user’s blinks universally during the day or at night.
  • International Morse code represents the 26 letters with 26 distinct combinations of long and short signals; in conjunction with quadrant navigation, we proposed a simplified Morse code that can output every letter using only seven combinations.
  • The communication accuracy was improved by applying the SymSpell algorithm, which calculates candidate corrections for typed words and fixes typos when they occur.
The proposed eye mouse-based simplified Morse code system is described step by step in Section 2, and the experimental process and results are presented in Section 3. Finally, in Section 4, we present our conclusions and future research directions.

2. Methodology

Efforts to communicate with patients with ALS have largely focused on movement of the eyes, as this is the only motor system available. Among these approaches, communicating with patients through camera-based letter selection is easy to implement, but it can require substantial time depending on the letter selection method, and identification accuracy suffers for long sentences [17,18]. In addition, it is difficult to communicate with patients in low-light environments because the eye area becomes hard to detect. Thus, we have proposed an eye mouse that uses simplified Morse code (iMouse-sMc) to solve these problems and achieve faster and more accurate communication; it comprises the two main steps shown in Figure 1.
The first step in implementing the eye mouse is eye detection. However, in order to detect an eye, face detection has to occur first. After that, the facial landmark detector extracts the feature points of the face, connects the outline feature points of the eye, and estimates the eye position to complete eye detection. During this step, the system determines from the image pixels whether brightness adjustment is necessary, enabling eye detection in environments with insufficient luminance. If necessary, the image contrast is maximized through image enhancement techniques [19,20,21]. The second step is eyeblink detection, which uses the size and position of the pupil, the position and angle of the eyelids, etc., within the detected eye area to determine whether the eye has blinked for a short or long period of time; the durations of the eyeblinks are then matched against a predefined simplified Morse code combination to infer the corresponding letter. Finally, we introduced the typo-correcting SymSpell algorithm, which efficiently suggests and applies the most likely correct spelling of a selected word in case it is unintentionally misspelled [22].
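To make the flow concrete, the following minimal Python sketch outlines this two-step pipeline. The helper names (needs_enhancement, enhance_low_light, detect_eyes, classify_blink, and the keyer object) are hypothetical placeholders, fleshed out in the sketches accompanying Sections 2.1–2.3, and are not functions from the authors’ implementation.

```python
def process_frame(frame_gray, keyer):
    # Step 1: region-of-interest (eye) detection, with contrast enhancement
    # applied first when the scene is judged too dark (Section 2.1).
    if needs_enhancement(frame_gray):
        frame_gray = enhance_low_light(frame_gray)
    eyes = detect_eyes(frame_gray)
    if eyes is None:
        return  # no face/eyes found in this frame
    # Step 2: classify the blink as short or long (Section 2.2) and feed it
    # to the simplified Morse code keyer (Section 2.3, hypothetical object).
    blink = classify_blink(*eyes)
    if blink is not None:
        keyer.feed(blink)
```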

2.1. Region of Interest Detection

Detecting the eyes, the region of interest (ROI) in the image, is the most basic and important step in communicating with a patient. In low-light environments, however, it becomes very difficult to detect the ROI, and so preprocessing is conducted to improve the image contrast for eye detection.

2.1.1. Image Enhancement

Under low-light conditions at night or in cloudy weather, face detection becomes more difficult, which is a major factor in weakening the performance of an eyeblink-based eye mouse. To solve this problem, we introduced contrast limited adaptive histogram equalization (CLAHE), which reduces the noise generated during image enhancement by setting an upper bound on the bin counts within fixed-size blocks, avoiding excessive smoothing of the histogram and thereby enhancing the contrast of the image more efficiently and naturally [23]. The CLAHE process first divides the image into small, non-overlapping blocks of size $N \times N$. If $T_i$ denotes the set of pixels in the $i$-th block, the histogram of the block is computed as follows:
$H_i(k) = \sum_{(x,y) \in T_i} \delta(I_{xy},\, k)$,
where $\delta$ is the Kronecker delta function, which returns 1 if $I_{xy} = k$ and 0 otherwise. The histogram calculated within each block is then transformed into a cumulative distribution function, $C_i(k)$, which maps the brightness values evenly, so that if the brightness value $I_{xy}$ within a block is $k$, the mapped value $I'_{xy}$ can be written as follows:
$C_i(k) = \sum_{j=0}^{k} H_i(j)$, and
$I'_{xy} = \dfrac{L-1}{N^2} \sum_{j=0}^{k} \dfrac{C_i(j)}{C_i(L-1)}$,
where $L$ is the maximum brightness value and $N^2$ is the total number of pixels in a block. If the uniform mapping results in values that are too bright or too dark, the values are modified by adjusting the cumulative distribution function of the corresponding block. This process improves the local contrast of the image and results in a sharper, more detailed image with better overall contrast.
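As a concrete illustration, OpenCV ships a CLAHE implementation that performs this block-wise clipped equalization. In the sketch below, the clip limit, tile size, and mean-brightness threshold are illustrative assumptions, not the paper’s reported settings.

```python
import cv2

def needs_enhancement(gray, mean_threshold=60):
    # Crude ambient-brightness check: treat the frame as low-light when the
    # mean pixel intensity falls below a threshold (assumed value).
    return gray.mean() < mean_threshold

def enhance_low_light(gray, clip_limit=2.0, tile_size=8):
    # CLAHE: equalize the histogram of each tile, clipping bin counts at
    # clip_limit to avoid over-amplifying noise, then blend tile borders.
    clahe = cv2.createCLAHE(clipLimit=clip_limit,
                            tileGridSize=(tile_size, tile_size))
    return clahe.apply(gray)
```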

2.1.2. Eye Detection

For eye detection, we first estimated the coordinates of 68 feature points on the face using a pre-trained facial landmark detector from the dlib library. The detector generates feature vectors based on histograms of oriented gradients (HOG) by accumulating gradient orientations at each pixel location in the image, and a support vector machine classifier then uses these vectors to detect 68 facial landmarks describing the eyes, nose, ears, mouth, and facial contour. As a result, it detects landmarks surrounding the feature parts of the face, including the eyes and nose, as shown in Figure 2b, by numbering each facial feature point, as shown in Figure 2a, and applying the numbering to the face image. In the figure, the left eye corresponds to feature points 37 through 42 and the right eye to feature points 43 through 48. To detect blinking, the corresponding feature points are extracted, but the extracted region is made large enough to cover the area surrounding the landmarks—approximately 1.2 times their area—to minimize eyeblink detection errors. Figure 2c,d shows the results of extracting the eyes when they are open and closed, respectively, with each eye marked by six feature points ($p_1, \ldots, p_6$).
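A minimal sketch of this step with dlib’s Python bindings follows; the landmark model file is distributed separately by dlib, and the exact crop margin logic is omitted here as an implementation detail.

```python
import dlib

# dlib's HOG + SVM face detector and the pre-trained 68-point landmark model
# (shape_predictor_68_face_landmarks.dat, downloaded separately from dlib).
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# dlib indexes landmarks from 0, so points 36-41 and 42-47 correspond to the
# 1-based landmarks 37-42 (left eye) and 43-48 (right eye) in Figure 2a.
LEFT_EYE = list(range(36, 42))
RIGHT_EYE = list(range(42, 48))

def detect_eyes(gray):
    """Return the six (x, y) landmark points of each eye, or None if no face."""
    faces = detector(gray)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    left = [(shape.part(i).x, shape.part(i).y) for i in LEFT_EYE]
    right = [(shape.part(i).x, shape.part(i).y) for i in RIGHT_EYE]
    return left, right
```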

2.2. Eyeblink Detection

As the second step in implementing the eye mouse, eyeblinks were detected based on the extracted eye feature points. This step exploits the fact that each time an eye blinks, the distance between the upper and lower eyelids decreases, and when the eye opens again, it increases. We used the eye aspect ratio (EAR) as the indicator of whether a blink was occurring. The EAR is defined below from the horizontal and vertical distances between the eye landmarks [24]:
$\mathrm{EAR} = \dfrac{\| p_2 - p_6 \| + \| p_3 - p_5 \|}{2\, \| p_1 - p_4 \|}$,
where $p_1, \ldots, p_6$ are the six feature points around the eye and $\| \cdot \|$ denotes the Euclidean distance between two points. If the EAR values of both the right and left eyes consistently fall below a certain threshold ($\tau_{\mathrm{EAR}}$, typically 0.2–0.3) over a certain period of time, a blink is considered to have occurred. In addition, to determine whether an eyeblink was short or long for implementing the Morse code, we used $FR_{\mathrm{clsd}}$, the number of frames during which the eye was closed, as an indicator. If $FR_{\mathrm{clsd}}$ was less than a predefined threshold ($\tau_{FR}$), the blink was considered short; otherwise, it was considered long. The flowchart of this eyeblink detection process is shown in Figure 3. To avoid registering blinks when only one eye was closed, we compared the EARs of the left and right eyes, $\mathrm{EAR}_L$ and $\mathrm{EAR}_R$, and took both into account when recognizing eyeblinks.
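The sketch below implements this decision rule, reusing detect_eyes from the previous sketch. The threshold values are those reported in Section 3.1 ($\tau_{\mathrm{EAR}} = 0.2$, $\tau_{FR} = 11$), while the frame bookkeeping itself is an illustrative assumption.

```python
from math import dist  # Euclidean distance (Python 3.8+)

TAU_EAR = 0.2  # EAR threshold below which an eye counts as closed (Section 3.1)
TAU_FR = 11    # closed-frame count separating short from long blinks (Section 3.1)

def eye_aspect_ratio(p):
    # p: the six eye landmarks ordered p1..p6 as in Figure 2c,d.
    return (dist(p[1], p[5]) + dist(p[2], p[4])) / (2.0 * dist(p[0], p[3]))

closed_frames = 0

def classify_blink(left_eye, right_eye):
    """Per-frame update; returns 'short', 'long', or None (no completed blink)."""
    global closed_frames
    # Both eyes must fall below the threshold, which guards against
    # registering a blink when only one eye is closed.
    if max(eye_aspect_ratio(left_eye), eye_aspect_ratio(right_eye)) < TAU_EAR:
        closed_frames += 1
        return None
    if closed_frames == 0:
        return None  # eyes were already open
    blink = "short" if closed_frames < TAU_FR else "long"
    closed_frames = 0
    return blink
```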

2.3. Simplified Morse Code

Once the eye mouse had been implemented through the eye detection and eyeblink detection processes, it communicated with patients by expressing their intentions in text based on the proposed simplified Morse code. As shown in Figure 4, on a monitor divided into quadrants for quadrant navigation, the circled red mouse cursor began at the top left and automatically moved clockwise to the next quadrant every second. Each quadrant was assigned seven alphabetic or control characters—A to G, H to N, O to U, and V to Z plus Space and Delete—and the cursor continued cycling until the patient selected the appropriate quadrant using two short blinks.
Once the user had selected the quadrant containing the desired letter, that letter was entered through the predefined simplified Morse code keyer using eyeblinks. Here, the Morse code consisted of seven combinations of short and long blinks, designed as shown in Figure 5 to minimize blink recognition errors; in the figure, ● denotes a short blink and ▬ a long blink. This minimized the number of Morse codes the user had to learn, making learning easier and enabling higher accuracy and faster processing. After outputting the desired letter, the user entered four short blinks to return to the initial screen and select letters from other quadrants. When the user had finished entering letters, one long blink followed by two short blinks moved the process on to typo correction.
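The following sketch shows how such a keyer could map blink patterns to letters. Note that the seven short/long patterns used here are hypothetical placeholders: the paper defines the actual combinations only in Figure 5, which is not reproduced in the text.

```python
# Letters and controls per quadrant ('_' = Space, '<' = Delete), clockwise
# from the top left as in Figure 4.
QUADRANTS = ["ABCDEFG", "HIJKLMN", "OPQRSTU", "VWXYZ_<"]

# Hypothetical simplified Morse code: '.' = short blink, '-' = long blink.
# The real seven combinations are those defined in Figure 5.
PATTERN_TO_INDEX = {".": 0, "-": 1, "..": 2, ".-": 3, "-.": 4, "--": 5, "...": 6}

def decode_letter(quadrant, pattern):
    """Map a blink pattern to the letter at that position in the chosen quadrant."""
    return QUADRANTS[quadrant][PATTERN_TO_INDEX[pattern]]

# Example: two short blinks while the cursor is on quadrant 2 (O to U) selects
# it; the pattern '.-' then outputs the fourth letter of that quadrant, 'R'.
assert decode_letter(2, ".-") == "R"
```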

2.4. Typo Correction

After all the desired letters had been output on the screen, the SymSpell algorithm was used to check for typos and correct them if necessary. SymSpell is an efficient open-source library widely used for natural language processing tasks such as spell-checking and text autocompletion. The algorithm uses the Damerau–Levenshtein distance metric and N-grams, is particularly effective at handling large amounts of data, and offers search speeds fast enough for real-time use. SymSpell first builds a lookup table, or lexicon, from a large collection of documents or datasets so that when a typo occurs in an input word, the corresponding correct word can be found; it also stores word frequencies for use in correction. Then, using the built dictionary, input words are split into N-grams, and a Trie data structure recording which word each N-gram belongs to is created. Based on this, typo correction proceeds by first searching for words that contain all N-grams of the input word to generate a list of candidates, then scoring the candidates using frequency and the Damerau–Levenshtein distance and returning the highest-scoring word as the correction result.
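For illustration, the symspellpy package (a Python port of SymSpell) can perform this lookup. The English frequency dictionary below ships with symspellpy (assumed here to be available in the working directory), while the edit-distance settings are assumptions rather than the paper’s configuration.

```python
from symspellpy import SymSpell, Verbosity

sym_spell = SymSpell(max_dictionary_edit_distance=2, prefix_length=7)
# Frequency dictionary with one "term count" pair per line.
sym_spell.load_dictionary("frequency_dictionary_en_82_765.txt",
                          term_index=0, count_index=1)

def correct(word):
    """Return the top-scored correction for a typed word (or the word itself)."""
    suggestions = sym_spell.lookup(word, Verbosity.TOP,
                                   max_edit_distance=2, include_unknown=True)
    return suggestions[0].term

print(correct("absorbtion"))  # -> "absorption"
```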

3. Experimental Results

To evaluate the performance and utility of the proposed eye mouse-based letter output system, we considered two basic tasks. The first task was eyeblink detection, where the goal was to determine the presence or absence of eyeblinks in low-light environments. The second task was eyeblink classification, where the goal was to measure the duration of the detected eyeblinks and distinguish between short and long blinks to determine whether the Morse code was implemented correctly. We selected the four types of intentions that ALS patients most need to express and tested them against the latest similar technologies with 10 participants.

3.1. Experimental Setup

The eyeblink detection system was implemented on a PC with an NVIDIA GeForce GTX 1080 Ti GPU and an Intel(R) Core(TM) i7-4790 CPU (3.60 GHz, 16 GB RAM), and an HD Pro Webcam C920 was used as the camera. The data used for the performance evaluation were based on the results of a survey of Japanese hospitals and ALS Association branches on the types of intentions that most ALS patients wanted to express [25]. These were “hot”, “curtain”, “absorption”, and “change body position”. The processing time from eyeblink input to monitor output and the accuracy, defined as the proportion of letters correctly identified in these words, were used as the performance evaluation indicators. Figure 6 shows the 10 subjects who participated in the test evaluations. They were selected at random, with no regard to the shapes and sizes of their eyes or whether they wore glasses.
In addition, all tests were administered in real time and in different environments to ensure the objectivity of the evaluations. To evaluate the performance of eyeblink detection in low-light environments, we divided the ambient brightness into four levels and analyzed the performance based on the average accuracy over 10 word-input attempts at each level. The parameters of the eyeblink detector, $\tau_{FR}$ and $\tau_{\mathrm{EAR}}$, were optimized to 11 and 0.2, respectively, through trial and error. Figure 7 shows the test bench for the evaluations of the proposed user interface; all experiments were conducted with the subject’s face centered in front of the monitor-mounted camera to ensure good eye detection.

3.2. Eye and Eyeblink Detection in a Low-Light Environment

First, to determine whether it was possible to communicate with the participants in a low-light environment, we divided the ambient brightness into four levels and compared the eye detection performance of the proposed iMouse-sMc with an existing virtual keyboard-based eye mouse model [13] and an international Morse code-based eye mouse model [14]. Like the proposed iMouse-sMc, both models identify a patient’s intention via camera-based eyeblink detection. The brightness level of the test environment varied from high (100%), representing daytime brightness, to relatively high (80%), medium (60%), and relatively low (30%); the eye detection results of the state-of-the-art models and the proposed iMouse-sMc are shown in Figure 8. For each model, starting at the top left and moving clockwise, the brightness level ranges from high to relatively low. At the high to medium brightness levels, all techniques detected the subjects’ eyes without problems, although the reliability was somewhat reduced. At the relatively low brightness level, however, the similar models could not detect the eyes at all and only the iMouse-sMc could, confirming that the image enhancement technique employed in the iMouse-sMc worked well in low-light environments.
Next, we compared the word identification accuracy of the proposed model with the state-of-the-art models, i.e., their ability to identify patient intention by detecting eyeblinks, as the brightness of the surrounding environment changed. The word used for this evaluation was “hot”, which has the fewest letters among the intention types that ALS patients most want to express, so as to focus the analysis on performance by brightness level. After 10 attempts to identify the patient’s intention at each brightness level for each model, the average identification accuracies were compared, and the results are summarized in Table 1.
When the brightness level was high, both of the similar models showed the same accuracy (86.64%), while the iMouse-sMc conveyed participant intention best, at more than 6% higher (93.32%). The accuracy of all the models gradually decreased as the brightness level fell. In particular, at the medium brightness level, the accuracy of the model in [14] dropped sharply compared to the other models, showing that it was extremely vulnerable to dark environments. When the brightness level was relatively low, both of the similar models could not detect the eyes at all and communication with the participants became impossible, while the proposed iMouse-sMc was not only able to communicate but did so with a high accuracy of 79.96%. In conclusion, although overall identification performance decreased as the ambient brightness dropped, the iMouse-sMc was able to recognize low-light environments and apply its image enhancement technique to maintain high accuracy, validating that it can be used universally at night as well as during the day.

3.3. User Interface Performance Evaluation

To evaluate the generalization performance of the proposed model, we measured the processing time for identifying patient intention and the accuracy of the identification for 10 subjects. We fixed the brightness at 100% and compared how quickly and accurately each model communicated the four intention types “hot”, “curtain”, “absorption”, and “change body position” described above. The average processing times over 10 identification attempts per patient intention (word) for the state-of-the-art models and the proposed iMouse-sMc are shown in Figure 9. Figure 9a shows the processing time by model for the word “hot”; there was little difference in performance between the models because of the small number of letters. However, as shown in Figure 9b–d, when the number of letters in the word representing the patient’s intention increased, the per-subject deviation in word processing time for the state-of-the-art models grew considerably, exceeding 100 s. The proposed iMouse-sMc, in contrast, showed relatively better stability, with per-subject deviations of approximately 50 s regardless of the number of letters. In conclusion, although the processing time increased with word length, the processing-time curve for the iMouse-sMc was more tightly clustered and shifted downward compared to the other models, indicating shorter processing times.
The average processing times for word-by-word identification by the user interface models over all 10 subjects are summarized in Table 2. For the short word “hot”, there were no significant differences in processing time across the models. However, as the word length increased, the models proposed in [13,14] showed similar increases in average processing time. In contrast, the proposed iMouse-sMc processed words faster, by a margin of up to 83 s over the other models as the word length increased. The reason for this difference is that [13] required users to wait for the desired letter to appear on the virtual keyboard before making a selection, which increased the processing time as the number of letters in a word grew, particularly for words containing letters that appear late in the cycle. Reference [14] implemented Morse code through a distinct combination of eyeblinks for each of the 26 letters, which was complex and increased the processing time as the number of eyeblinks grew. In contrast, the proposed iMouse-sMc implemented all 26 letters with only seven combinations of eyeblinks and was therefore bound to process words faster than the other models.
The accuracy of identifying the words entered by a patient’s eyeblinks is of the utmost importance, as a typo can drastically change the meaning of the message conveyed. Hence, after 10 identification attempts by each model for each patient intention (word), the performance of each model was evaluated based on the average accuracy, and the results are shown in Figure 10. In Figure 10a, the model proposed in [14] is somewhat less accurate than the others, while the model in [13] and the iMouse-sMc do not differ much in word identification accuracy given the short word length and the per-subject deviations. However, Figure 10b–d, which shows the results as the word length was gradually increased, reveals that the state-of-the-art models suffered relatively large drops in identification accuracy compared to the iMouse-sMc and also had a wide distribution of per-subject deviations. The iMouse-sMc, in contrast, was not significantly affected by word length and maintained high accuracy, with its curve shifted upward.
The average accuracy of word-by-word identification by the user interface models over all 10 subjects is summarized in Table 3. The overall accuracy tended to decrease as the word length increased, with [14] having the lowest average accuracy. Its accuracy fell significantly relative to the other models because the eyeblink combination specific to each letter became more complex as the word grew longer. In [13], the user’s attention span could become a source of typos when typing relatively long words, as the user had to wait for the target letter to appear on the virtual keyboard. The average accuracy of the iMouse-sMc, however, remained consistently in the low 90% range, largely independent of word length, because it implemented all letters with only seven optimized combinations of short and long eyeblinks, reducing the likelihood of typos, and its accuracy was further improved by the SymSpell typo correction.

4. Conclusions

ALS, also known as Lou Gehrig’s disease, is caused by damage to the motor neurons in the brain and spinal cord, which gradually leads to the loss of movement and, eventually, to the inability to communicate. To communicate with patients with quadriplegia, research has been conducted on letter selection methods using brainwave- or camera-based eyeblink detection. However, brainwave-based techniques are sensitive to noise, which can lead to inaccurate intention identification, and camera-based letter selection techniques require long processing times due to their use of full Morse code or sequential selection methods, which can also lead to low accuracy. To overcome these problems and make patient communication faster and easier, we proposed the iMouse-sMc, a simplified Morse code-based decision-making system utilizing an eye mouse. To improve the processing speed of letter selection, we combined the simplified Morse code, reduced to roughly one-quarter of the 26 codes in international Morse code, with quadrant navigation, and we applied an image enhancement technique based on histogram equalization to improve the detection performance of the eye mouse and enable communication with patients even in low-light environments. To evaluate the performance of the proposed iMouse-sMc, we conducted comparison experiments with two state-of-the-art models using various words. The results showed that the communication time was reduced by up to 83 s and the intention recognition accuracy was improved by up to 48.05%, confirming the strength of the presented model. Additionally, while the existing similar models were unable to detect eyes in low-light environments, the eye recognition rate of the proposed eye mouse with its image contrast enhancement scheme was maintained at 79.96%, validating its suitability for universal communication with patients during the day and at night. We have shown that the proposed system is faster and more accurate than comparable systems for communicating with patients and that it can also be used in low-light environments. Nevertheless, patients may suffer from fatigue because typing words by blinking takes a considerable amount of time, and the system has the limitation that a patient’s face must be centered in the camera view for blinks to be detected reliably. Therefore, we will continue this research to reduce the word input time and to enable the detection of patients’ blinks from various angles.

Author Contributions

H.K., S.H. and J.C. took part in the discussions about the work described in this paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MOE) (No. 2021R1I1A3055973) and the Soonchunhyang University Research Fund.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare that they have no competing interests.

References

  1. Talbott, E.O.; Malek, A.M.; Lacomis, D. The Epidemiology of Amyotrophic Lateral Sclerosis. In Handbook of Clinical Neurology; Aminoff, M.J., Boller, F., Dick, F., Swaab, D.F., Eds.; Elsevier: Amsterdam, The Netherlands, 2016; Volume 138, pp. 225–238. ISBN 978-0-12-802973-2.
  2. ALS Association. Available online: http://www.alsa.org/about-als (accessed on 1 March 2023).
  3. Neurological Disorders and Stroke. Available online: https://www.ninds.nih.gov/Disorders/Patient-Caregiver-Education/Fact-Sheets/Amyotrophic-Lateral-Sclerosis-ALS-Fact-Sheet (accessed on 1 March 2023).
  4. Wankhede, K.V.; Pednekar, S. Aid for ALS Patient Using ALS Specs and IOT. In Proceedings of the International Conference on Intelligent Autonomous Systems, Singapore, 28 February–2 March 2019.
  5. Sang, Y.; Xu, J. Evaluation of Motor Neuron Injury in ALS by Different Parameters of Diffusion Tensor Imaging. IEEE Access 2020, 8, 72381–72394.
  6. Bekhouche, S.E.; Kajo, I.; Ruichek, Y.; Dornaika, F. Spatiotemporal CNN With Pyramid Bottleneck Blocks: Application to Eye Blinking Detection. Neural Netw. 2022, 152, 150–159.
  7. Chambayil, B.; Singla, R.; Jha, R. Virtual Keyboard BCI Using Eye Blinks in EEG. In Proceedings of the IEEE International Conference on Wireless and Mobile Computing, Networking and Communications, Niagara Falls, ON, Canada, 11–13 October 2010.
  8. Rușanu, O.A.; Cristea, L.; Luculescu, M.C. LabVIEW and Android BCI Chat App Controlled by Voluntary Eye-Blinks Using NeuroSky MindWave Mobile EEG Headset. In Proceedings of the International Conference on e-Health and Bioengineering, Iasi, Romania, 29–30 October 2020.
  9. Bandara, V.Y.S.; Nanayakkara, A. Differentiation of Signals Generated by Eye Blinks and Mouth Clenching in a Portable Brain Computer Interface System. In Proceedings of the IEEE International Conference on Industrial and Information Systems, Peradeniya, Sri Lanka, 15–16 December 2017.
  10. Zhang, Y.; Zheng, X.; Xu, W.; Liu, H. RT-Blink: A Method Toward Real-Time Blink Detection From Single Frontal EEG Signal. IEEE Sens. J. 2023, 23, 2794–2802.
  11. Kuwahara, A.; Hirakawa, R.; Kawano, H.; Nakashi, K.; Nakatoh, Y. Eye Fatigue Prediction System Using Blink Detection Based on Eye Image. In Proceedings of the IEEE International Conference on Consumer Electronics, Las Vegas, NV, USA, 10–12 January 2021.
  12. Špakov, O.; Majaranta, P. Scrollable Keyboards for Casual Eye Typing. PsychNology J. 2009, 7, 159–173.
  13. Attiah, A.Z.; Khairullah, E.F. Eye-Blink Detection System for Virtual Keyboard. In Proceedings of the National Computing Colleges Conference, Taif, Saudi Arabia, 27–28 March 2021.
  14. Sushmitha, M.; Kolkar, N.; Suman, G.; Kulkarni, K. Morse Code Detector and Decoder Using Eye Blinks. In Proceedings of the International Conference on Inventive Research in Computing Applications, Coimbatore, India, 2–4 September 2021.
  15. Zhou, X. Eye-Blink Detection Under Low-Light Conditions Based on Zero-DCE. In Proceedings of the IEEE Conference on Telecommunications, Optics and Computer Science, Dalian, China, 11–12 December 2022.
  16. Patel, S.; Goswami, M. Comparative Analysis of Histogram Equalization Techniques. In Proceedings of the International Conference on Contemporary Computing and Informatics, Mysore, India, 27–29 November 2014.
  17. Tian, X.; Zheng, X.; Ji, Y.; Jiang, B.; Wang, T.; Xiong, S.; Wang, X. iBlink: A Wearable Device Facilitating Facial Paralysis Patients to Blink. IEEE Trans. Mob. Comput. 2019, 18, 1789–1801.
  18. Lupu, R.; Bozomitu, R.; Ungureanu, F.; Cehan, V. Eye Tracking Based Communication System for Patient With Major Neuro-Locomotor Disabilities. In Proceedings of the International Conference on System Theory, Control and Computing, Sinaia, Romania, 14–16 October 2011.
  19. Choi, S.I.; Lee, Y.; Kim, C. Confidence Measure Using Composite Features for Eye Detection in a Face Recognition System. IEEE Signal Process Lett. 2015, 22, 225–228.
  20. Chang, Y.; Jung, C.; Ke, P.; Song, H.; Hwang, J. Automatic Contrast-Limited Adaptive Histogram Equalization With Dual Gamma Correction. IEEE Access 2018, 6, 11782–11792.
  21. Wang, S.; Chang, Y.; Wang, C. Dual Learning for Joint Facial Landmark Detection and Action Unit Recognition. IEEE Trans. Affect. Comput. 2021, 12, 1–6.
  22. Mon, E.P.P.; Thu, Y.K.; Yu, T.T.; Wai, A.W. SymSpell4Burmese: Symmetric Delete Spelling Correction Algorithm (SymSpell) for Burmese Spelling Checking. In Proceedings of the International Joint Symposium on Artificial Intelligence and Natural Language Processing, Ayutthaya, Thailand, 21–23 December 2021.
  23. Kaur, H.; Rani, J. MRI Brain Image Enhancement Using Histogram Equalization Techniques. In Proceedings of the International Conference on Wireless Communications, Signal Processing and Networking, Chennai, India, 23–25 March 2016.
  24. Dewi, C.; Chen, R.-C.; Chang, C.-W.; Wu, S.-H.; Jiang, X.; Yu, H. Eye Aspect Ratio for Real-Time Drowsiness Detection to Improve Driver Safety. Electronics 2022, 11, 3183.
  25. Kazuhiro, T.; Akihiko, U.; Yoshiki, M.; Taiji, S.; Kanya, T.; Shigeru, U. A Communication System for ALS Patients Using Eye Blink. Int. J. Appl. Electromagn. Mech. 2003, 18, 3–10.
Figure 1. Flowchart of the text entry system for the proposed iMouse-sMc.
Figure 2. Eye detections using facial landmarks: (a) positions of the 68 facial landmarks; (b) facial landmark detection; (c) open eyes with landmarks; and (d) closed eyes with landmarks.
Figure 3. Flowchart of the eyeblink detector for short and long blinks.
Figure 4. On-screen mouse panel selected with a blink of an eye via quadrant navigation.
Figure 5. On-screen keyboard arrangement.
Figure 6. Subjects who participated in the performance evaluations.
Figure 7. Test bench for the proposed user interface.
Figure 8. Eye detection performance of the user interface models as the brightness level changed: (a) [13], (b) [14], and (c) the iMouse-sMc.
Figure 9. Average processing time for 10 identification attempts per patient intention (word) by the user interface models of Attiah (2021) [13], Sushmitha (2021) [14], and the iMouse-sMc: (a) hot, (b) curtain, (c) absorption, and (d) change body position.
Figure 10. Average accuracy for 10 identification attempts per patient intention (word) by the user interface models of Attiah (2021) [13], Sushmitha (2021) [14], and the iMouse-sMc: (a) hot, (b) curtain, (c) absorption, and (d) change body position.
Table 1. Average accuracy per user interface model by brightness level.

Brightness Level       | [13]   | [14]   | iMouse-sMc
-----------------------|--------|--------|-----------
High (100%)            | 86.64% | 86.64% | 93.32%
Relatively high (80%)  | 79.96% | 73.28% | 86.64%
Medium (60%)           | 79.96% | 53.30% | 86.64%
Relatively low (30%)   | 0.00%  | 0.00%  | 79.96%
Table 2. Average processing time over the 10 subjects for identifying each patient intention (word) by the user interface models.

Patient Intention      | [13]  | [14]  | iMouse-sMc
-----------------------|-------|-------|-----------
Hot                    | 82 s  | 62 s  | 57 s
Curtain                | 196 s | 176 s | 132 s
Absorption             | 256 s | 220 s | 173 s
Change body position   | 418 s | 425 s | 342 s
Table 3. Average accuracy over the 10 subjects for identifying each patient intention (word) by the user interface models.

Patient Intention      | [13]   | [14]   | iMouse-sMc
-----------------------|--------|--------|-----------
Hot                    | 80.00% | 64.67% | 93.00%
Curtain                | 72.00% | 61.14% | 91.86%
Absorption             | 63.70% | 60.20% | 93.50%
Change body position   | 53.21% | 44.05% | 92.10%