Text Typing Using Blink-to-Alphabet Tree for Patients with Neuro-Locomotor Disabilities
Abstract
1. Introduction
- (1) We propose a highly intuitive typing method, the Blink-to-Alphabet Tree (BAT), which gradually narrows down to the desired letter through eye blinks over hierarchically grouped letters. For example, if a patient wants to type the letter ‘B’, they first select the group ‘ABC/DEF/GHI’, then the group ‘ABC’, and finally ‘B’ (see the first sketch after this list). Blinking the right eye moves the cursor between letters, and blinking the left eye selects a group of letters. To customize this method for different patients, the BAT can take modified hierarchical structures; for example, the size of each group can be changed, or the tree can be expanded to allow typing semantic groups of letters (‘for’, ‘to’, ‘ing’, ‘tive’, and so on).
- (2) To accelerate eye gesture-based typing, a large language model (LLM) [10,11] is also incorporated into the proposed method. Once the patient selects the ‘send’ button, the typed words or sentence are corrected for typos, word spacing, and upper/lower case via the OpenAI (ChatGPT) API [12] (see the second sketch after this list). As demonstrated in the experiments, this significantly reduces typing time and improves accuracy.
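To make the narrowing in (1) concrete, the following is a minimal sketch of BAT navigation, assuming a 27-symbol alphabet split into thirds at each level and a stream of blink events encoded as ‘R’ (right-eye blink, move cursor) or ‘L’ (left-eye blink, select). The event encoding and function names are our illustrative assumptions, not the authors’ implementation.

```python
# Minimal sketch of Blink-to-Alphabet Tree (BAT) navigation.
# Assumption: 27 symbols (A-Z plus space) narrowed 27 -> 9 -> 3 -> 1.

ALPHABET = [chr(c) for c in range(ord('A'), ord('Z') + 1)] + [' ']

def split3(items):
    """Split the current candidates into three equally sized groups."""
    k = len(items) // 3
    return [items[:k], items[k:2 * k], items[2 * k:]]

def bat_type_one(blink_events):
    """Consume blink events ('R' = move cursor, 'L' = select) until a
    single letter remains, then return it."""
    candidates = ALPHABET
    while len(candidates) > 1:
        groups = split3(candidates)
        cursor = 0
        for ev in blink_events:
            if ev == 'R':                 # right-eye blink: move the cursor
                cursor = (cursor + 1) % 3
            elif ev == 'L':               # left-eye blink: select the group
                candidates = groups[cursor]
                break
    return candidates[0]

# Typing 'B': select 'ABC/DEF/GHI', select 'ABC', move once, select 'B'.
assert bat_type_one(iter(['L', 'L', 'R', 'L'])) == 'B'
```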
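And a sketch of the correction step in (2), using the current OpenAI Python client. The model name, prompt wording, and helper name are illustrative assumptions; the paper states only that the OpenAI (ChatGPT) API is used to correct typos, spacing, and casing [12].

```python
# Sketch of the 'send'-button correction step via the OpenAI API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def correct_typed_text(raw_text: str) -> str:
    """Ask the model to fix typos, word spacing, and upper/lower case."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Correct typos, word spacing, and capitalization "
                        "in the user's text. Reply with the corrected text only."},
            {"role": "user", "content": raw_text},
        ],
    )
    return response.choices[0].message.content.strip()

# e.g., correct_typed_text("CHANGEBODYPOSITION") -> "Change Body Position"
```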
2. Related Works
- (1) EEG-based methods: Text typing based on a brain–computer interface (BCI) using electroencephalographic (EEG) signals has been proposed in the form of a virtual keyboard [13,14]. Eye blinks are recognized from the EEG signals to control the virtual keyboard. In [13,14], typing is performed by scanning one row of the keyboard at a time and selecting it with an eye blink, then scanning a group of three [13] or four [14] characters within the selected row and selecting it with another blink, and finally choosing the desired character. This approach has shown a high accuracy of about 95% [14], but typing is slow because the patient must wait for the desired group of characters to be displayed (e.g., it takes more than one minute to type five characters). In short, EEG-based eye-blink methods are limited by the difficulty of handling the BCI equipment and by the long time required to input the desired letters.
- (2) Camera image-based methods: Methods that rely only on camera images, without additional equipment such as EEG, have also been proposed [7,15,16,17,18]. Eye trackers have been widely used to allow patients to operate computers or input letters on their own [7]. When a patient lying down looks at a specific letter key on a separately mounted monitor, the eye position in the camera image is detected to type that letter. This process requires cumbersome calibration, and typing errors can occur if the head position shifts slightly [7]. Similarly to the methods in [13,14], Attiah and Khairullah [15] proposed a method in which letters are activated one by one on a keyboard, and the patient inputs the desired letter by blinking when it is activated (note that, unlike [13,14], the method in [15] uses only camera images, without EEG signals, for blink detection). This method has the advantage of inputting words with simple blinking, but it is still inefficient because the patient has to wait for the desired letter to be activated. Sushmitha et al. [17] proposed a more systematic blink-based method in which Morse code is entered as a combination of short and long blinks. However, since all letters from A to Z must be represented by different blink patterns, inputting letters can place a heavy burden on the patient. In [18], a simplified Morse code method was proposed that classifies the alphabet into four groups (‘A to G’, ‘H to N’, ‘O to U’, and ‘V to Z’) and significantly reduces the number of Morse codes. The patient first selects a group (e.g., ‘H to N’) by blinking twice in a row and then inputs the Morse code for the desired letter within that group. The seven defined Morse codes are shared across the four groups, so the patient has fewer blink patterns to learn, which reduces input errors. Nevertheless, Morse code-based typing [16,17,18] still requires a considerable amount of time. The proposed method can be more intuitive and less error-prone than existing blink-based methods because it groups the alphabet in a hierarchical structure and uses blinks for group navigation and selection in a visualized alphabet tree. In our experiments, inputting a sentence (i.e., ‘Change Body Position’) took about half the time required by the comparison methods.
3. Text Typing with Blink-to-Alphabet Tree
3.1. Text Navigation and Selection
- (1) Left eye landmark points: {263, 387, 385, 362, 380, 373}.
- (2) Right eye landmark points: {33, 160, 158, 133, 153, 144} (an eye openness sketch based on these points follows this list).
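These two index sets are the six-point eye contours commonly paired with MediaPipe Face Mesh for eye-aspect-ratio (EAR) style measurements: the first and fourth points are the horizontal eye corners, and the remaining four are upper/lower eyelid points. Below is a minimal sketch of an eye openness (EO) computation over these landmarks; the exact formula and blink threshold are assumptions, since the paper’s definition is not reproduced here.

```python
# Sketch of an eye-openness (EO) measure over MediaPipe Face Mesh landmarks,
# in the spirit of the eye aspect ratio (EAR). Point-order assumption:
# (outer corner, upper lid x2, inner corner, lower lid x2).
import numpy as np

LEFT_EYE = [263, 387, 385, 362, 380, 373]
RIGHT_EYE = [33, 160, 158, 133, 153, 144]

def eye_openness(landmarks: np.ndarray, idx: list[int]) -> float:
    """landmarks: (N, 2) array of pixel coordinates for one face.
    EO = (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|): mean vertical lid
    distance normalized by the horizontal eye width."""
    p1, p2, p3, p4, p5, p6 = landmarks[idx]
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

def is_closed(landmarks: np.ndarray, idx: list[int], thresh: float = 0.2) -> bool:
    """Hypothetical blink test: the eye counts as closed while EO stays
    below a calibrated threshold (0.2 is a common EAR starting point)."""
    return eye_openness(landmarks, idx) < thresh
```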
3.2. Words/Sentence Corrections Using LLM
4. Experiments
4.1. Experimental Setup
4.2. Comparison Results with State of the Arts
4.3. Further Analysis of Proposed Method
5. Conclusions
- Extend the study to ALS patients: future work should validate the robustness and usability of the method with actual ALS patients under real-world clinical conditions.
- Analyze robustness of EO under variable conditions by investigating how the eye openness (EO) metric behaves under varying lighting, camera angles, involuntary tremors, and user fatigue, as these can significantly affect blink detection reliability.
- Add multilingual support by evaluating how the method can be adapted for different languages (e.g., accented Latin letters, Cyrillic, Korean, etc.), which may require restructuring the tree or modifying character groupings in the BAT.
- Quantify eye blink detection accuracy by including performance metrics such as precision, recall, and false positive/negative rates for blink detection.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Brown, R.H.; Al-Chalabi, A. Amyotrophic lateral sclerosis. N. Engl. J. Med. 2017, 377, 162–172.
- Masrori, P.; Damme, P.V. Amyotrophic lateral sclerosis: A clinical review. Eur. J. Neurol. 2020, 27, 1918–1929.
- Beukelman, D.; Fager, S.; Nordness, A. Communication support for people with ALS. Neurol. Res. Int. 2011, 2011, 714693.
- Guo, X.; Liu, X.; Ye, S.; Liu, X.; Yang, X.; Fan, D. Eye movement abnormalities in amyotrophic lateral sclerosis. Brain Sci. 2022, 12, 489.
- Harris, D.R.; Goren, M. The ERICA eye gaze system versus manual letter board to aid communication in ALS/MND. Br. J. Neurosci. Nurs. 2009, 5, 227–230.
- Peters, B.; Bedrick, S.; Dudy, S.; Eddy, B.; Higger, M.; Kinsella, M.; McLaughlin, D.; Memmott, T.; Oken, B.; Quivira, F.; et al. SSVEP BCI and eye tracking use by individuals with late-stage ALS and visual impairments. Front. Hum. Neurosci. 2020, 14, 595890.
- Edughele, H.O.; Zhang, Y.; Muhammad-Sukki, F.; Vien, Q.; Morris-Cafiero, H.; Agyeman, M.O. Eye-tracking assistive technologies for individuals with amyotrophic lateral sclerosis. IEEE Access 2022, 10, 41952–41972.
- Xue, P.; Wang, C.; Lee, Q.; Jiang, G.; Wu, G. Rapid calibration method for head-mounted eye-tracker. In Proceedings of the International Conference on Frontiers of Applied Optics and Computer Engineering (AOCE 2024), Kunming, China, 27–28 January 2024; Volume 13080.
- Vadillo, M.A.; Street, C.N.H.; Beesley, T.; Shanks, D.R. A simple algorithm for the offline recalibration of eye-tracking data through best-fitting linear transformation. Behav. Res. Methods 2015, 47, 1365–1376.
- Zhao, W.X.; Zhou, K.; Li, J.; Tang, T. A survey of large language models. arXiv 2023, arXiv:2303.18223.
- Chang, Y.; Wang, X.; Wang, J.; Wu, Y.; Yang, L.; Zhu, K.; Chen, H.; Yi, X.; Wang, C.; Wang, Y.; et al. A survey on evaluation of large language models. ACM Trans. Intell. Syst. Technol. 2024, 15, 1–45.
- Santoso, G.; Setiawan, J.; Sulaiman, A. Development of OpenAI API based chatbot to improve user interaction on the JBMS website. G Tech 2023, 7, 1606–1615.
- Dobosz, K.; Stawski, K. Touchless virtual keyboard controlled by eye blinking and EEG signals. In Proceedings of the International Conference on Man-Machine Interactions, Kraków, Poland, 3–6 October 2017.
- Harshini, D.; Ranjitha, M.; Rushali, J.; Natarajan. A single electrode blink for text interface (BCI). In Proceedings of the IEEE International Conference for Innovation in Technology (INOCON), Bengaluru, India, 6–8 November 2020.
- Attiah, A.Z.; Khairullah, E.F. Eye-blink detection system for virtual keyboard. In Proceedings of the National Computing Colleges Conference, Taif, Saudi Arabia, 27–28 March 2021.
- Jumb, V.; Nalka, C.; Hussain, H.; Mathews, R. Morse code detection using eye blinks. Int. J. Trendy Res. Eng. Technol. 2021, 5, 33–37.
- Sushmitha, M.; Kolkar, N.; Suman, G.; Kulkarni, K. Morse code detector and decoder using eye blinks. In Proceedings of the Third International Conference on Inventive Research in Computing Applications (ICIRCA), Coimbatore, India, 2–4 September 2021.
- Kim, H.; Han, S.; Cho, J. iMouse: Augmentative communication with patients having neuro-locomotor disabilities using simplified Morse code. Electronics 2023, 12, 2782.
- Meghana, M.; Vasavi, M.; Shravani, D. Facial landmark detection with MediaPipe and creating animated Snapchat filters. Int. J. Innov. Eng. Manag. Res. 2021, 11, 98–107.
- Pothiraj, S.; Vadlamani, R.; Reddy, B.P.K. A non-intrusive method for driver drowsiness detection using facial landmarks. In 3C Tecnología: Glosas de Innovación Aplicadas a la Pyme, Edición Especial; 3Ciencias: Alcoy, Spain, 2021; pp. 71–85.
- Dewi, C.; Chen, R.; Chang, C.; Wu, S.; Jiang, X.; Yu, H. Eye aspect ratio for real-time drowsiness detection to improve driver safety. Electronics 2022, 11, 3183.
- Hollander, J.; Huette, S. Extracting blinks from continuous eye-tracking data in a mind wandering paradigm. Conscious. Cogn. 2022, 100, 103303.
- Cardillo, E.; Ferro, L.; Sapienza, G.; Li, C. Reliable eye-blinking detection with millimeter-wave radar glasses. IEEE Trans. Microw. Theory Tech. 2024, 72, 771–779.
3-digit ternary value vs. alphabet to type:

1st Digit | 2nd Digit | 3rd Digit | Alphabet to Type
---|---|---|---
Left | Left | Left | A |
Left | Left | Middle | B |
Left | Left | Right | C |
Left | Middle | Left | D |
Left | Middle | Middle | E |
Left | Middle | Right | F |
Left | Right | Left | G |
Left | Right | Middle | H |
Left | Right | Right | I |
Middle | Left | Left | J |
Middle | Left | Middle | K |
Middle | Left | Right | L |
Middle | Middle | Left | M |
Middle | Middle | Middle | N |
Middle | Middle | Right | O |
Middle | Right | Left | P |
Middle | Right | Middle | Q |
Middle | Right | Right | R |
Right | Left | Left | S |
Right | Left | Middle | T |
Right | Left | Right | U |
Right | Middle | Left | V |
Right | Middle | Middle | W |
Right | Middle | Right | X |
Right | Right | Left | Y |
Right | Right | Middle | Z |
Right | Right | Right | spacebar (marked with ‘-’) |
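The table above is equivalent to a base-3 code: with Left = 0, Middle = 1, Right = 2, the three digits index the 27 symbols A–Z plus the spacebar. A minimal sketch of this mapping follows (the helper names are ours, for illustration):

```python
# The 3-digit ternary code from the table: Left = 0, Middle = 1, Right = 2.
SYMBOLS = [chr(c) for c in range(ord('A'), ord('Z') + 1)] + ['-']  # '-' = spacebar
DIGITS = ['Left', 'Middle', 'Right']

def symbol_to_code(sym: str) -> tuple:
    """Return the (1st, 2nd, 3rd) ternary digits for a symbol."""
    n = SYMBOLS.index(sym)
    return (DIGITS[n // 9], DIGITS[(n // 3) % 3], DIGITS[n % 3])

def code_to_symbol(code: tuple) -> str:
    """Inverse mapping: three digits back to a symbol."""
    d1, d2, d3 = (DIGITS.index(d) for d in code)
    return SYMBOLS[9 * d1 + 3 * d2 + d3]

assert symbol_to_code('B') == ('Left', 'Left', 'Middle')
assert code_to_symbol(('Right', 'Right', 'Right')) == '-'
```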
Participants (the left/right eye image columns are not reproduced here):

ID No. | Gender | Glasses
---|---|---
ID 01 | Female | No
ID 02 | Male | No
ID 03 | Male | No
ID 04 | Female | Yes
ID 05 | Male | Yes
ID 06 | Male | No
ID 07 | Male | No
ID 08 | Female | No
ID 09 | Male | No
ID 10 | Male | No
Typing time (in seconds) for each patient intention:

PI No. | Patient Intention | [15] | [17] | [18] | Proposed
---|---|---|---|---|---
PI-01 | Hot | 82 | 62 | 57 | 40 |
PI-02 | Curtain | 196 | 176 | 132 | 87 |
PI-03 | Absorption | 256 | 220 | 173 | 112 |
PI-04 | Change Body Position | 418 | 425 | 342 | 186 |
Typing time (in seconds) of the proposed method for each participant:

ID No. | PI-01 | PI-02 | PI-03 | PI-04
---|---|---|---|---
ID 01 | 36 | 112 | 112 | 202 |
ID 02 | 43 | 70 | 132 | 136 |
ID 03 | 39 | 133 | 126 | 220 |
ID 04 | 42 | 82 | 71 | 130 |
ID 05 | 39 | 77 | 100 | 171 |
ID 06 | 49 | 78 | 161 | 163 |
ID 07 | 30 | 69 | 108 | 162 |
ID 08 | 34 | 80 | 88 | 195 |
ID 09 | 37 | 96 | 95 | 237 |
ID 10 | 47 | 74 | 130 | 239 |
Mean | 39.6 | 87.1 | 112.3 | 185.5
Std. deviation | 5.5 | 19.6 | 24.5 | 37.2
Typing accuracy for each patient intention:

PI No. | Patient Intention | [15] | [17] | [18] | Proposed (Without AI Correction) | Proposed (With AI Correction)
---|---|---|---|---|---|---
PI-01 | Hot | 80.00% | 64.67% | 93.00% | 90.00% | 90.00% |
PI-02 | Curtain | 72.00% | 61.14% | 91.86% | 98.33% | 100.00% |
PI-03 | Absorption | 63.70% | 60.20% | 93.50% | 100.00% | 100.00% |
PI-04 | Change Body Position | 53.21% | 44.05% | 92.10% | 100.00% | 100.00% |