Article

DyslexiaNet: Examining the Viability and Efficacy of Eye Movement-Based Deep Learning for Dyslexia Detection

by Ramis İleri 1, Çiğdem Gülüzar Altıntop 1,*, Fatma Latifoğlu 1,* and Esra Demirci 2
1 Department of Biomedical Engineering, Faculty of Engineering, Erciyes University, Kayseri 38280, Türkiye
2 Department of Child and Adolescent Psychiatry, Erciyes University Faculty of Medicine, Kayseri 38280, Türkiye
* Authors to whom correspondence should be addressed.
J. Eye Mov. Res. 2025, 18(5), 56; https://doi.org/10.3390/jemr18050056
Submission received: 10 July 2025 / Revised: 16 September 2025 / Accepted: 10 October 2025 / Published: 15 October 2025

Abstract

Dyslexia is a neurodevelopmental disorder that impairs reading, affecting 5–17.5% of children and representing the most common learning disability. Individuals with dyslexia experience decoding, reading fluency, and comprehension difficulties, hindering vocabulary development and learning. Early and accurate identification is essential for targeted interventions. Traditional diagnostic methods rely on behavioral assessments and neuropsychological tests, which can be time-consuming and subjective. Recent studies suggest that physiological signals, such as electrooculography (EOG), can provide objective insights into reading-related cognitive and visual processes. Despite this potential, there is limited research on how typeface and font characteristics influence reading performance in dyslexic children using EOG measurements. To address this gap, we investigated the most suitable typefaces for Turkish-speaking children with dyslexia by analyzing EOG signals recorded during reading tasks. We developed a novel deep learning framework, DyslexiaNet, using scalogram images from horizontal and vertical EOG channels, and compared it with AlexNet, MobileNet, and ResNet. Reading performance indicators, including reading time, blink rate, regression rate, and EOG signal energy, were evaluated across multiple typefaces and font sizes. Results showed that typeface significantly affects reading efficiency in dyslexic children. The BonvenoCF font was associated with shorter reading times, fewer regressions, and lower cognitive load. DyslexiaNet achieved the highest classification accuracy (99.96% for horizontal channels) while requiring lower computational load than other networks. These findings demonstrate that EOG-based physiological measurements combined with deep learning offer a non-invasive, objective approach for dyslexia detection and personalized typeface selection. 
This method can provide practical guidance for designing educational materials and support clinicians in early diagnosis and individualized intervention strategies for children with dyslexia.

1. Introduction

Dyslexia is a neurodevelopmental learning disorder that impairs reading and is the most common type of learning disability, affecting approximately 5–17.5% of children and accounting for 70–80% of those with significant learning difficulties [1,2]. Despite adequate intelligence and educational opportunities, dyslexic individuals experience decoding, reading fluency, and comprehension difficulties, hindering vocabulary development and overall learning [3,4]. Early identification is crucial for initiating targeted interventions and supporting educational outcomes.
Traditional dyslexia diagnosis relies on behavioral assessments and standardized tests, including measures of phonological awareness, reading fluency, comprehension, and cognitive functions such as working memory and processing speed [5,6,7]. While effective, these methods are often time-consuming, subjective, and may under-identify bilingual or morphologically complex language speakers [8]. Teachers and families play a critical role in noticing early reading difficulties, emphasizing the need for continuous observation and comprehensive evaluation [9,10].
Recent research highlights physiological measurements as a promising complementary approach for dyslexia detection. Among these, electrooculography (EOG) offers a non-invasive, cost-effective, and real-time method for tracking eye movements, including saccades, fixations, regressions, and blinks [11,12,13,14,15,16,17,18,19]. Unlike video-based systems, which require complex image processing algorithms [17], EOG provides a simpler and more accessible alternative, particularly in resource-constrained environments. Studies consistently show that dyslexic readers exhibit longer fixations, more regressions, and shorter saccades than typical readers [20,21]. For example, Bachmann et al. [22] had typically developing controls and dyslexic children read Italian texts in two different fonts, comparing reading fluency (number of syllables per second) and testing for statistical differences in mean and standard deviation values. Rello et al. [23] used a statistical model to predict readers (Spanish readers, aged 11 to 54) with and without dyslexia from eye-tracking measures, achieving 80.18% accuracy with features such as reading time, mean of fixations, and participant age. Furthermore, typographic features, such as font type, spacing, and character shape, can significantly influence reading efficiency, with specialized fonts like Dyslexie or OpenDyslexic designed to reduce common reading errors in dyslexic individuals [24,25,26]. These findings underscore the importance of typographic choices in educational and digital environments, suggesting that font design can serve as a low-cost, scalable intervention to support dyslexic readers.
Turkish varies from many other languages in two major ways. The first is that it has a completely transparent orthography. In other words, it is a language that is read exactly as it is written, with each letter of the alphabet representing a sound. The second is that it is an agglutinative language, which means it has a complex morphological structure [27]. According to [28], the integration of orthographic and morphological structures in Turkish does not provide readers with the predicted advantage of clear orthography. Unlike English, Turkish exhibits a near-perfect phoneme-to-grapheme correspondence, which minimizes decoding errors and allows researchers to isolate cognitive deficits unrelated to orthographic ambiguity [29]. Despite this regularity, dyslexic readers of Turkish still demonstrate characteristic reading impairments, including increased fixation durations, frequent regressions, and letter position errors [29,30,31]. The morphological richness of Turkish, with its extensive use of suffixes and compound word formation, further enables the examination of how dyslexia affects morphological parsing and lexical access. These features make Turkish a valuable language for cross-linguistic dyslexia research, offering insights into the interplay between language structure and reading disorders [29,30,31]. Comparative research on the influence of Turkish on dyslexia is considered necessary to adequately disclose the challenges that Turkish children face [32,33].
The diagnosis of dyslexia involves identifying and assessing reading-related learning impairments [34]. Traditional diagnostic and screening methods require professionals to conduct lengthy, face-to-face evaluations that measure reading and writing performance, including reading rate (words per minute), reading errors, writing mistakes, comprehension, pseudoword reading, and reading fluency [35]. In recent years, machine learning techniques have been increasingly applied to dyslexia detection, utilizing algorithms such as Support Vector Machines (SVM), Logistic Regression (LR), Artificial Neural Networks (ANN), Random Forests (RF), and k-Nearest Neighbors (KNN) [36]. These models are trained on features extracted from diverse signal modalities, including electroencephalography (EEG), electrooculography (EOG), and eye-tracking data.
Convolutional Neural Networks (CNNs) have demonstrated notable success in dyslexia detection, commonly employing data such as EEG, magnetic resonance imaging (MRI), handwriting samples, and eye movement recordings [37,38]. Studies have incorporated reading tasks in multiple languages, such as English, Spanish, and Swedish, to enhance generalizability [39,40,41]. For instance, Sait et al. [42] proposed a lightweight and interpretable deep learning framework integrating cross-modality data—specifically MRI, EEG, and handwriting images—for dyslexia detection, achieving 99.8% accuracy across five publicly available datasets and outperforming conventional CNNs and vision transformer architectures.
Building on these findings, the present study investigated whether dyslexia could be objectively detected by classifying EOG signals using deep neural networks. Children diagnosed with dyslexia read texts presented in various typefaces and font sizes through a controlled reading test system. This system, prepared in Turkish, allows the assessment of multiple typefaces and font sizes, providing a novel approach that combines physiological measurements with typographic analysis for dyslexia detection and educational support.
To the best of our knowledge, no study has yet explored the combination of EOG-based physiological measurements with typeface analysis in children with dyslexia. This study addresses this gap by developing a reading task-based test system and recording EOG signals while children read texts in multiple typefaces and font sizes. Signal processing techniques and a novel deep learning approach based on EOG scalograms, DyslexiaNet, were applied to classify dyslexic and typical readers and determine optimal typefaces for reading efficiency. The findings aim to provide objective data for personalized educational support and assist clinicians in the early diagnosis of dyslexia. The flow chart of this study is given in Figure 1.
The paper is organized as follows: Section 2 presents the detailed information and methodology of the proposed approach; Section 3 shows experimental results and the model performance; Section 4 proceeds with discussions; and Section 5 is the final section, concluding this paper.

2. Materials and Methods

2.1. Data Acquisition

In this paper, a reading task-based testing system was designed for data recording. Figure 2a illustrates the data acquisition system and electrode placement. EOG signals were recorded at a sampling frequency of 100 Hz. For pre-processing, the raw EOG signals were filtered with a 4th-order Butterworth band-pass filter (0.1–10 Hz) to remove noise. Figure 2b shows sample EOG signals from three random subjects among the second-, third-, and fourth-grade students.
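The pre-processing step described above can be sketched as follows. This is a minimal Python/SciPy illustration; the zero-phase `filtfilt` variant and the synthetic test signal are assumptions, since the study's exact implementation is not specified.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_eog(raw, fs=100.0, low=0.1, high=10.0, order=4):
    """Zero-phase 4th-order Butterworth band-pass (0.1-10 Hz) for a raw EOG trace."""
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, raw)

# Example: a 1 Hz eye-movement component with high-frequency noise and slow drift.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
raw = np.sin(2 * np.pi * 1.0 * t) + 0.4 * np.sin(2 * np.pi * 45 * t) + 2.0 * t / t[-1]
clean = preprocess_eog(raw, fs)
```

After filtering, the slow drift (below 0.1 Hz) and high-frequency noise (above 10 Hz) are both attenuated while the eye-movement band is preserved.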

2.2. Participants

Twenty-three children with dyslexia, aged 8–10 (13 females and 10 males), and 13 age- and sex-matched typically developing controls (TDC) were included in the study. All subjects were primary school second- (7–8 years old), third- (8–9 years old), and fourth-grade (9–10 years old) students. The TDC group received a standard clinical evaluation that included assessments of neurological, endocrine, and mental conditions. Children with dyslexia who also had other psychiatric illnesses, epilepsy, cerebral palsy, developmental delay, or other abnormalities of the central nervous system were excluded. The diagnosis of dyslexia was made by a child and adolescent psychiatrist using the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) criteria and the specific learning disability (SLD) battery, which includes subtests that evaluate literacy and basic arithmetic skills, tests that assess disorders or problems in visual perception and in ranking and sequencing skills, the hand-eye-ear test of Head, and an assessment of lateralization. Ethical approval for this study was obtained from the ethics committee of Erciyes University, Kayseri (approval number: 2018/565). Verbal consent was obtained from the children participating in the study, and written consent was obtained from their parents. Supplementary Figure S1 and Table S2 show the subjects’ characteristics.

2.3. Experimental Setup

Detailed information about the experimental setup and the EOG data acquisition system can be found in [38]. A total of 28 distinct reading tasks were prepared for this study. The texts were written in the participants’ native language (Turkish) and varied systematically in typeface and font size. The participant group consisted of students from the second, third, and fourth grades of primary school. To ensure age-appropriate content and reading complexity, different texts were selected for each grade level. All texts were sourced from official textbooks published by the Ministry of National Education of the Republic of Turkey.
To minimize the likelihood that participants had previously encountered the texts, materials were selected from textbooks intended for one grade level above the participant’s current grade. For example, second-grade students were presented with texts extracted from third-grade textbooks. Detailed information about the texts is given in Supplementary Table S1.
Font sizes were chosen to reflect the minimum and maximum values commonly used in Ministry-approved textbooks, ensuring ecological validity. Line spacing was standardized at 2.0 for all texts except for Text 28, which served as a control with different spacing.
During the experiment, each participant was asked to read a total of 28 texts. The texts were displayed sequentially using a PowerPoint presentation. Transitions between texts were manually controlled by a research assistant, who also monitored the participants’ reading progress. Upon completion of each text, the assistant stopped the recording, which was managed by the BIOPAC MP-36 system. The duration of each reading task was automatically recorded by the BIOPAC system, ensuring precise synchronization between the reading activity and physiological data acquisition. Participants were instructed to read aloud at their natural, everyday pace, without the use of any specialized reading techniques. The duration of the experiment varied across participants, depending on individual reading speed.
To mitigate fatigue and maintain attention, a break of 3 to 5 min was provided after every four texts. These sets of four texts shared the same typeface but differed in font size, allowing for controlled comparison within each typeface condition.

2.4. Reading Time

Reading times were calculated using the BIOPAC MP36 data recording system, which automatically inserts a marker when the reading of each text is finished. Figure 3 shows the determination of text reading times using BIOPAC; the red circled mark in the figure is the marker placed after each text reading. Text reading time was determined by calculating the time difference between two consecutive markers.

2.5. Number of Regressions

Several studies in the literature have reported that people with dyslexia make more regressions (re-readings) during reading than typical readers [43,44,45]. Given this, detecting regression movements in EOG signals may be important for dyslexia detection. As the eye moves from left to right while reading a text, the amplitude of the EOG signal changes: from the beginning of the line (left) to its end (right), the eyeballs approach the positive electrode, producing a positive amplitude in the EOG signal. If the reader returns to a previous or earlier word (to the left), the amplitude of the EOG signal suddenly drops to a negative value; this negative amplitude change constitutes a regression movement. In our previous study [46], we developed a method that automatically detects regression movements, and the same method was used to count regressions in this study.
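As a rough illustration of this idea (not the actual algorithm of [46]), a regression can be flagged whenever the horizontal EOG amplitude drops sharply between successive samples. The thresholds below are hypothetical:

```python
import numpy as np

def count_regressions(h_eog, drop_thresh=0.2, min_gap=30):
    """Count leftward (regressive) eye movements in a horizontal EOG trace.

    A regression is flagged when the signal falls by more than `drop_thresh`
    between successive samples; at least `min_gap` samples must separate two
    counted events. Both values are illustrative, not from the cited method.
    """
    diffs = np.diff(h_eog)
    candidates = np.where(diffs < -drop_thresh)[0]
    count, last = 0, -min_gap
    for idx in candidates:
        if idx - last >= min_gap:
            count += 1
            last = idx
    return count

# A staircase-like reading trace with two sudden leftward returns.
line = np.concatenate([
    np.linspace(0.0, 1.0, 200),   # reading left to right
    np.linspace(1.0, 0.5, 3),     # sudden return (regression 1)
    np.linspace(0.5, 1.2, 200),
    np.linspace(1.2, 0.7, 3),     # regression 2
    np.linspace(0.7, 1.5, 200),
])
```

On this synthetic trace the function counts the two abrupt negative excursions and ignores the slow positive drift of normal reading.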

2.6. Number of Blinks

Many studies have used EOG signals to detect blink movements [13,47,48,49,50]. In this study, the number of blinks was determined from the vertical EOG signals. First, the vertical EOG signals were filtered with a notch filter to remove 50 Hz noise, and then baseline correction was performed. After pre-processing, the peak, minimum, and up-crossing points of the vertical EOG signal were computed. In the literature, blink duration has been reported as 100–800 ms (0.1–0.8 s) [50]. In this study, a peak was counted as a blink if the time between the up-crossing and the crest point was in the range of 50–400 ms. In addition, an amplitude threshold was determined for each signal, and only peaks above this threshold were counted as blinks (dashed, horizontal, pink line in Figure 4); peaks below the threshold were not counted. Figure 4 illustrates the detection of blink movements.
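A simplified sketch of this rule follows. The per-signal threshold (mean plus two standard deviations) is an assumption for illustration; the 50–400 ms rise-time window matches the text.

```python
import numpy as np
from scipy.signal import find_peaks

def count_blinks(v_eog, fs=100.0, thresh=None, min_rise=0.05, max_rise=0.4):
    """Count blinks in a vertical EOG trace.

    A peak counts as a blink when its amplitude exceeds `thresh` and the rise
    time from the preceding threshold up-crossing to the crest lies within
    [min_rise, max_rise] seconds (50-400 ms, as described in the text).
    The default threshold (mean + 2*std) is illustrative only.
    """
    if thresh is None:
        thresh = v_eog.mean() + 2 * v_eog.std()
    peaks, _ = find_peaks(v_eog, height=thresh)
    blinks = 0
    for p in peaks:
        below = np.where(v_eog[:p] < thresh)[0]   # samples still under threshold
        if below.size == 0:
            continue
        rise = (p - below[-1]) / fs               # up-crossing-to-crest time (s)
        if min_rise <= rise <= max_rise:
            blinks += 1
    return blinks

# Synthetic example: two blink-like Gaussian peaks at t = 3 s and t = 7 s.
idx = np.arange(1000)
v = np.exp(-((idx - 300) ** 2) / 200.0) + np.exp(-((idx - 700) ** 2) / 200.0)
```

Both Gaussian peaks exceed the adaptive threshold and rise to their crests within the allowed window, so both are counted.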

2.7. Energy of EOG Signals

The EOG signal amplitude indicates how far the eyes have moved from the reference position and changes with saccades, blinks, regressions, and other eye-movement kinematics. The “energy of EOG signals” therefore relates to the kinetic energy involved in moving the eye from one fixation point to another. The energy of the EOG signals for each text was calculated using Equation (1):
E = \int_{-\infty}^{+\infty} x(t)^{2} \, dt \quad (1)
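In discrete form, with sampling frequency fs, the integral in Equation (1) reduces to a scaled sum of squared samples; a minimal sketch:

```python
import numpy as np

def eog_energy(x, fs=100.0):
    """Discrete approximation of E = integral of x(t)^2 dt (Equation (1))."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2) / fs)
```

For example, a constant signal of amplitude 1 lasting 1 s at 100 Hz has energy 1.0.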

2.8. Scalogram Images

In this study, because 2D CNN models were used, the one-dimensional EOG signals were transformed into two-dimensional scalogram images to enable effective classification. The continuous wavelet transform (CWT) is a time–frequency domain conversion method [51]. It transforms 1-D physiological signals such as EEG [52,53], EMG [54,55], and ECG into 2-D time–frequency spectra that can be used directly by the convolution layers. Scalogram images were created in MATLAB R2024a using the CWT at a resolution of 300 DPI. The mathematical formula for the CWT is expressed as:
CWT(a,b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{+\infty} x(t) \, \psi^{*}\!\left(\frac{t-b}{a}\right) dt \quad (2)
where x(t) is the time domain signal, ψ* (t) is the complex conjugate wavelet function, a is the scale coefficient, and b is the time-shift coefficient.
The scalogram shows the energy distribution of the EOG signal over the time shift (b) and scaling factor (a). It is calculated as the magnitude squared of the continuous wavelet transform, as given in Equation (3).
Energy_{CWT}(a,b) = \left| CWT(a,b) \right|^{2} \quad (3)
EOG signals were segmented into frames of 1000 samples each, and each frame was converted into a scalogram image. Figure 5a illustrates the conversion of EOG signals to scalogram images. In this study, 3000 scalogram images were used for each group, and the process was applied to both channel signals. Figure 5b shows the scalogram images of dyslexia and TDC subjects obtained with the CWT from the EOG signals recorded from the horizontal and vertical channels.
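The frame-to-scalogram step can be illustrated with a minimal NumPy Morlet CWT. The study used MATLAB's CWT; the wavelet choice, center frequency, and scale range below are assumptions for illustration:

```python
import numpy as np

def morlet_cwt(x, scales, w0=6.0):
    """|CWT(a, b)| of a real signal, returned with shape (len(scales), len(x)).

    Minimal Morlet implementation via discrete convolution; illustrative only.
    """
    out = np.empty((len(scales), len(x)))
    for i, a in enumerate(scales):
        t = np.arange(-4 * a, 4 * a + 1)                        # truncated support
        psi = np.exp(1j * w0 * t / a) * np.exp(-0.5 * (t / a) ** 2) / np.sqrt(a)
        out[i] = np.abs(np.convolve(x, np.conj(psi[::-1]), mode="same"))
    return out

# One 1000-sample EOG frame; scalogram energy is |CWT|^2 as in Equation (3).
frame = np.sin(2 * np.pi * 2.0 * np.arange(1000) / 100.0)       # 2 Hz at fs = 100 Hz
scalogram_energy = morlet_cwt(frame, np.arange(1, 33)) ** 2
```

Each 1000-sample frame thus yields one 2-D array that can be rendered as an image and fed to the CNN input layer.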

2.9. AlexNet

AlexNet was developed by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton in 2012 and won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) that year [56]. AlexNet consists of 5 convolutional layers, 3 fully connected layers, 3 max pooling layers, and one SoftMax layer, along with ReLU and dropout layers. The input size for AlexNet is 227 × 227 × 3.

2.10. ResNet50

ResNet (Residual Neural Network) [57] is specifically designed to address issues related to deepening traditional deep networks. To overcome these problems, it utilizes “skip connections”; that is, it directly adds the outputs of previous layers to the outputs of later layers. These connections facilitate the learning process of the network and stabilize the training of deeper networks.

2.11. MobileNet

MobileNet is an artificial neural network architecture developed by Google for computer vision tasks based on deep learning [58]. It is designed to provide efficient and lightweight modeling in resource-constrained environments such as mobile devices. MobileNet employs various techniques to optimize deep neural network architectures, particularly utilizing a structure called “depth-wise separable convolution” to significantly reduce computational costs while decreasing the model’s depth. This enables the creation of faster and lighter models while maintaining high levels of accuracy.

2.12. DyslexiaNet

A summary of the DyslexiaNet model framework is presented in Table 1. The model employs four convolutional layers together with supporting layers. The first CNN layer processes input scalogram images of size 28 × 28 × 3. The first convolutional layer (Conv1) has 16 filters with a stride of 1 and “same” padding. Conv1 is followed by batch normalization and ReLU layers, and then a 2 × 2 2D max pooling layer with a stride of 2. This structure is repeated four times with different numbers of filters: the second, third, and fourth convolutional layers have 32, 64, and 64 filters, respectively, each with “same” padding and a stride of 1. A dropout layer with a rate of 0.5 is used to prevent overfitting. The SoftMax layer is the final layer of the proposed network and predicts the class of an input image. The proposed DyslexiaNet model is illustrated in Figure 6, and its hyperparameters are given in Supplementary Table S3.
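Since the kernel sizes are not specified in the text but the padding is “same”, the spatial dimensions depend only on the pooling layers. A small sketch tracing the feature-map shapes through the four blocks (floor division at each 2 × 2, stride-2 pool is an assumption):

```python
def dyslexianet_shapes(h=28, w=28, channels=3, filters=(16, 32, 64, 64)):
    """Feature-map sizes through the four DyslexiaNet blocks described in Table 1.

    Each block: conv ("same" padding, stride 1) -> batch norm -> ReLU ->
    2x2 max pool with stride 2 (assumed to floor odd sizes).
    """
    shapes = [(h, w, channels)]
    for f in filters:
        h, w = h // 2, w // 2     # only the pooling changes the spatial size
        shapes.append((h, w, f))
    return shapes
```

Under these assumptions the activations shrink from (28, 28, 3) to (14, 14, 16), (7, 7, 32), (3, 3, 64), and finally (1, 1, 64) before dropout and the SoftMax classifier, which helps explain the model's low computational load.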

3. Results

3.1. Children with Dyslexia Have a Higher Reading Time

The average reading time of each text was calculated for each group separately. Then, we analyzed whether there was a significant difference between individuals with dyslexia and TDC individuals in terms of average reading time. Figure 7 shows the average reading time of the texts of individuals with dyslexia and TDC individuals. Analysis of reading times among dyslexic students across grade levels revealed distinct preferences for specific typeface and font size combinations. For second-grade students with dyslexia, the shortest average reading times were observed with texts formatted in TTKB Dik Temel ABC at 18 points (pt), Times New Roman at 16 pt, and BonvenoCF-Colored at 20 pt. In the third grade, BonvenoCF at 16 pt yielded the fastest reading time (34.956 s), followed closely by BonvenoCF-Colored at 20 pt (35.252 s). Across both second and third grades, texts presented in italicized fonts consistently resulted in longer reading times, suggesting that italic styling may hinder reading fluency in dyslexic children. Among fourth-grade students, a gradual decrease in average reading time was noted across texts 13 to 16, all of which were prepared using the BonvenoCF typeface. Notably, the 16th text, formatted in BonvenoCF at 20 pt, produced the lowest average reading time (33.324 s), indicating that this combination may be particularly effective for older dyslexic readers. Statistical analysis results are shown in Figure 8. The reading time of individuals with dyslexia is significantly higher than the reading time of TDC.

3.2. Children with Dyslexia Tend to Blink More

Another feature obtained from the EOG signals was the blink rate. The blink rate for each text was calculated for each group separately. Figure 9 shows the blink rates for the texts read by students with dyslexia and the TDC group. As can be seen from the figure, the blink rate of individuals with dyslexia is higher than that of the TDC group. Blink rate during reading was analyzed as an indicator of cognitive load and visual strain across dyslexic students in the second, third, and fourth grades. Among second-grade participants, the lowest blink rate was recorded during the reading of the text formatted in BonvenoCF at 22 pt. For third-grade students, Times New Roman at 16 pt, BonvenoCF at 20 pt, SofiaPro at 16 pt, and TTKB Dik Temel ABC elicited the fewest blinks, indicating a similar trend of reduced cognitive demand. In the fourth-grade cohort, as in the third grade, Times New Roman at 20 pt, SofiaPro at 16 pt, and TTKB Dik Temel ABC showed lower blink rates in the dyslexia group. Surprisingly, however, the TDC group showed a higher blink rate than the dyslexia group for some typefaces, such as BonvenoCF. Statistical analysis results are shown in Figure 10. Although the dyslexia group had a significantly higher blink rate than the TDC group overall, the difference in the fourth grade was smaller than in the second and third grades, where a large gap was seen between the two groups.

3.3. The Regression Rate Is Significantly Higher in the Dyslexia Group

The number of regressions in the EOG signals was determined using the method specified in Section 2.5. As with the other features, the regression rate for each text was calculated for each group separately, and statistical evaluations were made. Figure 11 depicts the regression rates for the texts read by individuals with dyslexia and the TDC group. Second-grade students exhibited reduced regression when reading texts presented in TTKB Dik Temel Abece at 20 pt and Times New Roman at 20 pt, suggesting that larger font sizes and specific typefaces may facilitate smoother reading for younger dyslexic readers. Among third-grade students, lower regression rates were observed with BonvenoCF at 16 pt, TTKB Dik Temel Abece at 20 pt, and Arial at 16 pt. For fourth-grade students, Times New Roman at 14 pt and a colored variant of Times New Roman at 20 pt were associated with reduced regression, suggesting that both font size and visual enhancements (e.g., color differentiation) may help mitigate reading difficulties. Notably, BonvenoCF demonstrated consistently low regression across font sizes ranging from 14 pt to 20 pt, outperforming other typefaces in terms of stability and effectiveness. This consistency highlights BonvenoCF as a potentially optimal typeface for dyslexic readers across multiple grade levels. Statistical test results are given in Figure 12 and reveal that children with dyslexia have a significantly higher regression rate than the TDC group.

3.4. The Energy of EOG Signals Shows an Increase in the Dyslexia Group

The energy of the horizontal EOG signals was computed for all texts, and the average EOG signal energy for each text was compared between individuals with dyslexia and TDC. The statistical test results are given in Figure 13: children with dyslexia have significantly higher EOG signal energy than TDC.

3.5. Classification Results

As mentioned in Section 2, we proposed a new, simple deep learning architecture to classify the scalogram images converted from the EOG signals. AlexNet, ResNet50, MobileNetV2, and DyslexiaNet were used to classify the healthy and dyslexic scalogram images, with the horizontal and vertical channel scalograms classified separately. The two channels are described in Supplementary Table S4. Three thousand scalograms per group were used with k-fold cross-validation (K = 5). The performance of each fold was computed separately, and the final performance was determined by averaging over all folds. Network performance was evaluated using accuracy, sensitivity, specificity, and F1-score, calculated from the confusion matrix (CM):
Accuracy = \frac{tp + tn}{tp + tn + fp + fn}
Sensitivity = \frac{tp}{tp + fn}
Specificity = \frac{tn}{tn + fp}
F1\text{-}score = \frac{2\,tp}{2\,tp + fp + fn}
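The metric formulas above transcribe directly into code; a small helper:

```python
def metrics_from_cm(tp, tn, fp, fn):
    """Accuracy, sensitivity, specificity, and F1-score from confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "f1": 2 * tp / (2 * tp + fp + fn),
    }
```

For example, 50 true positives, 40 true negatives, 10 false positives, and no false negatives give an accuracy of 0.9, a sensitivity of 1.0, and a specificity of 0.8.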
Table 2 shows the classification performance results for channel 1 (vertical). The accuracy, sensitivity, specificity, and F1-score values, as well as mean (± standard deviation) values, are given for 5-fold cross-validation. For AlexNet on the vertical channel, the average accuracy was 65.61%, and the average sensitivity, specificity, and F1-score were 54.43%, 76.8%, and 60.97%, respectively; AlexNet correctly classified 3937 out of 6000 scalogram images. The overlapped and 5-fold confusion matrices for the vertical channel are given in Supplementary Figure S2. For ResNet50, the average accuracy was 53.35%, and the average sensitivity, specificity, and F1-score were 55.56%, 51.13%, and 4.95%, respectively (Supplementary Figure S3); ResNet50 correctly classified 3201 out of 6000 scalogram images. For MobileNetV2, the average accuracy, sensitivity, specificity, and F1-score were 57.01%, 42.61%, 71.44%, and 49.70%, respectively (Supplementary Figure S4); MobileNetV2 correctly classified 3421 out of 6000 scalogram images. For DyslexiaNet, the average accuracy, sensitivity, specificity, and F1-score were 73.73%, 63.73%, 83.72%, and 70.50%, respectively. Its confusion matrix is presented in Figure 14.
Table 3 presents the channel 2 (horizontal) classification results. The AlexNet model achieved an average accuracy of 99.94%, with average sensitivity, specificity, and F1-score of 99.96%, 99.92%, and 99.94%, respectively. For ResNet50, the fold-averaged accuracy, sensitivity, specificity, and F1-score were 97.71%, 95.00%, 99.93%, and 97.43%, respectively. For MobileNetV2, the average accuracy, sensitivity, specificity, and F1-score were 99.80%, 99.63%, 99.63%, and 99.78%, respectively. The overlapped and 5-fold confusion matrices for the horizontal channel are given in Supplementary Figures S5–S7 for AlexNet, ResNet50, and MobileNetV2, respectively, and Figure 15 shows the overlapped and 5-fold confusion matrix for DyslexiaNet. AlexNet, ResNet50, and MobileNetV2 correctly classified 5997, 5863, and 5988 out of 6000 scalogram images, respectively. DyslexiaNet reached 99.96% classification accuracy, with sensitivity, specificity, and F1-score also at 99.96%. A comparison of all performance metrics for all networks is given in Figure 16a,b for the vertical and horizontal channels, respectively.
To assess computational workload, we also measured the training time of the CNN models, since longer training increases the computational load on the machine and can affect the results. Figure 17a shows the training time for each network on both channels. The vertical channel training time was significantly higher than the horizontal channel for AlexNet, MobileNet, and DyslexiaNet, whereas for ResNet50 it was significantly lower. Figure 17b compares the training times of the networks for the vertical and horizontal EOG signals. The mean training time on the horizontal channel was 193, 804, 1978, and 96 s for AlexNet, ResNet50, MobileNet, and DyslexiaNet, respectively; on the vertical channel, the corresponding times were 273, 744, 2195, and 102 s. These results reveal significant differences in computational efficiency across models for both channels.

4. Discussion

4.1. Typeface and Reading Performance in Dyslexia

Since no previous study has used EOG signals to determine the most suitable typeface, this study aimed to identify the writing characteristics most appropriate for educational use, addressing the lack of individualized learning material in the education of children with dyslexia. One of the most widely accepted characteristics of dyslexia is a slower reading speed compared with that of typically developing readers. Accordingly, the first parameter examined in this study was reading time. The results confirmed that children with dyslexia consistently required more time to read texts than their non-dyslexic peers, aligning with previous findings that associate dyslexia with slower and more effortful reading. Notably, among the typefaces tested, BonvenoCF was associated with the shortest average reading time for dyslexic readers, suggesting that font design can significantly influence reading efficiency. In contrast, the TDC group exhibited minimal variation in reading time across texts, indicating that font style had less impact on fluent readers.
Among the typefaces examined, Times New Roman, BonvenoCF, and TTKB Dik Temel Abece—the official font used in Ministry of National Education textbooks—emerged as particularly influential in shaping reading performance among dyslexic students. Second-grade students demonstrated improved reading efficiency with TTKB Dik Temel Abece and BonvenoCF, likely due to their familiarity with TTKB Dik Temel Abece in school materials. Across other grade levels, BonvenoCF consistently yielded better reading performance, particularly in third-grade students. Additionally, using colored syllables with BonvenoCF supported syllable-level decoding, reducing cognitive load and improving visual parsing.

4.2. Blink Behavior and Regression

In addition to reading time, blink behavior was analyzed as a potential physiological marker of reading difficulty. Statistical analysis revealed a significant difference in the average number of blinks between dyslexic and non-dyslexic children, with dyslexic readers exhibiting a higher blink rate during reading tasks. This supports the hypothesis that blink frequency may reflect increased cognitive load or visual stress in dyslexic individuals and could serve as a supplementary diagnostic indicator.
Another commonly held belief about dyslexia is that affected individuals tend to read words backward or revisit previously read text. This behavior, known as regression, was quantitatively assessed using eye-tracking data. The results demonstrated that children with dyslexia performed significantly more regression movements than their non-dyslexic counterparts across all font sizes and grade levels. These findings are consistent with prior research indicating that dyslexic readers exhibit more frequent regressive saccades and longer fixation durations, reflecting difficulties in word decoding and comprehension.
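Both indicators are simple ratios. Following the definitions used for Figures 9 and 11 (blink rate = blinks per minute of reading; regression rate = regressions per word), they can be computed as below, with hypothetical values for one reader on one text:

```python
def blink_rate(n_blinks, reading_time_s):
    """Blinks per minute of reading (as defined for Figure 9)."""
    return n_blinks / (reading_time_s / 60.0)

def regression_rate(n_regressions, n_words):
    """Regressive movements per word in the text (as defined for Figure 11)."""
    return n_regressions / n_words

# Hypothetical values for a single reader on a single text:
br = blink_rate(n_blinks=18, reading_time_s=90.0)    # 12 blinks per minute
rr = regression_rate(n_regressions=14, n_words=70)   # 0.2 regressions per word
```

Normalizing by reading time and text length makes the two groups comparable despite dyslexic readers' longer reading durations.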

4.3. EOG Signal Energy and Physiological Markers

To further investigate the physiological correlates of reading difficulty, EOG signal energy was analyzed. EOG signals increase in amplitude during eye movements such as blinking and regression. Therefore, lower EOG signal energy during reading is interpreted as an indicator of smoother and less effortful reading. The study found statistically significant differences in EOG signal energy between dyslexic and non-dyslexic children across all groups, with dyslexic readers showing higher energy levels. These results suggest that EOG signal energy may be a valuable biomarker for assessing reading difficulty and could enhance the accuracy of dyslexia diagnosis when combined with behavioral metrics.
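Signal energy here refers to the standard sum-of-squares definition (assumed to match the paper's usage): large deflections such as blinks and regressions raise the sum, so effortful reading yields higher energy.

```python
def signal_energy(samples):
    """Energy of a discrete signal: the sum of squared sample values.
    (Standard definition; assumed to match the paper's usage.)"""
    return sum(v * v for v in samples)

# A trace with a blink-like spike carries more energy than a smooth one:
smooth = signal_energy([0.1, -0.1, 0.1, -0.1])
spiky = signal_energy([0.1, -0.9, 1.2, -0.1])
```

Comparing the two toy traces shows why smoother, less effortful reading corresponds to lower EOG energy.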

4.4. Methodological Considerations and Previous Work

Traditional research has yet to effectively determine the role of cognitive skills in reading problems, probably because reading involves multiple interacting components that conventional statistical approaches cannot fully capture. As a result, the number of studies applying artificial intelligence (AI) and CNN methodologies has increased. Previous research has applied machine learning to a variety of data sources for dyslexia diagnosis, including MRI images [59], fMRI images [37], EEG signals [60], games [61,62], reading errors [63,64], facial images [65], eye movements [41,62,66], and handwriting [67,68]. Taş et al. [69] developed a machine learning model to predict dyslexia in Turkish-speaking children using audio recordings, achieving a high accuracy of 95.63%. While that work is one of the few studies conducted with Turkish children, it differs from the present study in that voice signals were used instead of EOG signals, and the methodology was entirely different.
We conducted pilot studies in Turkish to investigate the relationship between EOG signals and reading tasks. In our previous work [70], ten TDC subjects read a Turkish text in Times New Roman at 12 pt. EOG signals were recorded to identify retrieving-words/re-reading and line-skipping movements, which were then used as classifier features, achieving 98% classification accuracy with Random Forest and k-NN. This work was later expanded to include children with dyslexia [46], detecting retrieving-words/re-reading movements with 97.11% success and line-skipping movements with 93.96% success.
A preliminary study [71] tested whether reading performance changes when typefaces are modified. Using Times New Roman in four font sizes (16, 18, 20, and 22 pt) and horizontal EOG signals from 20 subjects, a 1D CNN classifier achieved 73.61% classification accuracy between TDC and dyslexic children. In addition, in our recent study [38], EOG signals from horizontal and vertical channels were evaluated separately, achieving 98.70% and 80.94% accuracy, respectively. The current study extends this work by using extracted features from EOG signals for typeface selection, rather than solely dyslexia classification.

4.5. Deep Learning Approach and Network Performance

We proposed a novel approach using scalogram images converted from EOG signals as input for three well-known CNN methods—AlexNet, MobileNet, ResNet—and our proposed network, DyslexiaNet. Both horizontal and vertical channels were analyzed separately. Horizontal movements capture reading-related saccades, fixations, and regressions, while vertical movements are relevant for line transitions. DyslexiaNet demonstrated slightly better performance for the horizontal channel and significantly higher accuracy for the vertical channel than other networks. The horizontal channel consistently outperformed the vertical channel due to its greater sensitivity to reading-related eye movements.
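The scalogram inputs are obtained by applying a continuous wavelet transform (CWT) to each EOG segment and keeping the coefficient magnitudes as an image. The sketch below implements a Morlet CWT in plain NumPy; the study's exact wavelet family, scale range, and image size are assumptions here, not taken from the paper:

```python
import numpy as np

def morlet_cwt_scalogram(x, scales, w0=6.0):
    """Minimal continuous wavelet transform with a Morlet wavelet,
    returning coefficient magnitudes as a (scales x time) scalogram.
    Illustrative only: wavelet choice and scale set are assumptions."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    t = np.arange(n) - n // 2
    scalogram = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        u = t / s
        # Morlet wavelet sampled at the current scale
        wavelet = np.exp(1j * w0 * u) * np.exp(-0.5 * u ** 2) / np.sqrt(s)
        coeffs = np.convolve(x, np.conj(wavelet)[::-1], mode="same")
        scalogram[i] = np.abs(coeffs)
    return scalogram

# Toy EOG-like trace: a slow oscillation plus a blink-like spike
sig = np.sin(np.linspace(0, 4 * np.pi, 256))
sig[120:136] += 3.0
img = morlet_cwt_scalogram(sig, scales=np.arange(2, 34))
```

Each such image is then resized to the CNN's input resolution, so both channels feed the networks in the same 2D format.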
Supplementary Table S5 summarizes the fundamental parameters of all networks used in this study. Training time analysis showed that DyslexiaNet required the shortest time for both channels, followed by AlexNet, while ResNet50 and MobileNetV2 took considerably longer. DyslexiaNet's simpler architecture, with only four convolutional layers, achieved high accuracy while minimizing computational load and avoiding the overfitting risk of deeper networks, especially for vertical-channel data. Moreover, compared with the 1D-CNN in our previous study [38], DyslexiaNet improved horizontal-channel classification accuracy from 96.70% to 99.96%.
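The computational advantage of a shallow network can be seen by tracing shapes and parameter counts through a four-stage conv-pool stack. The layout below (input size, filter counts, kernel sizes) is illustrative only; DyslexiaNet's actual configuration is listed in Supplementary Table S5:

```python
def conv2d_out(h, w, k, stride=1, pad=0):
    """Spatial output size of a convolution or pooling layer."""
    return (h + 2 * pad - k) // stride + 1, (w + 2 * pad - k) // stride + 1

def conv_params(c_in, c_out, k):
    """Weights plus biases of one convolutional layer."""
    return c_in * c_out * k * k + c_out

# Hypothetical 4-conv stack on 224x224 RGB scalograms
# (illustrative filter counts, not DyslexiaNet's actual ones):
h, w, c = 224, 224, 3
total = 0
for c_out in (16, 32, 64, 128):
    total += conv_params(c, c_out, k=3)
    h, w = conv2d_out(h, w, k=3, pad=1)     # 3x3 conv, padding keeps size
    h, w = conv2d_out(h, w, k=2, stride=2)  # 2x2 max-pool halves it
    c = c_out
```

Under these assumptions the convolutional backbone stays under 10^5 parameters, orders of magnitude below ResNet50's tens of millions, which is consistent with the shorter training times observed.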

4.6. Advantages and Implications

The main advantages of this proposed method are as follows:
  • The proposed method is non-invasive and objective, using EOG signals from children with dyslexia to determine the best typeface for each child.
  • Because multiple typefaces and fonts were used (28 texts spanning seven typefaces and four font sizes), the evaluation is more general; a single typeface would not be sufficient to reach a general conclusion.
  • A new deep neural network model was proposed to detect dyslexia using scalogram images of EOG signals recorded during reading tasks in different typefaces and fonts in Turkish-speaking children.
  • The proposed method is easy to use and can be applied in real time.
These findings highlight the potential for integrating physiological and behavioral indicators to guide individualized educational strategies and support clinicians in the early detection of dyslexia.

5. Conclusions

In conclusion, this study used EOG signals to determine the most suitable typefaces for children with dyslexia. In contrast to common typefaces such as Arial, BonvenoCF yielded faster reading and fewer reading mistakes, although individual children may differ; moreover, features such as blinking and re-reading distinguished children with dyslexia from the TDC group. Thus, by determining which typeface allows a child diagnosed with dyslexia to read more easily, faster, and more accurately, individualized materials can be prepared to improve the child's educational life.
These features can also support the diagnosis of dyslexia beyond the specific learning disability battery. A data-driven method was developed by analyzing EOG signals, offering an evaluation of dyslexia that reduces the subjectivity of the battery used in the clinic. The proposed method is non-invasive and objective, using EOG signals from individuals with dyslexia. The classification results obtained from EOG signals showed high accuracy in predicting dyslexia, and the proposed method can be implemented as a decision support system to assist physicians.

6. Limitations and Future Work

This study has some limitations. First, the relatively small sample size and the restriction of participants to a specific age and language group limit the generalizability of the findings. In addition, the dataset and modeling methods used may not fully reflect the multidimensional and complex nature of dyslexia. Future studies will incorporate broader and more heterogeneous sample groups, enable multimodal data integration, and include long-term follow-up. Moreover, this study primarily aimed to evaluate deep learning-based dyslexia detection and to provide key features (e.g., reading time, regression rate) for evaluating typeface and font effects. The typeface findings were exploratory, as pairwise statistical comparisons between fonts were not performed and text content was not strictly controlled. Future research should employ standardized texts and controlled font conditions to draw firmer conclusions.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jemr18050056/s1, Table S1: Detailed information about the texts; Table S2: Subject’s characteristics; Figure S1: A summary of the subjects according to their health status; Table S3: Hyperparameters of the DyslexiaNet; Table S4: Definition of two different channels; Figure S2: Channel 1 overlapping and 5-fold confusion matrices for AlexNet: (a) overlapped CM, (b) Fold 1 CM, (c) Fold 2 CM, (d) Fold 3 CM, (e) Fold 4 CM, (f) Fold 5 CM; Figure S3: Channel 1 overlapping and 5-fold confusion matrices for ResNet50: (a) overlapped CM, (b) Fold 1 CM, (c) Fold 2 CM, (d) Fold 3 CM, (e) Fold 4 CM, (f) Fold 5 CM; Figure S4: Channel 1 overlapping and 5-fold confusion matrices for MobileNet: (a) overlapped CM, (b) Fold 1 CM, (c) Fold 2 CM, (d) Fold 3 CM, (e) Fold 4 CM, (f) Fold 5 CM; Figure S5: Channel 2 overlapping and 5-fold confusion matrices for AlexNet: (a) overlapped CM, (b) Fold 1 CM, (c) Fold 2 CM, (d) Fold 3 CM, (e) Fold 4 CM, (f) Fold 5 CM; Figure S6: Channel 2 overlapping and 5-fold confusion matrices for ResNet50: (a) overlapped CM, (b) Fold 1 CM, (c) Fold 2 CM, (d) Fold 3 CM, (e) Fold 4 CM, (f) Fold 5 CM; Figure S7: Channel 2 overlapping and 5-fold confusion matrices for MobileNet: (a) overlapped CM, (b) Fold 1 CM, (c) Fold 2 CM, (d) Fold 3 CM, (e) Fold 4 CM, (f) Fold 5 CM; Table S5: The fundamental parameters of networks.

Author Contributions

Conceptualization, R.İ. and F.L.; methodology, R.İ. and Ç.G.A.; validation, F.L. and E.D.; formal analysis, F.L. and E.D.; investigation, R.İ. and Ç.G.A.; data curation, R.İ. and Ç.G.A.; writing—original draft preparation, R.İ. and Ç.G.A.; writing—review and editing, F.L. and E.D.; visualization, R.İ. and Ç.G.A.; supervision, F.L.; project administration, F.L. All authors have read and agreed to the published version of the manuscript.

Funding

This study has been supported by the Scientific and Technological Research Council of Turkey (TÜBİTAK) (Grant number 119E055).

Institutional Review Board Statement

The Human Research Ethics Committee of Erciyes University endorsed ethical approval for this study with file number 2018/565.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patient(s) to publish this paper.

Data Availability Statement

Data supporting the study’s findings are accessible from the corresponding author upon reasonable request.

Acknowledgments

The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Catts, H.W.; Terry, N.P.; Lonigan, C.J.; Compton, D.L.; Wagner, R.K.; Steacy, L.M.; Farquharson, K.; Petscher, Y. Revisiting the definition of dyslexia. Ann. Dyslexia 2024, 74, 282–302. [Google Scholar] [CrossRef]
  2. McDowell, M. Specific learning disability. J. Paediatr. Child Health 2018, 54, 1077–1083. [Google Scholar] [CrossRef]
  3. Wajuihian, S.O.; Naidoo, K.S. Dyslexia: An overview. Afr. Vis. Eye Health 2011, 70, 89–98. [Google Scholar] [CrossRef]
  4. Lim, W.W.; Yeo, K.J.; Handayani, L. A systematic review on interventions for children with dyslexia. Int. J. Eval. Res. Educ. 2023, 2252, 1675. [Google Scholar] [CrossRef]
  5. Miciak, J.; Fletcher, J.M. The Critical Role of Instructional Response for Identifying Dyslexia and Other Learning Disabilities. J. Learn. Disabil. 2020, 53, 343–353. [Google Scholar] [CrossRef]
  6. Rice, M.; Gilson, C.B. Dyslexia Identification: Tackling Current Issues in Schools. Interv. Sch. Clin. 2023, 58, 205–209. [Google Scholar] [CrossRef]
  7. Yuzaidey, N.A.M.; Din, N.C.; Ahmad, M.; Ibrahim, N.; Razak, R.A.; Harun, D. Interventions for children with dyslexia: A review on current intervention methods. Med. J. Malays. 2018, 73, 311–320. [Google Scholar]
  8. Ortiz, A.; Martinez-Murcia, F.J.; Luque, J.L.; Giménez, A.; Morales-Ortega, R.; Ortega, J. Dyslexia Diagnosis by EEG Temporal and Spectral Descriptors: An Anomaly Detection Approach. Int. J. Neural Syst. 2020, 30, 2050029. [Google Scholar] [CrossRef] [PubMed]
  9. Fletcher, J.M.; Francis, D.J.; Foorman, B.R.; Schatschneider, C. Early Detection of Dyslexia Risk: Development of Brief, Teacher-Administered Screens. Learn. Disabil. Q. 2021, 44, 145–157. [Google Scholar] [CrossRef] [PubMed]
  10. Tosun, D.; Arikan, S.; Babür, N. Teachers’ Knowledge and Perception about Dyslexia: Developing and Validating a Scale. Int. J. Assess. Tools Educ. 2021, 8, 342–356. [Google Scholar] [CrossRef]
  11. Chakraborty, S.; Dasgupta, A.; Routray, A. Localization of eye Saccadic signatures in Electrooculograms using sparse representations with data driven dictionaries. Pattern Recognit. Lett. 2020, 139, 104–111. [Google Scholar] [CrossRef]
  12. Mulam, H.; Mudigonda, M. EOG-based eye movement recognition using GWO-NN optimization. Biomed. Tech. 2020, 65, 11–22. [Google Scholar] [CrossRef]
  13. Kumar, D.; Sharma, A. Electrooculogram-based virtual reality game control using blink detection and gaze calibration. In Proceedings of the 2016 International Conference on Advances in Computing, Communications and Informatics, ICACCI 2016, Jaipur, India, 21–24 September 2016. [Google Scholar] [CrossRef]
  14. Fatourechi, M.; Bashashati, A.; Ward, R.K.; Birch, G.E. EMG and EOG artifacts in brain computer interface systems: A survey. Clin. Neurophysiol. 2007, 118, 480–494. [Google Scholar] [CrossRef]
  15. Usakli, A.B.; Gurkan, S.; Aloise, F.; Vecchiato, G.; Babiloni, F. On the use of electrooculogram for efficient human computer interfaces. Comput. Intell. Neurosci. 2010, 2010, 135629. [Google Scholar] [CrossRef]
  16. Usakli, A.B.; Gurkan, S. Design of a novel efficient human–computer interface: An electrooculagram based virtual keyboard. IEEE Trans. Instrum. Meas. 2010, 59, 2099–2108. [Google Scholar] [CrossRef]
  17. Lee, K.R.; Chang, W.D.; Kim, S.; Im, C.H. Real-time eye-writing recognition using electrooculogram. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 37–48. [Google Scholar] [CrossRef] [PubMed]
  18. Latifoǧlu, F.; Esas, M.Y.; Demirci, E. Diagnosis of attention-deficit hyperactivity disorder using EOG signals: A new approach. Biomed. Tech. 2020, 65, 149–164. [Google Scholar] [CrossRef]
  19. Altıntop, Ç.G.; Latifoğlu, F.; Akın, A.K. Can patients in deep coma hear us? Examination of coma depth using physiological signals. Biomed. Signal Process. Control 2022, 77, 103756. [Google Scholar] [CrossRef]
  20. Eden, G.F.; Stein, J.F.; Wood, H.M.; Wood, F.B. Differences in eye movements and reading problems in dyslexic and normal children. Vis. Res. 1994, 34, 1345–1358. [Google Scholar] [CrossRef] [PubMed]
  21. Martos, F.J.; Vila, J. Differences in eye movements control among dyslexic, retarded and normal readers in the Spanish population. Read. Writ. 1990, 2, 175–188. [Google Scholar] [CrossRef]
  22. Bachmann, C.; Mengheri, L. Dyslexia and fonts: Is a specific font useful? Brain Sci. 2018, 8, 89. [Google Scholar] [CrossRef]
  23. Rello, L.; Ballesteros, M. Detecting readers with dyslexia using machine learning with eye tracking measures. In Proceedings of the W4A 2015—12th Web for All Conference, Florence, Italy, 18–20 May 2015. [Google Scholar] [CrossRef]
  24. Rello, L.; Baeza-Yates, R. The effect of font type on screen readability by people with dyslexia. ACM Trans. Access. Comput. 2016, 8, 1–33. [Google Scholar] [CrossRef]
  25. Felicia, G. December the Science Behind Dyslexia Fonts and Their Effectiveness. 2024. Available online: https://dyslexichelp.org/why-does-dyslexia-font-work/ (accessed on 8 August 2025).
  26. Kuster, S.M.; van Weerdenburg, M.; Gompel, M.; Bosman, A.M.T. Dyslexie font does not benefit reading in children with or without dyslexia. Ann. Dyslexia 2018, 68, 25–42. [Google Scholar] [CrossRef]
  27. Aksan, D. Her Yönüyle Dil: Ana Çizgileriyle Dilbilim; Turk Dil Kurumu: Ankara, Türkiye, 1995. [Google Scholar]
  28. Kargin, T.; Güldenoğlu, B.; Sümer, H.M. Morfolojik Farkındalık Becerilerinin Okuma Sürecindeki Rolünün Gelişimsel Bakış Açısıyla İncelenmesi: İşiten ve İşitme Engelli Okuyuculardan Bulgular. Ank. Üniversitesi Eğitim Bilim. Fakültesi Özel Eğitim Derg. 2019, 20, 339–367. [Google Scholar] [CrossRef]
  29. Güven, S.; Friedmann, N. Developmental Letter Position Dyslexia in Turkish, a Morphologically Rich and Orthographically Transparent Language. Front. Psychol. 2019, 10, 2401. [Google Scholar] [CrossRef]
  30. Acartürk, C.; Özkan, A.; Pekçetin, T.N.; Ormanoğlu, Z.; Kırkıcı, B. TURead: An eye movement dataset of Turkish reading. Behav. Res. Methods 2024, 56, 1793–1816. [Google Scholar] [CrossRef]
  31. Güven, S.; Friedmann, N. Vowel dyslexia in Turkish: A window to the complex structure of the sublexical route. PLoS ONE 2021, 16, e0249016. [Google Scholar] [CrossRef]
  32. Sümer Dodur, H.M.; Altindağ Kumaş, Ö. Knowledge and beliefs of classroom teachers about dyslexia: The case of teachers in Turkey. Eur. J. Spec. Needs Educ. 2021, 36, 593–609. [Google Scholar] [CrossRef]
  33. Yavuz, T.; Yavuz, I.S.; Deveci, B.; Fidan, T. Dyslexia in education in Turkey. In The Routledge International Handbook of Dyslexia in Education; Taylor&Francis Group, Routledge: London, UK, 2022. [Google Scholar] [CrossRef]
  34. Schulte-Körne, G. The Prevention, Diagnosis, and Treatment of Dyslexia. Dtsch. Ärzteblatt Int. 2010, 107, 718. [Google Scholar] [CrossRef] [PubMed]
  35. Shaywitz, S.E.; Shaywitz, B.A. Dyslexia (specific reading disability). Biol. Psychiatry 2005, 57, 1301–1309. [Google Scholar] [CrossRef] [PubMed]
  36. Usman, O.L.; Muniyandi, R.C.; Omar, K.; Mohamad, M. Advance Machine Learning Methods for Dyslexia Biomarker Detection: A Review of Implementation Details and Challenges. IEEE Access 2021, 9, 36879–36897. [Google Scholar] [CrossRef]
  37. Zahia, S.; Garcia-Zapirain, B.; Saralegui, I.; Fernandez-Ruanova, B. Dyslexia detection using 3D convolutional neural networks and functional magnetic resonance imaging. Comput. Methods Programs Biomed. 2020, 197, 105726. [Google Scholar] [CrossRef]
  38. Ileri, R.; Latifoğlu, F.; Demirci, E. A novel approach for detection of dyslexia using convolutional neural network with EOG signals. Med. Biol. Eng. Comput. 2022, 60, 3041–3055. [Google Scholar] [CrossRef]
  39. Rello, L.; Romero, E.; Ali, A.; Williams, K.; Bigham, J.P.; White, N.C. Screening dyslexia for english using HCI measures and machine learning. In ACM International Conference Proceeding Series; Association for Computing Machinery: New York, NY, USA, 2018. [Google Scholar] [CrossRef]
  40. Spoon, K.; Siek, K.; Crandall, D.; Fillmore, M. Can We (and Should We) Use AI to Detect Dyslexia in Children’s Handwriting? In Proceedings of the International Conference on Machine Learning AI for Social Good Workshop (NeurIPS 2019), Long Beach, CA, USA, 14–15 June 2019. [Google Scholar]
  41. Benfatto, M.N.; Seimyr, G.Ö.; Ygge, J.; Pansell, T.; Rydberg, A.; Jacobson, C. Screening for dyslexia using eye tracking during reading. PLoS ONE 2016, 11, e0165508. [Google Scholar] [CrossRef]
  42. Nerušil, B.; Polec, J.; Škunda, J.; Kačur, J. Eye tracking based dyslexia detection using a holistic approach. Sci. Rep. 2021, 11, 15687. [Google Scholar] [CrossRef] [PubMed]
  43. Rayner, K.; Fischer, M.H. Mindless reading revisited: Eye movements during reading and scanning are different. Percept. Psychophys. 1996, 58, 734–747. [Google Scholar] [CrossRef] [PubMed]
  44. Schmeisser, E.T.; Mcdonough, J.M.; Bond, M.; Hislop, P.D.; Epstein, A.D. Fractal analysis of eye movements during reading. Optom. Vis. Sci. 2001, 78, 805–814. [Google Scholar] [CrossRef]
  45. Biscaldi, M.; Fischer, B.; Aiple, F. Saccadic eye movements of dyslexic and normal reading children. Perception 1994, 23, 45–64. [Google Scholar] [CrossRef]
  46. Latifoğlu, F.; İleri, R.; Demirci, E. Assessment of dyslexic children with EOG signals: Determining retrieving words/re-reading and skipping lines using convolutional neural networks. Chaos Solitons Fractals 2021, 145, 110721. [Google Scholar] [CrossRef]
  47. Sammaiah, A.; Narsimha, B.; Suresh, E.; Sanjeeva Reddy, M. On the performance of wavelet transform improving Eye blink detections for BCI. In Proceedings of the 2011 International Conference on Emerging Trends in Electrical and Computer Technology, ICETECT 2011, Nagercoil, India, 23–24 March 2011. [Google Scholar] [CrossRef]
  48. Schleicher, R.; Galley, N.; Briest, S.; Galley, L. Blinks and saccades as indicators of fatigue in sleepiness warnings: Looking tired? Ergonomics 2008, 51, 982–1010. [Google Scholar] [CrossRef] [PubMed]
  49. Kong, X.; Wilson, G.F. A new EOG-based eyeblink detection algorithm. Behav. Res. Methods Instrum. Comput. 1998, 30, 713–719. [Google Scholar] [CrossRef]
  50. Venkataramanan, S.; Prabhat, P.; Choudhury, S.R.; Nemade, H.B.; Sahambi, J. Biomedical instrumentation based on Electrooculogram (EOG) signal processing and application to a hospital alarm system. In Proceedings of the 2005 International Conference on Intelligent Sensing and Information Processing, ICISIP’05, Melbourne, Australia, 5–8 December 2005. [Google Scholar] [CrossRef]
  51. Sinha, S.; Routh, P.S.; Anno, P.D.; Castagna, J.P. Spectral decomposition of seismic data with continuous-wavelet transform. Geophysics 2005, 70, P19–P25. [Google Scholar] [CrossRef]
  52. Darvishi, S.; Al-Ani, A. Brain-computer interface analysis using continuous wavelet transform and adaptive neuro-fuzzy classifier. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology—Proceedings, Lyon, France, 22–26 August 2007. [Google Scholar] [CrossRef]
  53. Türk, Ö.; Özerdem, M.S. Epilepsy detection by using scalogram based convolutional neural network from eeg signals. Brain Sci. 2019, 9, 115. [Google Scholar] [CrossRef] [PubMed]
  54. Leao, R.N.; Burne, J.A. Continuous wavelet transform in the evaluation of stretch reflex responses from surface EMG. J. Neurosci. Methods 2004, 133, 115–125. [Google Scholar] [CrossRef]
  55. Wang, T.; Lu, C.; Sun, Y.; Yang, M.; Liu, C.; Ou, C. Automatic ECG classification using continuous wavelet transform and convolutional neural network. Entropy 2021, 23, 119. [Google Scholar] [CrossRef]
  56. Elhassouny, A.; Smarandache, F. The History Began from AlexNet A. In Proceedings of the International Conference of Computer Science and Renewable Energies (ICCSRE), Agadir, Morocco, 22–24 July 2019. [Google Scholar]
  57. Shafiq, M.; Gu, Z. Deep Residual Learning for Image Recognition: A Survey. Appl. Sci. 2022, 12, 8972. [Google Scholar] [CrossRef]
  58. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861. [Google Scholar] [CrossRef]
  59. Shaywitz, B.A.; Lyon, G.R.; Shaywitz, S.E. The role of functional magnetic resonance imaging in understanding reading and dyslexia. Dev. Neuropsychol. 2006, 30, 613–632. [Google Scholar] [CrossRef] [PubMed]
  60. Christodoulides, P.; Miltiadous, A.; Tzimourta, K.D.; Peschos, D.; Ntritsos, G.; Zakopoulou, V.; Giannakeas, N.; Astrakas, L.G.; Tsipouras, M.G.; Tsamis, K.I.; et al. Classification of EEG signals from young adults with dyslexia combining a Brain Computer Interface device and an Interactive Linguistic Software Tool. Biomed. Signal Process. Control 2022, 76, 103646. [Google Scholar] [CrossRef]
  61. Rauschenberger, M.; Rello, L.; Baeza-Yates, R.; Bigham, J.P. Towards language independent detection of dyslexia with a web-based game. In Proceedings of the 15th Web for All Conference: Internet of Accessible Things 2018, W4A 2018, Lyon, France, 23–25 April 2018. [Google Scholar] [CrossRef]
  62. Rauschenberger, M.; Baeza-Yates, R.; Rello, L. A Universal Screening Tool for Dyslexia by a Web-Game and Machine Learning. Front. Comput. Sci. 2022, 3, 628634. [Google Scholar] [CrossRef]
  63. Ullah Khan, R.; Lee, J.; Cheng, A.; Bee, O.Y. Machine Learning and Dyslexia: Diagnostic and Classification System (DCS) for Kids with Learning Disabilities. Int. J. Eng. Technol. 2018, 7, 97–100. [Google Scholar]
  64. Jothi Prabha, A.; Bhargavi, R. Prediction of Dyslexia from Eye Movements Using Machine Learning. IETE J. Res. 2022, 68, 814–823. [Google Scholar] [CrossRef]
  65. Abdul Hamid, S.S.; Admodisastro, N.; Manshor, N.; Kamaruddin, A.; Ghani, A.A.A. Dyslexia adaptive learning model: Student engagement prediction using machine learning approach. In Advances in Intelligent Systems and Computing; Springer International Publishing: Cham, Switzerland, 2018; Volume 700. [Google Scholar] [CrossRef]
  66. El Hmimdi, A.E.; Ward, L.M.; Palpanas, T.; Kapoula, Z. Predicting dyslexia and reading speed in adolescents from eye movements in reading and non-reading tasks: A machine learning approach. Brain Sci. 2021, 11, 1337. [Google Scholar] [CrossRef]
  67. Aldehim, G.; Rashid, M.; Alluhaidan, A.S.; Sakri, S.; Basheer, S. Deep Learning for Dyslexia Detection: A Comprehensive CNN Approach with Handwriting Analysis and Benchmark Comparisons. J. Disabil. Res. 2024, 3, 20240010. [Google Scholar] [CrossRef]
  68. Spoon, K.; Crandall, D.; Siek, K. Towards Detecting Dyslexia in Children’s Handwriting Using Neural Networks. In Proceedings of the International Conference on Machine Learning AI for Social Good Workshop, Long Beach, CA, USA, 28 July 2019. [Google Scholar]
  69. Taş, T.; Bülbül, M.A.; HaşimOğlu, A.; Meral, Y.; Çalişkan, Y.; Budagova, G.; Kutlu, M. A machine learning approach for dyslexia detection using Turkish audio records. Turk. J. Electr. Eng. Comput. Sci. 2023, 31, 892–907. [Google Scholar] [CrossRef]
  70. Latifoglu, F.; Ileri, R.; Demirci, E.; Altintop, C.G. Detection of Reading Movement from EOG Signals. In Proceedings of the 2020 IEEE International Symposium on Medical Measurements and Applications (MeMeA), Bari, Italy, 1 June–1 July 2020. [Google Scholar] [CrossRef]
  71. Ileri, R.; Latifoglu, F.; Demirci, E. New Method to Diagnosis of Dyslexia Using 1D-CNN. In Proceedings of the TIPTEKNO 2020—Tip Teknolojileri Kongresi—2020 Medical Technologies Congress, TIPTEKNO 2020, Antalya, Turkey, 19–20 November 2020. [Google Scholar] [CrossRef]
Figure 1. Overview of the proposed methodology.
Figure 2. (a) The EOG electrode placement; (b) EOG signals of randomly selected subjects while reading text: (b.a) horizontal EOG from a TDC subject, (b.b) vertical EOG from a TDC subject, (b.c) horizontal EOG from a dyslexic subject, (b.d) vertical EOG from a dyslexic subject.
Figure 3. Calculation of reading time using BIOPAC.
Figure 4. Determination of blink movement.
Figure 5. (a) The process of converting EOG signals to scalogram images; (b) scalograms of EOG signals from the horizontal channel (dyslexic and typically developing subjects) and the vertical channel (dyslexic and typically developing controls).
Figure 6. Architecture of DyslexiaNet.
Figure 7. The average reading time results for second- (a), third- (b), and fourth-grade (c) subjects. The x-axis represents text information, and the y-axis represents average reading time in seconds.
Figure 8. Children with dyslexia have significantly higher reading times. The statistical results for second- (a), third- (b), and fourth-grade (c) subjects with independent-sample t-test; ****: p < 0.0001. Statistical analysis was performed using GraphPad Prism 10.0® software.
Figure 9. The blink rate results for second (a), third (b), and fourth (c) grade subjects. The x-axis represents text information, and the y-axis represents blink rate. The blink rate was calculated by dividing the average number of blinks in each text by the reading time in minutes of that text.
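The blink-rate definition in the Figure 9 caption (blinks divided by reading time in minutes) can be written directly:

```python
def blink_rate(num_blinks, reading_time_seconds):
    # Blinks per minute, per the Figure 9 caption.
    return num_blinks / (reading_time_seconds / 60.0)

print(blink_rate(12, 90))  # 12 blinks over a 90 s reading -> 8.0 blinks/min
```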
Figure 10. Children with dyslexia have significantly higher blink rates. Statistical results for second (a), third (b), and fourth (c) grade subjects with an independent-samples t-test; ****: p < 0.0001, *: p < 0.05. Statistical analysis was performed using GraphPad Prism 10.0® software.
Figure 11. The regression rate results for second (a), third (b), and fourth (c) grade subjects. The x-axis represents text information, and the y-axis represents regression rate. The regression rate was calculated by dividing the average number of regressions by the number of words in the text.
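Likewise, the regression-rate definition in the Figure 11 caption (regressions divided by the number of words in the text) is:

```python
def regression_rate(num_regressions, num_words):
    # Regressions per word, per the Figure 11 caption.
    return num_regressions / num_words

print(regression_rate(15, 60))  # 15 regressions in a 60-word text -> 0.25
```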
Figure 12. Children with dyslexia have a significantly higher regression rate. Statistical results for second (a), third (b), and fourth (c) grade subjects with an independent-samples t-test; ****: p < 0.0001. Statistical analysis was performed using GraphPad Prism 10.0® software.
Figure 13. Children with dyslexia have significantly higher EOG signal energy. Statistical results for second (a), third (b), and fourth (c) grade subjects with an independent-samples t-test; ****: p < 0.0001. Statistical analysis was performed using GraphPad Prism 10.0® software.
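The EOG signal energy compared in Figure 13 is, under the common discrete-signal definition, the sum of squared samples; this excerpt does not restate the paper's exact formula, so the following is a sketch under that assumption:

```python
import numpy as np

def signal_energy(x):
    # Energy of a discrete signal: sum of squared samples
    # (a common definition, assumed here).
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

print(signal_energy([1.0, -2.0, 2.0]))  # 1 + 4 + 4 = 9.0
```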
Figure 14. Channel 1 overlapping and 5-fold confusion matrices for DyslexiaNet: (a) overlapped CM, (b) Fold 1 CM, (c) Fold 2 CM, (d) Fold 3 CM, (e) Fold 4 CM, (f) Fold 5 CM.
Figure 15. Channel 2 overlapping and 5-fold confusion matrices for DyslexiaNet: (a) overlapped CM, (b) Fold 1 CM, (c) Fold 2 CM, (d) Fold 3 CM, (e) Fold 4 CM, (f) Fold 5 CM.
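One plausible reading of the "overlapped" confusion matrices in Figures 14 and 15 is an element-wise sum of the five per-fold matrices, which aggregates counts over all test folds. The figures themselves define the operation, so the interpretation and the fold counts below are illustrative assumptions:

```python
import numpy as np

# Hypothetical per-fold 2x2 confusion matrices (rows: true class, cols: predicted).
fold_cms = [np.array([[118, 2], [1, 119]]) for _ in range(5)]

# Element-wise sum across folds (assumed meaning of "overlapped CM").
overlapped = np.sum(fold_cms, axis=0)
print(overlapped.tolist())  # [[590, 10], [5, 595]]
```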
Figure 16. Comparison of classification metrics for (a) vertical and (b) horizontal EOG with one-way ANOVA with Dunnett correction: accuracy, sensitivity, specificity, F-score; * p < 0.05, *** p < 0.0001, **** p < 0.00001. Statistical analysis was performed using GraphPad Prism 10.0® software.
Figure 17. Average training time comparison for CNN methods: (a) unpaired t-test with Welch's correction for AlexNet, ResNet50, MobileNet, and DyslexiaNet; (b) one-way ANOVA with Dunnett correction for AlexNet, ResNet50, MobileNet, and DyslexiaNet; * p < 0.05, *** p < 0.0001, **** p < 0.00001. Statistical analysis was performed using GraphPad Prism 10.0® software.
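The unpaired t-test with Welch's correction used in Figure 17a compares two means without assuming equal variances; the statistic can be sketched with the standard library (the per-fold training times below are hypothetical):

```python
import math
import statistics

def welch_t(a, b):
    # Welch's t statistic for two samples with unequal variances.
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

# Hypothetical per-fold training times (seconds) for two networks.
times_a = [41.0, 43.0, 42.0, 44.0, 40.0]
times_b = [95.0, 99.0, 97.0, 101.0, 93.0]
print(round(welch_t(times_a, times_b), 2))
```

A strongly negative t value here would indicate that the first network trains substantially faster, matching the direction of the comparison in the figure.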
Table 1. Summary of the DyslexiaNet model.
| # | Name | Description | Type | Activations | Learnables | Total Learnables |
|---|---|---|---|---|---|---|
| 1 | imageinput | 28 × 28 × 3 images with 'zerocenter' normalization | Image Input | 28 × 28 × 3 | – | 0 |
| 2 | Conv_1 | 16 4 × 4 × 3 convolutions, stride [1 1], padding 'same' | Convolution | 28 × 28 × 16 | Weights 4 × 4 × 3 × 16, Bias 1 × 1 × 16 | 784 |
| 3 | Batchnorm_1 | Batch normalization with 16 channels | Batch Normalization | 28 × 28 × 16 | Offset 1 × 1 × 16, Scale 1 × 1 × 16 | 32 |
| 4 | Relu_1 | ReLU | ReLU | 28 × 28 × 16 | – | 0 |
| 5 | Maxpool_1 | 2 × 2 max pooling, stride [2 2], padding [0 0 0 0] | Max Pooling | 7 × 7 × 16 | – | 0 |
| 6 | Conv_2 | 32 4 × 4 × 16 convolutions, stride [1 1], padding 'same' | Convolution | 7 × 7 × 32 | Weights 4 × 4 × 16 × 32, Bias 1 × 1 × 32 | 8224 |
| 7 | Batchnorm_2 | Batch normalization with 32 channels | Batch Normalization | 7 × 7 × 32 | Offset 1 × 1 × 32, Scale 1 × 1 × 32 | 64 |
| 8 | Relu_2 | ReLU | ReLU | 7 × 7 × 32 | – | 0 |
| 9 | Maxpool_2 | 2 × 2 max pooling, stride [2 2], padding [0 0 0 0] | Max Pooling | 3 × 3 × 32 | – | 0 |
| 10 | Conv_3 | 64 4 × 4 × 32 convolutions, stride [1 1], padding 'same' | Convolution | 3 × 3 × 64 | Weights 4 × 4 × 32 × 64, Bias 1 × 1 × 64 | 32,832 |
| 11 | Batchnorm_3 | Batch normalization with 64 channels | Batch Normalization | 3 × 3 × 64 | Offset 1 × 1 × 64, Scale 1 × 1 × 64 | 128 |
| 12 | Relu_3 | ReLU | ReLU | 3 × 3 × 64 | – | 0 |
| 13 | Conv_4 | 64 4 × 4 × 64 convolutions, stride [1 1], padding 'same' | Convolution | 3 × 3 × 64 | Weights 4 × 4 × 64 × 64, Bias 1 × 1 × 64 | 65,600 |
| 14 | Batchnorm_4 | Batch normalization with 64 channels | Batch Normalization | 3 × 3 × 64 | Offset 1 × 1 × 64, Scale 1 × 1 × 64 | 128 |
| 15 | Relu_4 | ReLU | ReLU | 3 × 3 × 64 | – | 0 |
| 16 | Dropout | 50% dropout | Dropout | 3 × 3 × 64 | – | 0 |
| 17 | fc | Fully connected layer with 2 outputs | Fully Connected | 1 × 1 × 2 | Weights 2 × 576, Bias 2 × 1 | 1154 |
| 18 | SoftMax | SoftMax | Softmax | 1 × 1 × 2 | – | 0 |
| 19 | Classoutput | crossentropyex | Classification Output | – | – | 0 |
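The learnable counts in Table 1 can be verified arithmetically from the Weights column: each convolution contributes kernel × kernel × in-channels × out-channels weights plus one bias per output channel, each batch-normalization layer contributes an offset and a scale per channel, and the fully connected layer maps the 3 × 3 × 64 = 576 flattened features to 2 classes. A short check:

```python
# Kernel size, input channels, output channels for the four conv blocks,
# taken from the Weights column of Table 1.
convs = [(4, 3, 16), (4, 16, 32), (4, 32, 64), (4, 64, 64)]

def conv_params(k, c_in, c_out):
    return k * k * c_in * c_out + c_out       # weights + biases

def bn_params(c):
    return 2 * c                              # offset + scale per channel

per_layer = [conv_params(*c) for c in convs]
print(per_layer)                              # [784, 8224, 32832, 65600]

total = sum(per_layer) + sum(bn_params(c[2]) for c in convs)
total += 2 * 576 + 2                          # fc: 576 features -> 2 classes
print(total)                                  # 108946
```

The per-layer values match the Total Learnables column, giving a network of roughly 0.11 M parameters, consistent with the paper's emphasis on DyslexiaNet's low computational load.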
Table 2. Performance results for the five folds for the vertical channel.
| CNN Model | Fold | Accuracy (%) | Sensitivity (%) | Specificity (%) | F1-Score (%) |
|---|---|---|---|---|---|
| AlexNet | Fold-1 | 62.50 | 39.33 | 85.67 | 51.19 |
| | Fold-2 | 68.25 | 55.67 | 80.83 | 63.68 |
| | Fold-3 | 68.50 | 61.33 | 75.67 | 66.07 |
| | Fold-4 | 66.42 | 57.50 | 75.33 | 69.98 |
| | Fold-5 | 62.42 | 58.33 | 66.50 | 60.82 |
| | Mean ± Std. | 65.61 ± 2.67 | 54.43 ± 7.76 | 76.80 ± 6.39 | 60.97 ± 5.16 |
| ResNet50 | Fold-1 | 56.33 | 12.67 | 100.00 | 22.49 |
| | Fold-2 | 52.92 | 37.83 | 68.00 | 44.55 |
| | Fold-3 | 52.83 | 59.33 | 46.33 | 55.71 |
| | Fold-4 | 53.50 | 71.83 | 35.17 | 60.70 |
| | Fold-5 | 51.17 | 96.17 | 6.17 | 66.32 |
| | Mean ± Std. | 53.35 ± 1.87 | 55.56 ± 31.94 | 51.13 ± 35.25 | 49.95 ± 17.32 |
| MobileNetV2 | Fold-1 | 54.42 | 39.83 | 69.00 | 46.63 |
| | Fold-2 | 55.08 | 42.80 | 67.30 | 48.80 |
| | Fold-3 | 56.66 | 38.20 | 75.20 | 46.80 |
| | Fold-4 | 57.91 | 49.24 | 66.70 | 53.90 |
| | Fold-5 | 61.00 | 43.00 | 79.00 | 52.40 |
| | Mean ± Std. | 57.01 ± 2.61 | 42.61 ± 4.22 | 71.44 ± 5.40 | 49.70 ± 3.30 |
| DyslexiaNet | Fold-1 | 77.08 | 63.17 | 91.00 | 73.38 |
| | Fold-2 | 72.92 | 57.00 | 88.83 | 67.79 |
| | Fold-3 | 70.33 | 51.00 | 89.67 | 63.22 |
| | Fold-4 | 76.75 | 76.00 | 77.50 | 76.57 |
| | Fold-5 | 71.58 | 71.50 | 71.62 | 71.56 |
| | Mean ± Std. | 73.73 ± 3.04 | 63.73 ± 10.22 | 83.72 ± 8.65 | 70.50 ± 5.16 |
Table 3. Performance results for the five folds for the horizontal channel.
| CNN Model | Fold | Accuracy (%) | Sensitivity (%) | Specificity (%) | F1-Score (%) |
|---|---|---|---|---|---|
| AlexNet | Fold-1 | 99.91 | 100.00 | 99.80 | 99.90 |
| | Fold-2 | 100.00 | 100.00 | 100.00 | 100.00 |
| | Fold-3 | 99.83 | 99.80 | 99.80 | 99.80 |
| | Fold-4 | 100.00 | 100.00 | 100.00 | 100.00 |
| | Fold-5 | 100.00 | 100.00 | 100.00 | 100.00 |
| | Mean ± Std. | 99.94 ± 0.068 | 99.96 ± 0.008 | 99.92 ± 0.097 | 99.94 ± 0.008 |
| ResNet50 | Fold-1 | 100.00 | 100.00 | 100.00 | 100.00 |
| | Fold-2 | 99.83 | 100.00 | 99.67 | 99.83 |
| | Fold-3 | 100.00 | 100.00 | 100.00 | 100.00 |
| | Fold-4 | 88.75 | 77.50 | 100.00 | 87.32 |
| | Fold-5 | 100.00 | 100.00 | 100.00 | 100.00 |
| | Mean ± Std. | 97.71 ± 5.0127 | 95.00 ± 10.0623 | 99.93 ± 0.1476 | 97.43 ± 5.6521 |
| MobileNetV2 | Fold-1 | 99.92 | 100.00 | 99.83 | 99.92 |
| | Fold-2 | 99.58 | 100.00 | 99.17 | 99.59 |
| | Fold-3 | 99.58 | 100.00 | 99.17 | 99.59 |
| | Fold-4 | 100.00 | 100.00 | 100.00 | 100.00 |
| | Fold-5 | 99.92 | 99.83 | 100.00 | 99.92 |
| | Mean ± Std. | 99.80 ± 0.2035 | 99.96 ± 0.0760 | 99.63 ± 0.4292 | 99.80 ± 0.1981 |
| DyslexiaNet | Fold-1 | 99.92 | 99.83 | 100.00 | 99.92 |
| | Fold-2 | 100.00 | 100.00 | 100.00 | 100.00 |
| | Fold-3 | 100.00 | 100.00 | 100.00 | 100.00 |
| | Fold-4 | 99.92 | 100.00 | 99.83 | 99.92 |
| | Fold-5 | 100.00 | 100.00 | 100.00 | 100.00 |
| | Mean ± Std. | 99.968 ± 0.04 | 99.966 ± 0.07 | 99.966 ± 0.07 | 99.968 ± 0.04 |
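The Mean ± Std. rows in Tables 2 and 3 can be reproduced from the per-fold values. For DyslexiaNet's horizontal-channel accuracy in Table 3, the population standard deviation matches the reported 0.04 (the tables do not state whether a sample or population estimator was used; both round to 0.04 here):

```python
import statistics

# DyslexiaNet per-fold accuracies on the horizontal channel (Table 3).
acc = [99.92, 100.0, 100.0, 99.92, 100.0]

mean = statistics.mean(acc)
std = statistics.pstdev(acc)   # population std; the sample std also rounds to 0.04
print(round(mean, 3), round(std, 2))  # 99.968 0.04
```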