Article

AI-Based Prediction of Visual Performance in Rhythmic Gymnasts Using Eye-Tracking Data and Decision Tree Models

by Ricardo Bernardez-Vilaboa 1, F. Javier Povedano-Montero 1,2,*, José Ramon Trillo 3, Alicia Ruiz-Pomeda 1, Gema Martínez-Florentín 1 and Juan E. Cedrún-Sánchez 1

1 Optometry and Vision Department, Faculty of Optics and Optometry, Complutense University of Madrid, 28037 Madrid, Spain
2 Hospital Doce de Octubre Research Institute (i+12), 28041 Madrid, Spain
3 Department of Computer Science and Artificial Intelligence, Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, 18071 Granada, Spain
* Author to whom correspondence should be addressed.
Photonics 2025, 12(7), 711; https://doi.org/10.3390/photonics12070711
Submission received: 20 June 2025 / Revised: 5 July 2025 / Accepted: 11 July 2025 / Published: 14 July 2025

Abstract

Background/Objective: This study aims to evaluate the predictive performance of three supervised machine learning algorithms, namely decision tree (DT), support vector machine (SVM), and k-nearest neighbors (KNN), in forecasting key visual skills relevant to rhythmic gymnastics. Methods: A total of 383 rhythmic gymnasts aged 4 to 27 years were evaluated in various sports centers across Madrid, Spain. Visual assessments included clinical tests (near convergence point, accommodative facility, reaction time, and hand–eye coordination) and eye-tracking tasks (fixation stability, saccades, smooth pursuits, and visual acuity) using the DIVE (Devices for an Integral Visual Examination) system. The dataset was split into training (70%) and testing (30%) subsets. Each algorithm was trained to classify visual performance, and predictive performance was assessed using accuracy and macro F1-score metrics. Results: The decision tree model demonstrated the highest performance, achieving an average accuracy of 92.79% and a macro F1-score of 0.9276. In comparison, the SVM and KNN models showed lower accuracies (71.17% and 78.38%, respectively) and greater difficulty in correctly classifying positive cases. Notably, the DT model outperformed the others in predicting fixation stability, particularly in short-duration fixation tasks. Conclusion: The decision tree algorithm achieved the highest performance in predicting short-term fixation stability, but its effectiveness was limited in tasks involving accommodative facility, where other models such as SVM and KNN outperformed it in specific metrics. These findings support the integration of machine learning in sports vision screening and suggest that predictive modeling can inform individualized training and performance optimization in visually demanding sports such as rhythmic gymnastics.

1. Introduction

Vision and associated visual skills are fundamentally important in most sports [1,2]. In gymnastics, visual abilities significantly contribute to athletes’ capacity to execute highly complex skills on various apparatuses, each presenting distinct visual demands [3]. Key visual skills utilized in gymnastics include static and dynamic visual acuity, gaze control (fixations, saccades, and smooth pursuit), depth perception, peripheral vision, accommodation (focus flexibility), reaction time, and hand–eye coordination [1].
Gymnasts must precisely control their gaze, fixating on specific locations while performing intricate movements. Rhythmic gymnastics, involving the manipulation of apparatuses like ribbons, hoops, and balls, places extreme demands on visual control for anticipating, tracking, and executing rapid, precise actions. Success hinges not only on physical prowess but also on the ability to accurately stabilize and direct one’s gaze. Gaze behavior can vary based on task constraints and the performer’s level of expertise [3].
Gaze control involves two main functions: stabilizing the gaze to maintain focus using reflexes triggered by inputs from the vestibular system, visual cues, or neck movements; and orienting the gaze towards objects of interest using a combination of quick eye movements (saccades) and smooth tracking movements (smooth pursuit). During this process, the visual and oculomotor systems collaborate to ensure that the object of interest remains centered on the retina [4].
Focus flexibility, or accommodation, refers to the skill that enables athletes to swiftly shift their focus from one point to another in space without excess effort. Difficulties in this area can hinder the ability to track incoming or outgoing objects quickly and accurately [5].
Reaction time refers to the duration between sensing a stimulus and initiating the appropriate response. Specifically, visual reaction time measures the time it takes to perceive and react to visual stimuli, which may also involve auditory cues. Response time is defined as the total time necessary to process visual information and complete the motor response sequence [1].
In our research, we employed the Devices for an Integral Visual Examination (DIVE) System, a comprehensive tool integrating eye-tracking technology with various utilities to assess visual function across different domains. The DIVE system features a high-resolution touchscreen display for presenting visual stimuli and facilitating patient interaction. Enhanced by an advanced eye tracker, the DIVE captures patient responses to these stimuli [4].
Furthermore, while the present study focuses on the application of machine learning models, it is rooted in biomedical optics using the DIVE system—a platform based on light-based eye-tracking and precision visual stimulus presentation. This optical infrastructure is central to capturing high-resolution oculomotor data and aligns with the technological scope of photonics-related research.
In this research study, we investigate the predictive capabilities of three distinct algorithms utilized to forecast visual skills among gymnasts. The algorithms employed are the k-nearest neighbors (KNN), decision tree (DT), and support vector machine (SVM) using the hold-out method.
KNN is an intuitive supervised learning algorithm that employs a proximity-based method for pattern recognition [4].
DT is a supervised learning algorithm used for classification and prediction tasks [6].
SVM is a kernel-based machine learning algorithm that assigns input data to specific classes or categories [4].
In the broader field of optical systems, artificial intelligence, particularly in the form of artificial neural networks (ANNs), has been increasingly employed to model complex photonic phenomena. Hamedi and Jahromi [7] used ANN architectures to analyze the performance of all-optical logic gates, demonstrating the capability of neural models to simulate non-linear optical interactions. Similarly, Hamedi et al. [8] applied AI techniques to the modeling of nanoplasmonic biosensors, highlighting the potential of data-driven methods to predict optical system responses with high precision. In another study, Jahromi and Hamedi [9] developed an ANN-based model to estimate the electronic and optical properties of nanocomposites, achieving excellent correlation between experimental and predicted values. These applications reinforce the versatility of ANN approaches in optical domains and suggest promising avenues for their integration into complex vision-based sports performance modeling.
The hold-out method is a technique used in machine learning and statistics to evaluate the performance of a predictive model. It involves dividing the dataset into two parts: one used to train the model (training set, typically 70% of the data) and the other reserved for testing (test set, typically 30%). This approach helps in assessing the model’s ability to generalize to unseen data.
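As a minimal illustration of this split, the sketch below uses scikit-learn's train_test_split (the library reported in Section 2.3) with synthetic data standing in for the gymnasts' 14 visual features; all variable names are hypothetical:

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(383, 14))    # synthetic stand-in: 383 gymnasts x 14 visual features
y = rng.integers(0, 2, size=383)  # synthetic binary labels: 0 = "reduced", 1 = "normal"

# 70% training / 30% testing; stratify=y preserves class proportions in both subsets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42
)
print(X_train.shape, X_test.shape)  # (268, 14) (115, 14)
```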
Our objective is to discern the efficacy of these algorithms in forecasting visual abilities based on a battery of visual tests administered to 383 gymnasts aged between 4 and 27 years. To elucidate the visual skills contributing to gymnastics and enable greater predictive precision, we employed a variety of algorithms in this research. Through the utilization of these algorithms, we aim to determine their effectiveness in predicting visual skills crucial for performance in rhythmic gymnastics.

2. Materials and Methods

Permission was obtained from the Sports Department of the Madrid City Council to conduct visual assessments of rhythmic gymnasts. The study was conducted in accordance with the principles of the Declaration of Helsinki and was approved by the Institutional Review Board of Hospital Clínico San Carlos, Madrid, Spain (protocol no. 21/766-E; approval date: 20 December 2021). Participation was voluntary, and written informed consent was obtained from all participants or their legal guardians in the case of minors. All tests were administered by trained clinicians under standardized conditions.
Assessments were conducted in various sports centers across Madrid where rhythmic gymnastics is regularly practiced by affiliated clubs. In each center, two dedicated testing areas were set up: a darkened room for DIVE assessments and an adjacent space with tables and chairs for clinical procedures. The clinical tests included near convergence point, cover test, monocular accommodative facility, visual reaction time, and hand–eye coordination. The order of test administration was randomized, and the total duration of testing per participant was approximately 15 min.
Figure 1 provides a schematic overview of the visual evaluation process carried out with rhythmic gymnasts, including the sequence of procedures, the settings used for data collection, and the specific tests performed.

2.1. Equipment and Visual Assessment Protocol

The DIVE system, equipped with a 12-inch screen providing a visual angle of 22.11° horizontally and 14.81° vertically, was used to conduct the evaluations. Its eye-tracking technology operates with a maximum temporal resolution of 120 Hz, offering high precision in tracking eye movements, a key feature for assessing rapid and subtle visual responses in gymnasts.
During the visual assessment sessions, gymnasts sat in front of the DIVE system, which features a high-resolution 12-inch screen and integrated eye-tracking technology (Figure 2). Testing was conducted in a dimly lit environment to ensure optimal tracking accuracy and participant comfort. The system recorded eye movements and visual responses during a series of standardized tasks, providing objective data across multiple visual domains.
The selection of the DIVE system (Device for an Integral Visual Examination) for visual assessment in this study is supported by its prior validation in clinical and multicenter settings. Pérez Roche et al. [10] utilized DIVE to evaluate visual acuity and contrast sensitivity in a sample of 2208 children aged between 6 months and 14 years, including both full-term and preterm births, across five countries. The findings indicated that both visual functions improved with age, particularly during the first five years of life. This study provided normative reference values and endorsed DIVE’s utility as an objective and effective tool for measuring basic visual functions in pediatric populations.
Furthermore, Pueyo et al. [11] employed DIVE to characterize the development of oculomotor control in 802 healthy children aged between 5 months and 15 years, observing significant improvements in fixation stability and saccadic movements with age, especially during the first two years of their lives.
Lastly, Altemir et al. [12] assessed fixational behavior during long and short fixation tasks in 259 participants aged between 5 months and 77 years using DIVE. They found that gaze stability improved with age up to 30 years and then progressively declined from the fifth decade of life onwards.
The DIVE system facilitated various assessment protocols, each selected for its relevance to the visual demands of rhythmic gymnastics. These protocols include:
  • Long Saccades DIVE: assesses the gymnasts’ ability to perform rapid and extended eye movements, which are crucial for tracking moving apparatus.
  • Short Saccades DIVE: measures the precision of shorter eye movements, necessary for detailed tasks such as hand–eye coordination with apparatus.
  • Eye Tracker Fixation Test DIVE: assesses the stability of the gymnasts’ visual fixation, essential for maintaining focus during routines.
  • Color Perception DIVE: detects possible anomalies in color perception that could affect interaction with colored apparatus.
  • Visual Acuity and Single Binocular Field (AV y FSC DIVE): essential for spatial awareness and precision in positioning relative to the apparatus.
In addition to the DIVE protocols, two complementary clinical tests were conducted:
  • Reaction time and hand–eye coordination were assessed using the Reaction Lights system to simulate dynamic visual–motor demands.
  • Monocular accommodative facility was measured with ±2.00 D flippers to evaluate the gymnasts’ focusing facility.

2.2. Variable Categorization and Class Balance

For the machine learning models, each visual variable was converted into a categorical output. In particular, REAF (right eye accommodative facility) and LEAF (left eye accommodative facility) were labeled as “normal” or “reduced” based on clinical optometric reference values adapted to pediatric populations. Participants who achieved ≥6 cycles per minute were classified as “normal”, and those below this threshold were classified as “reduced”.
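A minimal sketch of this binarization, using the ≥6 cycles-per-minute cutoff stated above; the column names and sample values are hypothetical:

```python
import numpy as np
import pandas as pd

# Hypothetical column names and example cpm values; the >=6 cpm cutoff follows the text.
df = pd.DataFrame({"REAF_cpm": [4.0, 7.0, 6.0, 2.0],
                   "LEAF_cpm": [5.0, 8.0, 3.0, 6.0]})
for eye in ("REAF", "LEAF"):
    df[f"{eye}_label"] = np.where(df[f"{eye}_cpm"] >= 6, "normal", "reduced")
print(df)
```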
Additionally, to mitigate potential class imbalance, a stratified 5-fold cross-validation approach was implemented during model training and validation. This ensured that the distribution of class labels was preserved across all data folds, maintaining representativeness and improving model robustness.

2.3. Model Training and Preprocessing

All models were developed and evaluated using the Scikit-learn and XGBoost libraries in Python 3.11.4. The dataset was divided into 70% for training and 30% for testing. This single hold-out split was used to evaluate the final model performance on unseen data.
Although no cross-validation was applied for model evaluation, a stratified 5-fold cross-validation framework was used during hyperparameter tuning to optimize each algorithm’s configuration.
A univariate feature selection method based on ANOVA F-values was used to retain the most relevant features. For each algorithm, hyperparameter tuning was performed using grid search within the cross-validation framework.
Although the hyperparameter search was not exhaustive, we conducted empirical exploration for each algorithm to identify configurations that maximized predictive performance. For KNN, several values of k (e.g., 3, 5, 7) were tested. For SVM, we experimented with different kernels (linear and RBF) and regularization parameters (C).
The best-performing configuration for each model was selected based on macro F1-score on the validation folds. Tree-based models (decision tree, random forest, and XGBoost) did not require feature scaling, while standardization (z-score normalization) was applied to SVM and KNN to ensure optimal performance.
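The sketch below shows how such a setup can be wired together in scikit-learn, continuing the hold-out split sketched earlier: z-score scaling, univariate ANOVA feature selection, and a grid search scored by macro F1 inside stratified 5-fold cross-validation. The grid values are illustrative, not the exact grids used in the study:

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, StratifiedKFold

# Scaler included for SVM/KNN; tree-based models would drop the "scale" step.
pipe = Pipeline([
    ("scale", StandardScaler()),                    # z-score normalization
    ("select", SelectKBest(score_func=f_classif)),  # univariate ANOVA F-test
    ("clf", SVC()),
])
param_grid = {
    "select__k": [5, 10, "all"],        # illustrative values
    "clf__kernel": ["linear", "rbf"],   # kernels explored in the paper
    "clf__C": [0.1, 1, 10],             # illustrative regularization values
}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
search = GridSearchCV(pipe, param_grid, scoring="f1_macro", cv=cv)
search.fit(X_train, y_train)            # X_train, y_train from the hold-out split above
print(search.best_params_, round(search.best_score_, 3))
```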
The macro F1-score provides an unweighted average of F1-scores across all classes and is particularly useful when class distributions are imbalanced. It is defined as
$$\mathrm{F1}_{\mathrm{macro}} = \frac{1}{C} \sum_{i=1}^{C} \frac{2 \cdot \mathrm{Precision}_i \cdot \mathrm{Recall}_i}{\mathrm{Precision}_i + \mathrm{Recall}_i}$$
where C is the number of classes. For our binary classification tasks (e.g., “normal” vs. “reduced”), the macro F1-score corresponds to the average of F1-scores for each class, treating both classes equally regardless of size. In future applications involving multi-class or ordinal targets, this metric can similarly assess overall performance by computing per-class F1 values and averaging them.
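For instance, scikit-learn computes this metric directly; the toy labels below are made up to show the per-class averaging:

```python
from sklearn.metrics import f1_score

y_true = [0, 0, 0, 1, 1, 1, 1, 1]
y_pred = [0, 0, 1, 1, 1, 1, 0, 1]
# Class 0: precision 2/3, recall 2/3 -> F1 = 0.667
# Class 1: precision 4/5, recall 4/5 -> F1 = 0.800
print(f1_score(y_true, y_pred, average="macro"))  # (0.667 + 0.800) / 2 ~= 0.733
```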

3. Results

A total of 383 rhythmic gymnasts aged 4–27 years participated in this study. The dataset conformed to the assumption of normality, allowing the application of parametric statistical methods. Descriptive statistics, including means, standard deviations, and minimum and maximum values, were calculated to characterize the global variables of the sample.
These global values are presented in Table 1, which provides an overview of the entire cohort’s performance across all assessed variables. A closer examination of the table reveals considerable variability in several visual function parameters within the sample of rhythmic gymnasts. Notably, accommodative facility (REAF and LEAF) and eye–hand coordination (EHC) display wide ranges and relatively high standard deviations, indicating heterogeneity in neurosensory performance across participants. Visual reaction time (VRT) also shows substantial dispersion (mean: 1066 ms; SD: 241 ms), likely reflecting developmental differences across the broad age spectrum. Notably, the near convergence point (NCP) ranged from 0 to 16 cm, suggesting that while many gymnasts demonstrate effective convergence, a subset exhibits marked limitations. The relatively symmetrical means and variability in fixation stability metrics (FLTBREFS/FLTBLEFS and FSTBREFS/FSTBLEFS) indicate balanced binocular control between eyes.
To analyze differences and patterns in the variables of interest, the sample was divided into nine age groups: 6–6.9, 7–7.9, 8–8.9, 9–10.9, 11–12.9, 13–13.9, 14–14.9, 15–18, and 19–27 years. Statistical comparisons were made to identify significant trends and variations between these groups, providing insight into the developmental trajectory of visual variables within rhythmic gymnastics.
Table 2 presents the age-stratified means and standard deviations for each assessed variable, highlighting how specific visual performance metrics evolve with age. Overall, an age-related improvement is evident in key parameters such as accommodative facility, visual reaction time, and eye–hand coordination, consistent with the progressive maturation of oculomotor and neurosensory pathways. Conversely, convergence point, and smooth pursuit performance show greater variability, potentially reflecting more complex or non-linear developmental profiles influenced by training history or individual visual demands.

3.1. Differences Between Groups

Figure 3, Figure 4 and Figure 5 provide a graphical representation of the age-related differences observed in three key visual variables: near convergence point (NCP), reaction time, and hand–eye coordination, respectively. The data reveal a general trend of improvement with increasing age, particularly noticeable between the younger and older participant groups.
As shown in Figure 3, NCP values tend to decrease with age, although this trend is not strictly linear. Notably, the 19–27 age group showed a slight increase, which may be due to greater individual variability or differing visual demands at older ages.
Figure 4 shows a gradual reduction in reaction time, reflecting faster visual–motor responses as age increases. Similarly, Figure 5 illustrates improvements in hand–eye coordination, with older groups achieving higher scores. These visual patterns support the statistical analyses and suggest a developmental maturation of visual and sensorimotor functions relevant to rhythmic gymnastics performance.
A series of significant differences are evident across age groups in relation to near convergence point, reaction time, and hand–eye coordination, as depicted in the preceding three figures.
To further investigate whether age alone could explain the variance observed in visual performance, we conducted a correlational analysis between age and key visual parameters. As shown in Figure 6, the correlation between age and near convergence point (NCP) was weak (R2 = 0.075), with a shallow regression slope. This suggests that the relationship between age and NCP is modest and non-linear. Additionally, machine learning models trained without age as a feature retained robust predictive performance, indicating that the visual variables provide relevant information beyond chronological age.

3.2. Machine Learning Models Performance

In addition to the descriptive group comparisons, the predictive capabilities of three supervised machine learning models—decision tree, support vector machine (SVM), and k-nearest neighbors (KNN)—were evaluated. These models were applied to classify performance on specific visual functions, such as fixation stability and accommodative facility, using eye-tracking and clinical data. The results for each model and task are presented below.
The following comparison evaluates these three models in predicting fixation stability in short tasks, with random forest and XGBoost included as additional reference models. The results indicate the most suitable model for this task and highlight the strengths and weaknesses of each model.
The class distributions for the binary classification of accommodative facility (normal vs. reduced) were balanced by a median split. To further illustrate the model’s predictive ability, Figure 7 presents the ROC curve for the decision tree model in predicting accommodative facility. The area under the curve (AUC = 0.98) demonstrates excellent discriminative performance.
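As a hedged sketch of how an ROC curve of this kind can be produced for a fitted decision tree (continuing the earlier synthetic example; Figure 7 itself was generated from the study's data):

```python
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_curve, auc

tree = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)
scores = tree.predict_proba(X_test)[:, 1]   # score for the positive class
fpr, tpr, _ = roc_curve(y_test, scores)
print(f"AUC = {auc(fpr, tpr):.2f}")

plt.plot(fpr, tpr, label="decision tree")
plt.plot([0, 1], [0, 1], linestyle="--", label="chance")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```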
Figure 8 shows the structure of the final decision tree model, illustrating the most relevant features used for classification. Notably, fixation stability and accommodative facility emerged as dominant predictors, consistent with their established clinical relevance in visual performance assessment.
Results for the right eye:
  • Decision tree: 98.20% accuracy and 0.9819 macro F1-score. The confusion matrix is
    [57  1]
    [ 1 52]
  • Support vector machine (SVM) with a linear kernel: 79.28% accuracy and 0.7894 macro F1-score. The confusion matrix is
    [51  7]
    [16 37]
  • K-nearest neighbors (KNN) with k = 3: 66.67% accuracy and 0.6627 macro F1-score. The confusion matrix is
    [43 15]
    [22 31]
  • Random forest: 81.08% accuracy and 0.8078 macro F1-score. The confusion matrix is
    [52  6]
    [15 38]
  • XGBoost: 87.39% accuracy and 0.8726 macro F1-score. The confusion matrix is
    [54  4]
    [10 43]
Results for the left eye:
  • Decision tree: 96.40% accuracy and 0.9635 macro F1-score. The confusion matrix is
    [47  1]
    [ 3 60]
  • Support vector machine (SVM) with a linear kernel: 75.68% accuracy and 0.7567 macro F1-score. The confusion matrix is
    [41  7]
    [20 43]
  • K-nearest neighbors (KNN) with k = 3: 71.17% accuracy and 0.7115 macro F1-score. The confusion matrix is
    [38 10]
    [22 41]
  • Random forest: 83.78% accuracy and 0.8377 macro F1-score. The confusion matrix is
    [45  3]
    [15 48]
  • XGBoost: 88.29% accuracy and 0.8823 macro F1-score. The confusion matrix is
    [45  3]
    [10 53]
In general, the decision tree consistently demonstrated excellent performance in both eyes, with high accuracy and a macro F1 score close to 1. The SVM and KNN showed lower performance, exhibiting a higher rate of misclassifications. Notably, the SVM struggled to correctly classify positive cases in both eyes, whereas the KNN showed a higher number of false positives and negatives in the left eye.
In summary, the decision tree is the most suitable model for predicting fixation stability in short tasks in both eyes, followed by the SVM and KNN. It is important to acknowledge the limitations of each model and consider appropriate adjustments to enhance their predictive performance.
Combined results for both eyes:
  • Decision tree: 97.30% average accuracy and 0.973 average macro F1-score. The confusion matrix is
    [59  3]
    [ 0 49]
  • Support vector machine (SVM) with a linear kernel: 74.77% average accuracy and 0.732 average macro F1-score. The confusion matrix is
    [55  7]
    [21 28]
  • K-nearest neighbors (KNN) with k = 3: 70.27% average accuracy and 0.700 average macro F1-score. The confusion matrix is
    [46 16]
    [17 32]
  • Random forest: 91.89% average accuracy and 0.9176 average macro F1-score. The confusion matrix is
    [58  4]
    [ 5 44]
  • XGBoost: 95.50% average accuracy and 0.9542 average macro F1-score. The confusion matrix is
    [60  2]
    [ 3 46]
The combined results show that the decision tree is still the best-performing model, with an average accuracy of 97.30% and an average macro F1-score of 0.973. The KNN and SVM showed lower performance, with average accuracies of 70.27% and 74.77% and average macro F1-scores of 0.700 and 0.732, respectively.
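As a consistency check, these combined figures can be reproduced from the decision tree's confusion matrix, assuming rows are actual classes and columns are predicted classes (the orientation is our assumption):

```python
import numpy as np

# Combined decision-tree confusion matrix; rows assumed actual, columns predicted.
cm = np.array([[59, 3],
               [0, 49]])

accuracy = np.trace(cm) / cm.sum()          # (59 + 49) / 111 = 0.9730
f1_per_class = []
for i in range(2):
    precision = cm[i, i] / cm[:, i].sum()   # correct / predicted as class i
    recall = cm[i, i] / cm[i, :].sum()      # correct / actual class i
    f1_per_class.append(2 * precision * recall / (precision + recall))

print(round(accuracy, 4), round(sum(f1_per_class) / 2, 4))  # 0.973 0.9728
```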
Short-term fixation denotes the ability to maintain visual attention on an object for a brief period, typically spanning seconds to minutes. This generally demands less sustained concentration and attention compared to long-term fixation.
The study found that the decision tree model also exhibits optimal predictive performance for short-term fixation, albeit with a slightly lower precision compared to long-term fixation. This indicates that the model can accurately predict an individual’s ability to sustain visual attention on an object for a brief period.
In summary, three machine learning models (support vector machine (SVM), k-nearest neighbors (KNN), and decision tree) were tested on two classification tasks: accommodative facility of the right eye (REAF) and accommodative facility of the left eye (LEAF).
In general, the models had difficulties in correctly classifying the positive class in both classification tasks.
The SVM had high accuracy in some tests, but its low F1 macro revealed a significant imbalance in the classification of the positive class.
The KNN had a better balance between classes in some tests, but its accuracy was lower compared to the SVM.
The decision tree performed worst in some tests, with both low accuracy and macro F1 score, indicating substantial misclassification in both classes.
For the accommodative facility of the right eye (REAF), the SVM had the highest accuracy (79.28%), but its macro F1-score was low (0.4422) due to its inability to correctly classify the positive class. The KNN had a better macro F1-score (0.5547) compared to the SVM, but its accuracy was lower (70.27%). The DT exhibited the lowest performance, both in accuracy (65.77%) and macro F1-score (0.5229). Moreover, random forest and XGBoost achieved accuracies of 56.76% and 61.26% and macro F1-scores of 0.4127 and 0.5037, respectively.
For the accommodative facility of the left eye (LEAF), the KNN seemed to be the most balanced model, while the SVM suffered from a severe imbalance and the DT had a low overall performance.

4. Discussion

The present study explored the application of supervised machine learning algorithms to predict key visual functions in rhythmic gymnasts, focusing specifically on fixation stability and accommodative facility. Among the three models evaluated, decision tree (DT), support vector machine (SVM), and k-nearest neighbors (KNN), the DT algorithm exhibited the highest predictive performance, with an accuracy of 92.79% and a macro F1-score of 0.9276. These findings highlight the potential of decision trees as a robust and interpretable approach for modeling complex, non-linear relationships between visual variables and functional outcomes in high-performance sports.
The superior performance of the DT model may be attributed to its inherent ability to manage multidimensional data and capture subtle interactions between input features.
Although age was correlated with some visual variables in descriptive analyses, our findings suggest that the predictive performance of the model was not driven primarily by chronological age. As shown in the correlation analysis between age and near convergence point (Figure 6), the association was weak (R2 = 0.075), and models trained without age as an input retained high accuracy. This supports the notion that machine learning algorithms captured meaningful visual performance patterns that extend beyond age-related maturation.
In dynamic disciplines like rhythmic gymnastics, visual skills such as fixation and accommodation are essential for responding to rapidly changing stimuli with precision and stability. Our results support the notion that visual–motor abilities can be effectively predicted using AI-based models, offering practical implications for athlete monitoring and individualized training design.
Visual skills such as fixation stability and accommodative facility, both of which are essential for athletes to maintain visual focus and adjust rapidly to changing visual stimuli, were predicted with high accuracy. This is consistent with the findings of previous studies, which emphasize the importance of visual acuity, saccades, and reaction time in sports performance [13,14].
Our results strongly suggest that the decision tree (DT) algorithm is the most robust, consistent, and clinically interpretable choice for classification problems in this context. Its exceptionally high accuracy and macro F1-score across most datasets make it a highly reliable and effective tool when the goal is to maximize both predictive performance and clarity. While other algorithms, such as the SVM and KNN, may offer advantages in specific scenarios, the DT model consistently outperformed them. This reinforces its value as the most suitable, practical, and accessible solution for modeling complex visual performance patterns in rhythmic gymnasts.
The final decision tree model identified fixation stability and accommodative facility as the most influential features in classifying visual performance categories. This aligns with clinical expectations, as both variables are closely linked to visual efficiency and oculomotor control, which are fundamental in sports like rhythmic gymnastics. The prominence of these features reinforces the interpretability of the model and its potential applicability in practical settings.
The binary categorization into “normal” and “reduced” was a methodological choice to enhance interpretability and ensure class balance using the median as cutoff. While this simplification may reduce granularity, it enabled robust and clinically meaningful predictions in this exploratory phase. Additional decision tree structures and ROC curves for the remaining visual variables are available in the Supplementary Materials (Figures S1–S10). Future studies will explore multi-class and continuous models for greater precision.
Additionally, a comparative analysis of the algorithms used (decision tree, support vector machine, and k-nearest neighbors) has been incorporated, focusing on their technical characteristics. The performance of the SVM model may have been limited by its reliance on linear class separability, especially for variables such as accommodative facility. The KNN algorithm, on the other hand, showed sensitivity to the choice of the k parameter and the number of samples within local neighborhoods, which may affect its stability in heterogeneous datasets. Although the decision tree model performed well overall, its simple structure may carry a higher risk of overfitting in small or highly variable datasets. This comparison underscores the importance of selecting models not only based on overall performance but also on their suitability for the type of variable and data structure.
Moreover, the three algorithms differ significantly in their bias–variance trade-offs. The decision tree tends to have low bias but high variance, making it prone to overfitting, especially when no pruning is applied. SVMs, depending on the kernel, typically have a more balanced bias–variance profile and are robust to overfitting, but can be sensitive to the selection of hyperparameters like the regularization term and kernel type. KNN is characterized by high variance and low bias, especially with small values of k, and is highly sensitive to outliers and noise in the data. These characteristics influence model stability and generalizability, particularly in datasets with heterogeneous distributions or noisy measurements.
While the decision tree model exhibited strong performance across most visual variables, its predictive accuracy was notably lower for accommodative facility (REAF/LEAF). This discrepancy may be attributed to several factors. First, a potential class imbalance—where most participants demonstrated normal accommodative function—may have hindered the model’s ability to learn minority class patterns. Second, accommodative facility tests, conducted with ±2.00 D flippers, are subject to variability due to examiner influence and participant cooperation, which can introduce noise into the labels. Finally, the relatively simple structure of the decision tree may lead to overfitting when learning from noisy or unbalanced data, especially without extensive regularization or pruning.
Additionally, the lower performance of the KNN model does not appear to be related to class imbalance, as outcome variables were binarized using the sample median to ensure balanced class distribution. Rather, KNN’s limitations may stem from its sensitivity to high-dimensional input spaces and the absence of dimensionality reduction or feature engineering strategies. Future work may address this by applying principal component analysis (PCA) or automated feature selection to improve performance.
Furthermore, we assessed feature distribution and multicollinearity to ensure the reliability of the input variables. Principal component analysis (PCA) was tested as a dimensionality reduction technique but showed minimal improvement in model performance. Considering the relatively small number of predictors (14 variables) and the importance of interpretability in clinical–sport settings, we decided to retain the original feature set. Multicollinearity was evaluated using variance inflation factors (VIF), and all values were below two, indicating no significant redundancy among variables.
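A minimal sketch of such a VIF check with statsmodels, using a synthetic stand-in for the 14-predictor feature matrix (column names hypothetical):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Synthetic stand-in for the 14 predictors used in the study.
rng = np.random.default_rng(0)
X_df = pd.DataFrame(rng.normal(size=(383, 14)),
                    columns=[f"feat_{i}" for i in range(14)])

# VIF for each predictor regressed on all the others
vif = pd.Series(
    [variance_inflation_factor(X_df.values, i) for i in range(X_df.shape[1])],
    index=X_df.columns,
)
print(vif)  # the study reports all VIF values below 2 for its predictors
```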
Although artificial neural networks (ANNs) are increasingly used in biomedical and sports-related predictive modeling, we deliberately excluded them from the current study due to the heightened risk of overfitting associated with our relatively small dataset (n = 383). Nevertheless, prior studies have demonstrated that ANN architectures can yield reliable predictions even with similar sample sizes. For example, Hamedi et al. and Jahromi et al. applied ANN models to predict optical and nanophotonic properties in experimental contexts with limited data availability [7,8,9]. These precedents support the potential future application of ANN techniques in sports vision research, particularly when larger, more heterogeneous, or multimodal datasets become available.
The findings of our study align with the growing body of literature supporting the integration of artificial intelligence (AI) and machine learning (ML) techniques in sports science. Reis et al. [15] emphasize the utility of decision tree (DT) algorithms and other supervised learning methods for injury risk prediction and performance optimization, especially in disciplines that require dynamic and multidimensional analysis. Our work adds to this evidence by showing that DT algorithms can also be highly effective in predicting key visual variables in rhythmic gymnasts, highlighting the potential of AI models to support tailored interventions and performance monitoring in youth sports.
A relevant comparison can be made with the study by Liu et al. [16], who applied various machine learning algorithms—including decision trees, KNN, and SVM—to predict physical activity behavior among university students, based on psychological constructs such as sports learning interest and autonomy support. Although their study focused on behavioral and motivational variables rather than visual or physiological abilities, both investigations share the common goal of using supervised learning models to forecast performance-related traits in sports populations. In their findings, logistic regression achieved the highest overall accuracy (72.88%), while decision trees and SVM yielded moderate results (F1 scores of 0.6672 and 0.6845, respectively). In contrast, our study found decision trees to be the most effective model for predicting visual function, particularly in tasks involving fixation and accommodative facility. These differences may be attributed to the nature of the target variables—subjective behavioral intentions versus objective visual performance—as well as the structure of the datasets. Nonetheless, both studies highlight the utility of machine learning as a powerful tool for modeling complex relationships in sports-related domains.
Additional support for the use of machine learning in predicting physical performance variables comes from Gao et al. [17], who applied optimized algorithms such as decision trees and SVM to gait recognition and prediction. Their study demonstrated high precision in modeling human posture changes, with a root mean square error (RMSE) as low as 0.018 on flat terrain. Although focused on movement patterns rather than visual variables, their findings align with our results in highlighting the strength of decision trees in capturing complex, non-linear relationships in human performance prediction.
Machine learning (ML) refers to the development of systems capable of learning from experience and adapting autonomously to generate predictive analytics, without requiring explicit instructions [13].
Machine learning has been applied in various areas of sports—for example, for sports monitoring data [14], for activity recognition [14], for making performance predictions [4,14,18,19,20], and to investigate whether sports skills, physical performance, or general cognitive functions differ between players of different competition levels [6].
Several studies have used machine learning algorithms to predict performance in sports contexts. For example, KNN has been used to predict marathon performance [21], and other algorithms have been applied to injury prediction [13,19].
To the best of our knowledge, no previous study has examined the visual skills of rhythmic gymnasts using machine learning to make predictions.
In this context, the objective of our study was to predict the visual variables utilized in gymnasts using three distinct algorithms: k-nearest neighbors (KNN), decision tree, and support vector machine (SVM). The visual skills assessed in gymnasts included visual acuity, saccades, smooth pursuits, fixations, reaction time, contrast sensitivity, accommodative facility, and color vision. Among these, machine learning algorithms were applied to predict two key functions: fixation stability and accommodative facility.
The optometric assessments conducted on athletes add value to the study, as they evaluate aspects crucial for sports performance. Predicting specific optometric values further enriches the scope of the study.
Regarding the predictive modeling, notable accuracies exceeding 85% were observed for most variables, indicating high reliability even when the models were trained on only 70% of the data.
In the context of visual tests conducted on rhythmic gymnasts, the k-nearest neighbors (KNN) model has been effectively trained and performs well on a representative test set, suggesting the model has learned useful patterns and can generalize to similar situations with new rhythmic gymnasts. However, additional regular evaluations are recommended to ensure relevance and accuracy in evolving problem conditions.
From a practical perspective, the predictive models developed in this study offer valuable tools for the early detection of visual performance deficits in rhythmic gymnasts. By integrating eye-tracking assessments and algorithmic classification, coaches and clinicians could identify athletes with suboptimal fixation stability or accommodative facility—both critical for spatial orientation and rapid motor response during performance. This approach enables personalized training interventions aimed at strengthening specific visual skills, optimizing sensorimotor coordination, and potentially reducing injury risk.
Moreover, the integration of predictive models like the one presented in this study could be highly valuable for applied contexts beyond assessment. In training programs, early identification of gymnasts with reduced fixation stability or accommodative facility would allow coaches to tailor visual and sensorimotor exercises aimed at improving those specific skills. Similarly, systematic screening with AI-based tools may support talent identification by detecting athletes with exceptional visual abilities early in their development. Finally, incorporating such models into injury prevention protocols could help identify visual deficits associated with increased risk of misjudging distances, apparatus timing, or coordination under pressure, all of which are relevant to rhythmic gymnastics performance and safety.
Longitudinal implementation of these models could support visual monitoring throughout the athletic development cycle, offering objective data to inform selection processes, guide rehabilitation strategies, and adjust visual–cognitive load during training sessions.

5. Conclusions

This study demonstrates the potential of artificial intelligence, particularly supervised learning models, to predict visual performance variables in rhythmic gymnasts using combined clinical and eye-tracking data. Among the three models evaluated—decision tree (DT), support vector machine (SVM), and k-nearest neighbors (KNN)—the decision tree consistently achieved the highest classification accuracy and macro F1-scores in the fixation stability tasks, especially in predicting short-term fixation stability.
These findings highlight the ability of decision-tree-based approaches to model complex visual functions in young athletes, offering a reliable tool for performance profiling and individualized visual training. Despite its effectiveness, model generalizability remains limited due to the homogeneous sample and restricted age distribution. This research offers a novel application of supervised learning to sports vision, emphasizing the value of artificial intelligence in athlete assessment.
Future research should explore the integration of more advanced architectures such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), or hybrid models (e.g., neuro–fuzzy systems) to analyze longitudinal changes in visual performance. When supported by sufficiently large and heterogeneous datasets, these techniques may uncover deeper patterns in oculomotor behavior and enhance real-time athlete profiling. The continued use of optical systems like DIVE underscores the interdisciplinary bridge between photonics technologies and sports vision analytics.
This convergence of biomedical optics and artificial intelligence opens promising avenues for the development of smart, vision-based monitoring systems, ultimately contributing to performance optimization and injury prevention in sports.

6. Limitations and Future Research

Despite the DT model’s high predictive accuracy, several limitations must be acknowledged. Since the dataset consisted exclusively of rhythmic gymnasts from Madrid, the generalizability of the findings to broader populations may be limited. Additionally, although the DIVE system provided high-quality visual data, further validation with larger and more diverse samples is needed to confirm the robustness of the predictions across different sports and skill levels [19].
Additionally, although artificial neural networks represent a powerful class of predictive models, they were not employed in this study due to concerns about overfitting and poor generalization associated with small sample sizes. The relatively modest number of observations and structured nature of the input features favored the use of tree-based models, which are more robust under such constraints. Future work should consider the integration of neural models when sufficient data are available to ensure reliable training and validation.
Another important limitation concerns the use of a single hold-out data split to train and evaluate the machine learning models. While this approach provided a consistent framework for baseline model comparison, it may lead to variability in performance metrics depending on how the data are partitioned. Given the modest sample size (n = 383), this strategy may reduce the generalizability and stability of the results. Future studies should consider applying k-fold cross-validation or repeated hold-out methods to obtain more robust and reliable performance estimates across different data partitions.
Furthermore, while dichotomization of visual variables improved interpretability and class balance in this initial model, future studies will consider multi-class and continuous modeling approaches to enhance granularity and clinical specificity.
Future research should explore the application of deep and reinforcement learning techniques, which have shown promise in other areas of sports performance analysis [20]. These models could potentially enhance the accuracy of predictions and uncover deeper patterns in the data, particularly for modeling more complex visual–motor performance patterns. Furthermore, longitudinal studies could examine how visual skills develop over time in athletes, providing insights into the long-term impact of visual training on performance.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/photonics12070711/s1.

Author Contributions

Conceptualization, R.B.-V., J.R.T. and F.J.P.-M.; methodology, R.B.-V. and J.E.C.-S.; software, J.R.T.; validation, R.B.-V., A.R.-P. and F.J.P.-M.; formal analysis, J.R.T.; investigation, R.B.-V. and G.M.-F.; resources, J.R.T.; data curation, R.B.-V.; writing—original draft preparation, F.J.P.-M.; writing—review and editing, F.J.P.-M.; visualization, J.E.C.-S.; supervision, F.J.P.-M.; project administration, R.B.-V. and F.J.P.-M.; funding acquisition, R.B.-V. and J.R.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Research Ethics Committee for Medicinal Products of the San Carlos Clinical Hospital (protocol no. 21/766-E; approval date: 20 December 2021).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data supporting the conclusions of this study are available upon request from the corresponding author. Due to ethical and privacy restrictions, the data are not publicly available.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI        Artificial Intelligence
DIVE      Devices for an Integral Visual Examination
DT        Decision Tree
EHC       Eye–Hand Coordination
FLTBLEFS  Fixation in Large Task Binocular Left Eye Fixation Stability
FLTBREFS  Fixation in Large Task Binocular Right Eye Fixation Stability
FSTBLEFS  Fixation in Short Task Binocular Left Eye Fixation Stability
FSTBREFS  Fixation in Short Task Binocular Right Eye Fixation Stability
GFLTP     Global Fixation in Long Tasks Performance
GFSTP     Global Fixation in Short Tasks Performance
GOCP      Global Oculomotor Control Performance
GSP       Global Saccadic Performance
GSPP      Global Smooth Pursuit Performance
KNN       K-Nearest Neighbors
LEAF      Left Eye Accommodative Facility
ML        Machine Learning
NCP       Near Convergence Point
REAF      Right Eye Accommodative Facility
SVM       Support Vector Machine
VRT       Visual Reaction Time
bpm       Beats Per Minute
cpm       Cycles Per Minute
logdeg2   Logarithm of Degrees Squared
ms        Milliseconds

References

  1. Potgieter, K.; Ferreira, J.T. The effects of visual skills on Rhythmic Gymnastics. Afr. Vis. Eye Health 2009, 68, 137–154. [Google Scholar] [CrossRef]
  2. Sherman, A. Overview of research information regarding vision and sports. J. Am. Optom. Assoc. 1980, 51, 661–665. [Google Scholar] [PubMed]
  3. Barreto, J.; Casanova, F.; Peixoto, C.; Fawver, B.; Williams, A.M. How task constraints influence the gaze and motor behaviours of elite-level gymnasts. Int. J. Environ. Res. Public Health 2021, 18, 6941. [Google Scholar] [CrossRef] [PubMed]
  4. Hussain, A.; Ali, G.; Akhtar, F.; Khand, Z.H.; Ali, A. Design and Analysis of News Category Predictor. Eng. Technol. Appl. Sci. Res. 2020, 10, 6380–6385. [Google Scholar] [CrossRef]
  5. Millard, L.; Breukelman, G.J.; Burger, T.; Nortje, J.; Schulz, J. Visual skills essential for rugby. Med. Hypothesis Discov. Innov. Ophthalmol. 2023, 12, 46–54. [Google Scholar] [CrossRef]
  6. Formenti, D.; Trecroci, A.; Duca, M.; Vanoni, M.; Ciovati, M.; Rossi, A.; Alberti, G. Volleyball-Specific Skills and Cognitive Functions Can Discriminate Players of Different Competitive Levels. J. Strength Cond. Res. 2022, 36, 813–819. [Google Scholar] [CrossRef]
  7. Hamedi, S.; Dehdashti Jahromi, H. Performance analysis of all-optical logical gate using artificial neural network. Expert Syst. Appl. 2021, 178, 115029. [Google Scholar] [CrossRef]
  8. Hamedi, S.; Jahromi, H.D.; Lotfiani, A. Artificial intelligence-aided nanoplasmonic biosensor modeling. Eng. Appl. Artif. Intell. 2023, 118, 105646. [Google Scholar] [CrossRef]
  9. Jahromi, H.D.; Hamedi, S. Artificial intelligence approach for calculating electronic and optical properties of nanocomposites. Mater. Res. Bull. 2021, 141, 111371. [Google Scholar] [CrossRef]
  10. Pérez Roche, M.T.; Yam, J.C.; Liu, H.; Gutierrez, D.; Pham, C.; Balasanyan, V.; García, G.; Ley, M.C.; de Fernando, S.; Ortín, M.; et al. Visual Acuity and Contrast Sensitivity in Preterm and Full-Term Children Using a Novel Digital Test. Children 2022, 10, 87. [Google Scholar] [CrossRef]
  11. Pueyo, V.; Yam, J.C.S.; Perez-Roche, T.; Balasanyan, V.; Ortin, M.; Garcia, G.; Prieto, E.; Pham, C.; Gutiérrez, D.; Castillo, O.; et al. Development of oculomotor control throughout childhood: A multicenter and multiethnic study. J. Vis. 2022, 22, 4. [Google Scholar] [CrossRef] [PubMed]
  12. Altemir, I.; Alejandre, A.; Fanlo-Zarazaga, A.; Ortín, M.; Pérez, T.; Masiá, B.; Pueyo, V. Evaluation of Fixational Behavior throughout Life. Brain Sci. 2021, 12, 19. [Google Scholar] [CrossRef] [PubMed]
  13. Amendolara, A.; Pfister, D.; Settelmayer, M.; Shah, M.; Wu, V.; Donnelly, S.; Johnston, B.; Peterson, R.; Sant, D.; Kriak, J.; et al. An Overview of Machine Learning Applications in Sports Injury Prediction. Cureus 2023, 15, e46170. [Google Scholar] [CrossRef]
  14. Lei, P. System Design and Simulation for Square Dance Movement Monitoring Based on Machine Learning. Comput. Intell. Neurosci. 2022, 2022, 1994046. [Google Scholar] [CrossRef]
  15. Reis, F.J.J.; Alaiti, R.K.; Vallio, C.S.; Hespanhol, L. Artificial intelligence and Machine Learning approaches in sports: Concepts, applications, challenges, and future perspectives. Braz. J. Phys. Ther. 2024, 28, 101083. [Google Scholar] [CrossRef]
  16. Liu, H.; Hou, W.; Emolyn, I.; Liu, Y. Building a prediction model of college students’ sports behavior based on machine learning method: Combining the characteristics of sports learning interest and sports autonomy. Sci. Rep. 2023, 13, 15628. [Google Scholar] [CrossRef]
  17. Gao, J.; Ma, C.; Su, H.; Wang, S.; Xu, X.; Yao, J. Research on gait recognition and prediction based on optimized machine learning algorithm. J. Biomed. Eng. 2022, 39, 103–111. [Google Scholar] [CrossRef]
  18. Balkhi, P.; Moallem, M. A Multipurpose Wearable Sensor-Based System for Weight Training. Automation 2022, 3, 132–152. [Google Scholar] [CrossRef]
  19. Calderon-Diaz, M.; Silvestre Aguirre, R.; Vasconez, J.P.; Yanez, R.; Roby, M.; Querales, M.; Salas, R. Explainable Machine Learning Techniques to Predict Muscle Injuries in Professional Soccer Players through Biomechanical Analysis. Sensors 2024, 24, 119. [Google Scholar] [CrossRef]
  20. Mohandas, A.; Ahsan, M.; Haider, J. Tactically Maximize Game Advantage by Predicting Football Substitutions Using Machine Learning. Big Data Cogn. Comput. 2023, 7, 117. [Google Scholar] [CrossRef]
  21. Lerebourg, L.; Saboul, D.; Clemencon, M.; Coquart, J.B. Prediction of Marathon Performance using Artificial Intelligence. Int. J. Sports Med. 2023, 44, 352–360. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the visual function assessment protocol in rhythmic gymnasts.
Figure 2. Rhythmic gymnast performing visual assessment with the DIVE eye-tracking system.
Figure 3. Pairwise comparisons of age groups based on near convergence point (NCP) performance.
Figure 4. Pairwise comparisons of age groups based on visual reaction time. Orange edges indicate statistically significant differences (adjusted p < 0.05), while blue edges indicate non-significant comparisons (p ≥ 0.05).
Figure 5. Pairwise comparisons of age groups based on hand–eye coordination scores. Orange edges represent statistically significant differences between groups (adjusted p < 0.05), while blue edges indicate non-significant comparisons (p ≥ 0.05).
Figure 6. Relationship between NCP and age (R2 = 0.075).
Figure 7. Receiver operating characteristic (ROC) curve for the decision tree model predicting accommodative facility.
Figure 8. Decision tree model trained on the z-score dataset. Nodes indicate split rules, sample size, and class distribution.
Table 1. Descriptive values for the complete sample.

Variable               Mean     Standard Deviation   Min      Max
Age (years)            11.77    3.89                 4.43     26.97
NCP (cm)               1.81     3.42                 0        16
REAF (cpm)             3.99     5.02                 0        31
LEAF (cpm)             4.53     5.32                 0        28
VRT (ms)               1066     241                  518      2120
EHC (bpm)              50.92    11.80                22       88
GOCP (unit)            50.32    10.65                17       77
GFSTP (unit)           58.31    20.75                4        99
GFLTP (unit)           66.20    22.18                1        99
GSP (unit)             49.85    18.60                2        92
GSPP (unit)            42.43    13.09                6        72
FLTBREFS (logdeg2)     0.56     0.37                 −1.09    1.81
FLTBLEFS (logdeg2)     0.56     0.36                 −1.08    1.82
FSTBREFS (logdeg2)     −0.36    0.36                 −1.15    1.22
FSTBLEFS (logdeg2)     −0.36    0.34                 −1.10    1.39
n = 383. NCP: near convergence point; cpm: cycles per minute; REAF: right eye accommodative facility; LEAF: left eye accommodative facility; VRT: visual reaction time; ms: milliseconds; EHC: eye–hand coordination; bpm: beats per minute; GOCP: global oculomotor control performance; GFSTP: global fixation in short tasks performance; GFLTP: global fixation in long tasks performance; GSP: global saccadic performance; GSPP: global smooth pursuit performance; FLTBREFS: fixation in large task binocular right eye fixation stability; logdeg2: logarithm degree2; FLTBLEFS: fixation in large task binocular left eye fixation stability; FSTBREFS: fixation in short task binocular right eye fixation stability; FSTBLEFS: fixation in short task binocular left eye fixation stability.
Table 2. Mean (±SD) of each visual variable stratified by age group. Units are indicated per variable (cpm = cycles per minute; ms = milliseconds; logdeg2 = log degrees squared).

Variable             6–6.9 y       7–7.9 y       8–8.9 y       9–10.9 y      11–12.9 y     13–13.9 y     14–14.9 y     15–18 y       19–27 y
Age (years)          6.04 (0.69)   7.53 (0.29)   8.83 (0.56)   10.94 (0.57)  12.45 (0.30)  13.48 (0.26)  14.42 (0.30)  15.79 (0.57)  19.14 (2.09)
NCP (cm)             0             3.11 (0.53)   0.78 (0.27)   1.64 (0.41)   2.71 (0.60)   2.55 (0.70)   2.66 (0.75)   2.30 (0.70)   3.35 (0.60)
REAF (cpm)           4.24 (0.64)   4.24 (0.64)   3.08 (0.44)   5.75 (0.78)   4.58 (0.73)   4.13 (1.22)   4.17 (1.10)   3.20 (0.85)   3.44 (0.76)
LEAF (cpm)           4.06 (0.68)   2.91 (0.53)   3.48 (0.46)   6.16 (0.80)   5.14 (0.89)   5.45 (1.02)   5.60 (1.20)   4.33 (1.09)   3.93 (0.82)
VRT (ms)             1364 (53)     1272 (32)     1191 (22)     1030 (25)     955 (26.95)   974 (28)      935 (27)      874 (23)      926 (22)
EHC (bpm)            40.97 (1.96)  41.40 (1.19)  44.35 (0.94)  52.72 (1.31)  55.28 (1.87)  54.32 (1.96)  57.14 (1.83)  59.67 (1.95)  57.59 (1.45)
GOCP (unit)          48.38 (1.95)  50.91 (1.99)  50.78 (1.26)  50.83 (1.26)  51.06 (1.90)  51.31 (1.84)  48.74 (2.04)  50.13 (1.58)  50.87 (1.42)
GFSTP (unit)         53.88 (3.98)  67.03 (3.47)  58.73 (2.41)  59.38 (2.69)  59.83 (3.22)  61.15 (3.51)  58.17 (3.88)  53.57 (3.43)  59.68 (2.86)
GFLTP (unit)         67.15 (3.94)  67.15 (3.94)  66.24 (3.03)  69.34 (2.61)  63.18 (3.03)  76.03 (3.37)  63.30 (4.12)  60.93 (3.82)  63.30 (2.96)
GSP (unit)           51.50 (3.25)  51.14 (2.95)  52.54 (2.34)  48.19 (2.41)  50.18 (3.12)  49.19 (3.50)  46.71 (3.52)  49.30 (2.81)  49.26 (2.61)
GSPP (unit)          40.28 (2.49)  41.49 (2.35)  41.39 (1.45)  43.75 (1.54)  41.01 (2.45)  44.48 (1.98)  40.76 (2.47)  46.52 (2.08)  43.30 (2.12)
FLTBREFS (logdeg2)   0.64 (0.65)   0.59 (0.06)   0.59 (0.05)   0.57 (0.43)   0.63 (0.05)   0.38 (0.06)   0.60 (0.06)   0.56 (0.06)   0.52 (0.05)
FLTBLEFS (logdeg2)   0.56 (0.37)   0.59 (0.05)   0.63 (0.05)   0.55 (0.05)   0.62 (0.06)   0.40 (0.06)   0.58 (0.06)   0.55 (0.05)   0.51 (0.05)
FSTBREFS (logdeg2)   −0.30 (0.08)  −0.33 (0.05)  −0.35 (0.04)  −0.35 (0.04)  −0.45 (0.04)  −0.44 (0.05)  −0.34 (0.07)  −0.41 (0.06)  −0.38 (0.06)
FSTBLEFS (logdeg2)   −0.33 (0.07)  −0.32 (0.78)  −0.33 (0.04)  −0.38 (0.04)  −0.38 (0.77)  −0.46 (0.05)  −0.31 (0.06)  −0.37 (0.06)  −0.37 (0.06)
n                    34            35            71            65            38            32            35            30            41
Values in parentheses represent the standard deviation. NCP: near convergence point; cpm: cycles per minute; REAF: right eye accommodative facility; LEAF: left eye accommodative facility; VRT: visual reaction time; ms: milliseconds; EHC: eye–hand coordination; bpm: beats per minute; GOCP: global oculomotor control performance; GFSTP: global fixation in short tasks performance; GFLTP: global fixation in long tasks performance; GSP: global saccadic performance; GSPP: global smooth pursuit performance; FLTBREFS: fixation in large task binocular right eye fixation stability; logdeg2: logarithm degree2; FLTBLEFS: fixation in large task binocular left eye fixation stability; FSTBREFS: fixation in short task binocular right eye fixation stability; FSTBLEFS: fixation in short task binocular left eye fixation stability.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
