Journal Description
Vision is an international, peer-reviewed, open access journal on vision published quarterly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within ESCI (Web of Science), Scopus, PubMed, PMC, and other databases.
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 23.2 days after submission; acceptance to publication is undertaken in 3.7 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 1.8 (2024)
Latest Articles
Comparative Analysis of Transformer Architectures and Ensemble Methods for Automated Glaucoma Screening in Fundus Images from Portable Ophthalmoscopes
Vision 2025, 9(4), 93; https://doi.org/10.3390/vision9040093 - 3 Nov 2025
Abstract
Deep learning for glaucoma screening often relies on high-resolution clinical images and convolutional neural networks (CNNs). However, these methods face significant performance drops when applied to noisy, low-resolution images from portable devices. To address this, our work investigates ensemble methods using multiple Transformer architectures for automated glaucoma detection in challenging scenarios. We use the Brazil Glaucoma (BrG) and private D-Eye datasets to assess model robustness. These datasets include images typical of smartphone-coupled ophthalmoscopes, which are often noisy and variable in quality. Four Transformer models—Swin-Tiny, ViT-Base, MobileViT-Small, and DeiT-Base—were trained and evaluated both individually and in ensembles. We evaluated the results at both image and patient levels to reflect clinical practice. The results show that, although performance drops on lower-quality images, ensemble combinations and patient-level aggregation significantly improve accuracy and sensitivity. We achieved up to 85% accuracy and an 84.2% F1-score on the D-Eye dataset, with a notable reduction in false negatives. Grad-CAM attention maps confirmed that Transformers identify anatomical regions relevant to diagnosis. These findings reinforce the potential of Transformer ensembles as an accessible solution for early glaucoma detection in populations with limited access to specialized equipment.
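As an illustration of the ensembling and patient-level aggregation described above, the sketch below averages per-image glaucoma probabilities across models and pools them per patient; the fusion rule, the 0.5 threshold, and the data are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np

# Minimal sketch of probability-level ensembling and patient-level aggregation.
# Assumptions (not taken from the paper): each model outputs per-image glaucoma
# probabilities, the ensemble averages them, and a patient is flagged when the
# mean probability over that patient's images exceeds 0.5.

def ensemble_probs(model_probs):
    """Average per-image glaucoma probabilities from several models."""
    return np.mean(np.stack(model_probs, axis=0), axis=0)

def patient_level_decision(image_probs, patient_ids, threshold=0.5):
    """Aggregate image-level probabilities into one decision per patient."""
    decisions = {}
    for pid in np.unique(patient_ids):
        decisions[pid] = float(image_probs[patient_ids == pid].mean()) >= threshold
    return decisions

# Toy example: 3 models, 4 images belonging to 2 patients.
probs = [np.array([0.7, 0.4, 0.9, 0.2]),   # e.g., Swin-Tiny
         np.array([0.6, 0.5, 0.8, 0.3]),   # e.g., ViT-Base
         np.array([0.8, 0.3, 0.7, 0.1])]   # e.g., DeiT-Base
ens = ensemble_probs(probs)
print(patient_level_decision(ens, np.array(["p1", "p1", "p2", "p2"])))
```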
Open Access Article
Biases in Perceiving Positive Versus Negative Emotions: The Influence of Social Anxiety and State Affect
by
Vivian M. Ciaramitaro, Erinda Morina, Jenny L. Wu, Daniel A. Harris and Sarah A. Hayes-Skelton
Vision 2025, 9(4), 92; https://doi.org/10.3390/vision9040092 - 1 Nov 2025
Abstract
Models suggest social anxiety is characterized by negative processing biases. Negative biases also arise from negative mood, i.e., state affect. We examined how social anxiety influences emotional processing and whether state affect, or mood, modified the relationship between social anxiety and perceptual bias. We quantified bias by determining the point of subjective equality, PSE, the face judged equally often as happy and as angry. We found perceptual bias depended on social anxiety and state affect. PSE was greater in individuals high (mean PSE: 8.69) versus low (mean PSE: 3.04) in social anxiety. The higher PSE indicated a stronger negative bias in high social anxiety. State affect modified this relationship, with high social anxiety associated with stronger negative biases, but only for individuals with greater negative affect. State affect and trait anxiety interacted such that social anxiety status alone was insufficient to fully characterize perceptual biases. This raises several issues such as the need to consider what constitutes an appropriate control group and the need to consider state affect in social anxiety. Importantly, our results suggest compensatory effects may counteract the influences of negative mood in individuals low in social anxiety.
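A minimal sketch of how a PSE can be estimated from happy/angry judgments, assuming a cumulative-Gaussian psychometric function fitted with SciPy; the morph levels and response proportions below are invented for illustration, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical morph levels (% angry in a happy-angry morph) and the proportion
# of "angry" responses at each level -- illustrative values only.
morph = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100], dtype=float)
p_angry = np.array([0.02, 0.05, 0.10, 0.22, 0.40, 0.58, 0.75, 0.88, 0.95, 0.98, 0.99])

def psychometric(x, pse, spread):
    # Cumulative Gaussian: the PSE is the morph level judged "angry" 50% of the time.
    return norm.cdf(x, loc=pse, scale=spread)

(pse, spread), _ = curve_fit(psychometric, morph, p_angry, p0=[50.0, 10.0])
print(f"Estimated PSE = {pse:.2f}% (morph level judged angry half the time)")
```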
Open Access Article
In-Vivo Characterization of Healthy Retinal Pigment Epithelium and Photoreceptor Cells from AO-(T)FI Imaging
by
Sohrab Ferdowsi, Leila Sara Eppenberger, Safa Mohanna, Oliver Pfäffli, Christoph Amstutz, Lucas M. Bachmann, Michael A. Thiel and Martin K. Schmid
Vision 2025, 9(4), 91; https://doi.org/10.3390/vision9040091 - 1 Nov 2025
Abstract
We provide an automated characterization of human retinal cells, i.e., retinal pigment epithelium (RPE) cells based on non-invasive AO-TFI retinal imaging and photoreceptor (PR) cells based on non-invasive AO-FI retinal imaging, in a large-scale study involving 171 confirmed healthy eyes from 104 participants aged 23 to 80 years. Comprehensive standard checkups based on SD-OCT and fundus imaging were carried out by ophthalmologists from the Luzerner Kantonsspital (LUKS) to confirm the absence of retinal pathologies. AO imaging was performed using the Cellularis® device, and each eye was imaged at various retinal eccentricities. The images were automatically segmented using dedicated software, RPE and PR cells were identified, and morphometric characterizations such as cell density and area were computed. The results were stratified by criteria such as age, retinal eccentricity, and visual acuity. The automatic segmentation was validated independently on a held-out set by five trained medical students not involved in this study. We plotted cell density variations as a function of eccentricity from the fovea along both nasal and temporal directions. For RPE cells, no consistent trend in density was observed between 0° and 9° eccentricity, contrasting with established histological literature demonstrating foveal density peaks. In contrast, PR cell density showed a clear decrease from 2.5° to 9°. RPE cell density declined linearly with age, whereas no age-related pattern was detected for PR cell density. On average, RPE cell density was ≈6313 cells/mm², while the average PR cell density was ≈10,207 cells/mm².
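As a rough sketch of the morphometric step (cell density derived from a segmentation), assuming a binary cell mask and a known lateral scale; this is not the dedicated segmentation software used in the study.

```python
import numpy as np
from scipy import ndimage

def cell_density(mask, um_per_px):
    """Cells per mm^2 from a binary segmentation mask.

    mask: 2D boolean array, True inside segmented cells.
    um_per_px: lateral image scale in micrometres per pixel (assumed known
    from device calibration; illustrative, not a value from the paper).
    """
    _, n_cells = ndimage.label(mask)                  # count connected components
    area_mm2 = mask.size * (um_per_px / 1000.0) ** 2  # imaged area in mm^2
    return n_cells / area_mm2

# Toy example: a 512x512 patch at 1.5 um/px containing 3 "cells".
toy = np.zeros((512, 512), dtype=bool)
toy[10:20, 10:20] = toy[100:115, 200:215] = toy[300:320, 300:320] = True
print(f"{cell_density(toy, um_per_px=1.5):.0f} cells/mm^2")
```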
Open Access Review
Birefringence of the Human Cornea: A Review
by
Sudi Patel, Larysa Tutchenko and Igor Dmytruk
Vision 2025, 9(4), 90; https://doi.org/10.3390/vision9040090 - 28 Oct 2025
Abstract
Background: This paper aims to provide an overview of corneal birefringence (CB), systematize the knowledge and current understanding of CB, and identify difficulties associated with introducing CB into mainstream clinical practice. Methods: Literature reviews were conducted, seeking articles focused on CB published between the early 19th century and the present time. Secondary-level searches were made examining relevant publications referred to in primary-level publications, ranging back to the early 17th century. The key search words were “corneal birefringence” and “non-invasive measurements”. Results: CB was first recorded by Brewster in 1815. Orthogonally polarized rays travel at different speeds through the cornea, creating a slow axis and a fast axis. The slow axis aligns with the pattern of most corneal stromal collagen fibrils. In vivo, it is oriented along the superior temporal–inferior nasal direction at an angle of about 25° (with an approximate range of −54° to 90°) from the horizontal. CB has been reported to (i) influence the estimation of retinal nerve fiber layer thickness; (ii) be affected by corneal interventions; (iii) be altered in keratoconus; (iv) vary along the depth of the cornea; and (v) be affected by intra-ocular pressure. Conclusions: Under precisely controlled conditions, capturing the CB pattern is the first step in a non-destructive process used to model the ultra-fine structure of the individual cornea, and changes thereof, in vivo.
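For background, the slow-axis/fast-axis description corresponds to the standard retardance relation from polarization optics (general textbook optics, not a formula quoted from this review):

```latex
% Phase retardance accumulated by light of wavelength \lambda traversing a
% birefringent layer of thickness t, where \Delta n = n_{\text{slow}} - n_{\text{fast}}:
\delta = \frac{2\pi}{\lambda}\,\Delta n \, t
```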
Open Access Article
Prevalence of Keratoconus and Associated Risk Factors Among High School Students in Couva, Trinidad: A Cross-Sectional Study
by
Ngozika Esther Ezinne, Shinead Phagoo, Ameera Roopnarinesingh and Michael Agyemang Kwarteng
Vision 2025, 9(4), 89; https://doi.org/10.3390/vision9040089 - 20 Oct 2025
Abstract
Purpose: This study aimed to determine the prevalence and associated risk factors of keratoconus (KC) among high school students in Couva, Trinidad and Tobago. Method: A cross-sectional, school-based approach was used, involving a simple random sampling technique to select schools and students. A structured questionnaire assessed KC risk factors, while clinical assessments, including visual acuity, refraction, slit lamp biomicroscopy, and topography, were performed. Data were analyzed using R. Exact tests were used for KC (n = 2 cases) and robust Poisson regression estimated adjusted prevalence ratios for the ‘at-risk’ screening endpoint. Results: A total of 432 students aged 12–17 years participated, with a response rate of 97.5%. Most participants were of East Indian descent (48.1%), female (52.1%), and 14 years old (23.1%). Approximately 47.7% (95% CI 43.0–52.5%) were at risk of KC, with 0.5% (2/432; exact 95% CI 0.06–1.67%) diagnosed with the condition. The most common risk factors were eye rubbing (87.4%), over eight hours of sun exposure weekly (71.8%), and atopy (68.4%). KC was significantly more frequent among participants with a family history (p = 0.018). Conclusions: The study highlights a low prevalence of diagnosed KC but a high proportion of at-risk students, with a strong link to family history and common risk factors such as eye rubbing and sun exposure. These findings emphasize the urgent need for regular KC screening in schools to ensure early diagnosis and effective management.
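The exact 95% CI quoted for the two diagnosed cases is a Clopper–Pearson interval, which can be reproduced as below (statsmodels' 'beta' method; the study itself used R).

```python
from statsmodels.stats.proportion import proportion_confint

# Exact (Clopper-Pearson) 95% CI for 2 keratoconus cases out of 432 students.
low, high = proportion_confint(count=2, nobs=432, alpha=0.05, method="beta")
print(f"Prevalence = {2/432:.2%}, exact 95% CI: {low:.2%} to {high:.2%}")
# Output is close to the reported 0.06%-1.67%.
```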
Open Access Article
Comparing Visual Search Efficiency Across Different Facial Characteristics
by
Navdeep Kaur, Isabella Hooge and Andrea Albonico
Vision 2025, 9(4), 88; https://doi.org/10.3390/vision9040088 - 15 Oct 2025
Abstract
Face recognition is an important skill that helps people make social judgments by identifying both who a person is and other characteristics such as their expression, age, and ethnicity. Previous models of face processing, such as those proposed by Bruce and Young and by Haxby and colleagues, suggest that identity and other facial features are processed through partly independent systems. This study aimed to compare the efficiency with which different facial characteristics are processed in a visual search task. Participants viewed arrays of two, four, or six faces and judged whether one face differed from the others. Four tasks were created, focusing separately on identity, expression, ethnicity, and gender. We found that search times were significantly longer when looking for identity and shorter when looking for ethnicity. Significant correlations were found among almost all tests in all outcome variables. Comparison of target-present and target-absent trials suggested that performance in none of the tests seems to follow a serial-search-terminating model. These results suggest that different facial characteristics share early processing but differentiate into independent recognition mechanisms at a later stage.
(This article belongs to the Section Visual Neuroscience)
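A minimal sketch of the search-efficiency analysis implied above: fit response time against set size for target-present and target-absent trials and compare slopes, the classic diagnostic for serial self-terminating search. The RT values are invented for illustration.

```python
import numpy as np

# Illustrative mean RTs (ms) at set sizes 2, 4, 6 for one search task.
set_sizes = np.array([2, 4, 6], dtype=float)
rt_present = np.array([620.0, 680.0, 745.0])   # target-present trials (made-up)
rt_absent = np.array([650.0, 780.0, 905.0])    # target-absent trials (made-up)

# Search slope = ms of extra RT per added item (linear fit of RT on set size).
slope_present = np.polyfit(set_sizes, rt_present, 1)[0]
slope_absent = np.polyfit(set_sizes, rt_absent, 1)[0]

print(f"present slope: {slope_present:.1f} ms/item, "
      f"absent slope: {slope_absent:.1f} ms/item, "
      f"ratio: {slope_absent / slope_present:.2f}")
# A slope ratio near 2 is the classic signature of serial self-terminating search.
```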
Open Access Article
Development of a Neural Network to Predict Optimal IOP Reduction in Glaucoma Management
by
Raheem Remtulla, Sidrat Rahman and Hady Saheb
Vision 2025, 9(4), 87; https://doi.org/10.3390/vision9040087 - 15 Oct 2025
Abstract
Glaucoma management relies on lowering intraocular pressure (IOP), but determining the target reduction at presentation is challenging, particularly in normal-tension glaucoma (NTG). We developed and internally validated a neural network regression model using retrospective clinical data from Qiu et al. (2015), including 270 patients (118 with NTG). A single-layer artificial neural network with five nodes was trained in MATLAB R2024b using the Levenberg–Marquardt algorithm. Inputs included demographic, refractive, structural, and functional parameters, with IOP reduction as the output. Data were split into 65% training, 15% validation, and 20% testing, with training repeated 10 times. Model performance was strong and consistent (average RMSE: 1.90 ± 0.29 training, 2.18 ± 0.34 validation, 2.11 ± 0.30 testing; Pearson’s r: 0.92 ± 0.02, 0.88 ± 0.02, 0.88 ± 0.04). The best-performing model achieved RMSEs of 1.57, 2.90, and 1.77 with r values of 0.93, 0.91, and 0.93, respectively. Feature ablation revealed significant contributions from IOP, axial length, CCT, diagnosis, VCDR, spherical equivalent, mean deviation, and laterality. This study demonstrates that a simple neural network can reliably predict individualized IOP reduction targets, supporting personalized glaucoma management and improved outcomes.
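A structural sketch of the described network (one hidden layer with five nodes, regression output) using scikit-learn and synthetic data. The original model was trained in MATLAB with the Levenberg–Marquardt algorithm, which scikit-learn does not offer, so the solver and data here are assumptions for illustration only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic stand-in for the clinical feature matrix (demographic, refractive,
# structural, functional inputs) and the IOP-reduction target -- illustrative only.
X = rng.normal(size=(270, 8))
y = 5.0 + 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=1.0, size=270)

# A simple 80/20 split stands in for the paper's 65/15/20 train/validation/test split.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(5,),   # single hidden layer, five nodes
                     solver="lbfgs", max_iter=5000, random_state=0)
model.fit(X_train, y_train)

rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
print(f"test RMSE on synthetic data: {rmse:.2f}")
```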
Open Access Article
Clinical Assessment of a Virtual Reality Perimeter Versus the Humphrey Field Analyzer: Comparative Reliability, Usability, and Prospective Applications
by
Marco Zeppieri, Caterina Gagliano, Francesco Cappellani, Federico Visalli, Fabiana D’Esposito, Alessandro Avitabile, Roberta Amato, Alessandra Cuna and Francesco Pellegrini
Vision 2025, 9(4), 86; https://doi.org/10.3390/vision9040086 - 11 Oct 2025
Abstract
Background: This study compared the performance of a Head-mounted Virtual Reality Perimeter (HVRP) with the Humphrey Field Analyzer (HFA). The HFA is the established standard for automated perimetry but is constrained by lengthy testing, bulky equipment, and limited patient comfort. Comparative data on newer head-mounted virtual reality perimeters are limited, leaving uncertainty about their clinical reliability and potential advantages. Aim: The aim was to evaluate parameters such as visual field outcomes, portability, patient comfort, eye tracking, and usability. Methods: Participants underwent testing with both devices, assessing metrics such as mean deviation (MD), pattern standard deviation (PSD), and test duration. Results: The HVRP demonstrated small but statistically significant differences in MD and PSD compared to the HFA, while maintaining a consistent trend across participants. MD values were slightly more negative for the HFA than the HVRP (average difference −0.60 dB, p = 0.0006), while PSD was marginally higher with the HFA (average difference 0.38 dB, p = 0.00018). Although statistically significant, these differences were small in magnitude and do not undermine the clinical utility or reproducibility of the device. Notably, testing times were markedly shorter with the HVRP (7.15 vs. 18.11 min, mean difference 10.96 min, p < 0.0001). Its lightweight, portable design allowed for bedside and home testing, enhancing accessibility for pediatric, geriatric, and mobility-impaired patients. Participants reported greater comfort with the headset design, which eliminated the need for chin rests. The device also offers potential for AI integration and remote data analysis. Conclusions: The HVRP proved to be a reliable, user-friendly alternative to traditional perimetry. Its advantages in comfort, portability, and test efficiency support its use in clinical settings and expand possibilities for bedside assessment, home monitoring, and remote screening, particularly in populations with limited access to conventional perimetry.
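The MD/PSD agreement analysis follows a standard paired design; a minimal sketch with invented paired values is shown below (the abstract does not specify the exact test, so the paired t-test and limits of agreement here are illustrative choices).

```python
import numpy as np
from scipy import stats

# Made-up paired mean-deviation (MD) values, one pair per participant (dB).
md_hfa = np.array([-2.1, -3.4, -1.0, -4.2, -2.8, -0.9, -3.1, -2.5])
md_hvrp = np.array([-1.6, -2.7, -0.6, -3.5, -2.4, -0.2, -2.6, -1.9])

diff = md_hfa - md_hvrp
t, p = stats.ttest_rel(md_hfa, md_hvrp)   # paired comparison of the two devices
print(f"mean HFA-HVRP difference: {diff.mean():.2f} dB (t = {t:.2f}, p = {p:.4f})")

# Bland-Altman style limits of agreement are another common way to report this.
loa = diff.mean() + np.array([-1.96, 1.96]) * diff.std(ddof=1)
print(f"95% limits of agreement: {loa[0]:.2f} to {loa[1]:.2f} dB")
```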
Open Access Article
Comparative Assessment of Large Language Models in Optics and Refractive Surgery: Performance on Multiple-Choice Questions
by
Leah Attal, Elad Shvartz, Alon Gorenshtein, Shirley Pincovich and Daniel Bahir
Vision 2025, 9(4), 85; https://doi.org/10.3390/vision9040085 - 9 Oct 2025
Abstract
This study aimed to evaluate the performance of seven advanced AI Large Language Models (LLMs)—ChatGPT 4o, ChatGPT O3 Mini, ChatGPT O1, DeepSeek V3, DeepSeek R1, Gemini 2.0 Flash, and Grok-3—in answering multiple-choice questions (MCQs) in optics and refractive surgery, and to assess their role in medical education for residents. The AI models were tested using 134 publicly available MCQs from national ophthalmology certification exams, categorized by the need to perform calculations, the relevant subspecialty, and the use of images. Accuracy was analyzed and compared statistically. ChatGPT O1 achieved the highest overall accuracy (83.5%), excelling in complex optical calculations (84.1%) and optics questions (82.4%). DeepSeek V3 displayed superior accuracy in refractive surgery-related questions (89.7%), followed by ChatGPT O3 Mini (88.4%). ChatGPT O3 Mini significantly outperformed the other models in image analysis, with 88.2% accuracy. Moreover, ChatGPT O1 demonstrated comparable accuracy on calculated and non-calculated questions (84.1% vs. 83.3%), in stark contrast to the other models, which exhibited large discrepancies between the two question types. The findings highlight the ability of LLMs to achieve high accuracy in ophthalmology MCQs, particularly on complex optical calculations and visual items, and suggest potential applications in exam preparation and medical training. Future studies should use larger, multilingual datasets and directly evaluate the role and impact of these models in medical education to confirm and extend these preliminary findings.
Open Access Article
Myopia Prediction Using Machine Learning: An External Validation Study
by
Rajat S. Chandra, Bole Ying, Jianyong Wang, Hongguang Cui, Guishuang Ying and Julius T. Oatts
Vision 2025, 9(4), 84; https://doi.org/10.3390/vision9040084 - 9 Oct 2025
Abstract
We previously developed machine learning (ML) models for predicting cycloplegic spherical equivalent refraction (SER) and myopia using non-cycloplegic data and following a standardized protocol (cycloplegia with 0.5% tropicamide and biometry using NIDEK A-scan), but the models’ performance may not be generalizable to other settings. This study evaluated the performance of ML models in an independent cohort using a different cycloplegic agent and biometer. Chinese students (N = 614) aged 8–13 years underwent autorefraction before and after cycloplegia with 0.5% tropicamide (n = 505) or 1% cyclopentolate (n = 109). Biometric measures were obtained using an IOLMaster 700 (n = 207) or Optical Biometer SW-9000 (n = 407). ML models were evaluated using R2, mean absolute error (MAE), sensitivity, specificity, and area under the ROC curve (AUC). The XGBoost model predicted cycloplegic SER very well (R2 = 0.95, MAE (SD) = 0.32 (0.30) D). Both ML models predicted myopia well (random forest: AUC 0.99, sensitivity 93.7%, specificity 96.4%; XGBoost: sensitivity 90.1%, specificity 96.8%) and accurately predicted the myopia rate (observed 62.9%; random forest: 60.6%; XGBoost: 58.8%) despite heterogeneous cycloplegia and biometry factors. In this independent cohort of students, XGBoost and random forest performed very well for predicting cycloplegic SER and myopia status using non-cycloplegic data. This external validation study demonstrated that ML may provide a useful tool for estimating cycloplegic SER and myopia prevalence with heterogeneous clinical parameters, and study in additional populations is warranted.
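A short sketch of how the quoted validation metrics (R², MAE, AUC, sensitivity, specificity) are computed with scikit-learn; the arrays below are placeholders, not study data.

```python
import numpy as np
from sklearn.metrics import (r2_score, mean_absolute_error, roc_auc_score,
                             confusion_matrix)

# Placeholder arrays: observed cycloplegic SER (D), model-predicted SER (D),
# observed myopia status, and predicted myopia probability -- not study data.
ser_true = np.array([-2.50, -1.25, 0.50, -3.75, -0.75, 1.00])
ser_pred = np.array([-2.30, -1.40, 0.30, -3.50, -0.60, 0.80])
myopia_true = np.array([1, 1, 0, 1, 1, 0])
myopia_prob = np.array([0.95, 0.80, 0.10, 0.98, 0.70, 0.20])

print("R2:", round(r2_score(ser_true, ser_pred), 3))
print("MAE (D):", round(mean_absolute_error(ser_true, ser_pred), 3))
print("AUC:", round(roc_auc_score(myopia_true, myopia_prob), 3))

tn, fp, fn, tp = confusion_matrix(myopia_true, (myopia_prob >= 0.5).astype(int)).ravel()
print(f"sensitivity: {tp / (tp + fn):.2%}, specificity: {tn / (tn + fp):.2%}")
```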
Open Access Article
Oculomotor Training Improves Reading and Associated Cognitive Functions in Children with Learning Difficulties: A Pilot Study
by
Alessio Facchin, Silvio Maffioletti, Marta Maffioletti, Gabriele Esposito, Marta Bonetti, Luisa Girelli and Roberta Daini
Vision 2025, 9(4), 83; https://doi.org/10.3390/vision9040083 - 7 Oct 2025
Abstract
In the first years of schooling, inefficient eye movements can impair the development of reading skills. Nonetheless, the improvement of these abilities has been little investigated in children. This pilot study aimed to verify the effectiveness of Office Based Oculomotor Training (OBOT) in enhancing reading skills in ‘poor’ readers. Twenty-one children (aged 7–12 years) underwent an assessment of reading, visual, and perceptual abilities before and after either training of oculomotor skills (i.e., execution of saccadic movements with symbol charts in various modes and types; 14 participants) or a simple reading exercise (7 participants). The overall duration of the training was six weeks. The results showed that only the group that received oculomotor training improved, not only in oculomotor abilities but also in reading, visuo-perceptual skills, and the ability to resolve crowding. These preliminary results suggest that improving oculomotor abilities can lead to an indirect improvement in reading during development.
Open Access Article
Ocular Manifestations in Pediatric Traumatic Brain Injury Admitted to the ICU: A Prospective Analysis
by
Amer Jaradat, Rami Al-Dwairi, Adam Abdallah, Atef F. Hulliel, Rawhi Alshaykh, Mahmood Al Nuaimi, Ala’ Al Barbarawi, Seren Al Beiruti and Abdelwahab Aleshawi
Vision 2025, 9(4), 82; https://doi.org/10.3390/vision9040082 - 4 Oct 2025
Abstract
Background: Traumatic Brain Injury (TBI) in children is a major cause of morbidity and mortality worldwide. Ocular manifestations are common but often overlooked, despite their potential to cause long-term visual impairment. This study aimed to evaluate the prevalence and characteristics of ocular findings in pediatric TBI patients admitted to the intensive care unit (ICU). Method: We prospectively reviewed records of pediatric patients (≤16 years) with TBI admitted to the Neurosurgery ICU at King Abdullah University Hospital (January 2022–December 2024). TBI was defined using U.S. CDC criteria and confirmed by clinical and radiological findings. Ocular manifestations were identified from ophthalmology consultations, neurosurgical notes, and bedside examinations. Demographics, injury details, and clinical outcomes were recorded. Statistical analyses included Chi-square, Fisher’s exact, and Mann–Whitney U tests, with significance set at p ≤ 0.05. Results: Thirty-eight patients (median age: 8 years; 55.3% male) were included. Ocular findings were present in 20 patients (52.6%). These patients were significantly older (median age 10 vs. 6 years, p = 0.007) and had lower admission GCS scores (11 vs. 14, p = 0.016). Male predominance was higher in the ocular group (75.0% vs. 33.3%, p = 0.030). Ocular findings were significantly associated with surgical intervention (60.0% vs. 22.2%, p = 0.025), orbital fractures (40.0% vs. 5.6%, p = 0.021), basal skull fracture signs (p = 0.036), and extraocular muscle limitation (p = 0.048). On multivariable analysis, orbital fracture remained the only independent predictor of ocular findings (aOR 2.22, 95% CI 1.17–3.57, p = 0.02). Conclusion: Over half of pediatric ICU TBI patients demonstrated ocular manifestations, closely linked to greater injury severity and craniofacial trauma. Routine, comprehensive ophthalmological evaluation should be integrated into the multidisciplinary management of severe pediatric TBI to optimize visual and functional outcomes.
Open Access Article
Too Bright to Focus? Influence of Brightness Illusions and Ambient Light Levels on the Dynamics of Ocular Accommodation
by
Antonio Rodán, Angélica Fernández-López, Jesús Vera, Pedro R. Montoro, Beatriz Redondo and Antonio Prieto
Vision 2025, 9(4), 81; https://doi.org/10.3390/vision9040081 - 30 Sep 2025
Abstract
Can brightness illusions modulate ocular accommodation? Previous studies have shown that brightness illusions can influence pupil size as if caused by actual luminance increases. However, their effects on other ocular responses—such as accommodative or focusing dynamics—remain largely unexplored. This study investigates the influence of brightness illusions, under two ambient lighting conditions, on accommodative and pupillary dynamics (physiological responses), and on perceived brightness and visual comfort (subjective responses). Thirty-two young adults with healthy vision viewed four stimulus types (blue bright and non-bright, yellow bright and non-bright) under low- and high-contrast ambient lighting while ocular responses were recorded using a WAM-5500 open-field autorefractor. Brightness and comfort were rated after each session. The results showed that high ambient contrast (mesopic) and brightness illusions increased accommodative variability, while yellow stimuli elicited a greater lag under the photopic condition. Pupil size decreased only under mesopic lighting. Perceived brightness was enhanced by brightness illusions and blue color, whereas visual comfort decreased for bright illusions, especially under low light. These findings suggest that ambient lighting and visual stimulus properties modulate both physiological and subjective responses, highlighting the need for dynamic accommodative assessment and visually ergonomic display design to reduce visual fatigue during digital device use.
(This article belongs to the Special Issue Effects of Optical and Behavioral Factors on the Ocular Accommodation Response)
Open Access Systematic Review
Evaluating the Clinical Validity of Commercially Available Virtual Reality Headsets for Visual Field Testing: A Systematic Review
by
Jesús Vera, Alan N. Glazier, Mark T. Dunbar, Douglas Ripkin and Masoud Nafey
Vision 2025, 9(4), 80; https://doi.org/10.3390/vision9040080 - 24 Sep 2025
Abstract
Virtual reality (VR) technology has emerged as a promising alternative to conventional perimetry for assessing visual fields. However, the clinical validity of commercially available VR-based perimetry devices remains uncertain due to variability in hardware, software, and testing protocols. A systematic review was conducted following PRISMA guidelines to evaluate the validity of VR-based perimetry compared to the Humphrey Field Analyzer (HFA). Literature searches were performed across MEDLINE, Embase, Scopus, and Web of Science. Studies were included if they assessed commercially available VR-based visual field devices in comparison to HFA and reported visual field outcomes. Devices were categorized by regulatory status (FDA, CE, or uncertified), and results were synthesized narratively. Nineteen studies were included. Devices such as Heru, Olleyes VisuALL, and the Advanced Vision Analyzer showed promising agreement with HFA metrics, especially in moderate to advanced glaucoma. However, variability in performance was observed depending on disease severity, population type, and device specifications. Limited dynamic range and lack of eye tracking were common limitations in lower-complexity devices. Pediatric validation and performance in early-stage disease were often suboptimal. Several VR-based perimetry systems demonstrate clinically acceptable validity compared to HFA, particularly in certain patient subgroups. However, broader validation, protocol standardization, and regulatory approval are essential for widespread clinical adoption. These devices may support more accessible visual field testing through telemedicine and decentralized care.
Open Access Article
A Simulated Visual Field Defect Impairs Temporal Processing: An Effect Not Modulated by Emotional Faces
by
Mohammad Ahsan Khodami and Luca Battaglini
Vision 2025, 9(3), 79; https://doi.org/10.3390/vision9030079 - 16 Sep 2025
Abstract
Temporal processing is fundamental to visual perception, yet little is known about how it functions under compromised visual field conditions or whether emotional stimuli, as reported in the literature, can modulate it. This study investigated temporal resolution using a two-flash fusion paradigm with a static, semi-transparent overlay (opacity 0.60) that degraded the right visual hemifield, and examined the potential modulatory effects of emotional faces. In Experiment 1, participants were asked to report whether they perceived one or two flashes presented at either −6° (normal vision) or +6° (beneath the simulated scotoma) across eight interstimulus intervals (ISIs), ranging from 10 to 80 ms in steps of 10 ms. Results showed significantly impaired temporal discrimination in the degraded vision condition, with elevated thresholds (52.29 ms vs. 34.78 ms) and reduced accuracy, particularly at intermediate ISIs (30–60 ms). In Experiment 2, we introduced emotional faces before flash presentation to determine whether emotional content would differentially affect temporal processing. Our findings indicate that neither normal nor scotoma-impaired temporal processing was modulated by the specific emotional content (angry, happy, or neutral) of the facial primes.
(This article belongs to the Section Visual Neuroscience)
Open Access Systematic Review
Stevens–Johnson Syndrome and Toxic Epidermal Necrolysis: A Systematic Review of Ophthalmic Management and Treatment
by
Korolos Sawires, Brendan K. Tao, Harrish Nithianandan, Larena Menant-Tay, Michael O’Connor, Peng Yan and Parnian Arjmand
Vision 2025, 9(3), 78; https://doi.org/10.3390/vision9030078 - 11 Sep 2025
Abstract
Background: Stevens–Johnson Syndrome (SJS) and Toxic Epidermal Necrolysis (TEN) are rare, life-threatening mucocutaneous disorders often associated with severe ophthalmic complications. Ocular involvement occurs in 50–68% of cases and can result in permanent vision loss. Despite this, optimal management strategies remain unclear, and treatment practices vary widely. Methods: A systematic review was conducted in accordance with PRISMA guidelines and prospectively registered on PROSPERO (CRD420251022655). Medline, Embase, and CENTRAL were searched from 1998 to 2024 for English-language studies reporting treatment outcomes for ocular SJS/TEN. Results: A total of 194 studies encompassing 6698 treated eyes were included. Best-corrected visual acuity (BCVA) improved in 52.2% of eyes, epithelial regeneration occurred in 16.8%, and symptom relief was reported in 26.3%. Common treatments included topical therapy (n = 1424), mucosal grafts (n = 1220), contact lenses (n = 1134), amniotic membrane transplantation (AMT) (n = 889), systemic medical therapy (n = 524), and punctal occlusion (n = 456). Emerging therapies included TNF-alpha inhibitors, anti-VEGF agents, photodynamic therapy, and 5-fluorouracil. Conclusions: Disease-stage-specific therapy is crucial in ocular SJS/TEN. Acute interventions such as AMT may prevent long-term complications, while chronic care targets structural and tear-film abnormalities. Further prospective studies are needed to standardize care and optimize visual outcomes.
Open Access Article
Predicting Pattern Standard Deviation in Glaucoma: A Machine Learning Approach Leveraging Clinical Data
by
Raheem Remtulla, Patrik Abdelnour, Daniel R. Chow, Andres C. Ramos, Guillermo Rocha and Paul Harasymowycz
Vision 2025, 9(3), 77; https://doi.org/10.3390/vision9030077 - 1 Sep 2025
Abstract
Visual field (VF) testing is crucial for the management of glaucoma. However, the process is often hindered by technician shortages and reliability issues. In this retrospective machine learning study, we leveraged publicly accessible data from 743 eyes (541 glaucoma and 202 non-glaucoma controls) to predict pattern standard deviation (PSD) from clinical inputs. An artificial neural network (ANN) model was trained using seven clinical input features: mean retinal nerve fiber layer (RNFL) thickness, IOP, patient age, CCT, glaucoma diagnosis, study protocol, and laterality. The ANN demonstrated efficient training across 1000 epochs, with consistent error reduction in the training and test sets. Mean RMSEs were 1.67 ± 0.05 for training and 2.27 ± 0.27 for testing; the correlation coefficient (r) was 0.89 ± 0.01 for training and 0.81 ± 0.04 for testing, indicating strong predictive accuracy with minimal overfitting. A leave-one-feature-out (LOFO) analysis revealed that the primary contributors to PSD prediction were RNFL, CCT, IOP, glaucoma status, study protocol, and age, listed in order of significance. Our neural network successfully predicted PSD from RNFL and clinical data with strong performance metrics, in addition to demonstrating construct validity. This work demonstrates that neural networks hold the potential to predict, or even generate, VF estimations based solely on RNFL and clinical inputs.
(This article belongs to the Special Issue Retinal and Optic Nerve Diseases: New Advances and Current Challenges)
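A generic sketch of leave-one-feature-out (LOFO) ablation of the kind described above: re-estimate the model with each feature removed and attribute the change in cross-validated error to that feature. The model class and data below are placeholders, not the study's ANN or dataset.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
features = ["RNFL", "IOP", "age", "CCT", "diagnosis", "protocol", "laterality"]

# Synthetic stand-in data: PSD driven mostly by the first two features.
X = rng.normal(size=(300, len(features)))
y = 3.0 - 1.2 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(scale=0.5, size=300)

def cv_rmse(X_sub):
    """Cross-validated RMSE of a stand-in regressor on the given feature set."""
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    scores = cross_val_score(model, X_sub, y, cv=5,
                             scoring="neg_root_mean_squared_error")
    return -scores.mean()

baseline = cv_rmse(X)
for i, name in enumerate(features):
    ablated = cv_rmse(np.delete(X, i, axis=1))   # drop one feature, refit
    print(f"dropping {name:<10} changes RMSE by {ablated - baseline:+.3f}")
```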
Open Access Article
Gaze Dispersion During a Sustained-Fixation Task as a Proxy of Visual Attention in Children with ADHD
by
Lionel Moiroud, Ana Moscoso, Eric Acquaviva, Alexandre Michel, Richard Delorme and Maria Pia Bucci
Vision 2025, 9(3), 76; https://doi.org/10.3390/vision9030076 - 1 Sep 2025
Abstract
Aim: The aim of this preliminary study was to explore visual attention in children with ADHD using eye-tracking, and to identify a relevant quantitative proxy of their attentional control. Methods: Twenty-two children diagnosed with ADHD (aged 7 to 12 years) and 24 sex- and age-matched control participants with typical development performed a visual sustained-fixation task using an eye-tracker. Fixation stability was estimated by calculating the bivariate contour ellipse area (BCEA) as a continuous index of gaze dispersion during the task. Results: Children with ADHD showed a significantly higher BCEA than control participants (p < 0.001), reflecting increased gaze instability. The impairment in gaze fixation persisted even in the absence of visual distractors, suggesting intrinsic attentional dysregulation in ADHD. Conclusions: Our results provide preliminary evidence that eye-tracking, coupled with BCEA analysis, provides a sensitive and non-invasive tool for quantifying the visual attentional resources of children with ADHD. If replicated and extended, the increased use of gaze instability as an indicator of visual attention in children could have a major impact in clinical settings by assisting clinicians. Note that this analysis focuses on overall gaze dispersion rather than fine eye micro-movements such as microsaccades.
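One common formulation of the BCEA used as a fixation-stability index is sketched below; the 68.2% coverage constant is a frequent choice and the gaze samples are illustrative, not necessarily the study's exact parameters.

```python
import numpy as np

def bcea(x_deg, y_deg, coverage=0.682):
    """Bivariate contour ellipse area (deg^2) of gaze samples.

    Common formulation: BCEA = 2*k*pi*sx*sy*sqrt(1 - rho^2), with
    k = -ln(1 - coverage); coverage = 0.682 is a frequently used value.
    """
    sx, sy = np.std(x_deg, ddof=1), np.std(y_deg, ddof=1)
    rho = np.corrcoef(x_deg, y_deg)[0, 1]
    k = -np.log(1.0 - coverage)
    return 2.0 * k * np.pi * sx * sy * np.sqrt(1.0 - rho ** 2)

# Illustrative gaze samples (degrees) recorded during one fixation trial;
# larger scatter yields a larger BCEA, i.e. less stable fixation.
rng = np.random.default_rng(2)
x = rng.normal(scale=0.4, size=2000)
y = rng.normal(scale=0.3, size=2000)
print(f"BCEA = {bcea(x, y):.3f} deg^2")
```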
Open Access Article
A Comparative Study Between Clinical Optical Coherence Tomography (OCT) Analysis and Artificial Intelligence-Based Quantitative Evaluation in the Diagnosis of Diabetic Macular Edema
by
Camila Brandão Fantozzi, Letícia Margaria Peres, Jogi Suda Neto, Cinara Cássia Brandão, Rodrigo Capobianco Guido and Rubens Camargo Siqueira
Vision 2025, 9(3), 75; https://doi.org/10.3390/vision9030075 - 1 Sep 2025
Abstract
Recent advances in artificial intelligence (AI) have transformed ophthalmic diagnostics, particularly for retinal diseases. In this prospective, non-randomized study, we evaluated the performance of an AI-based software system against conventional clinical assessment—both quantitative and qualitative—of optical coherence tomography (OCT) images for diagnosing diabetic macular edema (DME). A total of 700 OCT exams were analyzed across 26 features, including demographic data (age, sex), eye laterality, visual acuity, and 21 quantitative OCT parameters (Macula Map A X-Y). We tested two classification scenarios: binary (DME presence vs. absence) and multiclass (six distinct DME phenotypes). To streamline feature selection, we applied paraconsistent feature engineering (PFE), isolating the most diagnostically relevant variables. We then compared the diagnostic accuracies of logistic regression, support vector machines (SVM), K-nearest neighbors (KNN), and decision tree models. In the binary classification using all features, SVM and KNN achieved 92% accuracy, while logistic regression reached 91%. When restricted to the four PFE-selected features, accuracy modestly declined to 84% for both logistic regression and SVM. These findings underscore the potential of AI—and particularly PFE—as an efficient, accurate aid for DME screening and diagnosis.
(This article belongs to the Section Retinal Function and Disease)
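A minimal sketch of the classifier comparison described above (logistic regression, SVM, KNN, decision tree on tabular OCT features) using scikit-learn; the data are synthetic placeholders and the paraconsistent feature engineering (PFE) step is not reproduced.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
# Synthetic stand-in for the 26 tabular features and a binary DME label.
X = rng.normal(size=(700, 26))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=700) > 0).astype(int)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "decision tree": DecisionTreeClassifier(random_state=0),
}
for name, clf in models.items():
    acc = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5).mean()
    print(f"{name:>20}: {acc:.2%} cross-validated accuracy")
```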
Open Access Article
Modulating Multisensory Processing: Interactions Between Semantic Congruence and Temporal Synchrony
by
Susan Geffen, Taylor Beck and Christopher W. Robinson
Vision 2025, 9(3), 74; https://doi.org/10.3390/vision9030074 - 1 Sep 2025
Abstract
Presenting information to multiple sensory modalities often facilitates or interferes with processing, yet the mechanisms remain unclear. Using a Stroop-like task, the two reported experiments examined how semantic congruency and incongruency in one sensory modality affect processing and responding in a different modality. Participants were presented with pictures and sounds simultaneously (Experiment 1) or asynchronously (Experiment 2) and had to report whether the visual or auditory stimulus was an animal or a vehicle, while ignoring the other modality. Semantic congruency and incongruency in the unattended modality both affected responses in the attended modality, with visual stimuli having larger effects on auditory processing than the reverse (Experiment 1). Effects of visual input on auditory processing decreased at longer stimulus onset asynchronies (SOAs), while effects of auditory input on visual processing increased with SOA and were correlated with relative processing speed (Experiment 2). These results suggest that congruence and modality both impact multisensory processing.
Special Issues
Special Issue in Vision
Retinal and Optic Nerve Diseases: New Advances and Current Challenges
Guest Editor: Livio Vitiello; Deadline: 31 December 2025
Special Issue in Vision
Cue Integration and Depth Perception
Guest Editor: Reuben Rideaux; Deadline: 31 December 2025
Special Issue in Vision
Neural Mechanisms of Coordinated Eye-Hand Behaviors
Guest Editor: Maureen Hagan; Deadline: 30 March 2026


