Vision

Vision is an international, peer-reviewed, open access journal on vision published quarterly online by MDPI.

Indexed in PubMed | Quartile Ranking JCR - Q3 (Psychology, Experimental | Ophthalmology)

All Articles (568)

This study evaluates the ability of the QuickSee Free (QSF) portable autorefractor (PlenOptika) to detect and measure refractive error compared to subjective clinical refractometry (SCR) in a Brazilian adult population in a low-resource setting in Amazonas. A total of 100 participants aged 18–65 years underwent visual acuity screening and autorefraction with and without cycloplegia using the QSF, alongside a complete ophthalmic examination including SCR. Refractive error measurements included spherical component (SC), cylindrical component (CC), cylindrical axis (CA), spherical equivalent (SE), and vector powers (MV90 and MV135). Accuracy was assessed for hyperopia ≥ +2.00 D, myopia ≤ −0.75 D, astigmatism ≥ 1.00 DC, and anisometropia ≥ 1.00 D using receiver operating characteristic (ROC) curve analysis. The area under the curve for detecting significant refractive errors ranged from 0.538 to 0.930. The mean difference between QSF without cycloplegia and SCR was −1.08 ± 1.17 D for SC and −1.15 ± 1.15 D for SE (p < 0.0001), and with cycloplegia, it was −0.81 ± 1.07 D and −0.83 ± 1.02 D, respectively. The QSF exhibited a moderate negative bias for both SC and SE with and without cycloplegia, underestimating these values, but it showed good predictability for detecting refractive errors in a low-resource setting.
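
The vector-power terms above (SE, MV90, MV135) come from converting the clinical sphere/cylinder/axis notation into orthogonal components so that measurements from different devices can be averaged and compared. A minimal sketch, assuming the standard Thibos power-vector formulas; mapping the paper's MV90/MV135 onto the J0/J45 projections below is an assumption for illustration, not the authors' stated definition:

```python
import numpy as np

def power_vectors(sphere, cyl, axis_deg):
    """Convert a sphero-cylindrical refraction (sphere, cylinder, axis)
    into power-vector components (Thibos notation). Treating the paper's
    MV90/MV135 as J0/J45 is an assumption made for illustration."""
    theta = np.deg2rad(axis_deg)
    se = sphere + cyl / 2.0                   # spherical equivalent (M)
    j0 = -(cyl / 2.0) * np.cos(2.0 * theta)   # 90/180 degree component
    j45 = -(cyl / 2.0) * np.sin(2.0 * theta)  # 45/135 degree component
    return se, j0, j45

# Example: -1.50 DS / -0.75 DC x 180 -> SE = -1.875 D, J0 = 0.375 D, J45 ~ 0 D
print(power_vectors(-1.50, -0.75, 180))
```

In this vector space, a device bias such as the reported −1.15 D mean SE difference is simply the mean of the per-eye (QSF − SCR) differences.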

6 November 2025

Bivariate analysis of the influence of the magnitude vector 90 and magnitude vector 135 parameters on differences between right-eye refraction values obtained by QSF under cycloplegia and by SCR in 100 healthy adults. Manacapuru, Amazonas, 2025. MV90: magnitude vector on the 90° axis; MV135: difference between diopter components projected on the 135° axis and the 45° axis. The ellipse indicates the region expected to contain 95% of the data under a bivariate normal distribution.

Comparative Analysis of Transformer Architectures and Ensemble Methods for Automated Glaucoma Screening in Fundus Images from Portable Ophthalmoscopes

  • Rodrigo Otávio Cantanhede Costa,
  • Pedro Alexandre Ferreira França and
  • Alexandre César Pinto Pessoa
  • + 3 authors

Deep learning for glaucoma screening often relies on high-resolution clinical images and convolutional neural networks (CNNs). However, these methods face significant performance drops when applied to noisy, low-resolution images from portable devices. To address this, our work investigates ensemble methods using multiple Transformer architectures for automated glaucoma detection in challenging scenarios. We use the Brazil Glaucoma (BrG) and private D-Eye datasets to assess model robustness. These datasets include images typical of smartphone-coupled ophthalmoscopes, which are often noisy and variable in quality. Four Transformer models—Swin-Tiny, ViT-Base, MobileViT-Small, and DeiT-Base—were trained and evaluated both individually and in ensembles. We evaluated the results at both image and patient levels to reflect clinical practice. The results show that, although performance drops on lower-quality images, ensemble combinations and patient-level aggregation significantly improve accuracy and sensitivity. We achieved up to 85% accuracy and an 84.2% F1-score on the D-Eye dataset, with a notable reduction in false negatives. Grad-CAM attention maps confirmed that Transformers identify anatomical regions relevant to diagnosis. These findings reinforce the potential of Transformer ensembles as an accessible solution for early glaucoma detection in populations with limited access to specialized equipment.
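
As a rough sketch of what ensemble soft voting plus patient-level aggregation can look like (illustrative only: the averaging scheme, the 0.5 threshold, and the function names are assumptions, not the authors' implementation):

```python
import numpy as np
from collections import defaultdict

def soft_vote(model_probs: np.ndarray) -> np.ndarray:
    """Average per-image glaucoma probabilities across models.
    model_probs has shape (n_models, n_images)."""
    return model_probs.mean(axis=0)

def patient_level(image_probs, patient_ids, threshold=0.5):
    """Aggregate image-level probabilities into one decision per patient
    by averaging over all images of that patient."""
    grouped = defaultdict(list)
    for prob, pid in zip(image_probs, patient_ids):
        grouped[pid].append(prob)
    return {pid: float(np.mean(p)) >= threshold for pid, p in grouped.items()}

# Toy example: 4 models (e.g., Swin, ViT, MobileViT, DeiT), 4 images, 2 patients
probs = np.array([[0.7, 0.4, 0.2, 0.3],
                  [0.6, 0.5, 0.3, 0.2],
                  [0.8, 0.3, 0.1, 0.4],
                  [0.7, 0.6, 0.2, 0.3]])
print(patient_level(soft_vote(probs), ["p1", "p1", "p2", "p2"]))
# {'p1': True, 'p2': False}
```

Averaging over a patient's images before thresholding is what reduces false negatives: a single noisy frame cannot flip the decision on its own.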

3 November 2025

Flowchart of the applied methodology.

Models suggest social anxiety is characterized by negative processing biases. Negative biases also arise from negative mood, i.e., state affect. We examined how social anxiety influences emotional processing and whether state affect modified the relationship between social anxiety and perceptual bias. We quantified bias by determining the point of subjective equality (PSE), the face morph judged equally often as happy and as angry. We found that perceptual bias depended on both social anxiety and state affect. The PSE was greater in individuals high (mean PSE: 8.69) versus low (mean PSE: 3.04) in social anxiety, indicating a stronger negative bias in high social anxiety. State affect modified this relationship: high social anxiety was associated with stronger negative biases, but only in individuals with greater negative affect. State affect and trait anxiety interacted such that social anxiety status alone was insufficient to fully characterize perceptual biases. This raises several issues, such as what constitutes an appropriate control group and the need to account for state affect in social anxiety research. Importantly, our results suggest compensatory effects may counteract the influence of negative mood in individuals low in social anxiety.
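
For context, the PSE is the morph level at which the psychometric function crosses 50%, i.e., the face judged happy and angry equally often. A minimal sketch, assuming a logistic psychometric function and entirely made-up morph data (not the study's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    """Psychometric function: probability of a 'happy' judgment."""
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

# Morph level in % toward happy (negative = toward angry); illustrative data
morph   = np.array([-30.0, -20.0, -10.0, 0.0, 10.0, 20.0, 30.0])
p_happy = np.array([0.05, 0.10, 0.30, 0.45, 0.70, 0.90, 0.97])

(pse, slope), _ = curve_fit(logistic, morph, p_happy, p0=(0.0, 10.0))
print(f"PSE = {pse:.2f}")  # morph level judged happy/angry equally often
```

Under this convention, a higher PSE means the face must be morphed further toward happy before it is judged happy half the time, i.e., a negative bias, matching the 8.69 versus 3.04 contrast reported above.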

1 November 2025

Experimental design. Participants judged a series of randomly selected faces as either happy or angry. Possible face stimuli included 8 unique face identities—4 male and 4 female faces—with each identity morphed along an emotional continuum from angry to neutral to happy (72 possible face stimuli). A fixation cross appeared at screen center (180 s), which participants were asked to fixate. Then, a brief auditory cue (500 Hz) alerted participants to an upcoming face morph (1 s), followed by a question mark (1.5 s). While the question mark was displayed, participants judged if the previously displayed face morph was happy or angry. After the question mark, a fixation cross appeared at screen center (8 s), and the sequence described above was repeated for the next trial. A total of 64 possible trials were presented.

We provide an automated characterization of human retinal cells, namely retinal pigment epithelium (RPE) cells based on non-invasive AO-TFI retinal imaging and photoreceptor (PR) cells based on non-invasive AO-FI retinal imaging, in a large-scale study involving 171 confirmed healthy eyes from 104 participants aged 23 to 80 years. Comprehensive standard checkups based on SD-OCT and fundus imaging modalities were carried out by ophthalmologists from the Luzerner Kantonsspital (LUKS) to confirm the absence of retinal pathologies. AO imaging was performed using the Cellularis® device, and each eye was imaged at various retinal eccentricities. The images were automatically segmented using dedicated software; RPE and PR cells were identified, and morphometric characterizations, such as cell density and area, were computed. The results were stratified by various criteria, such as age, retinal eccentricity, and visual acuity. The automatic segmentation was validated independently on a held-out set by five trained medical students not involved in this study. We plotted cell density variations as a function of eccentricity from the fovea along both nasal and temporal directions. For RPE cells, no consistent trend in density was observed between 0° and 9° eccentricity, contrasting with established histological literature demonstrating foveal density peaks. In contrast, PR cell density showed a clear decrease from 2.5° to 9°. RPE cell density declined linearly with age, whereas no age-related pattern was detected for PR cell density. On average, RPE cell density was ≈6313 cells/mm² (σ = 757), while average PR cell density was ≈10,207 cells/mm² (σ = 1273).
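
The reported densities follow from a simple count-per-area computation on the segmented images. A minimal sketch with placeholder numbers (the field dimensions below are assumptions; the actual Cellularis® field size is not stated here):

```python
def cell_density(n_cells: int, field_width_um: float, field_height_um: float) -> float:
    """Cells per square millimeter from a cell count over a rectangular
    imaged field whose dimensions are given in micrometers."""
    area_mm2 = (field_width_um / 1000.0) * (field_height_um / 1000.0)
    return n_cells / area_mm2

# e.g., 2500 segmented RPE cells in a hypothetical 630 x 630 um field
print(f"{cell_density(2500, 630.0, 630.0):.0f} cells/mm^2")  # ~6299
```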

1 November 2025

AO-TFI image acquisition protocol. Images were acquired at 5 predefined zones with approximate coordinates relative to the fovea for the left (OS) and right (OD) eyes. The positive X-axis direction of the coordinate system points to the temporal side for both eyes.

Reprints of Collections

Visual Mental Imagery System: How We Image the World
Reprint
Editors: David F. Marks

Eye Movements and Visual Cognition
Reprint
Editors: Raymond M. Klein, Simon P. Liversedge

Vision - ISSN 2411-5150