Search Results (449)

Search Parameters:
Keywords = near vision

18 pages, 7965 KiB  
Article
Identification of Environmental Noise Traces in Seismic Recordings Using Vision Transformer and Mel-Spectrogram
by Qianlong Ding, Shuangquan Chen, Jinsong Shen and Borui Wang
Appl. Sci. 2025, 15(15), 8586; https://doi.org/10.3390/app15158586 - 1 Aug 2025
Viewed by 236
Abstract
Environmental noise is inevitable during seismic data acquisition, with major sources including heavy machinery, rivers, wind, and other environmental factors. During field data acquisition, it is important to assess the impact of environmental noise and evaluate data quality. In subsequent seismic data processing, these noise components also need to be eliminated. Accurate identification of noise traces facilitates rapid quality control (QC) during fieldwork and provides a reliable basis for targeted noise attenuation. Conventional environmental noise identification primarily relies on amplitude differences. However, in seismic data, high-amplitude signals are not necessarily caused by environmental noise. For example, surface waves or traces near the shot point may also exhibit high amplitudes. Therefore, relying solely on amplitude-based criteria has certain limitations. To improve noise identification accuracy, we use the Mel-spectrogram to extract features from seismic data and construct the dataset. Compared to raw time-series signals, the Mel-spectrogram more clearly reveals energy variations and frequency differences, helping to identify noise traces more accurately. We then employ a Vision Transformer (ViT) network to train a model for identifying noise in seismic data. Tests on synthetic and field data show that the proposed method performs well in identifying noise. Moreover, a denoising case based on synthetic data further confirms its general applicability, making it a promising tool in seismic data QC and processing workflows.
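
A rough sketch of the pipeline this abstract describes, per-trace Mel-spectrograms fed to a ViT classifier, is given below; the sampling rate, trace length, FFT settings, and the pretrained timm model name are illustrative assumptions rather than details taken from the paper.

```python
import librosa
import numpy as np
import timm
import torch
import torch.nn.functional as F

def trace_to_mel(trace, sr=500, n_mels=64):
    """Convert one seismic trace (1-D array) into a 3-channel log-Mel image."""
    mel = librosa.feature.melspectrogram(
        y=trace.astype(np.float32), sr=sr, n_fft=256, hop_length=64, n_mels=n_mels)
    logmel = librosa.power_to_db(mel, ref=np.max)
    img = torch.from_numpy(np.stack([logmel] * 3)).unsqueeze(0)  # (1, 3, mels, frames)
    return F.interpolate(img, size=(224, 224))                   # ViT input size

# Binary head: environmental-noise trace vs. clean trace (assumed label scheme).
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=2)
trace = np.random.randn(2000)            # placeholder for one recorded trace
logits = model(trace_to_mel(trace))
```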

13 pages, 692 KiB  
Article
Contrast Sensitivity Comparison of Daily Simultaneous-Vision Center-Near Multifocal Contact Lenses: A Pilot Study
by David P. Piñero, Ainhoa Molina-Martín, Elena Martínez-Plaza, Kevin J. Mena-Guevara, Violeta Gómez-Vicente and Dolores de Fez
Vision 2025, 9(3), 67; https://doi.org/10.3390/vision9030067 - 1 Aug 2025
Viewed by 169
Abstract
Our purpose is to evaluate the binocular contrast sensitivity function (CSF) in a presbyopic population and compare the results obtained with four different simultaneous-vision center-near multifocal contact lens (MCL) designs for distance vision under two illumination conditions. Additionally, chromatic CSF (red-green and blue-yellow) was evaluated. A randomized crossover pilot study was conducted. Four daily disposable lens designs, based on simultaneous-vision and center-near correction, were compared. The achromatic contrast sensitivity function (CSF) was measured binocularly using the CSV1000e test under two lighting conditions: room light on and off. Chromatic CSF was measured using the OptoPad-CSF test. Comparison of achromatic results with room lighting showed a statistically significant difference only for 3 cpd (p = 0.03) between the baseline visit (with spectacles) and all MCLs. Comparison of achromatic results without room lighting showed no statistically significant differences between the baseline and all MCLs for any spatial frequency (p > 0.05 in all cases). Comparison of CSF-T results showed a statistically significant difference only for 4 cpd (p = 0.002). Comparison of CSF-D results showed no statistically significant difference for all frequencies (p > 0.05 in all cases). The MCL designs analyzed provided satisfactory achromatic contrast sensitivity results for distance vision, similar to those obtained with spectacles, with no remarkable differences between designs. Chromatic contrast sensitivity for the red-green and blue-yellow mechanisms revealed some differences from the baseline that should be further investigated in future studies.
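
The baseline-versus-MCL contrasts at each spatial frequency are paired comparisons from a crossover design; purely as an illustration (the abstract does not state which test was used), a minimal SciPy sketch with invented log contrast sensitivity values:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical log contrast sensitivity at 3 cpd for 10 subjects.
baseline_spectacles = np.array([1.70, 1.85, 1.78, 1.63, 1.90, 1.75, 1.68, 1.81, 1.72, 1.88])
mcl_design_a = np.array([1.60, 1.74, 1.70, 1.58, 1.82, 1.66, 1.61, 1.73, 1.65, 1.79])

# Paired non-parametric test, a common choice for small crossover samples.
stat, p = wilcoxon(baseline_spectacles, mcl_design_a)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.3f}")
```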

13 pages, 788 KiB  
Article
Advancing Kiwifruit Maturity Assessment: A Comparative Study of Non-Destructive Spectral Techniques and Predictive Models
by Michela Palumbo, Bernardo Pace, Antonia Corvino, Francesco Serio, Federico Carotenuto, Alice Cavaliere, Andrea Genangeli, Maria Cefola and Beniamino Gioli
Foods 2025, 14(15), 2581; https://doi.org/10.3390/foods14152581 - 23 Jul 2025
Viewed by 250
Abstract
Gold kiwifruits from two different farms, harvested at different times, were analysed using both non-destructive and destructive methods. A computer vision system (CVS) and a portable spectroradiometer were used to perform non-destructive measurements of firmness, titratable acidity, pH, soluble solids content, dry matter, and soluble sugars (glucose and fructose), with the goal of building predictive models for the maturity index. Hyperspectral data from the visible–near-infrared (VIS–NIR) and short-wave infrared (SWIR) ranges, collected via the spectroradiometer, along with colour features extracted by the CVS, were used as predictors. Three different regression methods—Partial Least Squares (PLS), Support Vector Regression (SVR), and Gaussian process regression (GPR)—were tested to assess their predictive accuracy. The results revealed a significant increase in sugar content across the different harvesting times in the season. Regardless of the regression method used, the CVS was not able to distinguish among the different harvests, since no significant skin colour changes were measured. Instead, hyperspectral measurements from the near-infrared (NIR) region and the initial part of the SWIR region proved useful in predicting soluble solids content, glucose, and fructose. The models built using these spectral regions achieved R2 average values between 0.55 and 0.60. Among the different regression models, the GPR-based model showed the best performance in predicting kiwifruit soluble solids content, glucose, and fructose. In conclusion, for the first time, the effectiveness of a fully portable spectroradiometer measuring surface reflectance up to the full SWIR range for the rapid, contactless, and non-destructive estimation of the maturity index of kiwifruits was reported. The versatility of the portable spectroradiometer may allow for field applications that accurately identify the most suitable moment to carry out the harvesting.
(This article belongs to the Section Food Quality and Safety)
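
The three regressors compared in the study are all available in scikit-learn, so the model comparison reduces to a few lines; the spectra and Brix values below are random placeholders for the real measurements, and the hyperparameters are illustrative defaults.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import cross_val_score

# X: reflectance spectra (samples x wavelengths); y: soluble solids content (°Brix).
rng = np.random.default_rng(0)
X = rng.random((120, 300))                          # placeholder spectra
y = 10 + 5 * X[:, 50] + rng.normal(0, 0.3, 120)     # placeholder SSC values

models = {
    "PLS": PLSRegression(n_components=10),
    "SVR": SVR(kernel="rbf", C=10.0),
    "GPR": GaussianProcessRegressor(kernel=RBF() + WhiteKernel()),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.2f}")
```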

19 pages, 1602 KiB  
Article
From Classic to Cutting-Edge: A Near-Perfect Global Thresholding Approach with Machine Learning
by Nicolae Tarbă, Costin-Anton Boiangiu and Mihai-Lucian Voncilă
Appl. Sci. 2025, 15(14), 8096; https://doi.org/10.3390/app15148096 - 21 Jul 2025
Viewed by 210
Abstract
Image binarization is an important process in many computer-vision applications. It converts the original image to black and white. Global thresholding is a quick and reliable way to achieve binarization, but it is inherently limited by image noise and uneven lighting. This paper introduces a global thresholding method that uses the results of classical global thresholding algorithms and other global image features to train a regression model via machine learning. We prove through nested cross-validation that the model can predict the best possible global threshold with an average F-measure of 90.86% and a confidence of 0.79%. We apply our approach to a popular computer vision problem, document image binarization, and compare popular metrics with the best possible values achievable through global thresholding and with the values obtained through the algorithms we used to train our model. Our results show a significant improvement over these classical global thresholding algorithms, achieving near-perfect scores on all the computed metrics. We also compared our results with state-of-the-art binarization algorithms and outperformed them on certain datasets. The global threshold obtained through our method closely approximates the ideal global threshold and could be used in a mixed local-global approach for better results.
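
The core idea, using the outputs of classical global thresholding algorithms as features for a learned regressor that predicts the ideal threshold, can be sketched as follows; the exact feature set and the random-forest regressor are stand-ins, since the abstract does not commit to either, and the training targets here are random placeholders for thresholds found by exhaustive search against ground-truth binarizations.

```python
import numpy as np
from skimage.filters import threshold_otsu, threshold_yen, threshold_li, threshold_isodata
from sklearn.ensemble import RandomForestRegressor

def global_features(img):
    """Classical global thresholds plus simple global statistics as features."""
    return np.array([
        threshold_otsu(img), threshold_yen(img),
        threshold_li(img), threshold_isodata(img),
        img.mean(), img.std(),
    ])

rng = np.random.default_rng(1)
images = [rng.integers(0, 256, (64, 64)).astype(np.uint8) for _ in range(50)]
X = np.stack([global_features(im) for im in images])
y = rng.uniform(100, 160, 50)     # placeholder "ideal" thresholds per image

reg = RandomForestRegressor(n_estimators=200).fit(X, y)
predicted_threshold = reg.predict(X[:1])[0]
```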

16 pages, 2914 KiB  
Article
Smart Dairy Farming: A Mobile Application for Milk Yield Classification Tasks
by Allan Hall-Solorio, Graciela Ramirez-Alonso, Alfonso Juventino Chay-Canul, Héctor A. Lee-Rangel, Einar Vargas-Bello-Pérez and David R. Lopez-Flores
Animals 2025, 15(14), 2146; https://doi.org/10.3390/ani15142146 - 21 Jul 2025
Viewed by 392
Abstract
This study analyzes the use of a lightweight image-based deep learning model to classify dairy cows into low-, medium-, and high-milk-yield categories by automatically detecting the udder region of the cow. The implemented model was based on the YOLOv11 architecture, which enables efficient object detection and classification with real-time performance. The model is trained on a public dataset of cow images labeled with 305-day milk yield records. Thresholds were established to define the three yield classes, and a balanced subset of labeled images was selected for training, validation, and testing purposes. To assess the robustness and consistency of the proposed approach, the model was trained 30 times following the same experimental protocol. The system achieves precision, recall, and mean Average Precision (mAP@50) of 0.408 ± 0.044, 0.739 ± 0.095, and 0.492 ± 0.031, respectively, across all classes. The highest precision (0.445 ± 0.055), recall (0.766 ± 0.107), and mAP@50 (0.558 ± 0.036) were observed in the low-yield class. Qualitative analysis revealed that misclassifications mainly occurred near class boundaries, emphasizing the importance of consistent image acquisition conditions. The resulting model was deployed in a mobile application designed to support field-level assessment by non-specialist users. These findings demonstrate the practical feasibility of applying vision-based models to support decision-making in dairy production systems, particularly in settings where traditional data collection methods are unavailable or impractical.
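
Training and evaluating a detector of this kind follows the standard ultralytics workflow; the dataset YAML name, class scheme, and image file below are assumptions, and the snippet presumes a recent ultralytics release that ships YOLO11 weights.

```python
from ultralytics import YOLO

# Three detection classes, e.g. low_yield / medium_yield / high_yield udders,
# defined in a dataset YAML (paths and names here are assumptions).
model = YOLO("yolo11n.pt")
model.train(data="milk_yield.yaml", epochs=100, imgsz=640)

metrics = model.val()                    # reports precision, recall, mAP@50 per class
results = model("cow_side_view.jpg")     # inference on a new image
for box in results[0].boxes:
    print(int(box.cls), float(box.conf))
```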

26 pages, 2816 KiB  
Review
Non-Destructive Detection of Soluble Solids Content in Fruits: A Review
by Ziao Gong, Zhenhua Zhi, Chenglin Zhang and Dawei Cao
Chemistry 2025, 7(4), 115; https://doi.org/10.3390/chemistry7040115 - 18 Jul 2025
Viewed by 426
Abstract
Soluble solids content (SSC) in fruits, as one of the key indicators of fruit quality, plays a critical role in postharvest quality assessment and grading. While traditional destructive methods can provide precise measurements of sugar content, they have limitations such as damaging the fruit’s integrity and the inability to perform rapid detection. In contrast, non-destructive detection technologies offer the advantage of preserving the fruit’s integrity while enabling fast and efficient sugar content measurements, making them highly promising for applications in fruit quality detection. This review summarizes recent advances in non-destructive detection technologies for fruit sugar content measurement. It focuses on elucidating the principles, advantages, and limitations of mainstream technologies, including near-infrared spectroscopy (NIR), X-ray technology, computer vision (CV), and electronic nose (EN) technology. Critically, our analysis identifies key challenges hindering the broader implementation of these technologies, namely: the integration and optimization of multi-technology approaches, the development of robust intelligent and automated detection systems, and issues related to high equipment costs and barriers to widespread adoption. Based on this assessment, we conclude by proposing targeted future research directions. These focus on overcoming the identified challenges to advance the development and practical application of non-destructive SSC detection technologies, ultimately contributing to the modernization and intelligentization of the fruit industry.
(This article belongs to the Section Food Science)
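
As a concrete example of the NIR route this review surveys, a conventional SSC calibration chain (SNV scatter correction, Savitzky-Golay derivative smoothing, then PLS regression) looks like the sketch below; the data are placeholders and the preprocessing settings are typical defaults, not values from any reviewed study.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

def snv(spectra):
    """Standard normal variate: per-spectrum scatter correction."""
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

rng = np.random.default_rng(2)
raw = rng.random((80, 256))        # placeholder NIR spectra (samples x wavelengths)
brix = rng.uniform(8, 18, 80)      # placeholder SSC reference values (°Brix)

# First-derivative Savitzky-Golay smoothing after SNV, then a PLS calibration.
X = savgol_filter(snv(raw), window_length=11, polyorder=2, deriv=1, axis=1)
pls = PLSRegression(n_components=8).fit(X, brix)
```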

26 pages, 6371 KiB  
Article
Growth Stages Discrimination of Multi-Cultivar Navel Oranges Using the Fusion of Near-Infrared Hyperspectral Imaging and Machine Vision with Deep Learning
by Chunyan Zhao, Zhong Ren, Yue Li, Jia Zhang and Weinan Shi
Agriculture 2025, 15(14), 1530; https://doi.org/10.3390/agriculture15141530 - 15 Jul 2025
Viewed by 266
Abstract
To noninvasively and precisely discriminate among the growth stages of multiple cultivars of navel oranges simultaneously, near-infrared (NIR) hyperspectral imaging (HSI) is fused with machine vision (MV) and deep learning. NIR reflectance spectra and hyperspectral and RGB images for 740 Gannan navel oranges of five cultivars are collected. Based on preprocessed spectra, optimally selected hyperspectral images, and registered RGB images, a dual-branch multi-modal feature fusion convolutional neural network (CNN) model is established. In this model, a spectral branch is designed to extract spectral features reflecting internal compositional variations, while the image branch is utilized to extract external color and texture features from the integration of hyperspectral and RGB images. Finally, growth stages are determined via the fusion of features. To validate the effectiveness of the proposed method, various machine-learning and deep-learning models are compared for single-modal and multi-modal data. The results demonstrate that multi-modal feature fusion of HSI and MV combined with the constructed dual-branch CNN deep-learning model yields excellent growth stage discrimination in navel oranges, achieving an accuracy, recall rate, precision, F1 score, and kappa coefficient on the testing set of 95.95%, 96.66%, 96.76%, 96.69%, and 0.9481, respectively, providing a prominent way to precisely monitor the growth stages of fruits.
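
A dual-branch design of the kind described, one branch over 1-D spectra and one over images, with feature-level fusion before classification, can be outlined in PyTorch as below; all layer sizes, the band count, and the number of growth-stage classes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class DualBranchFusion(nn.Module):
    """Sketch of a two-branch network: 1-D convolutions over spectra, 2-D convolutions
    over images, with concatenated features feeding a growth-stage classifier."""
    def __init__(self, n_bands=200, n_stages=5):
        super().__init__()
        self.spectral = nn.Sequential(
            nn.Conv1d(1, 16, 7), nn.ReLU(), nn.AdaptiveAvgPool1d(8), nn.Flatten())
        self.image = nn.Sequential(
            nn.Conv2d(3, 16, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.head = nn.Linear(16 * 8 + 16 * 16, n_stages)

    def forward(self, spectrum, image):
        fused = torch.cat([self.spectral(spectrum), self.image(image)], dim=1)
        return self.head(fused)

model = DualBranchFusion()
logits = model(torch.randn(2, 1, 200), torch.randn(2, 3, 64, 64))
```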

21 pages, 2217 KiB  
Article
AI-Based Prediction of Visual Performance in Rhythmic Gymnasts Using Eye-Tracking Data and Decision Tree Models
by Ricardo Bernardez-Vilaboa, F. Javier Povedano-Montero, José Ramon Trillo, Alicia Ruiz-Pomeda, Gema Martínez-Florentín and Juan E. Cedrún-Sánchez
Photonics 2025, 12(7), 711; https://doi.org/10.3390/photonics12070711 - 14 Jul 2025
Viewed by 265
Abstract
Background/Objective: This study aims to evaluate the predictive performance of three supervised machine learning algorithms—decision tree (DT), support vector machine (SVM), and k-nearest neighbors (KNN)—in forecasting key visual skills relevant to rhythmic gymnastics. Methods: A total of 383 rhythmic gymnasts aged 4 to 27 years were evaluated in various sports centers across Madrid, Spain. Visual assessments included clinical tests (near convergence point, accommodative facility, reaction time, and hand–eye coordination) and eye-tracking tasks (fixation stability, saccades, smooth pursuits, and visual acuity) using the DIVE (Devices for an Integral Visual Examination) system. The dataset was split into training (70%) and testing (30%) subsets. Each algorithm was trained to classify visual performance, and predictive performance was assessed using accuracy and macro F1-score metrics. Results: The decision tree model demonstrated the highest performance, achieving an average accuracy of 92.79% and a macro F1-score of 0.9276. In comparison, the SVM and KNN models showed lower accuracies (71.17% and 78.38%, respectively) and greater difficulty in correctly classifying positive cases. Notably, the DT model outperformed the others in predicting fixation stability and accommodative facility, particularly in short-duration fixation tasks. Conclusion: The decision tree algorithm achieved the highest performance in predicting short-term fixation stability, but its effectiveness was limited in tasks involving accommodative facility, where other models such as SVM and KNN outperformed it in specific metrics. These findings support the integration of machine learning in sports vision screening and suggest that predictive modeling can inform individualized training and performance optimization in visually demanding sports such as rhythmic gymnastics.
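
The three-way model comparison described here maps directly onto scikit-learn; the sketch below uses a synthetic stand-in for the gymnasts' feature table and reports the same accuracy and macro F1 metrics, with all hyperparameters left at their defaults as an assumption.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder stand-in for the eye-tracking / clinical feature table (383 gymnasts).
X, y = make_classification(n_samples=383, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("DT", DecisionTreeClassifier()),
                  ("SVM", SVC()),
                  ("KNN", KNeighborsClassifier())]:
    y_pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: accuracy={accuracy_score(y_te, y_pred):.3f}, "
          f"macro F1={f1_score(y_te, y_pred, average='macro'):.3f}")
```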

12 pages, 865 KiB  
Article
Comparative Outcomes of the Next-Generation Extended Depth-of-Focus Intraocular Lens and Enhanced Monofocal Intraocular Lens in Cataract Surgery
by Do Young Kim, Ella Seo Yeon Park, Hyunjin Park, Bo Yi Kim, Ikhyun Jun, Kyoung Yul Seo, Ahmed Elsheikh and Tae-im Kim
J. Clin. Med. 2025, 14(14), 4967; https://doi.org/10.3390/jcm14144967 - 14 Jul 2025
Viewed by 648
Abstract
Background/Objectives: A new, purely refractive extended depth-of-focus (EDOF) intraocular lens (IOL) was designed with a continuous change in power to bridge the gap between monofocal and multifocal IOLs. This study aimed to evaluate the real-world clinical outcomes of the new EDOF IOL compared with those of the enhanced monofocal IOL. Methods: A retrospective analysis was conducted on 100 eyes from 50 patients undergoing bilateral cataract surgery with either the PureSee™ EDOF (ZEN00V) or Eyhance™ (ICB00) monofocal IOL at a single institution. Visual acuity, defocus curves, contrast sensitivity, and patient-reported outcomes were evaluated three months postoperatively. Results: The ZEN00V group demonstrated superior uncorrected intermediate (0.11 ± 0.08 vs. 0.17 ± 0.11 logMAR, p = 0.006) and near visual acuity (0.25 ± 0.08 vs. 0.31 ± 0.13 logMAR, p = 0.023) compared to the ICB00 group, with comparable distance visual acuity. Both groups exhibited comparable defocus curves and contrast sensitivity. While photic phenomena were more frequent in the ZEN00V group, spectacle dependence was significantly lower for near vision (36% vs. 80%, p = 0.002) and comparable for intermediate and far vision. Conclusions: The PureSee™ EDOF IOL demonstrated enhanced intermediate and near vision with minimal compromise to distance vision while maintaining high contrast sensitivity. It also offered significant spectacle independence and patient satisfaction, making it a promising option for presbyopia correction.
(This article belongs to the Section Ophthalmology)

42 pages, 5041 KiB  
Article
Autonomous Waste Classification Using Multi-Agent Systems and Blockchain: A Low-Cost Intelligent Approach
by Sergio García González, David Cruz García, Rubén Herrero Pérez, Arturo Álvarez Sanchez and Gabriel Villarrubia González
Sensors 2025, 25(14), 4364; https://doi.org/10.3390/s25144364 - 12 Jul 2025
Viewed by 401
Abstract
The increase in garbage generated in modern societies demands the implementation of a more sustainable model as well as new methods for efficient waste management. This article describes the development and implementation of a prototype of a smart bin that automatically sorts waste using a multi-agent system and blockchain integration. The proposed system has sensors that identify the type of waste (organic, plastic, paper, etc.) and uses collaborative intelligent agents to make instant sorting decisions. Blockchain has been implemented as a technology for the immutable and transparent control of waste registration, favoring traceability during the classification process, providing sustainability to the process, and making the audit of data in smart urban environments transparent. For the computer vision algorithm, three versions of YOLO (YOLOv8, YOLOv11, and YOLOv12) were used and evaluated with respect to their performance in automatic detection and classification of waste. The YOLOv12 version was selected due to its overall performance, which is superior to others with mAP@50 values of 86.2%, an overall accuracy of 84.6%, and an average F1 score of 80.1%. Latency was kept below 9 ms per image with YOLOv12, ensuring smooth and lag-free processing, even for utilitarian embedded systems. This allows for efficient deployment in near-real-time applications where speed and immediate response are crucial. These results confirm the viability of the system in both accuracy and computational efficiency. This work provides an innovative solution in the field of ambient intelligence, characterized by low equipment cost and high scalability, laying the foundations for the development of smart waste management infrastructures in sustainable cities.
(This article belongs to the Special Issue Sensing and AI: Advancements in Robotics and Autonomous Systems)
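
The blockchain component is described only at a high level; its essential function, an append-only, tamper-evident log of classification events, reduces to hash chaining, sketched below without any specific ledger platform (the class and field names are illustrative, not the paper's).

```python
import hashlib
import json
import time

class WasteLedger:
    """Minimal hash-chained log of waste classification events (illustrative only;
    the paper's actual blockchain integration is not specified in the abstract)."""
    def __init__(self):
        self.chain = [{"index": 0, "prev": "0" * 64, "event": "genesis"}]

    def add(self, label, confidence):
        # Each block commits to the previous one via its SHA-256 digest.
        prev_hash = hashlib.sha256(
            json.dumps(self.chain[-1], sort_keys=True).encode()).hexdigest()
        self.chain.append({
            "index": len(self.chain), "prev": prev_hash,
            "event": {"label": label, "conf": confidence, "ts": time.time()},
        })

ledger = WasteLedger()
ledger.add("plastic", 0.91)   # e.g. one YOLO detection result
```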

10 pages, 4530 KiB  
Article
A Switchable-Mode Full-Color Imaging System with Wide Field of View for All Time Periods
by Shubin Liu, Linwei Guo, Kai Hu and Chunbo Zou
Photonics 2025, 12(7), 689; https://doi.org/10.3390/photonics12070689 - 8 Jul 2025
Viewed by 276
Abstract
Continuous, single-mode imaging systems fail to deliver true-color high-resolution imagery around the clock under extreme lighting. High-fidelity color and signal-to-noise ratio imaging across the full day–night cycle remains a critical challenge for surveillance, navigation, and environmental monitoring. We present a competitive dual-mode imaging platform that integrates a 155 mm f/6 telephoto daytime camera with a 52 mm f/1.5 large-aperture low-light full-color night-vision camera into a single, co-registered 26 cm housing. By employing a sixth-order aspheric surface to reduce the element count and weight, our system achieves near-diffraction-limited MTF (>0.5 at 90.9 lp/mm) in daylight and sub-pixel RMS blur < 7 μm at 38.5 lp/mm under low-light conditions. Field validation at 0.0009 lux confirms high-SNR, full-color capture from bright noon to the darkest nights, enabling seamless switching between long-range, high-resolution surveillance and sensitive, low-light color imaging. This compact, robust design promises to elevate applications in security monitoring, autonomous navigation, wildlife observation, and disaster response by providing uninterrupted, color-faithful vision in all lighting regimes.
(This article belongs to the Special Issue Research on Optical Materials and Components for 3D Displays)
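
The "near-diffraction-limited" claim can be sanity-checked against the incoherent diffraction cutoff frequency, 1/(lambda * N) for a lens at f-number N; the wavelength below is an assumed mid-visible value, not one stated by the authors.

```python
# Incoherent diffraction cutoff for a lens at f-number N: nu_c = 1 / (lambda * N).
wavelength_mm = 550e-6   # 550 nm expressed in mm (assumed mid-visible wavelength)

for name, f_number in [("155 mm f/6 day channel", 6.0),
                       ("52 mm f/1.5 night channel", 1.5)]:
    cutoff_lp_per_mm = 1.0 / (wavelength_mm * f_number)
    print(f"{name}: cutoff ≈ {cutoff_lp_per_mm:.0f} lp/mm")

# The f/6 channel cuts off near 303 lp/mm, so 90.9 lp/mm is only ~0.3 of cutoff,
# where a diffraction-limited MTF exceeds 0.6; MTF > 0.5 there is thus plausible
# for a near-diffraction-limited design.
```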

25 pages, 2723 KiB  
Article
A Human-Centric, Uncertainty-Aware Event-Fused AI Network for Robust Face Recognition in Adverse Conditions
by Akmalbek Abdusalomov, Sabina Umirzakova, Elbek Boymatov, Dilnoza Zaripova, Shukhrat Kamalov, Zavqiddin Temirov, Wonjun Jeong, Hyoungsun Choi and Taeg Keun Whangbo
Appl. Sci. 2025, 15(13), 7381; https://doi.org/10.3390/app15137381 - 30 Jun 2025
Cited by 1 | Viewed by 336
Abstract
Face recognition systems often falter when deployed in uncontrolled settings, grappling with low light, unexpected occlusions, motion blur, and the degradation of sensor signals. Most contemporary algorithms chase raw accuracy yet overlook the pragmatic need for uncertainty estimation and multispectral reasoning rolled into a single framework. This study introduces HUE-Net—a Human-centric, Uncertainty-aware, Event-fused Network—designed specifically to thrive under severe environmental stress. HUE-Net marries the visible RGB band with near-infrared (NIR) imagery and high-temporal-event data through an early-fusion pipeline, proven more responsive than serial approaches. A custom hybrid backbone that couples convolutional networks with transformers keeps the model nimble enough for edge devices. Central to the architecture is the perturbed multi-branch variational module, which distills probabilistic identity embeddings while delivering calibrated confidence scores. Complementing this, an Adaptive Spectral Attention mechanism dynamically reweights each stream to amplify the most reliable facial features in real time. Unlike previous efforts that compartmentalize uncertainty handling, spectral blending, or computational thrift, HUE-Net unites all three in a lightweight package. Benchmarks on the IJB-C and N-SpectralFace datasets illustrate that the system not only secures state-of-the-art accuracy but also exhibits unmatched spectral robustness and reliable probability calibration. The results indicate that HUE-Net is well-positioned for forensic missions and humanitarian scenarios where trustworthy identification cannot be deferred.
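
The abstract names two mechanisms, early fusion of the RGB, NIR, and event streams, and a variational module that yields probabilistic identity embeddings with confidence scores; a deliberately schematic PyTorch sketch of those two ideas follows, in which every layer size and the confidence proxy are assumptions, not HUE-Net's actual design.

```python
import torch
import torch.nn as nn

class EarlyFusionVariationalHead(nn.Module):
    """Illustrative sketch: concatenate RGB (3 ch), NIR (1 ch), and event (1 ch)
    inputs at the pixel level, encode, and emit a Gaussian identity embedding."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(5, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.mu = nn.Linear(64, embed_dim)
        self.logvar = nn.Linear(64, embed_dim)

    def forward(self, rgb, nir, events):
        h = self.encoder(torch.cat([rgb, nir, events], dim=1))   # early fusion
        mu, logvar = self.mu(h), self.logvar(h)
        confidence = torch.exp(-logvar).mean(dim=1)   # crude per-sample certainty proxy
        return mu, logvar, confidence

net = EarlyFusionVariationalHead()
mu, logvar, conf = net(torch.randn(1, 3, 112, 112), torch.randn(1, 1, 112, 112),
                       torch.randn(1, 1, 112, 112))
```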

15 pages, 1335 KiB  
Article
Assessment of the Quality of Life in Children and Adolescents with Myopia from the City of Varna
by Mariya Stoeva, Daliya Stefanova, Dobrin Boyadzhiev, Zornitsa Zlatarova, Binna Nencheva and Mladena Radeva
J. Clin. Med. 2025, 14(13), 4546; https://doi.org/10.3390/jcm14134546 - 26 Jun 2025
Viewed by 392
Abstract
Background: The World Health Organization defines myopia as a global epidemic. Its growing prevalence and the increasingly early age onset all raise a major concern for public health due to the elevated risk of loss and deterioration of visual function as a result of myopia-related ocular pathological complications. However, it remains unclear whether the vision-related quality of life of patients with myopia is the same as in healthy individuals. The aim of the present study is to assess the quality of life in children and adolescents with myopia between the ages of 8 and 16 years, who underwent observation at USBOBAL-Varna. Methods: This study prospectively included 190 patients with myopia between −1.00 and −5.50 D, corrected with different optical aids. After a thorough physical ocular examination and inquiry into the best visual acuity with and without distance correction, specially designed questionnaires were completed by the patients and their parents/guardians for the purpose of the study. The data from the questionnaires were statistically processed. The mean age of the patients in the study was 11.65 years; 101 were female and 89 were male. Of these, 83 wore monofocal glasses, 50 wore monofocal and 47 multifocal contact lenses, and 10 wore ortho-K lenses. Results: No significant difference in best corrected visual acuity (BCVA) was found among the three types of optical correction (p-value > 0.05). Cronbach’s alpha of the questionnaire for all 10 factors was higher than 0.6, indicating acceptable internal consistency. Significantly higher scores were reported for overall, near, and distance vision, symptoms, appearance, attitude, activities and hobbies, handling, and perception for soft contact lens wearers than for spectacle wearers (p-value < 0.05). Ortho-K wearers performed better than spectacle wearers in all aspects except for pronounced symptoms (p = 0.74). No significant difference was found between ortho-K wearers and soft contact lens wearers for any factor (p > 0.05). Conclusions: Patients wearing spectacles and with myopia above −5.00 D had the highest anxiety scores and lower quality of life among all myopic participants. The research on the quality of life in children with myopia with different refractive errors and optical correction devices is crucial for improving corrective devices and meeting the needs of patients.
(This article belongs to the Section Ophthalmology)
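
The internal-consistency statistic the authors report, Cronbach's alpha, is straightforward to compute from the item-score matrix: alpha = (k/(k-1)) * (1 - sum of item variances / variance of total scores) for k items. A small NumPy sketch with a made-up Likert response matrix:

```python
import numpy as np

def cronbach_alpha(items):
    """items: (respondents x questionnaire items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
scores = rng.integers(1, 6, size=(190, 5))    # placeholder 5-point Likert responses
print(f"alpha = {cronbach_alpha(scores):.2f}")  # > 0.6 read as acceptable consistency
```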

10 pages, 1173 KiB  
Article
Effectiveness of Enhanced Monofocal Intraocular Lens with Mini-Monovision in Improving Visual Acuity
by Santaro Noguchi, Shunsuke Nakakura, Asuka Noguchi and Hitoshi Tabuchi
J. Clin. Med. 2025, 14(13), 4517; https://doi.org/10.3390/jcm14134517 - 26 Jun 2025
Viewed by 460
Abstract
Objectives: This study compared the clinical outcomes of Vivinex Impress (XY1-EM) enhanced monofocal and standard Vivinex (XY1) intraocular lenses (IOLs) in mini-monovision cataract surgery. In this retrospective study, patients underwent bilateral implantation with either XY1-EM (33 patients, 66 eyes) or XY1 (24 patients, 48 eyes) in a −1D mini-monovision configuration. Methods: Visual acuity was evaluated from 5 m to 30 cm, along with spectacle dependence, contrast sensitivity, and patient-reported outcomes. Results: The XY1-EM group demonstrated significantly better intermediate and near visual acuity at distances of 30–50 cm (p < 0.05) and reduced spectacle dependence for intermediate distances (p = 0.02). Visual function questionnaire (VFQ-25) scores were significantly higher in the XY1-EM group for general vision, role difficulty, mental health, dependency, and near activity domains (p < 0.05). No significant differences were found in glare, contrast sensitivity, or quality of vision scores. Conclusions: The XY1-EM lens in mini-monovision configuration provides enhanced intermediate and near visual acuity with reduced spectacle dependence compared to standard monofocal IOLs, offering a valuable option for patients seeking improved quality of vision with reduced spectacle use.

21 pages, 12722 KiB  
Article
PC3D-YOLO: An Enhanced Multi-Scale Network for Crack Detection in Precast Concrete Components
by Zichun Kang, Kedi Gu, Andrew Yin Hu, Haonan Du, Qingyang Gu, Yang Jiang and Wenxia Gan
Buildings 2025, 15(13), 2225; https://doi.org/10.3390/buildings15132225 - 25 Jun 2025
Viewed by 467
Abstract
Crack detection in precast concrete components aims to achieve precise extraction of crack features within complex image backgrounds. Current computer vision-based methods typically conduct limited local searches at a single scale, constraining the model’s capacity for feature extraction and fusion in information-rich environments. To address these limitations, we propose PC3D-YOLO, an enhanced framework derived from YOLOv11, which strengthens long-range dependency modeling through multi-scale feature integration, offering a novel approach for crack detection in precast concrete structures. Our methodology involves three key innovations: (1) the Multi-Dilation Spatial-Channel Fusion with Shuffling (MSFS) module, employing dilated convolutions and channel shuffling to enable global feature fusion, replaces the C3K2 bottleneck module to enhance long-distance dependency capture; (2) the AIFI_M2SA module substitutes the conventional SPPF to mitigate its restricted receptive field and information loss, incorporating multi-scale attention for improved near-far contextual integration; (3) a redesigned neck network (MSCD-Net) preserves rich contextual information across all feature scales. Experimental results demonstrate that, on the self-developed dataset, the proposed algorithm achieves a recall of 78.8%, an AP@50 of 86.3%, and an AP@50-95 of 65.6%, outperforming the YOLOv11 algorithm. Furthermore, evaluations on the CRACKS_MANISHA and DECA datasets also confirm the proposed model’s strong generalization capability across different data domains.
(This article belongs to the Section Building Materials, and Repair & Renovation)
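
The MSFS idea, parallel dilated convolutions whose outputs are mixed by channel shuffling, can be sketched as follows; the kernel counts and dilation rates are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    """Interleave channels across groups (as popularized by ShuffleNet)."""
    b, c, h, w = x.shape
    return x.view(b, groups, c // groups, h, w).transpose(1, 2).reshape(b, c, h, w)

class MSFSBlock(nn.Module):
    """Sketch: parallel 3x3 convolutions at several dilation rates, concatenated
    and channel-shuffled so information mixes across the dilation branches."""
    def __init__(self, channels, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels // len(dilations), 3,
                      padding=d, dilation=d) for d in dilations])
        self.groups = len(dilations)

    def forward(self, x):
        out = torch.cat([branch(x) for branch in self.branches], dim=1)
        return channel_shuffle(out, self.groups)

block = MSFSBlock(64)
y = block(torch.randn(1, 64, 80, 80))   # same spatial size, mixed receptive fields
```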
