Search Results (1,141)

Search Parameters:
Keywords = illumination test

36 pages, 2237 KiB  
Article
Can Green Building Science Support Systems Thinking for Energy Education?
by Laura B. Cole, Jessica Justice, Delaney O’Brien, Jayedi Aman, Jong Bum Kim, Aysegul Akturk and Laura Zangori
Sustainability 2025, 17(15), 7008; https://doi.org/10.3390/su17157008 (registering DOI) - 1 Aug 2025
Abstract
Systems thinking (ST) is a foundational cognitive skillset to advance sustainability education but has not been well examined for learners prior to higher education. This case study research in rural middle schools in the Midwestern U.S. examines systems thinking outcomes of a place-based energy literacy unit focused on energy-efficient building design. The unit employs the science of energy-efficient, green buildings to illuminate the ways in which energy flows between natural and built environments. The unit emphasizes electrical, light, and thermal energy systems and the ways these systems interact to create functional and energy-efficient buildings. This study focuses on three case study classrooms where students across schools (n = 89 students) created systems models as part of pre- and post-unit tests (n = 162 models). The unit tests consisted of student drawings, annotations, and writings, culminating in student-developed systems models. Growth from pre- to post-test was observed in both the identification of system elements and the linkages between elements. System elements included in the models were common classroom features, such as windows, lights, and temperature control, suggesting that rooting the unit in place-based teaching may support ST skills. Full article
(This article belongs to the Special Issue Sustainability Education through Green Infrastructure)
13 pages, 692 KiB  
Article
Contrast Sensitivity Comparison of Daily Simultaneous-Vision Center-Near Multifocal Contact Lenses: A Pilot Study
by David P. Piñero, Ainhoa Molina-Martín, Elena Martínez-Plaza, Kevin J. Mena-Guevara, Violeta Gómez-Vicente and Dolores de Fez
Vision 2025, 9(3), 67; https://doi.org/10.3390/vision9030067 (registering DOI) - 1 Aug 2025
Abstract
Our purpose is to evaluate the binocular contrast sensitivity function (CSF) in a presbyopic population and compare the results obtained with four different simultaneous-vision center-near multifocal contact lens (MCL) designs for distance vision under two illumination conditions. Additionally, chromatic CSF (red-green and blue-yellow) was evaluated. A randomized crossover pilot study was conducted. Four daily disposable lens designs, based on simultaneous-vision and center-near correction, were compared. The achromatic contrast sensitivity function (CSF) was measured binocularly using the CSV1000e test under two lighting conditions: room light on and off. Chromatic CSF was measured using the OptoPad-CSF test. Comparison of achromatic results with room lighting showed a statistically significant difference only for 3 cpd (p = 0.03) between the baseline visit (with spectacles) and all MCLs. Comparison of achromatic results without room lighting showed no statistically significant differences between the baseline and all MCLs for any spatial frequency (p > 0.05 in all cases). Comparison of CSF-T results showed a statistically significant difference only for 4 cpd (p = 0.002). Comparison of CSF-D results showed no statistically significant difference for all frequencies (p > 0.05 in all cases). The MCL designs analyzed provided satisfactory achromatic contrast sensitivity results for distance vision, similar to those obtained with spectacles, with no remarkable differences between designs. Chromatic contrast sensitivity for the red-green and blue-yellow mechanisms revealed some differences from the baseline that should be further investigated in future studies. Full article

21 pages, 8731 KiB  
Article
Individual Segmentation of Intertwined Apple Trees in a Row via Prompt Engineering
by Herearii Metuarea, François Laurens, Walter Guerra, Lidia Lozano, Andrea Patocchi, Shauny Van Hoye, Helin Dutagaci, Jeremy Labrosse, Pejman Rasti and David Rousseau
Sensors 2025, 25(15), 4721; https://doi.org/10.3390/s25154721 (registering DOI) - 31 Jul 2025
Abstract
Computer vision is of wide interest for performing the phenotyping of horticultural crops such as apple trees at high throughput. In orchards specially constructed for variety testing or breeding programs, computer vision tools should be able to extract phenotypical information from each tree separately. We focus on segmenting individual apple trees as the main task in this context. Segmenting individual apple trees in dense orchard rows is challenging because of the complexity of outdoor illumination and intertwined branches. Traditional methods rely on supervised learning, which requires a large amount of annotated data. In this study, we explore an alternative approach using prompt engineering with the Segment Anything Model and its variants in a zero-shot setting. Specifically, we first detect the trunk and then position a prompt (five points in a diamond shape) above the detected trunk to feed to the Segment Anything Model. We evaluate our method on apple REFPOP, a new large-scale European apple tree dataset, and on another publicly available dataset. On these datasets, our trunk detector, a trained YOLOv11 model, achieves a detection rate of 97%; prompting above the detected trunk yields a Dice score of 70% without training on the REFPOP dataset and 84% without training on the publicly available dataset. We demonstrate that our method equals or even outperforms purely supervised segmentation approaches or non-prompted foundation models. These results underscore the potential of foundational models guided by well-designed prompts as scalable and annotation-efficient solutions for plant segmentation in complex agricultural environments. Full article
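The prompt-placement step described above can be sketched in a few lines. This is a minimal, illustrative version: given a trunk bounding box from the detector, it places five point prompts in a diamond above the box. The `offset` and `radius` values are assumptions for the sketch, not parameters from the paper.

```python
def diamond_prompt(trunk_box, offset=20, radius=30):
    """Place five point prompts in a diamond shape above a detected trunk.

    trunk_box: (x_min, y_min, x_max, y_max) in image coordinates, with y
    increasing downward. The diamond's center sits `offset` pixels above
    the top of the box; `radius` sets its size. Both values are
    illustrative, not taken from the paper.
    """
    x_min, y_min, x_max, _ = trunk_box
    cx = (x_min + x_max) / 2          # horizontal center of the trunk
    cy = y_min - offset - radius      # diamond center above the box top
    # Center point plus the four diamond vertices (top, bottom, left, right).
    return [
        (cx, cy),
        (cx, cy - radius),
        (cx, cy + radius),
        (cx - radius, cy),
        (cx + radius, cy),
    ]

points = diamond_prompt((100, 200, 140, 400))
```

Such points (with positive labels) would then be fed to a SAM-style predictor as the prompt for the corresponding tree.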

17 pages, 3206 KiB  
Article
Inverse Punicines: Isomers of Punicine and Their Application in LiAlO2, Melilite and CaSiO3 Separation
by Maximilian H. Fischer, Ali Zgheib, Iliass El Hraoui, Alena Schnickmann, Thomas Schirmer, Gunnar Jeschke and Andreas Schmidt
Separations 2025, 12(8), 202; https://doi.org/10.3390/separations12080202 - 30 Jul 2025
Abstract
The transition to sustainable energy systems demands efficient recycling methods for critical raw materials like lithium. In this study, we present a new class of pH- and light-switchable flotation collectors based on isomeric derivatives of the natural product Punicine, termed inverse Punicines. These amphoteric molecules were synthesized via a straightforward four-step route and structurally tuned for hydrophobization by alkylation. Their performance as collectors was evaluated in microflotation experiments of lithium aluminate (LiAlO2) and silicate matrix minerals such as melilite and calcium silicate. Characterization techniques including ultraviolet-visible (UV-Vis), nuclear magnetic resonance (NMR) and electron spin resonance (ESR) spectroscopy as well as contact angle, zeta potential (ζ potential) and microflotation experiments revealed strong pH- and structure-dependent interactions with mineral surfaces. Notably, N-alkylated inverse Punicine derivatives showed high flotation yields for LiAlO2 at pH of 11, with a derivative possessing a dodecyl group attached to the nitrogen as collector achieving up to 86% recovery (collector conc. 0.06 mmol/L). Preliminary separation tests showed Li upgrading from 5.27% to 6.95%. Radical formation and light-response behavior were confirmed by ESR and flotation tests under different illumination conditions. These results demonstrate the potential of inverse Punicines as tunable, sustainable flotation reagents for advanced lithium recycling from complex slag systems. Full article
(This article belongs to the Special Issue Application of Green Flotation Technology in Mineral Processing)

29 pages, 7518 KiB  
Article
LEDs for Underwater Optical Wireless Communication
by Giuseppe Schirripa Spagnolo, Giorgia Satta and Fabio Leccese
Photonics 2025, 12(8), 749; https://doi.org/10.3390/photonics12080749 - 25 Jul 2025
Abstract
LEDs are readily controllable and demonstrate rapid switching capabilities. These attributes facilitate their efficient integration across a broad spectrum of applications. Indeed, their inherent versatility renders them ideally suited for diverse sectors, including consumer electronics, traffic signage, automotive technology, and architectural illumination. Furthermore, LEDs serve as effective light sources for applications in spectroscopy, agriculture, pest control, and wireless optical transmission. The capability to choice high-efficiency LED devices with a specified dominant wavelength renders them particularly well-suited for integration into underwater optical communication systems. In this paper, we present the state-of-the-art of Light-Emitting Diodes (LEDs) for use in underwater wireless optical communications (UOWC). In particular, we focus on the challenges posed by water turbidity and evaluate the optimal wavelengths for communication in coastal environments, especially in the presence of chlorophyll or suspended particulate matter. Given the growing development and applications of underwater optical communication, it is crucial that the topic becomes not only a subject of research but also part of the curricula in technical school and universities. To this end, we introduce a simple and cost-effective UOWC system designed for educational purposes. Some tests have been conducted to evaluate the system’s performance, and the results have been reported. Full article
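The wavelength trade-off discussed here is commonly modeled with Beer-Lambert exponential attenuation along the water path. A minimal sketch follows; the diffuse attenuation coefficients are rough order-of-magnitude assumptions for illustration, not figures from the paper.

```python
import math

# Illustrative diffuse attenuation coefficients Kd (1/m).
# These are rough assumed values, not measurements from the paper.
KD = {
    "blue_450nm_clear": 0.02,
    "green_520nm_coastal": 0.15,
    "green_520nm_turbid": 0.50,
}

def received_power(p_tx_watts, distance_m, kd):
    """Beer-Lambert exponential decay of optical power over a water path."""
    return p_tx_watts * math.exp(-kd * distance_m)

# 1 W transmitter over a 10 m link in two water types.
clear = received_power(1.0, 10, KD["blue_450nm_clear"])
turbid = received_power(1.0, 10, KD["green_520nm_turbid"])
```

The comparison illustrates why the optimal wavelength shifts toward green in coastal or turbid water, where blue light is absorbed and scattered far more strongly than in clear ocean water.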

12 pages, 557 KiB  
Article
Advancing Diagnostics with Semi-Automatic Tear Meniscus Central Area Measurement for Aqueous Deficient Dry Eye Discrimination
by Hugo Pena-Verdeal, Jacobo Garcia-Queiruga, Belen Sabucedo-Villamarin, Carlos Garcia-Resua, Maria J. Giraldez and Eva Yebra-Pimentel
Medicina 2025, 61(8), 1322; https://doi.org/10.3390/medicina61081322 - 22 Jul 2025
Abstract
Background and Objectives: To clinically validate a semi-automatic measurement of Tear Meniscus Central Area (TMCA) to differentiate between Non-Aqueous Deficient Dry Eye (Non-ADDE) and Aqueous Deficient Dry Eye (ADDE) patients. Materials and Methods: 120 volunteer participants were included in the study. Following TFOS DEWS II diagnostic criteria, a battery of tests was conducted for dry eye diagnosis: Ocular Surface Disease Index questionnaire, tear film osmolarity, tear film break-up time, and corneal staining. Additionally, lower tear meniscus videos were captured with Tearscope illumination and, separately, with fluorescein using slit-lamp blue light and a yellow filter. Tear meniscus height was measured from Tearscope videos to differentiate Non-ADDE from ADDE participants, while TMCA was obtained from fluorescein videos. Both parameters were analyzed using the open-source software NIH ImageJ. Results: Receiver Operating Characteristics analysis showed that semi-automatic TMCA evaluation had significant diagnostic capability to differentiate between Non-ADDE and ADDE participants, with an optimal cut-off value to differentiate between the two groups of 54.62 mm2 (Area Under the Curve = 0.714 ± 0.051, p < 0.001; specificity: 71.7%; sensitivity: 68.9%). Conclusions: The semi-automatic TMCA evaluation showed preliminary valuable results as a diagnostic tool for distinguishing between ADDE and Non-ADDE individuals. Full article
(This article belongs to the Special Issue Advances in Diagnosis and Therapies of Ocular Diseases)
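The cut-off selection in such an ROC analysis can be illustrated with a toy Youden-index search (sensitivity + specificity - 1 maximized over candidate thresholds). This pure-Python stand-in uses invented sample data, and the assumption that larger values indicate the positive class is mine for the sketch; the study's actual discrimination used a 54.62 mm² TMCA cut-off.

```python
def youden_cutoff(values, labels):
    """Return the threshold maximizing Youden's J = sensitivity + specificity - 1.

    values: per-subject measurements; labels: 1 for the positive class,
    0 otherwise. Assumes larger values indicate the positive class
    (an assumption of this sketch, not of the study).
    """
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    best_t, best_j = None, -1.0
    for t in sorted(set(values)):
        sens = sum(v >= t for v in pos) / len(pos)
        spec = sum(v < t for v in neg) / len(neg)
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Toy data: the two groups separate cleanly at 10.
cutoff, j = youden_cutoff([1, 2, 3, 10, 11, 12], [0, 0, 0, 1, 1, 1])
```

In practice one would compute this on the measured TMCA values with the clinical group labels, and report the AUC alongside the chosen cut-off.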

26 pages, 829 KiB  
Article
Enhanced Face Recognition in Crowded Environments with 2D/3D Features and Parallel Hybrid CNN-RNN Architecture with Stacked Auto-Encoder
by Samir Elloumi, Sahbi Bahroun, Sadok Ben Yahia and Mourad Kaddes
Big Data Cogn. Comput. 2025, 9(8), 191; https://doi.org/10.3390/bdcc9080191 - 22 Jul 2025
Abstract
Face recognition (FR) in unconstrained conditions remains an open research topic and an ongoing challenge. The facial images exhibit diverse expressions, occlusions, variations in illumination, and heterogeneous backgrounds. This work aims to produce an accurate and robust system for enhanced Security and Surveillance. A parallel hybrid deep learning model for feature extraction and classification is proposed. An ensemble of three parallel extraction layer models learns the best representative features using CNN and RNN. 2D LBP and 3D Mesh LBP are computed on face images to extract image features as input to two RNNs. A stacked autoencoder (SAE) merged the feature vectors extracted from the three CNN-RNN parallel layers. We tested the designed 2D/3D CNN-RNN framework on four standard datasets. We achieved an accuracy of 98.9%. The hybrid deep learning model significantly improves FR against similar state-of-the-art methods. The proposed model was also tested on an unconstrained conditions human crowd dataset, and the results were very promising with an accuracy of 95%. Furthermore, our model shows an 11.5% improvement over similar hybrid CNN-RNN architectures, proving its robustness in complex environments where the face can undergo different transformations. Full article
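The 2D LBP features mentioned above can be sketched with the basic 3x3 operator: each of the eight neighbors is thresholded at the center pixel's value and the results are read as an 8-bit code. This is a minimal illustration; real pipelines (and the 3D Mesh LBP variant) use rotation-invariant or uniform patterns and histogram the codes over regions.

```python
def lbp_code(patch):
    """Basic 3x3 Local Binary Pattern code for the center pixel.

    patch: 3x3 list of grayscale values. Neighbors >= center contribute
    a 1-bit, read clockwise from the top-left corner.
    """
    c = patch[1][1]
    # Clockwise neighbor order starting at the top-left pixel.
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (i, j) in enumerate(order):
        if patch[i][j] >= c:
            code |= 1 << bit
    return code

# Bright top row, dark elsewhere: only the first three bits are set.
code = lbp_code([[9, 9, 9], [0, 5, 0], [0, 0, 0]])
```

Histograms of such codes over face regions form the texture descriptors that, in the paper's architecture, feed the RNN branches.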

23 pages, 4267 KiB  
Article
Proof of Concept of an Integrated Laser Irradiation and Thermal/Visible Imaging System for Optimized Photothermal Therapy in Skin Cancer
by Diogo Novas, Alessandro Fortes, Pedro Vieira and João M. P. Coelho
Sensors 2025, 25(14), 4495; https://doi.org/10.3390/s25144495 - 19 Jul 2025
Abstract
Laser energy is widely used as a selective photothermal heating agent in cancer treatment, standing out for not relying on ionizing radiation. However, in vivo tests have highlighted the need to develop irradiation techniques that allow precise control over the illuminated area, adapting it to the tumor size to further minimize damage to surrounding healthy tissue. To address this challenge, a proof of concept based on a laser irradiation system has been designed, enabling control over energy, exposure time, and irradiated area using galvanometric mirrors. The control software, implemented in Python, employs a set of cameras (visible and infrared) to detect and monitor real-time thermal distributions in the region of interest, transmitting this information to a microcontroller responsible for adjusting the laser power and controlling the scanning process. Image alignment procedures, tuning of the controller's gain parameters, and the impact of the different engineering parameters are illustrated on a dedicated setup. As proof of concept, this approach has demonstrated the ability to irradiate a phantom of black modeling clay within an area of up to 5 cm × 5 cm, from 15 cm away, as well as to monitor and regulate the temperature over time (5 min). Full article

22 pages, 15594 KiB  
Article
Seasonally Robust Offshore Wind Turbine Detection in Sentinel-2 Imagery Using Imaging Geometry-Aware Deep Learning
by Xike Song and Ziyang Li
Remote Sens. 2025, 17(14), 2482; https://doi.org/10.3390/rs17142482 - 17 Jul 2025
Abstract
Remote sensing has emerged as a promising technology for large-scale detection and updating of global wind turbine databases. High-resolution imagery (e.g., Google Earth) facilitates the identification of offshore wind turbines (OWTs) but offers limited offshore coverage due to the high cost of capturing vast ocean areas. In contrast, medium-resolution imagery, such as 10-m Sentinel-2, provides broad ocean coverage but depicts turbines only as small bright spots and shadows, making accurate detection challenging. To address these limitations, we propose a novel deep learning approach to capture the variability in OWT appearance and shadows caused by changes in solar illumination and satellite viewing geometry. Our method learns intrinsic, imaging geometry-invariant features of OWTs, enabling robust detection across multi-seasonal Sentinel-2 imagery. This approach is implemented using Faster R-CNN as the baseline, with three enhanced extensions: (1) direct integration of imaging parameters, where Geowise-Net incorporates solar and view angular information from satellite metadata to improve geometric awareness; (2) implicit geometry learning, where Contrast-Net employs contrastive learning on seasonal image pairs to capture variability in turbine appearance and shadows caused by changes in solar and viewing geometry; and (3) a Composite model that integrates the two geometry-aware models to exploit their complementary strengths. All four models were evaluated using Sentinel-2 imagery from offshore regions in China. The ablation experiments showed a progressive improvement in detection performance in the following order: Faster R-CNN < Geowise-Net < Contrast-Net < Composite. Seasonal tests demonstrated that the proposed models maintained high performance on summer images against the baseline, where turbine shadows are significantly shorter than in winter scenes. The Composite model, in particular, showed only a 0.8% difference in F1 score between the two seasons, compared to up to 3.7% for the baseline, indicating strong robustness to seasonal variation. By applying our approach to 887 Sentinel-2 scenes from China's offshore regions (2023.1–2025.3), we built the China OWT Dataset, mapping 7369 turbines as of March 2025. Full article
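The seasonal effect driving this design is simple geometry: a turbine's shadow shortens as solar elevation rises. A minimal sketch of that relationship (the heights and elevation angles are illustrative assumptions, not values from the paper):

```python
import math

def shadow_length(hub_height_m, solar_elevation_deg):
    """Length of the shadow cast on the sea surface by a structure of the
    given height, for a given solar elevation angle. Pure geometry;
    the example height and angles below are assumptions."""
    return hub_height_m / math.tan(math.radians(solar_elevation_deg))

winter = shadow_length(100, 25)   # low winter sun, long shadow
summer = shadow_length(100, 65)   # high summer sun, short shadow
```

The several-fold shrinkage between the two cases is why summer scenes, with their weaker shadow cue, are the harder test for a detector trained without geometry awareness.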

22 pages, 4636 KiB  
Article
SP-GEM: Spatial Pattern-Aware Graph Embedding for Matching Multisource Road Networks
by Chenghao Zheng, Yunfei Qiu, Jian Yang, Bianying Zhang, Zeyuan Li, Zhangxiang Lin, Xianglin Zhang, Yang Hou and Li Fang
ISPRS Int. J. Geo-Inf. 2025, 14(7), 275; https://doi.org/10.3390/ijgi14070275 - 15 Jul 2025
Abstract
Identifying correspondences of road segments in different road networks, namely road-network matching, is an essential task for road network-centric data processing such as data integration of road networks and data quality assessment of crowd-sourced road networks. Traditional road-network matching usually relies on feature engineering and parameter selection over the geometry and topology of road networks for similarity measurement, resulting in poor performance when dealing with dense and irregular road network structures. Recent development of graph neural networks (GNNs) has demonstrated unsupervised modeling power on road network data: GNNs learn embedded vector representations of road networks through spatial feature induction and topology-based neighbor aggregation. However, weighting spatial information on the node feature alone fails to give full play to the expressive power of GNNs. To this end, this paper proposes a Spatial Pattern-aware Graph EMbedding learning method for road-network matching, named SP-GEM, which explores the idea of spatially-explicit modeling by identifying spatial patterns in neighbor aggregation. Firstly, a road graph is constructed from the road network data, and geometric and topological features are extracted as node features of the road graph. Then, four spatial patterns, including grid, high branching degree, irregular grid, and circuitous, are modelled in a sector-based road neighborhood for road embedding. Finally, the similarity of road embeddings is used to find data correspondences between road networks. We conduct an algorithmic accuracy test to verify the effectiveness of SP-GEM on OSM and Tele Atlas data. SP-GEM improves matching accuracy and recall by at least 6.7% and 10.2% over the baselines, with a high matching success rate (>70%), and by at least 17.7% and 17.0% compared to baseline GNNs without spatially-explicit modeling. Further embedding analysis also verifies the effectiveness of the induction of spatial patterns. This study not only provides an effective and practical algorithm for road-network matching, but also serves as a test bed for exploring the role of spatially-explicit modeling in GNN-based road network modeling. The experimental performances of SP-GEM illuminate the path to developing GeoEmbedding services for geospatial applications. Full article
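The sector-based neighborhood underlying the spatial patterns can be sketched as binning each neighbor's direction into angular sectors around a road node. The sector count and anchoring below are assumptions for illustration, not the paper's configuration.

```python
import math

def sector_index(node_xy, neighbor_xy, n_sectors=8):
    """Assign a neighbor's direction to one of n_sectors equal angular
    sectors around a road node, measured counterclockwise from the
    positive x-axis. Sector count and origin are illustrative."""
    dx = neighbor_xy[0] - node_xy[0]
    dy = neighbor_xy[1] - node_xy[1]
    angle = math.atan2(dy, dx) % (2 * math.pi)   # normalize to [0, 2*pi)
    return int(angle // (2 * math.pi / n_sectors))

s_east = sector_index((0.0, 0.0), (1.0, 0.0))    # due east
```

Aggregating neighbor features per sector (rather than as one undifferentiated set) is what lets the embedding distinguish, say, a grid junction from a circuitous road.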

19 pages, 3619 KiB  
Article
An Adaptive Underwater Image Enhancement Framework Combining Structural Detail Enhancement and Unsupervised Deep Fusion
by Semih Kahveci and Erdinç Avaroğlu
Appl. Sci. 2025, 15(14), 7883; https://doi.org/10.3390/app15147883 - 15 Jul 2025
Abstract
The underwater environment severely degrades image quality by absorbing and scattering light. This causes significant challenges, including non-uniform illumination, low contrast, color distortion, and blurring. These degradations compromise the performance of critical underwater applications, including water quality monitoring, object detection, and identification. To address these issues, this study proposes a detail-oriented hybrid framework for underwater image enhancement that synergizes the strengths of traditional image processing with the powerful feature extraction capabilities of unsupervised deep learning. Our framework introduces a novel multi-scale detail enhancement unit to accentuate structural information, followed by a Latent Low-Rank Representation (LatLRR)-based simplification step. This unique combination effectively suppresses common artifacts like oversharpening, spurious edges, and noise by decomposing the image into meaningful subspaces. The principal structural features are then optimally combined with a gamma-corrected luminance channel using an unsupervised MU-Fusion network, achieving a balanced optimization of both global contrast and local details. The experimental results on the challenging Test-C60 and OceanDark datasets demonstrate that our method consistently outperforms state-of-the-art fusion-based approaches, achieving average improvements of 7.5% in UIQM, 6% in IL-NIQE, and 3% in AG. Wilcoxon signed-rank tests confirm that these performance gains are statistically significant (p < 0.01). Consequently, the proposed method significantly mitigates prevalent issues such as color aberration, detail loss, and artificial haze, which are frequently encountered in existing techniques. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
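The gamma-corrected luminance channel used in the fusion step can be illustrated in one line of arithmetic. The exponent below is an assumption for the sketch, not the value used in the paper; gamma < 1 brightens the dark regions typical of underwater scenes.

```python
def gamma_correct(luminance, gamma=0.7):
    """Gamma-correct a luminance channel given as floats in [0, 1].
    The exponent is illustrative; values are clamped before the
    power law so out-of-range inputs cannot raise errors."""
    return [max(0.0, min(1.0, v)) ** gamma for v in luminance]

out = gamma_correct([0.0, 0.25, 1.0])
```

Note the endpoints are fixed points of the power law; only mid-tones are lifted, which preserves global contrast while recovering shadow detail for the subsequent fusion network.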

18 pages, 8486 KiB  
Article
An Efficient Downwelling Light Sensor Data Correction Model for UAV Multi-Spectral Image DOM Generation
by Siyao Wu, Yanan Lu, Wei Fan, Shengmao Zhang, Zuli Wu and Fei Wang
Drones 2025, 9(7), 491; https://doi.org/10.3390/drones9070491 - 11 Jul 2025
Abstract
The downwelling light sensor (DLS) is the industry-standard solution for generating UAV-based digital orthophoto maps (DOMs). Current mainstream DLS correction methods primarily rely on angle compensation. However, due to the temporal mismatch between the DLS sampling intervals and the exposure times of multispectral cameras, as well as external disturbances such as strong wind gusts and abrupt changes in flight attitude, DLS data often become unreliable, particularly at UAV turning points. Building upon traditional angle compensation methods, this study proposes an improved correction approach—FIM-DC (Fitting and Interpolation Model-based Data Correction)—specifically designed for data collection under clear-sky conditions and stable atmospheric illumination, with the goal of significantly enhancing the accuracy of reflectance retrieval. The method addresses three key issues: (1) field tests conducted in the Qingpu region show that FIM-DC markedly reduces the standard deviation of reflectance at tie points across multiple spectral bands and flight sessions, with the most substantial reduction from 15.07% to 0.58%; (2) it effectively mitigates inconsistencies in reflectance within image mosaics caused by anomalous DLS readings, thereby improving the uniformity of DOMs; and (3) FIM-DC accurately corrects the spectral curves of six land cover types in anomalous images, making them consistent with those from non-anomalous images. In summary, this study demonstrates that integrating FIM-DC into DLS data correction workflows for UAV-based multispectral imagery significantly enhances reflectance calculation accuracy and provides a robust solution for improving image quality under stable illumination conditions. Full article
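The core idea of repairing anomalous DLS readings (e.g., at turning points) under stable illumination can be sketched as interpolating flagged samples from their nearest reliable neighbors in time. This is a simplified stand-in for FIM-DC's fitting-and-interpolation model, with invented data; it is not the paper's exact formulation.

```python
def correct_dls(timestamps, irradiance, flagged):
    """Replace flagged (anomalous) DLS samples by linear interpolation
    between the nearest unflagged neighbors in time. Samples flagged
    at the sequence edges are left unchanged."""
    good = [i for i in range(len(irradiance)) if i not in flagged]
    out = list(irradiance)
    for i in flagged:
        left = max((g for g in good if g < i), default=None)
        right = min((g for g in good if g > i), default=None)
        if left is None or right is None:
            continue  # cannot interpolate without neighbors on both sides
        t = (timestamps[i] - timestamps[left]) / (timestamps[right] - timestamps[left])
        out[i] = irradiance[left] + t * (irradiance[right] - irradiance[left])
    return out

# A spike at t=1 between two stable readings is pulled back onto the trend.
corrected = correct_dls([0.0, 1.0, 2.0], [1.0, 5.0, 2.0], {1})
```

The stable-illumination assumption matters: interpolation is only valid when the true downwelling irradiance changes smoothly between the reliable samples.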

21 pages, 14169 KiB  
Article
High-Precision Complex Orchard Passion Fruit Detection Using the PHD-YOLO Model Improved from YOLOv11n
by Rongxiang Luo, Rongrui Zhao, Xue Ding, Shuangyun Peng and Fapeng Cai
Horticulturae 2025, 11(7), 785; https://doi.org/10.3390/horticulturae11070785 - 3 Jul 2025
Abstract
This study proposes the PHD-YOLO model as a means to enhance the precision of passion fruit detection in intricate orchard settings. The model has been engineered to address salient challenges, including branch and leaf occlusion, variances in illumination, and fruit overlap. This study introduces a partial convolution module (ParConv), which employs a channel grouping and independent processing strategy to mitigate computational complexity. This module enhances the efficacy of local feature extraction in dense fruit regions by integrating sub-group feature-independent convolution and channel concatenation mechanisms. Secondly, depthwise separable convolution (DWConv) is adopted to replace standard convolution. The proposed method decouples spatial convolution and channel convolution, a strategy that retains multi-scale feature expression capabilities while achieving a substantial reduction in model computation. The integration of the HSV Attentional Fusion (HSVAF) module within the backbone network facilitates the fusion of HSV color space characteristics with an adaptive attention mechanism, thereby enhancing feature discriminability under dynamic lighting conditions. The experiment was conducted on a dataset of 1212 original images collected from a planting base in Yunnan, China, covering multiple periods and angles. The dataset was constructed using enhancement strategies, including rotation and noise injection, and contains 2910 samples. The experimental results demonstrate that the improved model achieves a detection accuracy of 95.4%, a recall rate of 85.0%, mAP@0.5 of 91.5%, and an F1 score of 90.0% on the test set, which are 0.7%, 3.5%, 1.3%, and 2.4% higher than those of the baseline model YOLOv11n, with a single-frame inference time of 0.6 milliseconds. The model exhibited significant robustness in scenarios with dense fruits, leaf occlusion, and backlighting, validating the synergistic enhancement of staged convolution optimization and hybrid attention mechanisms. This solution offers a means to automate the monitoring of orchards, achieving a balance between accuracy and real-time performance. Full article
(This article belongs to the Section Fruit Production Systems)
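The saving the abstract claims for replacing standard convolution with depthwise separable convolution can be sketched with a quick parameter count: a standard kernel mixes space and channels jointly, while DWConv splits this into a per-channel spatial filter plus a 1×1 channel mixer. This is an illustrative sketch only; the channel and kernel sizes below are hypothetical, not taken from the paper.

```python
# Parameter-count comparison: standard vs. depthwise separable convolution.

def standard_conv_params(c_in, c_out, k):
    # A standard conv couples spatial and channel mixing in one k x k kernel
    # per (input channel, output channel) pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise stage: one k x k filter per input channel (spatial mixing only).
    depthwise = c_in * k * k
    # Pointwise stage: a 1 x 1 conv that mixes channels.
    pointwise = c_in * c_out
    return depthwise + pointwise

c_in, c_out, k = 64, 128, 3            # hypothetical layer sizes
std = standard_conv_params(c_in, c_out, k)        # 73728
sep = depthwise_separable_params(c_in, c_out, k)  # 576 + 8192 = 8768
print(std, sep, round(std / sep, 1))              # roughly 8.4x fewer parameters
```

The same decomposition applies to multiply–accumulate counts, which is why the substitution cuts model computation while keeping a k×k spatial receptive field.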
18 pages, 9092 KiB  
Article
A Unified YOLOv8 Approach for Point-of-Care Diagnostics of Salivary α-Amylase
by Youssef Amin, Paola Cecere and Pier Paolo Pompa
Biosensors 2025, 15(7), 421; https://doi.org/10.3390/bios15070421 - 2 Jul 2025
Viewed by 415
Abstract
Salivary α-amylase (sAA) is a widely recognized biomarker for stress and autonomic nervous system activity. However, conventional enzymatic assays used to quantify sAA are limited by time-consuming, lab-based protocols. In this study, we present a portable, AI-driven point-of-care system for automated sAA [...] Read more.
Salivary α-amylase (sAA) is a widely recognized biomarker for stress and autonomic nervous system activity. However, conventional enzymatic assays used to quantify sAA are limited by time-consuming, lab-based protocols. In this study, we present a portable, AI-driven point-of-care system for automated sAA classification via colorimetric image analysis. The system integrates SCHEDA, a custom-designed imaging device providing standardized illumination, with a deep learning pipeline optimized for mobile deployment. Two classification strategies were compared: (1) a modular YOLOv4-CNN architecture and (2) a unified YOLOv8 segmentation-classification model. The models were trained on a dataset of 1024 images representing an eight-class classification problem corresponding to distinct sAA concentrations. The results show that red-channel input significantly enhances YOLOv4-CNN performance, achieving 93.5% accuracy compared to 88% with full RGB images. The YOLOv8 model further outperformed both approaches, reaching 96.5% accuracy while simplifying the pipeline and enabling real-time, on-device inference. The system was deployed and validated on a smartphone, demonstrating consistent results in live tests. This work highlights a robust, low-cost platform capable of delivering fast, reliable, and scalable salivary diagnostics for mobile health applications. Full article
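The red-channel preprocessing that boosted YOLOv4-CNN accuracy can be sketched as extracting only the R component of each pixel before classification. This is an illustration of the idea, not the authors' pipeline: the tiny nested-list image below is a hypothetical stand-in for real camera frames.

```python
# Sketch of red-channel extraction: keep only the R component of an RGB image,
# reducing a colorimetric readout to the single channel that carries the most
# discriminative signal for this assay.

def red_channel(rgb_image):
    """Return a single-channel image containing only the red component."""
    return [[pixel[0] for pixel in row] for row in rgb_image]

# A 2x2 RGB image with pixels as (R, G, B) tuples -- hypothetical values
img = [[(200, 10, 5), (180, 20, 8)],
       [(150, 30, 12), (90, 40, 20)]]
print(red_channel(img))  # [[200, 180], [150, 90]]
```

In a real deployment the single-channel output would be fed to the classifier in place of the full RGB tensor.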
15 pages, 1887 KiB  
Article
Multispectral Reconstruction in Open Environments Based on Image Color Correction
by Jinxing Liang, Xin Hu, Yifan Li and Kaida Xiao
Electronics 2025, 14(13), 2632; https://doi.org/10.3390/electronics14132632 - 29 Jun 2025
Viewed by 200
Abstract
Spectral reconstruction based on digital imaging has become an important way to obtain spectral images with high spatial resolution. The current research has made great strides in the laboratory; however, dealing with rapidly changing light sources, illumination, and imaging parameters in an open [...] Read more.
Spectral reconstruction based on digital imaging has become an important way to obtain spectral images with high spatial resolution. The current research has made great strides in the laboratory; however, dealing with rapidly changing light sources, illumination, and imaging parameters in an open environment presents significant challenges for spectral reconstruction. This is because a spectral reconstruction model established under one set of imaging conditions is not suitable for use under different imaging conditions. In this study, considering the principle of multispectral reconstruction, we proposed a method of multispectral reconstruction in open environments based on image color correction. In the proposed method, a whiteboard is used as a medium to calculate the color correction matrices from an open environment and transfer them to the laboratory. After the digital image is corrected, its multispectral image can be reconstructed using the pre-established multispectral reconstruction model in the laboratory. The proposed method was tested in simulations and practical experiments using different datasets and illumination conditions. The results show that the root-mean-square error of the color chart is below 2.6% in the simulation experiment and below 6.0% in the practical experiment, demonstrating the effectiveness of the proposed method. Full article
(This article belongs to the Special Issue Image Fusion and Image Processing)
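The whiteboard-based transfer idea can be illustrated with a deliberately simplified per-channel (diagonal) correction, rather than the full color correction matrices the paper computes: scale each channel so the whiteboard captured in the open environment matches its appearance under the laboratory illuminant, then apply the same gains to every pixel. All pixel values below are hypothetical.

```python
# Simplified whiteboard-based color correction: per-channel gains derived from
# how a shared white reference appears under two illumination conditions.

def diagonal_correction(white_open, white_lab):
    """Per-channel gains mapping open-environment colors toward the lab condition."""
    return [lab / open_ for lab, open_ in zip(white_lab, white_open)]

def apply_correction(pixel, gains):
    """Apply the channel gains to one RGB pixel."""
    return [v * g for v, g in zip(pixel, gains)]

white_open = [220.0, 200.0, 160.0]   # whiteboard RGB captured outdoors (hypothetical)
white_lab  = [240.0, 240.0, 240.0]   # same whiteboard under the lab illuminant

gains = diagonal_correction(white_open, white_lab)
corrected = apply_correction([110.0, 100.0, 80.0], gains)
print([round(v, 1) for v in corrected])  # [120.0, 120.0, 120.0]
```

A full correction matrix additionally captures cross-channel mixing (e.g. how red sensor response depends on green light), which a diagonal model cannot, but the transfer principle via the shared white reference is the same.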