Search Results (617)

Search Parameters:
Keywords = masked errors

14 pages, 5284 KB  
Article
Impact of Phase Defects on the Aerial Image in High NA Extreme Ultraviolet Lithography
by Kun He and Zhinan Zeng
Micromachines 2025, 16(11), 1210; https://doi.org/10.3390/mi16111210 - 24 Oct 2025
Abstract
As extreme ultraviolet (EUV) lithography advances to higher numerical aperture (NA), it delivers higher-resolution imaging that may also be more sensitive to phase defects in the EUV mask. It is therefore necessary to understand how the effect of phase defects on imaging quality depends on the NA. We simulated aerial images of patterned EUV masks for exposure tools with NA = 0.55 and NA = 0.33 using the rigorous coupled-wave analysis (RCWA) method. The results show that a higher NA enhances the contrast of the aerial image, which in turn provides greater tolerance for phase defects; high NA can thus mitigate the negative impact of phase defects on imaging quality to some extent. Furthermore, both the defect signal and the intensity loss ratio of the aerial image first increase and then decrease as the width of the phase defect increases, owing to the height/width ratio of the defect, and the defect width corresponding to the maximum defect signal tends to shrink as the NA grows. It is also worth noting that at NA = 0.33, variations in the position of the phase defect cause fluctuations in the critical dimension (CD) error due to the shadow effect of the absorber, whereas this effect diminishes at NA = 0.55: the higher NA provides a stronger background field that suppresses the absorber's shadow effect more effectively.
(This article belongs to the Special Issue Recent Advances in Lithography)
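The contrast quoted above follows the standard lithography definition, (Imax - Imin) / (Imax + Imin) over an intensity cut through the aerial image; a minimal sketch of that textbook formula, not code from the paper:

```python
import numpy as np

def aerial_image_contrast(intensity):
    """Standard image contrast used in lithography:
    (Imax - Imin) / (Imax + Imin) over an aerial-image intensity cut."""
    i_max, i_min = float(np.max(intensity)), float(np.min(intensity))
    return (i_max - i_min) / (i_max + i_min)
```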

27 pages, 3367 KB  
Article
Amodal Segmentation and Trait Extraction of On-Branch Soybean Pods with a Synthetic Dual-Mask Dataset
by Kaiwen Jiang, Wei Guo and Wenli Zhang
Sensors 2025, 25(20), 6486; https://doi.org/10.3390/s25206486 - 21 Oct 2025
Abstract
We address the challenge that occlusions in on-branch soybean images impede accurate pod-level phenotyping. We propose a laboratory on-branch pipeline that couples a prior-guided synthetic data generator (producing synchronized visible and amodal labels) with an amodal instance segmentation framework based on an improved Swin Transformer backbone with a Simple Attention Module (SimAM) and dual heads, trained via three-stage transfer (synthetic excised → synthetic on-branch → few-shot real). Guided by complete (amodal) masks, a morphology-driven module performs pose normalization, axial geometric modeling, multi-scale fused density mapping, marker-controlled watershed, and topological consistency refinement to extract seeds per pod (SPP) and geometric traits. On real on-branch data, the model attains visible Average Precision AP50/75 of 91.6/77.6 and amodal AP50/75 of 90.1/74.7, and incorporating synthetic data yields consistent gains across models, indicating effective occlusion reasoning. On excised-pod tests, SPP achieves a mean absolute error (MAE) of 0.07 and a root mean square error (RMSE) of 0.26; pod length/width achieves an MAE of 2.87/3.18 px with high agreement (R² up to 0.94). Overall, the co-designed data–model–task pipeline recovers complete pod geometry under heavy occlusion and enables non-destructive, high-precision, low-annotation-cost extraction of key traits, providing a practical basis for standardized laboratory phenotyping and downstream breeding applications.
(This article belongs to the Special Issue Feature Papers in Smart Agriculture 2025)
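The seeds-per-pod stage above combines a density map with marker-controlled watershed; a minimal sketch of that one step, assuming scikit-image and treating min_distance as a placeholder rather than the paper's setting:

```python
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def count_seeds(amodal_mask, density):
    """Marker-controlled watershed over a density map restricted to the
    amodal pod mask; each basin is counted as one seed."""
    coords = peak_local_max(density, min_distance=5,
                            labels=amodal_mask.astype(int))
    markers = np.zeros(density.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    basins = watershed(-density, markers, mask=amodal_mask)
    return int(basins.max())  # number of basins ~ seeds per pod (SPP)
```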

20 pages, 6483 KB  
Article
Loop-MapNet: A Multi-Modal HDMap Perception Framework with SDMap Dynamic Evolution and Priors
by Yuxuan Tang, Jie Hu, Daode Zhang, Wencai Xu, Feiyu Zhao and Xinghao Cheng
Appl. Sci. 2025, 15(20), 11160; https://doi.org/10.3390/app152011160 - 17 Oct 2025
Abstract
High-definition maps (HDMaps) are critical for safe autonomy on structured roads, yet traditional production—relying on dedicated mapping fleets and manual quality control—is costly and slow, impeding large-scale, frequent updates. Recently, standard-definition maps (SDMaps) derived from remote sensing have been adopted as priors to support HDMap perception, lowering cost but struggling with subtle urban changes and localization drift. We propose Loop-MapNet, a self-evolving, multimodal, closed-loop mapping framework. Loop-MapNet leverages surround-view images, LiDAR point clouds, and SDMaps; it fuses multi-scale vision via a weighted BiFPN and couples PointPillars BEV and SDMap topology encoders for cross-modal sensing. A Transformer-based bidirectional adaptive cross-attention aligns the SDMap with online perception, enabling robust fusion under heterogeneity. We further introduce a confidence-guided masked autoencoder (CG-MAE) that leverages confidence and probabilistic distillation to both capture implicit SDMap priors and enhance the detailed representation of low-confidence HDMap regions. With spatiotemporal consistency checks, Loop-MapNet incrementally updates SDMaps to form a perception–mapping–update loop, compensating for remote-sensing latency and enabling online map optimization. On nuScenes, within 120 m, Loop-MapNet attains 61.05% mIoU, surpassing the best baseline by 0.77%. Under extreme localization errors, it maintains 60.46% mIoU, improving robustness by 2.77%; CG-MAE pre-training raises accuracy in low-confidence regions by 1.72%. These results demonstrate advantages in fusion and robustness, moving beyond one-way prior injection and enabling HDMap–SDMap co-evolution for closed-loop autonomy and rapid SDMap refresh from remote sensing.
(This article belongs to the Section Computing and Artificial Intelligence)
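The CG-MAE described above biases masking toward low-confidence HDMap regions; one plausible sampling rule under that intuition is sketched below (the exact scheme, mask ratio, and patch layout are assumptions, not the paper's):

```python
import numpy as np

def confidence_guided_mask(conf, mask_ratio=0.5, rng=None):
    """Sample a patch mask biased toward low-confidence regions:
    conf is a per-patch confidence map in [0, 1]."""
    rng = np.random.default_rng(rng)
    w = (1.0 - conf.ravel()) + 1e-8          # low confidence -> high weight
    n_mask = int(mask_ratio * conf.size)
    idx = rng.choice(conf.size, size=n_mask, replace=False, p=w / w.sum())
    mask = np.zeros(conf.size, dtype=bool)
    mask[idx] = True
    return mask.reshape(conf.shape)          # True = patch gets masked
```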

25 pages, 9844 KB  
Article
Deep Learning and Geometric Modeling for 3D Reconstruction of Subsurface Utilities from GPR Data
by Peyman Jafary, Davood Shojaei and Krista A. Ehinger
Sensors 2025, 25(20), 6414; https://doi.org/10.3390/s25206414 - 17 Oct 2025
Abstract
Accurate underground utility mapping remains a critical yet complex task in Ground Penetrating Radar (GPR) interpretation, essential to avoiding costly and dangerous excavation errors. This study presents a novel deep learning-based pipeline for 3D reconstruction of buried linear utilities from high-resolution GPR B-scan data. Three state-of-the-art models—YOLOv8, YOLOv11, and Mask R-CNN—were employed for both bounding box and keypoint detection of hyperbolic reflections, using a real-world GPR dataset. On the test set, Mask R-CNN achieved the highest keypoint F1-score (0.822) and bounding box F1-score (0.867), outperforming the YOLO models. Detected summit points were clustered using a 3D DBSCAN algorithm to approximate the spatial trajectories of buried utilities. RANSAC-based line fitting was then applied to each cluster, yielding an average RMSE of 0.06 across all fitted 3D paths. The key innovation of this hybrid model lies in its use of real-world data (avoiding synthetic augmentation), direct summit point detection (beyond bounding box analysis), and a geometric 3D reconstruction pipeline. This approach addresses key limitations in prior studies, including poor generalizability to complex real-world scenarios and the reliance on full 3D data volumes. Our method offers a more practical and scalable solution for subsurface utility mapping in real-world settings.
(This article belongs to the Section Radar Sensors)
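The reconstruction stage above (3D DBSCAN over detected summit points, then RANSAC line fitting per cluster) can be sketched as follows; eps, min_samples, and the inlier tolerance are illustrative values, not those of the paper:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def fit_line_ransac(points, n_iter=200, tol=0.05, rng=None):
    """RANSAC fit of a 3D line: sample two points, keep the candidate
    with the most inliers, return (point_on_line, unit_direction)."""
    rng = np.random.default_rng(rng)
    best_p, best_d, best_inliers = None, None, -1
    for _ in range(n_iter):
        p, q = points[rng.choice(len(points), size=2, replace=False)]
        d = q - p
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d /= norm
        # Point-to-line distance: ||(x - p) x d|| with |d| = 1.
        dist = np.linalg.norm(np.cross(points - p, d), axis=1)
        n_in = int((dist < tol).sum())
        if n_in > best_inliers:
            best_p, best_d, best_inliers = p, d, n_in
    return best_p, best_d

# Stand-in for detected hyperbola summits as (x, y, depth) coordinates.
summits = np.random.rand(200, 3)
labels = DBSCAN(eps=0.1, min_samples=5).fit_predict(summits)
utilities = [fit_line_ransac(summits[labels == k])
             for k in set(labels) if k != -1]
```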

19 pages, 2733 KB  
Article
Style Transfer from Sentinel-1 to Sentinel-2 for Fluvial Scenes with Multi-Modal and Multi-Temporal Image Fusion
by Patrice E. Carbonneau
Remote Sens. 2025, 17(20), 3445; https://doi.org/10.3390/rs17203445 - 15 Oct 2025
Abstract
Recently, there has been significant progress in the semantic classification of water bodies at global scales with deep learning. For the key purposes of water inventory and change detection, advanced deep learning classifiers such as UNets and Vision Transformers have been shown to be both accurate and flexible when applied to large-scale, or even global, satellite image datasets from optical (e.g., Sentinel-2) and radar sensors (e.g., Sentinel-1). Most of this work is conducted with optical sensors, which usually have better image quality, but their obvious limitation is cloud cover, which is why radar imagery is an important complementary dataset. However, radar imagery is generally more sensitive to soil moisture than optical data. Furthermore, topography and wind-ripple effects can alter the reflected intensity of radar waves, which can induce errors in water classification models that fundamentally rely on the fact that water is darker than the surrounding landscape. In this paper, we develop a solution for the semantic classification of water bodies from Sentinel-1 radar images that uses style transfer with multi-modal and multi-temporal image fusion. Instead of developing new semantic classification models that work directly on Sentinel-1 images, we develop a global style transfer model that produces synthetic Sentinel-2 images from Sentinel-1 input. The resulting synthetic Sentinel-2 imagery can then be classified with existing models, which obviates the need for large volumes of manually labeled Sentinel-1 water masks. Next, we show that fusing an 8-year cloud-free composite of the near-infrared band 8 of Sentinel-2 to the input Sentinel-1 image improves the classification performance. Style transfer models were trained and validated with global-scale data covering the years 2017 to 2024, including every month of the year. When tested against a global independent benchmark, S1S2-Water, the semantic classifications produced from our synthetic imagery show a marked improvement with the use of image fusion: using only Sentinel-1 data, the overall IoU (Intersection over Union) score is 0.70, but with image fusion it rises to 0.93.
(This article belongs to the Special Issue Multimodal Remote Sensing Data Fusion, Analysis and Application)
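The IoU scores quoted above (0.70 without fusion, 0.93 with) are the standard intersection-over-union of predicted and reference water masks; for reference, a minimal implementation:

```python
import numpy as np

def iou(pred_mask, true_mask):
    """Intersection over Union of two boolean water masks."""
    pred, true = np.asarray(pred_mask, bool), np.asarray(true_mask, bool)
    union = np.logical_or(pred, true).sum()
    return np.logical_and(pred, true).sum() / union if union else 1.0
```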

13 pages, 3442 KB  
Article
Patterning Fidelity Enhancement and Aberration Mitigation in EUV Lithography Through Source–Mask Optimization
by Qi Wang, Qiang Wu, Ying Li, Xianhe Liu and Yanli Li
Micromachines 2025, 16(10), 1166; https://doi.org/10.3390/mi16101166 - 14 Oct 2025
Abstract
Extreme ultraviolet (EUV) lithography faces critical challenges in aberration control and patterning fidelity as technology nodes shrink below 3 nm. This work demonstrates how Source–Mask Optimization (SMO) simultaneously addresses both illumination and mask design to enhance pattern transfer accuracy and mitigate aberrations. Through a comprehensive optimization framework incorporating key process metrics, including critical dimension (CD), exposure latitude (EL), and mask error factor (MEF), we achieve significant improvements in imaging quality and process window for 40 nm minimum pitch patterns, representative of 2 nm node back-end-of-line (BEOL) requirements. Our analysis reveals that intelligent SMO implementation not only enables robust patterning solutions but also compensates for inherent EUV aberrations by balancing source characteristics with mask modifications. On average, our results show a 4.02% reduction in CD uniformity variation, concurrent with a 1.48% improvement in exposure latitude and a 5.45% reduction in MEF. The proposed methodology provides actionable insights for aberration-aware SMO strategies, offering a pathway to maintain lithographic performance as feature sizes continue to scale. These results underscore SMO’s indispensable role in advancing EUV lithography capabilities for next-generation semiconductor manufacturing.
(This article belongs to the Special Issue Recent Advances in Lithography)
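Of the metrics above, the mask error factor (MEF) has a particularly compact definition: the slope of wafer CD against mask CD, with both expressed at wafer scale. A sketch of how it might be estimated from simulated CD pairs; the data layout is an assumption:

```python
import numpy as np

def mask_error_factor(mask_cd_nm, wafer_cd_nm):
    """MEF estimated as the linear slope of wafer CD versus mask CD,
    with both CDs expressed at wafer scale (nm)."""
    slope, _intercept = np.polyfit(mask_cd_nm, wafer_cd_nm, 1)
    return slope
```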

30 pages, 4855 KB  
Article
Towards Reliable High-Resolution Satellite Products for the Monitoring of Chlorophyll-a and Suspended Particulate Matter in Optically Shallow Coastal Lagoons
by Samuel Martin, Philippe Bryère, Pierre Gernez, Pannimpullath Remanan Renosh and David Doxaran
Remote Sens. 2025, 17(20), 3430; https://doi.org/10.3390/rs17203430 - 14 Oct 2025
Abstract
Coastal lagoons are fragile and dynamic ecosystems that are particularly vulnerable to climate change and anthropogenic pressures such as urbanization and eutrophication. These vulnerabilities highlight the need for frequent and spatially extensive monitoring of water quality (WQ). While satellite remote sensing offers a valuable tool to support this effort, the optical complexity and shallow depths of lagoons pose major challenges for retrieving water column biogeochemical parameters such as chlorophyll-a ([chl-a]) and suspended particulate matter ([SPM]) concentrations. In this study, we develop and evaluate a robust satellite-based processing chain using Sentinel-2 MSI imagery over two French Mediterranean lagoon systems (Berre and Thau), supported by extensive in situ radiometric and biogeochemical datasets. Our approach includes the following: (i) a comparative assessment of six atmospheric correction (AC) processors, (ii) the development of an Optically Shallow Water Probability Algorithm (OSWPA), a new semi-empirical algorithm to estimate the probability of bottom contamination (BC), and (iii) the evaluation of several [chl-a] and [SPM] inversion algorithms. Results show that the Sen2Cor AC processor combined with a near-infrared similarity correction (NIR-SC) yields relative errors below 30% across all bands for retrieving remote-sensing reflectance Rrs(λ). OSWPA provides a spatially continuous and physically consistent alternative to binary BC masks. A new [chl-a] algorithm based on a near-infrared/blue Rrs ratio improves the retrieval accuracy, while the 705 nm band appears to be the most suitable for retrieving [SPM] in optically shallow lagoons. This processing chain enables high-resolution WQ monitoring of two coastal lagoon systems and supports future large-scale assessments of ecological trends under increasing climate and anthropogenic stress.
(This article belongs to the Section Ocean Remote Sensing)

30 pages, 5508 KB  
Article
Phase-Aware Complex-Spectrogram Autoencoder for Vibration Preprocessing: Fault-Component Separation via Input-Phasor Orthogonality Regularization
by Seung-yeol Yoo, Ye-na Lee, Jae-chul Lee, Se-yun Hwang, Jae-yun Lee and Soon-sup Lee
Machines 2025, 13(10), 945; https://doi.org/10.3390/machines13100945 - 13 Oct 2025
Abstract
We propose a phase-aware complex-spectrogram autoencoder (AE) for preprocessing raw vibration signals of rotating electrical machines. The AE reconstructs normal components and separates fault components as residuals, guided by an input-phasor phase-orthogonality regularization that defines parallel/orthogonal residuals with respect to the local signal phase. We use a U-Net-based AE with a mask-bias head to refine local magnitude and phase. Decisions are based on residual features—magnitude/shape, frequency distribution, and projections onto the normal manifold. Using the AI Hub open dataset from field ventilation motors, we evaluate eight representative motor cases (2.2–5.5 kW: misalignment, unbalance, bearing fault, belt looseness). The preprocessing yielded clear residual patterns (low-frequency floor rise, resonance-band peaks, harmonic-neighbor spikes) and achieved an area under the receiver operating characteristic curve (ROC-AUC) of 0.998–1.000 across the eight cases, with strong leave-one-file-out generalization and good calibration (expected calibration error (ECE) ≤ 0.023). The results indicate that learning to remove normal structure while enforcing phase consistency provides an unsupervised front-end that enhances fault evidence while preserving interpretability on field data.
(This article belongs to the Section Machines Testing and Maintenance)
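The parallel/orthogonal residual split referenced above can be expressed directly on complex spectrograms; a minimal sketch under the assumption that "parallel" means the residual component in phase with the local input phasor:

```python
import numpy as np

def split_residual(input_stft, recon_stft, eps=1e-12):
    """Split the complex residual (input - reconstruction) into components
    parallel and orthogonal to the local phase of the input spectrogram."""
    resid = input_stft - recon_stft
    phasor = input_stft / (np.abs(input_stft) + eps)   # unit input phasor
    parallel = np.real(resid * np.conj(phasor)) * phasor
    return parallel, resid - parallel                  # (parallel, orthogonal)
```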

26 pages, 4780 KB  
Article
Uncertainty Quantification Based on Block Masking of Test Images
by Pai-Xuan Wang, Chien-Hung Liu and Shingchern D. You
Information 2025, 16(10), 885; https://doi.org/10.3390/info16100885 - 11 Oct 2025
Abstract
In image classification tasks, models may occasionally produce incorrect predictions, which can lead to severe consequences in safety-critical applications. For instance, if a model mistakenly classifies a red traffic light as green, it could result in a traffic accident. Therefore, it is essential to assess the confidence level associated with each prediction. Predictions accompanied by high confidence scores are generally more reliable and can serve as a basis for informed decision-making. To address this, the present paper extends the block-scaling approach—originally developed for estimating classifier accuracy on unlabeled datasets—to compute confidence scores for individual samples in image classification. The proposed method, termed block masking confidence (BMC), applies a sliding mask filled with random noise to occlude localized regions of the input image. Each masked variant is classified, and predictions are aggregated across all variants. The final class is selected via majority voting, and a confidence score is derived based on prediction consistency. To evaluate the effectiveness of BMC, we conducted experiments comparing it against Monte Carlo (MC) dropout and a vanilla baseline across image datasets of varying sizes and distortion levels. While BMC does not consistently outperform the baselines under standard (in-distribution) conditions, it shows clear advantages on distorted and out-of-distribution (OOD) samples. Specifically, on the level-3 distorted iNaturalist 2018 dataset, BMC achieves a median expected calibration error (ECE) of 0.135, compared to 0.345 for MC dropout and 0.264 for the vanilla approach. On the level-3 distorted Places365 dataset, BMC yields an ECE of 0.173, outperforming MC dropout (0.290) and vanilla (0.201). For OOD samples in Places365, BMC achieves a peak entropy of 1.43, higher than the 1.06 observed for both MC dropout and vanilla. Furthermore, combining BMC with MC dropout leads to additional improvements. On distorted Places365, the median ECE is reduced to 0.151, and the peak entropy for OOD samples increases to 1.73. Overall, the proposed BMC method offers a promising framework for uncertainty quantification in image classification, particularly under challenging or distribution-shifted conditions.
(This article belongs to the Special Issue Machine Learning and Data Mining for User Classification)
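The BMC procedure above is simple enough to state in a few lines; this sketch assumes a classifier that maps one image to a class label, and treats mask size, stride, and the noise distribution as placeholders:

```python
import numpy as np

def bmc_confidence(classify, image, mask_size=32, stride=32, rng=None):
    """Block masking confidence: occlude one block at a time with random
    noise, classify every masked variant, take the majority class, and
    score confidence as the fraction of variants that agree with it."""
    rng = np.random.default_rng(rng)
    lo, hi = float(image.min()), float(image.max())
    preds = []
    h, w = image.shape[:2]
    for y in range(0, h - mask_size + 1, stride):
        for x in range(0, w - mask_size + 1, stride):
            variant = image.copy()
            block = variant[y:y + mask_size, x:x + mask_size]
            variant[y:y + mask_size, x:x + mask_size] = rng.uniform(
                lo, hi, size=block.shape)
            preds.append(classify(variant))
    labels, counts = np.unique(preds, return_counts=True)
    return labels[np.argmax(counts)], counts.max() / counts.sum()
```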

23 pages, 5434 KB  
Article
Deep Reinforcement Learning for Sim-to-Real Robot Navigation with a Minimal Sensor Suite for Beach-Cleaning Applications
by Guillermo Cid Ampuero, Gabriel Hermosilla, Germán Varas and Matías Toribio Clark
Appl. Sci. 2025, 15(19), 10719; https://doi.org/10.3390/app151910719 - 5 Oct 2025
Abstract
Autonomous beach-cleaning robots require reliable, low-cost navigation on sand. We study Sim-to-Real transfer of deep reinforcement learning (DRL) policies using a minimal sensor suite—wheel-encoder odometry and a single 2-D LiDAR—on a 30 kg differential-drive platform (Raspberry Pi 4). Two policies, Proximal Policy Optimization (PPO) and a masked-action variant (PPO-Mask), were trained in Gazebo + Gymnasium and deployed on the physical robot without hyperparameter retuning. Field trials on firm sand and on a natural loose-sand beach show that PPO-Mask reduces tracking error versus PPO on firm ground (16.6% reduction in integral squared error, ISE; 5.2% reduction in integral absolute error, IAE) and executes multi-waypoint paths faster (square path: 103.46 s vs. 112.48 s for PPO). On beach sand, all waypoints were reached within a 1 m tolerance, with mission times of 115.72 s (square) and 81.77 s (triangle). These results indicate that DRL-based navigation with minimal sensing and low-cost compute is feasible in beach settings.
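A masked-action policy of the kind named above typically suppresses invalid actions at sampling time; a generic sketch of that mechanism, not the authors' implementation:

```python
import numpy as np

def sample_masked_action(logits, valid, rng=None):
    """Sample an action after masking: invalid actions receive -inf
    logits, so the softmax assigns them zero probability."""
    rng = np.random.default_rng(rng)
    masked = np.where(valid, logits, -np.inf)
    p = np.exp(masked - masked.max())
    p /= p.sum()
    return int(rng.choice(len(logits), p=p))
```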

18 pages, 4927 KB  
Article
Automated Grading of Boiled Shrimp by Color Level Using Image Processing Techniques and Mask R-CNN with Feature Pyramid Networks
by Manit Chansuparp, Nantipa Pansawat and Sansanee Wangvoralak
Appl. Sci. 2025, 15(19), 10632; https://doi.org/10.3390/app151910632 - 1 Oct 2025
Abstract
Color grading of boiled shrimp is a critical factor influencing market price, yet the process is usually conducted visually by buyers such as middlemen and processing plants. This subjective practice raises concerns about accuracy, impartiality, and fairness, often resulting in disputes with farmers. To address this issue, this study proposes a standardized and automated grading approach based on image processing and artificial intelligence. The method requires only a photograph of boiled shrimp placed alongside a color grading ruler. The grading process involves two stages: segmentation of shrimp and ruler regions in the image, followed by color comparison. For segmentation, deep learning models based on Mask R-CNN with a Feature Pyramid Network backbone were employed. Four model configurations were tested, using ResNet and ResNeXt backbones with and without a Boundary Loss function. Results show that the ResNet + Boundary Loss model achieved the highest segmentation performance, with IoU scores of 91.2% for shrimp and 87.8% for the color ruler. In the grading step, color similarity was evaluated in the CIELAB color space by computing Euclidean distances in the L (lightness) and a (red–green) channels, which align closely with human perception of shrimp coloration. The system achieved grading accuracy comparable to human experts, with a mean absolute error of 1.2, demonstrating its potential to provide consistent, objective, and transparent shrimp quality assessment.
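The color-comparison stage above reduces to a Euclidean distance over the L and a channels of CIELAB; a minimal sketch assuming scikit-image and a hypothetical list of (grade, ruler-patch) pairs:

```python
import numpy as np
from skimage import color

def la_distance(region_a, region_b):
    """Euclidean distance between mean colors in the L (lightness) and
    a (red-green) CIELAB channels of two RGB regions."""
    lab_a = color.rgb2lab(region_a).reshape(-1, 3).mean(axis=0)
    lab_b = color.rgb2lab(region_b).reshape(-1, 3).mean(axis=0)
    return np.hypot(lab_a[0] - lab_b[0], lab_a[1] - lab_b[1])

def grade_shrimp(shrimp_rgb, ruler_patches):
    """ruler_patches: list of (grade_label, rgb_patch) -- a hypothetical layout."""
    return min(ruler_patches, key=lambda g: la_distance(shrimp_rgb, g[1]))[0]
```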

19 pages, 2933 KB  
Article
Image-Based Detection of Chinese Bayberry (Myrica rubra) Maturity Using Cascaded Instance Segmentation and Multi-Feature Regression
by Hao Zheng, Li Sun, Yue Wang, Han Yang and Shuwen Zhang
Horticulturae 2025, 11(10), 1166; https://doi.org/10.3390/horticulturae11101166 - 1 Oct 2025
Abstract
The accurate assessment of Chinese bayberry (Myrica rubra) maturity is critical for intelligent harvesting. This study proposes a novel cascaded framework combining instance segmentation and multi-feature regression for accurate maturity detection. First, a lightweight SOLOv2-Light network is employed to segment each fruit individually, which significantly reduces computational costs with only a marginal drop in accuracy. Then, a multi-feature extraction network is developed to fuse deep semantic, color (LAB space), and multi-scale texture features, enhanced by a channel attention mechanism for adaptive weighting. The maturity ground truth is defined using the a*/b* ratio measured by a colorimeter, which correlates strongly with anthocyanin accumulation and visual ripeness. Experimental results demonstrated that the proposed method achieves a mask mAP of 0.788 on the instance segmentation task, outperforming Mask R-CNN and YOLACT. For maturity prediction, a mean absolute error of 3.946% is attained, which is a significant improvement over the baseline. When the data are discretized into three maturity categories, the overall accuracy reaches 95.51%, surpassing YOLOX-s and Faster R-CNN by a considerable margin while reducing processing time by approximately 46%. The modular design facilitates easy adaptation to new varieties. This research provides a robust and efficient solution for in-field bayberry maturity detection, offering substantial value for the development of automated harvesting systems.
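The ground truth above is the colorimeter a*/b* ratio, discretized into three maturity categories; a sketch with purely illustrative cut points (the paper's thresholds are not given in the abstract):

```python
def maturity_class(a_star, b_star, cuts=(0.5, 1.5)):
    """Map an a*/b* colorimeter ratio to one of three maturity classes.
    The cut points are placeholders, not values from the paper."""
    ratio = a_star / b_star
    if ratio < cuts[0]:
        return "low"
    if ratio < cuts[1]:
        return "medium"
    return "high"
```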

25 pages, 6044 KB  
Article
Computer Vision-Based Multi-Feature Extraction and Regression for Precise Egg Weight Measurement in Laying Hen Farms
by Yunxiao Jiang, Elsayed M. Atwa, Pengguang He, Jinhui Zhang, Mengzui Di, Jinming Pan and Hongjian Lin
Agriculture 2025, 15(19), 2035; https://doi.org/10.3390/agriculture15192035 - 28 Sep 2025
Abstract
Egg weight monitoring provides critical data for calculating the feed-to-egg ratio and improving poultry farming efficiency. Installing a computer vision monitoring system in egg collection systems enables efficient, low-cost automated egg weight measurement. However, accuracy is compromised by egg clustering during transportation and by low-contrast edges, which limits the widespread adoption of such methods. To address this, we propose an egg measurement method based on computer vision with multi-feature extraction and regression. The proposed pipeline integrates two artificial neural networks: Central differential-EfficientViT YOLO (CEV-YOLO) and the Egg Weight Measurement Network (EWM-Net). CEV-YOLO is an enhanced version of YOLOv11, incorporating central differential convolution (CDC) and an efficient Vision Transformer (EfficientViT), enabling accurate pixel-level egg segmentation in the presence of occlusions and low-contrast edges. EWM-Net is a custom-designed neural network that uses the segmented egg masks to perform advanced feature extraction and precise weight estimation. Experimental results show that CEV-YOLO outperforms other YOLO-based models in egg segmentation, with a precision of 98.9%, a recall of 97.5%, and an Average Precision (AP) at an Intersection over Union (IoU) threshold of 0.9 (AP90) of 89.8%. EWM-Net achieves a mean absolute error (MAE) of 0.88 g and an R² of 0.926 in egg weight measurement, outperforming six mainstream regression models. This study provides a practical, automated solution for precise egg weight measurement in production scenarios, which is expected to improve the accuracy and efficiency of feed-to-egg ratio measurement in laying hen farms.
(This article belongs to the Section Agricultural Product Quality and Safety)

34 pages, 9527 KB  
Article
High-Resolution 3D Thermal Mapping: From Dual-Sensor Calibration to Thermally Enriched Point Clouds
by Neri Edgardo Güidi, Andrea di Filippo and Salvatore Barba
Appl. Sci. 2025, 15(19), 10491; https://doi.org/10.3390/app151910491 - 28 Sep 2025
Abstract
Thermal imaging is increasingly applied in remote sensing to identify material degradation, monitor structural integrity, and support energy diagnostics. However, its adoption is limited by the low spatial resolution of thermal sensors compared to RGB cameras. This study proposes a modular pipeline to generate thermally enriched 3D point clouds by fusing RGB and thermal imagery acquired simultaneously with a dual-sensor unmanned aerial vehicle system. The methodology includes geometric calibration of both cameras, image undistortion, cross-spectral feature matching, and projection of radiometric data onto the photogrammetric model through a computed homography. Thermal values are extracted using a custom parser and assigned to 3D points based on visibility masks and interpolation strategies. Calibration achieved 81.8% chessboard detection, yielding subpixel reprojection errors. Among twelve evaluated algorithms, LightGlue retained 99% of its matches and delivered a reprojection accuracy of 18.2% at 1 px, 65.1% at 3 px, and 79% at 5 px. A case study on photovoltaic panels demonstrates the method's capability to map thermal patterns with low temperature deviation from ground-truth data. Developed entirely in Python, the workflow integrates into Agisoft Metashape or other software. The proposed approach enables cost-effective, high-resolution thermal mapping with applications in civil engineering, cultural heritage conservation, and environmental monitoring.
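The projection step above rests on a homography estimated from cross-spectral matches; a minimal OpenCV sketch (the match arrays are assumed inputs, e.g., from LightGlue):

```python
import cv2
import numpy as np

def project_thermal(pts_thermal, pts_rgb, thermal_img, rgb_shape):
    """Estimate a RANSAC homography from matched thermal/RGB keypoints,
    then warp the radiometric image into RGB pixel coordinates so 3D
    points of the photogrammetric model can sample temperatures."""
    H, _inliers = cv2.findHomography(
        np.asarray(pts_thermal, np.float32),
        np.asarray(pts_rgb, np.float32),
        cv2.RANSAC, 3.0)
    return cv2.warpPerspective(thermal_img, H, (rgb_shape[1], rgb_shape[0]))
```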

31 pages, 10644 KB  
Article
An Instance Segmentation Method for Agricultural Plastic Residual Film on Cotton Fields Based on RSE-YOLO-Seg
by Huimin Fang, Quanwang Xu, Xuegeng Chen, Xinzhong Wang, Limin Yan and Qingyi Zhang
Agriculture 2025, 15(19), 2025; https://doi.org/10.3390/agriculture15192025 - 26 Sep 2025
Abstract
To address the challenges of multi-scale missed detections, false positives, and incomplete boundary segmentation in cotton field residual plastic film detection, this study proposes the RSE-YOLO-Seg model. First, a PKI module (adaptive receptive field) is integrated into the C3K2 block and combined with the SegNext attention mechanism (multi-scale convolutional kernels) to capture multi-scale residual film features. Second, RFCAConv replaces standard convolutional layers to differentially process regions and receptive fields of different sizes, and an Efficient-Head is designed to reduce parameters. Finally, an NM-IoU loss function is proposed to enhance small residual film detection and boundary segmentation. Experiments on a self-constructed dataset show that RSE-YOLO-Seg improves the object detection average precision (mAP50(B)) by 3% and mask segmentation average precision (mAP50(M)) by 2.7% compared with the baseline, with all module improvements being statistically significant (p < 0.05). Across four complex scenarios, it exhibits stronger robustness than mainstream models (YOLOv5n-seg, YOLOv8n-seg, YOLOv10n-seg, YOLO11n-seg), and achieves 17/38 FPS on Jetson Nano B01/Orin. Additionally, when combined with DeepSORT, compared with random image sampling, the mean error between predicted and actual residual film area decreases from 232.30 cm² to 142.00 cm², and the root mean square error (RMSE) drops from 251.53 cm² to 130.25 cm². This effectively mitigates pose-induced random errors in static images and significantly improves area estimation accuracy.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
