Search Results (324)

Search Parameters:
Keywords = mask correction

17 pages, 8549 KiB  
Article
A Fully Automated Analysis Pipeline for 4D Flow MRI in the Aorta
by Ethan M. I. Johnson, Haben Berhane, Elizabeth Weiss, Kelly Jarvis, Aparna Sodhi, Kai Yang, Joshua D. Robinson, Cynthia K. Rigsby, Bradley D. Allen and Michael Markl
Bioengineering 2025, 12(8), 807; https://doi.org/10.3390/bioengineering12080807 - 27 Jul 2025
Abstract
Four-dimensional (4D) flow MRI has shown promise for the assessment of aortic hemodynamics. However, data analysis traditionally requires manual and time-consuming human input at several stages. This limits reproducibility and affects analysis workflows, such that large-cohort 4D flow studies are lacking. Here, a fully automated artificial intelligence (AI) 4D flow analysis pipeline was developed and evaluated in a cohort of over 350 subjects. The 4D flow MRI analysis pipeline integrated a series of previously developed and validated deep learning networks, which replaced traditionally manual processing tasks (background-phase correction, noise masking, velocity anti-aliasing, aorta 3D segmentation). Hemodynamic parameters (global aortic pulse wave velocity (PWV), peak velocity, flow energetics) were automatically quantified. The pipeline was evaluated in a heterogeneous single-center cohort of 379 subjects (age = 43.5 ± 18.6 years, 118 female) who underwent 4D flow MRI of the thoracic aorta (n = 147 healthy controls, n = 147 patients with a bicuspid aortic valve [BAV], n = 10 with mechanical valve prostheses, n = 75 pediatric patients with hereditary aortic disease). Pipeline performance with BAV and control data was evaluated by comparing to manual analysis performed by two human observers. A fully automated 4D flow pipeline analysis was successfully performed in 365 of 379 patients (96%). Pipeline-based quantification of aortic hemodynamics was closely correlated with manual analysis results (peak velocity: r = 1.00, p < 0.001; PWV: r = 0.99, p < 0.001; flow energetics: r = 0.99, p < 0.001; overall r ≥ 0.99, p < 0.001). Bland–Altman analysis showed close agreement for all hemodynamic parameters (bias 1–3%, limits of agreement 6–22%). Notably, limits of agreement between different human observers’ quantifications were moderate (4–20%). 
In addition, the pipeline 4D flow analysis closely reproduced hemodynamic differences between age-matched adult BAV patients and controls (median peak velocity: 1.74 m/s [automated] or 1.76 m/s [manual] BAV vs. 1.31 m/s [auto.] or 1.29 m/s [manu.] controls, p < 0.005; PWV: 6.4–6.6 m/s all groups, any processing [no significant differences]; kinetic energy: 4.9 μJ [auto.] or 5.0 μJ [manu.] BAV vs. 3.1 μJ [both] control, p < 0.005). This study presents a framework for the complete automation of quantitative 4D flow MRI data processing with a failure rate of less than 5%, offering improved measurement reliability. Future studies are warranted to reduce failure rates and evaluate pipeline performance across multiple centers. Full article
(This article belongs to the Special Issue Recent Advances in Cardiac MRI)
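The agreement statistics reported above (Pearson correlation, Bland–Altman bias and limits of agreement) can be sketched in a few lines; the peak-velocity pairs below are illustrative stand-ins, not the study's data:

```python
import numpy as np

def bland_altman(auto, manual):
    """Bias and 95% limits of agreement between two measurement series."""
    diff = np.asarray(auto, dtype=float) - np.asarray(manual, dtype=float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Illustrative automated vs. manual peak velocities (m/s).
auto = [1.74, 1.31, 2.05, 1.12, 1.60]
manual = [1.76, 1.29, 2.00, 1.15, 1.58]
bias, lo, hi = bland_altman(auto, manual)
r = np.corrcoef(auto, manual)[0, 1]   # Pearson correlation
```

Bias is the mean automated-minus-manual difference; the limits of agreement are bias ± 1.96 standard deviations of the differences, matching the convention used in the abstract.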

16 pages, 5703 KiB  
Article
Document Image Shadow Removal Based on Illumination Correction Method
by Depeng Gao, Wenjie Liu, Shuxi Chen, Jianlin Qiu, Xiangxiang Mei and Bingshu Wang
Algorithms 2025, 18(8), 468; https://doi.org/10.3390/a18080468 - 26 Jul 2025
Abstract
Due to diverse lighting conditions and photo environments, shadows are almost ubiquitous in images, especially document images captured with mobile devices. Shadows not only seriously affect the visual quality and readability of a document but also significantly hinder image processing. Although shadow removal research has achieved good results in natural scenes, specific studies on document images are lacking. To effectively remove shadows in document images, the dark illumination correction network is proposed, which mainly consists of two modules: shadow detection and illumination correction. First, a simplified shadow-corrected attention block is designed to combine spatial and channel attention, which is used to extract the features, detect the shadow mask, and correct the illumination. Then, the shadow detection block detects shadow intensity and outputs a soft shadow mask to determine the probability of each pixel belonging to shadow. Lastly, the illumination correction block corrects dark illumination with a soft shadow mask and outputs a shadow-free document image. Our experiments on five datasets show that the proposed method achieved state-of-the-art results, proving the effectiveness of illumination correction. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
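The final illumination-correction step can be illustrated with a small numpy sketch: a soft shadow mask (per-pixel shadow probability) drives a per-pixel brightness gain. The linear gain model and the `shadow_gain` value are illustrative assumptions, not the network's learned correction:

```python
import numpy as np

def correct_illumination(img, soft_mask, shadow_gain=2.0):
    """Brighten shadowed pixels in proportion to a soft shadow mask.

    img       : float image in [0, 1]
    soft_mask : per-pixel shadow probability in [0, 1] (1 = fully shadowed)
    The per-pixel gain interpolates between 1 (lit) and shadow_gain (shadowed).
    """
    gain = 1.0 + (shadow_gain - 1.0) * soft_mask
    return np.clip(img * gain, 0.0, 1.0)

# Toy document: lit paper at 0.9, a shadowed band at 0.45.
img = np.full((4, 4), 0.9)
img[:, 2:] = 0.45
mask = np.zeros((4, 4))
mask[:, 2:] = 1.0            # shadow probability 1 inside the dark band
out = correct_illumination(img, mask)
```

In the toy image the shadowed band (0.45) is restored to the lit level (0.9) because its mask probability is 1.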

17 pages, 2104 KiB  
Article
Rotational Projection Errors in Coronal Knee Alignment on Weight-Bearing Whole-Leg Radiographs: A 3D CT Reference Across CPAK Morphotypes
by Igor Strahovnik, Andrej Strahovnik and Samo Karel Fokter
Bioengineering 2025, 12(8), 794; https://doi.org/10.3390/bioengineering12080794 - 23 Jul 2025
Abstract
Whole-leg radiographs (WLRs) are widely used to assess coronal alignment before total knee arthroplasty (TKA), but may be inaccurate in patients with atypical morphotypes or malrotation. This study evaluated the discrepancy between WLR and 3D computed tomography (CT) scans across coronal plane alignment of the knee (CPAK) morphotypes and introduced a novel projection index—the femoral notch projection ratio (FNPR). In CPAK III knees, 19% of cases exceeded a clinically relevant threshold (>3° difference), prompting investigation of underlying projection factors. In 187 knees, coronal angles—including the medial distal femoral angle (MDFA°), medial proximal tibial angle (MPTA°), femoral mechanical angle (FMA°), and arithmetic hip–knee–ankle angle (aHKA°)—were measured using WLR and CT. Rotational positioning on WLR was assessed using FNPR and the patellar projection ratio (PPR). CPAK classification was applied. WLR systematically underestimated alignment, with the greatest bias in CPAK III (MDFA° + 1.5° ± 2.0°, p < 0.001). FNPR was significantly higher in CPAK III and VI (+1.9° vs. −0.3°, p < 0.001), indicating a tendency toward internally rotated limb positioning during imaging. The PPR–FNPR mismatch peaked in CPAK III (4.1°, p < 0.001), suggesting patellar-based centering may mask rotational malprojection. Projection artifacts from anterior osteophytes contributed to outlier measurements but were correctable. Valgus morphotypes with oblique joint lines (CPAK III) were especially prone to projection error. FNPR more accurately reflected rotational malposition than PPR in morphotypes prone to patellar subluxation. A 3D method (e.g., CT) or repeated imaging may be considered in CPAK III to improve surgical planning. Full article

29 pages, 10358 KiB  
Article
Smartphone-Based Sensing System for Identifying Artificially Marbled Beef Using Texture and Color Analysis to Enhance Food Safety
by Hong-Dar Lin, Yi-Ting Hsieh and Chou-Hsien Lin
Sensors 2025, 25(14), 4440; https://doi.org/10.3390/s25144440 - 16 Jul 2025
Abstract
Beef fat injection technology, used to enhance the perceived quality of lower-grade meat, often results in artificially marbled beef that mimics the visual traits of Wagyu, characterized by dense fat distribution. This practice, driven by the high cost of Wagyu and the affordability of fat-injected beef, has led to the proliferation of mislabeled “Wagyu-grade” products sold at premium prices, posing potential food safety risks such as allergen exposure or consumption of unverified additives, which can adversely affect consumer health. To address this, the study introduces a smart sensing system integrated with handheld mobile devices, enabling consumers to capture beef images during purchase for real-time health-focused assessment. The system analyzes surface texture and color, transmitting data to a server for classification to determine if the beef is artificially marbled, thus supporting informed dietary choices and reducing health risks. Images are processed by applying a region of interest (ROI) mask to remove background noise, followed by partitioning into grid blocks. Local binary pattern (LBP) texture features and RGB color features are extracted from these blocks to characterize surface properties of three beef types (Wagyu, regular, and fat-injected). A support vector machine (SVM) model classifies the blocks, with the final image classification determined via majority voting. Experimental results reveal that the system achieves a recall rate of 95.00% for fat-injected beef, a misjudgment rate of 1.67% for non-fat-injected beef, a correct classification rate (CR) of 93.89%, and an F1-score of 95.80%, demonstrating its potential as a human-centered healthcare tool for ensuring food safety and transparency. Full article
(This article belongs to the Section Physical Sensors)
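The block-wise pipeline above (LBP texture codes per block, then a majority vote over block predictions) can be sketched as follows; the LBP here is the standard 8-neighbour operator, and the toy label array stands in for the SVM's per-block outputs:

```python
import numpy as np

def lbp_image(gray):
    """Standard 8-neighbour local binary pattern code for each interior pixel."""
    c = gray[1:-1, 1:-1]
    neighbours = [gray[:-2, :-2], gray[:-2, 1:-1], gray[:-2, 2:],
                  gray[1:-1, 2:], gray[2:, 2:], gray[2:, 1:-1],
                  gray[2:, :-2], gray[1:-1, :-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        code |= (n >= c).astype(np.uint8) << bit   # one bit per neighbour
    return code

def majority_vote(block_labels):
    """Final image label = most frequent per-block prediction."""
    values, counts = np.unique(block_labels, return_counts=True)
    return values[np.argmax(counts)]

gray = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 intensity ramp
codes = lbp_image(gray)                           # 3x3 grid of LBP codes
label = majority_vote(np.array(["fat_injected", "wagyu", "fat_injected"]))
```

In practice the LBP codes are histogrammed per grid block and concatenated with RGB statistics before classification; the voting step is exactly as shown.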

13 pages, 4530 KiB  
Article
Clinical Validation of a Computed Tomography Image-Based Machine Learning Model for Segmentation and Quantification of Shoulder Muscles
by Hamidreza Rajabzadeh-Oghaz, Josie Elwell, Bradley Schoch, William Aibinder, Bruno Gobbato, Daniel Wessell, Vikas Kumar and Christopher P. Roche
Algorithms 2025, 18(7), 432; https://doi.org/10.3390/a18070432 - 14 Jul 2025
Abstract
Introduction: We developed a computed tomography (CT)-based tool designed for automated segmentation of deltoid muscles, enabling quantification of radiomic features and muscle fatty infiltration. Prior to use in a clinical setting, this machine learning (ML)-based segmentation algorithm requires rigorous validation. The aim of this study is to conduct shoulder expert validation of a novel deltoid ML auto-segmentation and quantification tool. Materials and Methods: A SwinUnetR-based ML model trained on labeled CT scans is validated by three expert shoulder surgeons for 32 unique patients. The validation evaluates the quality of the auto-segmented deltoid images. Specifically, each of the three surgeons reviewed the auto-segmented masks relative to CT images, rated masks for clinical acceptance, and performed a correction on the ML-generated deltoid mask if the ML mask did not completely contain the full deltoid muscle, or if the ML mask included any tissue other than the deltoid. Non-inferiority of the ML model was assessed by comparing ML-generated versus surgeon-corrected deltoid masks against the inter-surgeon variation in metrics such as volume and fatty infiltration. Results: The results of our expert shoulder surgeon validation demonstrate that 97% of ML-generated deltoid masks were clinically acceptable. Only two of the ML-generated deltoid masks required major corrections and only one was deemed clinically unacceptable. These corrections had little impact on the deltoid measurements, as the median error in the volume and fatty infiltration measurements was <1% between the ML-generated deltoid masks and the surgeon-corrected deltoid masks. The non-inferiority analysis demonstrates no significant difference between ML-generated and surgeon-corrected masks relative to inter-surgeon variations. 
Conclusions: Shoulder expert validation of this CT image analysis tool demonstrates clinically acceptable performance for deltoid auto-segmentation, with no significant differences observed between deltoid image-based measurements derived from the ML-generated masks and those corrected by surgeons. These findings suggest that this CT image analysis tool has potential to reliably quantify deltoid muscle size, shape, and quality. Incorporating these CT image-based measurements into the pre-operative planning process may facilitate more personalized treatment decision making and help orthopedic surgeons make more evidence-based clinical decisions. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (3rd Edition))
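Comparing an ML-generated mask against a surgeon-corrected one typically reduces to overlap and volume metrics. A minimal sketch with illustrative 2-D masks (the study works on 3-D CT volumes); the Dice coefficient is an assumed choice of overlap metric, not one quoted in the abstract:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks (1.0 = identical)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def volume_error_pct(mask, reference, voxel_volume=1.0):
    """Percent volume difference of a mask against a reference mask."""
    v_m = mask.sum() * voxel_volume
    v_ref = reference.sum() * voxel_volume
    return 100.0 * abs(v_m - v_ref) / v_ref

ml = np.zeros((10, 10), dtype=bool)
ml[2:8, 2:8] = True                    # 36 "voxels" from the model
corrected = ml.copy()
corrected[2, 2] = False                # surgeon trims one voxel
d = dice(ml, corrected)
err = volume_error_pct(ml, corrected)
```

The abstract's <1% median volume error corresponds to near-unity Dice overlap in this formulation.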

25 pages, 13659 KiB  
Article
Adaptive Guided Filtering and Spectral-Entropy-Based Non-Uniformity Correction for High-Resolution Infrared Line-Scan Images
by Mingsheng Huang, Yanghang Zhu, Qingwu Duan, Yaohua Zhu, Jingyu Jiang and Yong Zhang
Sensors 2025, 25(14), 4287; https://doi.org/10.3390/s25144287 - 9 Jul 2025
Abstract
Stripe noise along the scanning direction significantly degrades the quality of high-resolution infrared line-scan images and impairs downstream tasks such as target detection and radiometric analysis. This paper presents a lightweight, single-frame, reference-free non-uniformity correction (NUC) method tailored for such images. The proposed approach enhances the directionality of stripe noise by projecting the 2D image into a 1D row-mean signal, followed by adaptive guided filtering driven by local median absolute deviation (MAD) to ensure spatial adaptivity and structure preservation. A spectral-entropy-constrained frequency-domain masking strategy is further introduced to suppress periodic and non-periodic interference. Extensive experiments on simulated and real datasets demonstrate that the method consistently outperforms six state-of-the-art algorithms across multiple metrics while maintaining the fastest runtime. The proposed method is highly suitable for real-time deployment in airborne, satellite-based, and embedded infrared imaging systems. It provides a robust and interpretable framework for future infrared enhancement tasks. Full article
(This article belongs to the Section Optical Sensors)
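The core projection step, collapsing the 2-D image to a 1-D row-mean signal and subtracting its high-frequency residual as the per-row stripe offset, can be sketched as below. A plain moving average stands in for the paper's MAD-driven adaptive guided filter, and the spectral-entropy masking stage is omitted:

```python
import numpy as np

def destripe_rows(img, win=5):
    """Subtract per-row stripe offsets estimated from the 1-D row-mean signal."""
    row_means = img.mean(axis=1)
    smooth = np.convolve(row_means, np.ones(win) / win, mode="same")
    stripe = row_means - smooth          # high-frequency residual = stripe offsets
    return img - stripe[:, None]

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 32), (64, 1))   # smooth scene
noisy = clean + rng.normal(0.0, 0.2, size=(64, 1))    # per-row stripe offsets
out = destripe_rows(noisy)
```

Because the stripes are constant along each row, the 1-D projection concentrates them into the row-mean signal, where they can be separated from the (smooth) scene content.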

22 pages, 2705 KiB  
Article
Applying Reinforcement Learning to Protect Deep Neural Networks from Soft Errors
by Peng Su, Yuhang Li, Zhonghai Lu and Dejiu Chen
Sensors 2025, 25(13), 4196; https://doi.org/10.3390/s25134196 - 5 Jul 2025
Abstract
With the advance of Artificial Intelligence, Deep Neural Networks are widely employed in various sensor-based systems to analyze operational conditions. However, due to the inherently nondeterministic and probabilistic nature of neural networks, the assurance of overall system performance could become a challenging task. In particular, soft errors could weaken the robustness of such networks and thereby threaten the system’s safety. Conventional fault-tolerant techniques by means of hardware redundancy and software correction mechanisms often involve a tricky trade-off between effectiveness and scalability in addressing the extensive design space of Deep Neural Networks. In this work, we propose a Reinforcement-Learning-based approach to protect neural networks from soft errors by identifying and protecting the vulnerable bits. The approach consists of three key steps: (1) analyzing layer-wise resiliency of Deep Neural Networks by a fault injection simulation; (2) generating layer-wise bit masks by a Reinforcement-Learning-based agent to reveal the vulnerable bits and to protect against them; and (3) synthesizing and deploying bit masks across the network with guaranteed operation efficiency by adopting transfer learning. As a case study, we select several existing neural networks to test and validate the design. The performance of the proposed approach is compared with that of baseline methods, including Hamming code and Most Significant Bits protection schemes. The results indicate a significant improvement: the proposed method achieves a performance gain of at least 10% to 15% over the baseline methods on the test networks, and it protects the vulnerable bits dynamically and efficiently. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
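Why bit position determines vulnerability can be shown with single-bit fault injection on float32 weights, the kind of simulation step (1) relies on. This is a sketch of the general idea, not the paper's framework:

```python
import numpy as np

def flip_bit(weights, index, bit):
    """Inject a soft error: flip one bit of one float32 weight in place."""
    flat = weights.reshape(-1).view(np.uint32)   # reinterpret the same memory
    flat[index] ^= np.uint32(1) << np.uint32(bit)

w = np.ones(4, dtype=np.float32)
flip_bit(w, 0, 30)   # high exponent bit: 1.0 becomes inf
flip_bit(w, 1, 0)    # lowest mantissa bit: change of about 1.2e-7
```

A single exponent-bit flip corrupts the value catastrophically while a mantissa-bit flip is negligible, which is why learned bit masks concentrate protection on the high-order bits.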

26 pages, 6653 KiB  
Article
Development of a Calibration Procedure of the Additive Masked Stereolithography Method for Improving the Accuracy of Model Manufacturing
by Paweł Turek, Anna Bazan, Paweł Kubik and Michał Chlost
Appl. Sci. 2025, 15(13), 7412; https://doi.org/10.3390/app15137412 - 1 Jul 2025
Abstract
The article presents a three-stage methodology for calibrating 3D printing using mSLA technology, aimed at improving dimensional accuracy and print repeatability. The proposed approach is based on procedures that enable the collection and analysis of numerical data, thereby minimizing the influence of the operator’s subjective judgment, which is commonly relied upon in traditional calibration methods. In the first stage, compensation for the uneven illumination of the LCD matrix was performed by establishing a regression model that describes the relationship between UV radiation intensity and pixel brightness. Based on this model, a grayscale correction mask was developed. The second stage focused on determining the optimal exposure time, based on its effect on dimensional accuracy, detail reproduction, and model strength. The optimal exposure time is defined as the duration that provides the highest possible mechanical strength without significant loss of detail due to the light bleed phenomenon (i.e., diffusion of UV radiation beyond the mask edge). In the third stage, scale correction was applied to compensate for shrinkage and geometric distortions, further reducing the impact of light bleed on the dimensional fidelity of printed components. The proposed methodology was validated using an Anycubic Photon M3 Premium printer with Anycubic ABS-Like Resin Pro 2.0. Compensating for light intensity variation reduced the original standard deviation from 0.26 to 0.17 mW/cm², corresponding to a decrease of more than one third. The methodology reduced surface displacement due to shrinkage from 0.044% to 0.003%, and the residual internal dimensional error from 0.159 mm to 0.017 mm (a 72% reduction). Full article
(This article belongs to the Section Additive Manufacturing Technologies)
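The first calibration stage, regressing UV intensity against LCD grey level and inverting the model into a grayscale correction mask, can be sketched as follows; all numbers are illustrative, and a linear panel response is assumed:

```python
import numpy as np

# Illustrative calibration: measured UV intensity vs. LCD grey level
# (assumed roughly linear; not the paper's measured data).
grey = np.array([128.0, 160.0, 192.0, 224.0, 255.0])
intensity = np.array([1.10, 1.42, 1.74, 2.05, 2.36])    # mW/cm^2
slope, offset = np.polyfit(grey, intensity, 1)

# Full-brightness intensity map over the LCD (uneven illumination).
i_map = np.array([[2.36, 2.20],
                  [2.10, 2.30]])
target = i_map.min()                     # equalise down to the weakest zone
full_resp = slope * 255.0 + offset
# Grey level per pixel whose modelled output matches the target intensity.
mask = ((target / i_map) * full_resp - offset) / slope
```

By construction the weakest zone maps to full brightness (grey 255) and brighter zones are dimmed until the modelled intensity is uniform.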

30 pages, 3461 KiB  
Article
A Privacy-Preserving Record Linkage Method Based on Secret Sharing and Blockchain
by Shumin Han, Zikang Wang, Qiang Zhao, Derong Shen, Chuang Wang and Yangyang Xue
Appl. Syst. Innov. 2025, 8(4), 92; https://doi.org/10.3390/asi8040092 - 28 Jun 2025
Abstract
Privacy-preserving record linkage (PPRL) aims to link records from different data sources while ensuring sensitive information is not disclosed. Utilizing blockchain as a trusted third party is an effective strategy for enhancing transparency and auditability in PPRL. However, to ensure data privacy during computation, such approaches often require computationally intensive cryptographic techniques. This can introduce significant computational overhead, limiting the method’s efficiency and scalability. To address this performance bottleneck, we combine blockchain with the distributed computation of secret sharing to propose a PPRL method based on blockchain-coordinated distributed computation. At its core, the approach utilizes Bloom filters to encode data and employs Boolean and arithmetic secret sharing to decompose the data into secret shares, which are uploaded to the InterPlanetary File System (IPFS). Combined with masking and random permutation mechanisms, it enhances privacy protection. Computing nodes perform similarity calculations locally, interacting with IPFS only a limited number of times, effectively reducing communication overhead. Furthermore, blockchain manages the entire computation process through smart contracts, ensuring transparency and correctness of the computation, achieving efficient and secure record linkage. Experimental results demonstrate that this method effectively safeguards data privacy while exhibiting high linkage quality and scalability. Full article
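The encoding step, Bloom-filter encoding of a record followed by splitting into additive secret shares, can be sketched as below; the filter size, hash scheme, and two-share split are illustrative choices, not the paper's exact parameters:

```python
import hashlib
import numpy as np

def bloom_encode(name, m=64, k=3):
    """Encode a string's character bigrams into an m-bit Bloom filter."""
    bf = np.zeros(m, dtype=np.int64)
    for gram in {name[i:i + 2] for i in range(len(name) - 1)}:
        for seed in range(k):
            h = hashlib.sha256(f"{seed}:{gram}".encode()).digest()
            bf[int.from_bytes(h[:8], "big") % m] = 1
    return bf

def share(bits, modulus=2**31 - 1, rng=None):
    """Split a bit vector into two additive secret shares modulo a prime."""
    rng = rng or np.random.default_rng()
    s1 = rng.integers(0, modulus, size=bits.shape)
    s2 = (bits - s1) % modulus
    return s1, s2

bf = bloom_encode("john smith")
s1, s2 = share(bf, rng=np.random.default_rng(1))
recovered = (s1 + s2) % (2**31 - 1)   # only the sum reveals the filter
```

Either share alone is uniformly random; the similarity computation in the paper operates on such shares (stored on IPFS) without ever reconstructing the filters on a single node.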

25 pages, 9860 KiB  
Article
Indoor Dynamic Environment Mapping Based on Semantic Fusion and Hierarchical Filtering
by Yiming Li, Luying Na, Xianpu Liang and Qi An
ISPRS Int. J. Geo-Inf. 2025, 14(7), 236; https://doi.org/10.3390/ijgi14070236 - 21 Jun 2025
Abstract
To address the challenges of dynamic object interference and redundant information representation in map construction for indoor dynamic environments, this paper proposes an indoor dynamic environment mapping method based on semantic fusion and hierarchical filtering. First, prior dynamic object masks are obtained using the YOLOv8 model, and geometric constraints between prior static objects and dynamic regions are introduced to identify non-prior dynamic objects, thereby eliminating all dynamic features (both prior and non-prior). Second, an initial semantic point cloud map is constructed by integrating prior static features from a semantic segmentation network with pose estimates from an RGB-D camera. Dynamic noise is then removed using statistical outlier removal (SOR) filtering, while voxel filtering optimizes point cloud density, generating a compact yet texture-rich semantic dense point cloud map with minimal dynamic artifacts. Subsequently, a multi-resolution semantic octree map is built using a recursive spatial partitioning algorithm. Finally, point cloud poses are corrected via Transform Frame (TF) transformation, and a 2D traversability grid map is generated using passthrough filtering and grid projection. Experimental results demonstrate that the proposed method constructs multi-level semantic maps with rich information, clear structure, and high reliability in indoor dynamic scenarios. Additionally, the map file size is compressed by 50–80%, significantly enhancing the reliability of mobile robot navigation and the efficiency of path planning. Full article
(This article belongs to the Special Issue Indoor Mobile Mapping and Location-Based Knowledge Services)
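The two map-filtering stages above, statistical outlier removal (SOR) followed by voxel downsampling, can be sketched in plain numpy; a real pipeline would use a point cloud library, and the brute-force k-NN distances here are only viable for small clouds:

```python
import numpy as np

def sor_filter(points, k=8, std_ratio=1.0):
    """Drop points whose mean k-NN distance is an outlier of the global distribution."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    knn_mean = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)   # column 0 is self
    keep = knn_mean <= knn_mean.mean() + std_ratio * knn_mean.std()
    return points[keep]

def voxel_downsample(points, voxel=0.1):
    """Keep one representative point (the centroid) per occupied voxel."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    return np.array([points[inv == i].mean(axis=0) for i in range(inv.max() + 1)])

rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 1.0, size=(200, 3))        # dense static scene
cloud = np.vstack([cloud, [[5.0, 5.0, 5.0]]])       # dynamic-noise straggler
filtered = sor_filter(cloud)
compact = voxel_downsample(filtered, voxel=0.25)
```

SOR removes isolated dynamic-noise points, and voxel downsampling is what drives the 50–80% map-size compression reported in the abstract.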

35 pages, 8283 KiB  
Article
PIABC: Point Spread Function Interpolative Aberration Correction
by Chanhyeong Cho, Chanyoung Kim and Sanghoon Sull
Sensors 2025, 25(12), 3773; https://doi.org/10.3390/s25123773 - 17 Jun 2025
Abstract
Image quality in high-resolution digital single-lens reflex (DSLR) systems is degraded by Complementary Metal-Oxide-Semiconductor (CMOS) sensor noise and optical imperfections. Sensor noise becomes pronounced under high-ISO (International Organization for Standardization) settings, while optical aberrations such as blur and chromatic fringing distort the signal. Optical and sensor-level noise are distinct and hard to separate, but prior studies suggest that improving optical fidelity can suppress or mask sensor noise. Building on this understanding, we introduce a framework that utilizes densely interpolated Point Spread Functions (PSFs) to recover high-fidelity images. The process begins by simulating Gaussian-based PSFs as pixel-wise chromatic and spatial distortions derived from real degraded images. These PSFs are then encoded into a latent space to enhance their features and used to generate refined PSFs via similarity-weighted interpolation at each target position. The interpolated PSFs are applied through Wiener filtering, followed by residual correction, to restore images with improved structural fidelity and perceptual quality. We compare our method, which applies pixel-wise physical correction with densely interpolated PSFs at pre-processing, with post-processing networks, including deformable convolutional neural networks (CNNs) that enhance image quality without modeling degradation. Evaluations on DIV2K and RealSR-V3 confirm that our strategy not only enhances structural restoration but also more effectively suppresses sensor-induced artifacts, demonstrating the benefit of explicit physical priors for perceptual fidelity. Full article
(This article belongs to the Special Issue Sensors for Pattern Recognition and Computer Vision)
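The restoration step, applying a PSF through a Wiener filter, can be sketched as follows; a single Gaussian PSF stands in for the densely interpolated per-pixel PSFs, and the regularization constant `k` is an assumed stand-in for the noise-to-signal term:

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Centred, normalised Gaussian PSF (stand-in for the interpolated PSFs)."""
    y, x = np.indices(shape) - (np.array(shape)[:, None, None] - 1) / 2
    psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return psf / psf.sum()

def wiener_deconvolve(img, psf, k=1e-3):
    """Frequency-domain Wiener filter: F = conj(H) / (|H|^2 + k) * G."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=img.shape)
    G = np.fft.fft2(img)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + k) * G))

psf = gaussian_psf((33, 33), sigma=2.0)
img = np.zeros((33, 33))
img[16, 16] = 1.0
img[8, 8] = 1.0                     # two point sources
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf, k=1e-6)
```

The `k` term suppresses frequencies where the PSF response is weak, trading perfect inversion for noise robustness; the paper's residual-correction stage then cleans up what the filter cannot recover.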

20 pages, 6756 KiB  
Article
Optimization of Film Thickness Uniformity in Hemispherical Resonator Coating Process Based on Simulation and Reinforcement Learning Algorithms
by Jingyu Pan, Dongsheng Zhang, Shijie Liu, Jianguo Wang and Jianda Shao
Coatings 2025, 15(6), 700; https://doi.org/10.3390/coatings15060700 - 10 Jun 2025
Abstract
Hemispherical resonator gyroscopes (HRGs) are critical components in high-precision inertial navigation systems, typically used in fields such as navigation, weaponry, and deep space exploration. Film thickness uniformity affects device performance through its impact on the resonator’s Q value. Due to the irregular structure of the resonator, there has been limited research on the uniformity of film thickness on the inner wall of the resonator. This study addresses the challenge of thickness non-uniformity in metallization coatings, particularly in the meridional direction of the resonator. By integrating COMSOL-based finite element simulations with reinforcement learning-driven optimization through the Proximal Policy Optimization (PPO) algorithm, a new paradigm for coating process optimization is established. Furthermore, a correction mask is designed to address the issue of low coating rate. Finally, a Zygo white-light interferometer is used to measure film thickness uniformity. The results show that the optimized coating process achieves a film thickness uniformity of 11.0% in the meridional direction across the resonator. This study provides useful information and guidelines for the design and optimization of the coating process for hemispherical resonators, and the presented optimization method constitutes a process flow framework that can also be used for precision coating engineering in semiconductor components and optical elements. Full article
(This article belongs to the Special Issue AI-Driven Surface Engineering and Coating)
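Film thickness uniformity can be quantified in several ways; one common convention, 100 * (max - min) / (max + min), is sketched below on an illustrative meridional profile (not the paper's measured data, and the paper's exact definition is not stated in the abstract):

```python
import numpy as np

def uniformity_pct(thickness):
    """Non-uniformity as 100 * (max - min) / (max + min), one common convention."""
    t = np.asarray(thickness, dtype=float)
    return 100.0 * (t.max() - t.min()) / (t.max() + t.min())

# Illustrative meridional thickness profile (nm).
profile = np.array([100.0, 97.0, 94.0, 90.0, 88.0, 86.0, 84.0, 80.0])
u = uniformity_pct(profile)   # about 11.1% for this profile
```

A Zygo white-light interferometer, as used in the study, produces exactly such thickness profiles as input.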

20 pages, 454 KiB  
Article
Large Language Model Text Adversarial Defense Method Based on Disturbance Detection and Error Correction
by Lei Che, Chengcong Wu and Yan Hou
Electronics 2025, 14(11), 2267; https://doi.org/10.3390/electronics14112267 - 31 May 2025
Cited by 1
Abstract
This study aims to effectively defend against complex and diverse adversarial attacks, enhance adversarial defense performance in the field of textual adversarial examples, and address existing issues in current defense methods such as models’ inability to accurately detect perturbations and perform effective error correction. To this end, we propose a Large Language Model Adversarial Defense (LLMAD) method based on perturbation detection and correction. The LLMAD framework consists of two modules: a perturbation detection module and a perturbation correction module. The detection module combines pre-trained models with soft-masking layers to rapidly determine whether samples contain adversarial perturbations. The correction module employs large language models fine-tuned using an adversarial example perturbation dataset constructed through data augmentation, enhancing task performance in adversarial defense and enabling efficient and accurate error correction of adversarial samples. Experimental results demonstrate that, across multiple datasets, various target models defended through the LLMAD method achieve satisfactory defense effectiveness against different types of adversarial attacks. The classification accuracy of defended models shows an average improvement of 66.8% compared to undefended baselines and outperforms existing methods by 13.4% on average. Additional perturbation detection performance tests and ablation studies further validate the accuracy of detection capabilities and the effectiveness of module combinations. A series of experiments confirm that the LLMAD method can significantly enhance defensive effectiveness against textual adversarial attacks. Full article
(This article belongs to the Special Issue AI-Enhanced Security: Advancing Threat Detection and Defense)
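The two-stage detect-then-correct flow described above can be sketched in miniature. This is a hedged, toy illustration only: the function names, the vocabulary-based suspicion score, and the lookup-table corrector are hypothetical stand-ins for the paper's pre-trained soft-masking detector and fine-tuned LLM corrector.

```python
# Toy sketch of a detect-then-correct adversarial defense flow.
# The suspicion scorer and lookup corrector are illustrative stand-ins,
# NOT the LLMAD paper's actual soft-masking model or fine-tuned LLM.

def soft_mask_scores(tokens, vocab):
    """Assign each token a suspicion score in [0, 1]:
    1.0 for out-of-vocabulary tokens, 0.0 for known ones."""
    return [0.0 if t in vocab else 1.0 for t in tokens]

def detect_perturbation(tokens, vocab, threshold=0.3):
    """Flag a sample as adversarial when the mean suspicion
    score exceeds the threshold (detector-module stand-in)."""
    scores = soft_mask_scores(tokens, vocab)
    return sum(scores) / max(len(scores), 1) > threshold

def correct_sample(tokens, vocab, corrections):
    """Corrector-module stand-in: replace each flagged token
    with a known correction when one is available."""
    return [corrections.get(t, t) if t not in vocab else t
            for t in tokens]

vocab = {"the", "movie", "was", "great"}
corrections = {"m0vie": "movie", "gr8t": "great"}
sample = ["the", "m0vie", "was", "gr8t"]

cleaned = (correct_sample(sample, vocab, corrections)
           if detect_perturbation(sample, vocab) else sample)
print(" ".join(cleaned))  # -> the movie was great
```

The key design point the sketch preserves is that detection is cheap and runs on every sample, while the (expensive) corrector runs only on samples the detector flags.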

39 pages, 13529 KiB  
Article
Intelligent Monitoring of BECS Conveyors via Vision and the IoT for Safety and Separation Efficiency
by Shohreh Kia and Benjamin Leiding
Appl. Sci. 2025, 15(11), 5891; https://doi.org/10.3390/app15115891 - 23 May 2025
Viewed by 695
Abstract
Conveyor belts are critical in various industries, particularly in the barrier eddy current separator systems used in recycling processes. However, hidden issues, such as belt misalignment, excessive heat that can lead to fire hazards, and the presence of sharp or irregularly shaped materials, reduce operational efficiency and pose serious threats to the health and safety of personnel on the production floor. This study presents an intelligent monitoring and protection system for barrier eddy current separator conveyor belts designed to safeguard machinery and human workers simultaneously. In this system, a thermal camera continuously monitors the surface temperature of the conveyor belt, especially in the area above the magnetic drum—where unwanted ferromagnetic materials can lead to abnormal heating and potential fire risks. The system detects temperature anomalies in this critical zone. The early detection of these risks triggers audio–visual alerts and IoT-based warning messages that are sent to technicians, which is vital in preventing fire-related injuries and minimizing emergency response time. Simultaneously, a machine vision module autonomously detects and corrects belt misalignment, eliminating the need for manual intervention and reducing the risk of worker exposure to moving mechanical parts. Additionally, a line-scan camera integrated with the YOLOv11 AI model analyses the shape of materials on the conveyor belt, distinguishing between rounded and sharp-edged objects. This system enhances the accuracy of material separation and reduces the likelihood of injuries caused by the impact or ejection of sharp fragments during maintenance or handling. The YOLOv11n-seg model implemented in this system achieved a segmentation mask precision of 84.8 percent and a recall of 84.5 percent in industry evaluations. 
Based on this high segmentation accuracy and consistent detection of sharp particles, the system is expected to substantially reduce the frequency of sharp object collisions with the BECS conveyor belt, thereby minimizing mechanical wear and potential safety hazards. By integrating these intelligent capabilities into a compact, cost-effective solution suitable for real-world recycling environments, the proposed system contributes significantly to improving workplace safety and equipment longevity. This project demonstrates how digital transformation and artificial intelligence can play a pivotal role in advancing occupational health and safety in modern industrial production. Full article
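The thermal-monitoring logic over the magnetic-drum zone can be sketched as a simple threshold check on a region of a thermal frame. This is a minimal illustration under stated assumptions: the frame format (a 2D grid of Celsius readings), the zone coordinates, and the 80 °C limit are all hypothetical, and the real system feeds the alert into audio-visual and IoT warning channels rather than returning a dict.

```python
# Minimal sketch of zone-based thermal anomaly detection for a conveyor.
# Frame layout, zone coordinates, and the temperature limit are assumed
# for illustration; the paper's system uses a thermal camera feed.

def zone_max_temp(frame, zone):
    """Return the hottest reading inside a rectangular zone.
    `frame` is a 2D list of Celsius values; `zone` is
    ((row_start, row_end), (col_start, col_end)), end-exclusive."""
    (r0, r1), (c0, c1) = zone
    return max(frame[r][c] for r in range(r0, r1)
                           for c in range(c0, c1))

def check_drum_zone(frame, zone, limit_c=80.0):
    """Return an alert status when the drum zone exceeds the limit,
    mimicking the trigger for audio-visual / IoT warnings."""
    t = zone_max_temp(frame, zone)
    return {"alert": t > limit_c, "max_temp_c": t}

# Toy 4x6 thermal frame; the drum zone covers rows 1-2, cols 2-4.
frame = [
    [25, 26, 27, 25, 24, 25],
    [26, 28, 95, 90, 30, 26],
    [25, 27, 88, 85, 29, 25],
    [24, 25, 26, 25, 24, 24],
]
status = check_drum_zone(frame, zone=((1, 3), (2, 5)))
print(status)  # -> {'alert': True, 'max_temp_c': 95}
```

In practice this check would run per frame on the region above the magnetic drum, where trapped ferromagnetic material causes the abnormal heating the abstract describes.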

30 pages, 10008 KiB  
Article
Integrating Stride Attention and Cross-Modality Fusion for UAV-Based Detection of Drought, Pest, and Disease Stress in Croplands
by Yan Li, Yaze Wu, Wuxiong Wang, Huiyu Jin, Xiaohan Wu, Jinyuan Liu, Chen Hu and Chunli Lv
Agronomy 2025, 15(5), 1199; https://doi.org/10.3390/agronomy15051199 - 15 May 2025
Viewed by 586
Abstract
Timely and accurate detection of agricultural disasters is crucial for ensuring food security and enhancing post-disaster response efficiency. This paper proposes a deployable UAV-based multimodal agricultural disaster detection framework that integrates multispectral and RGB imagery to simultaneously capture the spectral responses and spatial structural features of affected crop regions. To this end, we design an innovative stride–cross-attention mechanism, in which stride attention is utilized for efficient spatial feature extraction, while cross-attention facilitates semantic fusion between heterogeneous modalities. The experimental data were collected from representative wheat and maize fields in Inner Mongolia, using UAVs equipped with synchronized multispectral (red, green, blue, red edge, near-infrared) and high-resolution RGB sensors. Through a combination of image preprocessing, geometric correction, and various augmentation strategies (e.g., MixUp, CutMix, GridMask, RandAugment), the quality and diversity of the training samples were significantly enhanced. The model trained on the constructed dataset achieved an accuracy of 93.2%, an F1 score of 92.7%, a precision of 93.5%, and a recall of 92.4%, substantially outperforming mainstream models such as ResNet50, EfficientNet-B0, and ViT across multiple evaluation metrics. Ablation studies further validated the critical role of the stride attention and cross-attention modules in performance improvement. This study demonstrates that the integration of lightweight attention mechanisms with multimodal UAV remote sensing imagery enables efficient, accurate, and scalable agricultural disaster detection under complex field conditions. Full article
(This article belongs to the Special Issue New Trends in Agricultural UAV Application—2nd Edition)
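The cross-attention half of the fusion described above (multispectral tokens attending over RGB tokens) can be sketched with a single-head, list-based toy implementation. This is a hedged sketch only: real token features, learned projection weights, and the stride-attention component for efficient spatial extraction are omitted, and all dimensions here are illustrative.

```python
# Toy single-head cross-modality attention: multispectral queries attend
# over RGB key/value tokens. Learned projections and the paper's stride
# attention are omitted; all features below are illustrative.
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(queries, keys, values):
    """For each query token, compute scaled dot-product scores
    against all keys and return the weighted sum of values."""
    d = len(keys[0])
    fused = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        fused.append([sum(wi * v[j] for wi, v in zip(w, values))
                      for j in range(len(values[0]))])
    return fused

# 2 multispectral query tokens, 3 RGB key/value tokens (dim 2).
ms_tokens = [[1.0, 0.0], [0.0, 1.0]]
rgb_keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
rgb_vals = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
out = cross_attention(ms_tokens, rgb_keys, rgb_vals)
print([[round(x, 2) for x in row] for row in out])
```

The semantic point the sketch captures is that each multispectral token pulls more feature mass from the RGB tokens it aligns with, which is how the mechanism fuses spectral responses with spatial structure.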
