Search Results (436)

Search Parameters:
Keywords = fusion software

30 pages, 22991 KB  
Article
Intelligent Fault Detection in the Mechanical Structure of a Wheeled Mobile Robot
by Viorel Ionuț Gheorghe, Laurențiu Adrian Cartal, Constantin Daniel Comeagă, Bogdan-Costel Mocanu, Alexandra Rotaru, Mircea-Iulian Nistor, Mihai-Vlad Vartic and Ștefana Arina Tăbușcă
Technologies 2026, 14(1), 25; https://doi.org/10.3390/technologies14010025 (registering DOI) - 1 Jan 2026
Abstract
This paper establishes an integrated framework combining self-induced vibration measurements with deep learning for vibration-based remaining useful life (RUL) prediction of mechanical frame structures in mobile robots. The main innovations comprise (1) a self-induced vibration excitation system that utilizes the robot’s drive wheels to generate controlled mechanical oscillations, using a five-sensor micro-electro-mechanical system (MEMS) accelerometer array to capture non-uniform vibration mode shapes across the robot’s structure, and (2) a processing pipeline for RUL prediction using accelerometer data and early feature fusion in two machine-learning models (long short-term memory (LSTM) and a convolutional neural network (CNN)). Our research methodology includes (i) modal analysis to identify the robot’s natural frequencies, (ii) verification platform evaluation, comparing low-cost MEMS accelerometers against a reference integrated electronic piezoelectric (IEPE) accelerometer, demonstrating industrial-grade measurement quality (coherence > 98%, uncertainty 4.79–7.21%), and (iii) data-driven validation using real data from the mechanical frame, showing that the LSTM model outperforms the CNN with a 2.61× root-mean-square error (RMSE) improvement (R² = 0.99). Our solution demonstrates that early feature fusion provides sufficient information to model degradation and detect faults early at a lower cost, offering a feasible alternative to classical maintenance procedures through combined hardware validation and lightweight software suitable for Industrial Internet-of-Things (IIoT) deployment. Full article
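The early feature fusion described above can be sketched in a few lines: features are extracted per accelerometer channel and concatenated into one vector before the LSTM or CNN sees it. A minimal illustration with toy RMS/peak features (the function names and the feature choice are hypothetical, not the authors' pipeline):

```python
import math

def sensor_features(samples):
    """Per-channel features: RMS and peak absolute amplitude."""
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    peak = max(abs(x) for x in samples)
    return [rms, peak]

def early_fusion(windows_by_sensor):
    """Early feature fusion: concatenate every sensor's feature
    vector into one input vector for the downstream model."""
    fused = []
    for samples in windows_by_sensor:
        fused.extend(sensor_features(samples))
    return fused

# Five-sensor MEMS array, one toy time window per sensor.
vec = early_fusion([[0.0, 1.0, -1.0, 0.0]] * 5)
print(len(vec))  # 2 features x 5 sensors = 10
```

The fused vector is then the per-timestep input to an LSTM or the per-window input to a CNN.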
19 pages, 11278 KB  
Article
Design and Experimental Validation of a Round Inductosyn-Based Angular Measurement System
by Jian Wang, Jianyuan Wang, Jinbao Chen, Chukang Zhong and Yuankui Shao
Micromachines 2026, 17(1), 5; https://doi.org/10.3390/mi17010005 - 20 Dec 2025
Viewed by 214
Abstract
This paper presents the design, implementation, and experimental validation of a high-precision angular measurement system based on a round inductosyn. Dedicated hardware circuits, including excitation, signal conditioning, and resolver-to-digital conversion modules, together with software algorithms for coarse–fine data fusion and linear interpolation-based error compensation, are developed to achieve accurate and stable angular measurement. Experimental results obtained from repeated measurements over a full rotation demonstrate reliable system operation and effective suppression of nonlinear errors. After compensation, the residual angular error is limited to within ±3″, while measurement consistency across repeated experiments is significantly improved. The output angle exhibits good continuity and stability, confirming the feasibility and effectiveness of the proposed system for high-precision servo control and aerospace attitude measurement applications. Full article
(This article belongs to the Special Issue Recent Advances in Electromagnetic Devices, 2nd Edition)
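The coarse–fine data fusion step mentioned in the abstract is, in generic form, the classic two-speed angle combination: the coarse absolute channel selects which cycle of the repeating fine track the shaft is in. A sketch under that assumption (illustrative, not the paper's exact algorithm):

```python
def coarse_fine_fusion(coarse_deg, fine_deg, pole_pairs):
    """Two-speed angle combination: the fine channel repeats
    `pole_pairs` times per revolution; the coarse absolute channel
    only needs to be accurate to within half a fine cycle to pick
    the correct cycle index k."""
    cycle = 360.0 / pole_pairs         # mechanical span of one fine cycle
    fine_mech = fine_deg / pole_pairs  # fine reading in mechanical degrees
    k = round((coarse_deg - fine_mech) / cycle)
    return (k * cycle + fine_mech) % 360.0

# 360 pole pairs -> each fine cycle spans 1 mechanical degree.
# True angle 123.4567 deg gives a fine reading of 164.412 deg.
print(coarse_fine_fusion(123.46, 164.412, 360))  # ~123.4567
```

The fused result inherits the fine channel's resolution while the coarse channel resolves the ambiguity, which is why modest coarse accuracy suffices.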

19 pages, 623 KB  
Article
Early-Stage Graph Fusion with Refined Graph Neural Networks for Semantic Code Search
by Longhao Ao and Rongzhi Qi
Appl. Sci. 2026, 16(1), 12; https://doi.org/10.3390/app16010012 - 19 Dec 2025
Viewed by 237
Abstract
Code search has received significant attention in the field of computer science research. Its core objective is to retrieve the most semantically relevant code snippets by aligning the semantics of natural language queries with those of programming languages, thereby contributing to improvements in software development quality and efficiency. As the scale of public code repositories continues to expand rapidly, the ability to accurately understand and efficiently match relevant code has become a critical challenge. Furthermore, while numerous studies have demonstrated the efficacy of deep learning in code-related tasks, the mapping and semantic correlations are often inadequately addressed, leading to the disruption of structural integrity and insufficient representational capacity during semantic matching. To overcome these limitations, we propose the Functional Program Graph for Code Search (called FPGraphCS), a novel code search method that leverages the construction of functional program graphs and an early fusion strategy. By incorporating abstract syntax tree (AST), data dependency graph (DDG), and control flow graph (CFG), the method constructs a comprehensive multigraph representation, enriched with contextual information. Additionally, we propose an improved metapath aggregation graph neural network (IMAGNN) model for the extraction of code features with complex semantic correlations from heterogeneous graphs. Through the use of metapath-associated subgraphs and dynamic metapath selection via a graph attention mechanism, FPGraphCS significantly enhances its search capability. The experimental results demonstrate that FPGraphCS outperforms existing baseline methods, achieving an MRR of 0.65 and ACC@10 of 0.842, showing a significant improvement over previous approaches. Full article
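The MRR and ACC@10 figures reported above are standard retrieval metrics; for reference, they can be computed as follows (hypothetical helper names, toy ranks):

```python
def mean_reciprocal_rank(ranks):
    """MRR over queries; `ranks` holds the 1-based rank of the first
    relevant code snippet for each query (None = not retrieved)."""
    return sum(0.0 if r is None else 1.0 / r for r in ranks) / len(ranks)

def acc_at_k(ranks, k):
    """ACC@k: fraction of queries whose relevant snippet is in the top k."""
    return sum(1 for r in ranks if r is not None and r <= k) / len(ranks)

ranks = [1, 2, None, 4]               # toy retrieval results
print(mean_reciprocal_rank(ranks))    # (1 + 0.5 + 0 + 0.25) / 4 = 0.4375
print(acc_at_k(ranks, 10))            # 3 of 4 in the top 10 -> 0.75
```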

26 pages, 22711 KB  
Article
Advanced Servo Control and Adaptive Path Planning for a Vision-Aided Omnidirectional Launch Platform in Sports-Training Applications
by Shuai Wang, Yinuo Xie, Kangyi Huang, Jun Lang, Qi Liu and Yaoming Zhuang
Actuators 2025, 14(12), 614; https://doi.org/10.3390/act14120614 - 15 Dec 2025
Viewed by 377
Abstract
A system-level scheme that couples a multi-dimensional attention-fused vision model and an improved Dijkstra planner is proposed for basketball robots in complex scenes. Fast-moving object detection, cluttered background recognition, and real-time path decision are targeted. For vision, the proposed YOLO11 with Multi-dimensional Attention Fusion (YOLO11-MAF) is equipped with four modules: Coordinate Attention (CoordAttention), Efficient Channel Attention (ECA), Multi-Scale Channel Attention (MSCA), and Large-Separable Kernel Attention (LSKA). Detection accuracy and robustness for high-speed basketballs are raised. For planning, an improved Dijkstra algorithm is proposed. Binary heap optimization and heuristic fusion cut time complexity from O(V²) to O((V+E) log V). Redundant expansions are removed and planning speed is increased. A complete robot platform integrating mechanical, electronic, and software components is constructed. End-to-end experiments show the improved vision model raises mAP@0.5 by 0.7% while keeping real-time frames per second (FPS). The improved path planning algorithm cuts average compute time by 16% and achieves over 95% obstacle avoidance success. The work offers a new approach for real-time perception and autonomous navigation of intelligent sport robots. It lays a basis for future multi-sensor fusion and adaptive path planning research. Full article
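The binary-heap optimization credited with cutting Dijkstra's complexity to O((V+E) log V) can be sketched generically with Python's heapq, skipping stale heap entries to avoid redundant expansions (an illustrative sketch, not the authors' planner):

```python
import heapq

def dijkstra(adj, src):
    """Dijkstra with a binary heap: O((V+E) log V) instead of the
    O(V^2) array scan. `adj` maps node -> [(neighbor, weight), ...]."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry: a shorter path was already found
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

adj = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}
print(dijkstra(adj, "A"))  # {'A': 0.0, 'B': 1.0, 'C': 3.0}
```

Replacing the pop priority `nd` with `nd + heuristic(v)` gives the A*-style "heuristic fusion" variant the abstract alludes to.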

36 pages, 7233 KB  
Article
Deep Learning for Tumor Segmentation and Multiclass Classification in Breast Ultrasound Images Using Pretrained Models
by K. E. ArunKumar, Matthew E. Wilson, Nathan E. Blake, Tylor J. Yost and Matthew Walker
Sensors 2025, 25(24), 7557; https://doi.org/10.3390/s25247557 - 12 Dec 2025
Viewed by 490
Abstract
Early detection of breast cancer commonly relies on imaging technologies such as ultrasound, mammography and MRI. Among these, breast ultrasound is widely used by radiologists to identify and assess lesions. In this study, we developed image segmentation techniques and multiclass classification artificial intelligence (AI) tools based on pretrained models to segment lesions and detect breast cancer. The proposed workflow includes both the development of segmentation models and the development of a series of classification models to classify ultrasound images as normal, benign or malignant. The pretrained models were trained and evaluated on the Breast Ultrasound Images (BUSI) dataset, a publicly available collection of grayscale breast ultrasound images with corresponding expert-annotated masks. For segmentation, images and ground-truth masks were used to train pretrained encoder (ResNet18, EfficientNet-B0 and MobileNetV2)–decoder (U-Net, U-Net++ and DeepLabV3) models, including the DeepLabV3 architecture integrated with a Frequency-Domain Feature Enhancement Module (FEM). The proposed FEM improves spatial and spectral feature representations using the Discrete Fourier Transform (DFT), GroupNorm, dropout regularization and adaptive fusion. For classification, each image was assigned a label (normal, benign or malignant). Optuna, an open-source software framework, was used for hyperparameter optimization and for testing various pretrained models to determine the best encoder–decoder segmentation architecture. Five different pretrained models (ResNet18, DenseNet121, InceptionV3, MobileNetV3 and GoogLeNet) were optimized for multiclass classification. DeepLabV3 outperformed the other segmentation architectures, with consistent performance across training, validation and test images, with Dice Similarity Coefficient (DSC, a metric describing the overlap between predicted and true lesion regions) values of 0.87, 0.80 and 0.83 on the training, validation and test sets, respectively.
ResNet18:DeepLabV3 achieved an Intersection over Union (IoU) score of 0.78 during training, while ResNet18:U-Net++ achieved the best Dice coefficient (0.83), IoU (0.71) and area under the curve (AUC, 0.91) scores on the test (unseen) dataset when compared to the other models. However, the proposed ResNet18:FrequencyAwareDeepLabV3 (FADeepLabV3) achieved a DSC of 0.85 and an IoU of 0.72 on the test dataset, demonstrating improvements over standard DeepLabV3. Notably, the frequency-domain enhancement substantially improved the AUC from 0.90 to 0.98, indicating enhanced prediction confidence and clinical reliability. For classification, ResNet18 produced an F1 score—a measure combining precision and recall—of 0.95 and an accuracy of 0.90 on the training dataset, while InceptionV3 performed best on the test dataset, with an F1 score of 0.75 and an accuracy of 0.83. We demonstrate a comprehensive approach to automating the segmentation and multiclass classification of breast ultrasound images into benign, malignant or normal classes using transfer learning models on an imbalanced ultrasound image dataset. Full article
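The Dice Similarity Coefficient and IoU used throughout this abstract reduce to simple overlap counts on binary masks, for example (generic formulas, toy masks):

```python
def dice_iou(pred, truth):
    """Dice coefficient and IoU for binary masks given as flat 0/1
    sequences of equal length."""
    inter = sum(p & t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2.0 * inter / (p_sum + t_sum) if p_sum + t_sum else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

# One overlapping pixel out of two predicted and two true pixels.
print(dice_iou([1, 1, 0, 0], [1, 0, 1, 0]))  # dice = 0.5, IoU = 1/3
```

Dice is always at least as large as IoU on the same masks, which is why the two scores are reported side by side.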

20 pages, 8010 KB  
Article
Laser Pulse-Driven Multi-Sensor Time Synchronization Method for LiDAR Systems
by Jiazhi Yang, Xingguo Han, Wenzhong Deng, Hong Jin and Biao Zhang
Sensors 2025, 25(24), 7555; https://doi.org/10.3390/s25247555 - 12 Dec 2025
Viewed by 360
Abstract
Multi-sensor systems require precise time synchronization for accurate data fusion. However, currently prevalent software time synchronization methods often rely on clocks provided by the Global Navigation Satellite System (GNSS), which may not offer high accuracy and can be easily affected by issues with GNSS signals. To address this limitation, this study introduces a novel laser pulse-driven time synchronization (LPTS) method in our custom-developed Light Detection and Ranging (LiDAR) system. The LPTS method uses electrical pulses, synchronized with laser beams as the time synchronization source, driving the Micro-Controller Unit (MCU) timer within the control system to count with a timing accuracy of 0.1 μs and to timestamp the data from the Positioning and Orientation System (POS) unit or laser scanner unit. By employing interpolation techniques, the POS and laser scanner data are precisely synchronized with laser pulses, ensuring strict correlation through their timestamps. In this article, the working principles and experimental methods of both traditional time synchronization (TRTS) and LPTS methods are discussed. We have implemented both methods on experimental platforms, and the results demonstrate that the LPTS method circumvents the dependency on external time references for inter-sensor alignment and minimizes the impact of laser jitter stemming from third-party time references, without requiring additional hardware. Moreover, it elevates the internal time synchronization resolution to 0.1 μs and significantly improves relative timing precision. Full article
(This article belongs to the Section Radar Sensors)
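The step that aligns POS samples to laser-pulse timestamps can be sketched as plain linear interpolation (an assumption on my part; the abstract says only "interpolation techniques"):

```python
def interpolate_to_pulses(pos_times, pos_values, pulse_times):
    """Linearly interpolate POS samples onto laser-pulse timestamps so
    that every range measurement gets a position at its exact emission
    time. Assumes both timestamp lists are sorted and each pulse falls
    within the POS time span."""
    out, j = [], 0
    for t in pulse_times:
        # advance to the POS interval [t0, t1] containing t
        while j + 2 < len(pos_times) and pos_times[j + 1] < t:
            j += 1
        t0, t1 = pos_times[j], pos_times[j + 1]
        v0, v1 = pos_values[j], pos_values[j + 1]
        out.append(v0 + (v1 - v0) * (t - t0) / (t1 - t0))
    return out

# POS samples at t = 0, 1, 2 (arbitrary units); pulses between them.
print(interpolate_to_pulses([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], [0.5, 1.5]))
# [0.5, 2.5]
```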

0 pages, 2770 KB  
Article
Cellular Distribution and Motion of Essential Magnetosome Proteins Expressed in Mammalian Cells
by Qin Sun, Cécile Fradin, Moeiz Ahmed, R. Terry Thompson, Frank S. Prato and Donna E. Goldhawk
Biosensors 2025, 15(12), 797; https://doi.org/10.3390/bios15120797 - 4 Dec 2025
Viewed by 355
Abstract
Magnetosomes are organelle-like structures within magnetotactic bacteria that store iron biominerals in membrane-bound vesicles. In bacteria, formation of these structures is highly regulated by approximately 30 genes, which are conserved throughout different species. To compartmentalize iron in mammalian cells and provide gene-based contrast for magnetic resonance imaging, we introduced key magnetosome proteins. The expression of essential magnetosome genes mamI and mamL as fluorescent fusion proteins in a human melanoma cell line confirmed their co-localization and interaction. Here, we investigate the expression of two more essential magnetosome genes, mamB and mamE, using confocal microscopy to describe fluorescent fusion protein expression patterns and analyze the observed intracellular mobility. Custom software was developed to characterize fluorescent particle trajectories. In mammalian cells, essential magnetosome proteins display different diffusive behaviours. However, all magnetosome proteins travelled at similar velocities when interacting with mammalian mobile elements, suggesting that MamL, MamL + MamI, MamB, and MamE interact with similar molecular motor proteins. These results confirm that localization and interaction of essential magnetosome proteins are feasible within the mammalian intracellular compartment. Full article
(This article belongs to the Special Issue Fluorescent Probes: Design and Biological Applications)
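Trajectory analysis of the kind described, distinguishing diffusive from motor-driven motion, typically rests on the mean squared displacement (MSD) as a function of lag time. A generic sketch (not the authors' custom software):

```python
def mean_squared_displacement(track, lag):
    """MSD at a given lag for a 2-D particle track [(x, y), ...].
    MSD growing linearly with lag suggests free diffusion; quadratic
    growth indicates directed, motor-driven transport."""
    disp = [
        (track[i + lag][0] - track[i][0]) ** 2
        + (track[i + lag][1] - track[i][1]) ** 2
        for i in range(len(track) - lag)
    ]
    return sum(disp) / len(disp)

track = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]  # directed motion
print([mean_squared_displacement(track, k) for k in (1, 2)])  # [1.0, 4.0]
```

Here MSD grows with the square of the lag, the signature of directed transport; a diffusive track would grow linearly.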

26 pages, 1005 KB  
Article
A Context-Aware Lightweight Framework for Source Code Vulnerability Detection
by Yousef Sanjalawe, Budoor Allehyani and Salam Al-E’mari
Future Internet 2025, 17(12), 557; https://doi.org/10.3390/fi17120557 - 3 Dec 2025
Viewed by 394
Abstract
As software systems grow increasingly complex and interconnected, detecting vulnerabilities in source code has become a critical and challenging task. Traditional static analysis methods often fall short in capturing deep, context-dependent vulnerabilities and adapting to rapidly evolving threat landscapes. Recent efforts have explored knowledge graphs and transformer-based models to enhance semantic understanding; however, these solutions frequently rely on static knowledge bases, exhibit high computational overhead, and lack adaptability to emerging threats. To address these limitations, we propose DynaKG-NER++, a novel and lightweight framework for context-aware vulnerability detection in source code. Our approach integrates lexical, syntactic, and semantic features using a transformer-based token encoder, dynamic knowledge graph embeddings, and a Graph Attention Network (GAT). We further introduce contrastive learning on vulnerability–patch pairs to improve discriminative capacity and design an attention-based fusion module to combine token and entity representations adaptively. A key innovation of our method is the dynamic construction and continual update of the knowledge graph, allowing the model to incorporate newly published CVEs and evolving relationships without retraining. We evaluate DynaKG-NER++ on five benchmark datasets, demonstrating superior performance across span-level F1 (89.3%), token-level accuracy (93.2%), and AUC-ROC (0.936), while achieving the lowest false positive rate (5.1%) among state-of-the-art baselines. Statistical significance tests confirm that these improvements are robust and meaningful. Overall, DynaKG-NER++ establishes a new standard in vulnerability detection, balancing accuracy, adaptability, and efficiency, making it highly suitable for deployment in real-world static analysis pipelines and resource-constrained environments. Full article
(This article belongs to the Topic Addressing Security Issues Related to Modern Software)
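The span-level F1 reported above is the usual exact-match NER-style metric; a generic reference implementation (hypothetical names, toy spans):

```python
def span_f1(pred_spans, true_spans):
    """Span-level precision/recall/F1: a predicted span counts as a
    true positive only on exact boundary match."""
    pred, true = set(pred_spans), set(true_spans)
    tp = len(pred & true)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(true) if true else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Spans as (start, end) token offsets; only (0, 4) matches exactly.
print(span_f1({(0, 4), (10, 14), (20, 23)}, {(0, 4), (10, 15)}))
# precision ≈ 0.333, recall = 0.5, F1 ≈ 0.4
```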

19 pages, 3693 KB  
Article
Factor Graph-Based Time-Synchronized Trajectory Planning for UAVs in Ground Radar Environment Simulation
by Paweł Słowak, Paweł Kaczmarek, Adrian Kapski and Piotr Kaniewski
Sensors 2025, 25(23), 7326; https://doi.org/10.3390/s25237326 - 2 Dec 2025
Viewed by 415
Abstract
The use of unmanned aerial vehicles (UAVs) as mobile sensor platforms has grown significantly in recent years, including applications where drones emulate radar targets or serve as dynamic measurement systems. This paper presents a novel approach to time-synchronized UAV trajectory planning for radar environment simulation. The proposed method considers a UAV equipped with a software-defined radio (SDR) capable of reproducing the radar signature of a simulated airborne object, e.g., a high-maneuverability or high-speed aerial platform. The UAV must follow a spatial trajectory that replicates the viewing geometry—specifically, the observation angles—of the reference target as seen from a ground-based radar. The problem is formulated within a factor graph framework, enabling joint optimization of the UAV trajectory, observation geometry, and temporal synchronization constraints. While factor graphs have been extensively used in robotics and sensor fusion, their application to trajectory planning under temporal and sensing constraints remains largely unexplored. The proposed approach enables unified optimization over space and time, ensuring that the UAV reproduces the target motion as perceived by the radar, both geometrically and with appropriate signal timing. Full article
(This article belongs to the Section Radar Sensors)

25 pages, 3205 KB  
Article
Coordinated Radio Emitter Detection Process Using Group of Unmanned Aerial Vehicles
by Maciej Mazuro, Paweł Skokowski and Jan M. Kelner
Sensors 2025, 25(23), 7298; https://doi.org/10.3390/s25237298 - 30 Nov 2025
Viewed by 544
Abstract
The rapid expansion of wireless communications has led to increasing demand and interference in the electromagnetic spectrum, raising the question of how to achieve reliable and adaptive monitoring in complex and dynamic environments. This study aims to investigate whether groups of unmanned aerial vehicles (UAVs) can provide an effective alternative to conventional, static spectrum monitoring systems. We propose a cooperative monitoring system in which multiple UAVs, integrated with software-defined radios (SDRs), conduct energy measurements and share their observations with a data fusion center. The fusion process is based on Dempster–Shafer theory (DST), which models uncertainty and combines partial or conflicting data from spatially distributed sensors. A simulation environment developed in MATLAB emulates UAV mobility, communication delays, and propagation effects in various swarm formations and environmental conditions. The results confirm that cooperative spectrum monitoring using UAVs with DST data fusion improves detection robustness and reduces susceptibility to noise and interference compared to single-sensor approaches. Even under challenging propagation conditions, the system maintains reliable performance, and DST fusion provides decision-supporting results. The proposed methodology demonstrates that UAV groups can serve as scalable, adaptive tools for real-time spectrum monitoring and contributes to the development of intelligent monitoring architectures in cognitive radio networks. Full article
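Dempster's rule of combination, the core of the DST fusion described above, merges two mass functions and renormalizes away conflicting mass. A two-hypothesis sketch over {signal, noise} (the frame and the mass values are illustrative, not the paper's):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over the
    frame {'signal', 'noise'}; 'theta' is the whole frame (ignorance).
    Mass assigned to contradictory hypothesis pairs is renormalized away."""
    frames = {"signal": {"signal"}, "noise": {"noise"},
              "theta": {"signal", "noise"}}
    combined = {k: 0.0 for k in frames}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = frames[a] & frames[b]
            if not inter:
                conflict += ma * mb        # contradictory evidence
            else:
                key = next(k for k, s in frames.items() if s == inter)
                combined[key] += ma * mb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two UAV sensors, both leaning towards 'signal present'.
m1 = {"signal": 0.6, "noise": 0.1, "theta": 0.3}
m2 = {"signal": 0.5, "noise": 0.2, "theta": 0.3}
fused = dempster_combine(m1, m2)
print(fused["signal"] > m1["signal"])  # agreeing evidence strengthens belief: True
```

The `theta` mass is what lets DST represent partial or uncertain sensor reports, which is the property the abstract relies on for conflicting UAV observations.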

28 pages, 4565 KB  
Article
Improving VR Welding Simulator Tracking Accuracy Through IMU-SLAM Fusion
by Kwang-Seong Shin, Jong Chan Kim, Kyung Won Cho and Won Ik Cho
Electronics 2025, 14(23), 4693; https://doi.org/10.3390/electronics14234693 - 28 Nov 2025
Viewed by 632
Abstract
Virtual reality (VR) welding simulators provide safe and cost-effective training environments, but precise torch tracking remains a key challenge. Current commercial systems are limited in accurate bead simulation and posture feedback due to tracking errors of 3–10 mm, while external motion capture systems offer high precision but suffer from high cost and installation complexity issues. Therefore, a new approach is needed that achieves high precision while maintaining cost efficiency. This paper proposes an IMU-SLAM fusion-based tracking algorithm. The method combines Inertial Measurement Unit (IMU) data with visual–inertial SLAM (Simultaneous Localization and Mapping) for sensor fusion and applies a drift correction technique utilizing the periodic weaving patterns of the welding torch. This achieves precision below 5 mm without requiring external equipment. Experimental results demonstrate an average 3.8 mm RMSE (Root Mean Square Error) across 15 datasets spanning three welding scenarios, showing a 1.8× accuracy improvement over commercial baselines. Results were validated against OptiTrack ground truth data. Latency was maintained below 100 ms to meet real-time haptic feedback requirements, ensuring responsive interaction during training sessions. The proposed approach is a software solution using only standard VR hardware, eliminating the need for expensive external tracking equipment installation. User studies confirmed significant improvements in tracking quality perception from 6.8 to 8.4/10 and bead simulation realism from 7.1 to 8.7/10, demonstrating the practical effectiveness of the proposed method. Full article
(This article belongs to the Special Issue Virtual Reality Applications in Enhancing Human Lives)
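The RMSE used above to score tracking against OptiTrack ground truth is the standard root-mean-square error; for reference (1-D toy trajectory, generic formula):

```python
import math

def rmse(est, truth):
    """Root-mean-square error between estimated and reference
    trajectories (1-D positions here; the idea is identical per axis)."""
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(est, truth)) / len(est))

print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # sqrt(4/3) ≈ 1.155
```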

18 pages, 6345 KB  
Article
Comparative Analysis of the Structure, Properties and Internal Stresses of MAG Welded Joints Made of S960QL Steel Subjected to Heat Treatment and Pneumatic Needle Peening
by Jacek Górka, Mateusz Przybyła and Bernard Wyględacz
Materials 2025, 18(23), 5363; https://doi.org/10.3390/ma18235363 - 28 Nov 2025
Viewed by 255
Abstract
The aim of the research was to analyse the impact of peening each of the beads on the properties of a butt joint made of S960QL steel welded with ceramic backing on a robotic workstation using the 135 (MAG) method, and to determine the impact of pneumatic needle peening on the stress level. This analysis was based on a comparison of three butt joints: in the as-welded state, with each weld bead peened and post-weld heat treatment—stress relief annealing—performed. High-frequency peening (90 Hz) of each weld was performed to reduce stresses in the welded joint by introducing compressive stresses into it. A Weld Line 10 pneumatic hammer from PITEC GmbH was used for this purpose. The test joints obtained were tested in accordance with the requirements of EN ISO 15614-1. In order to determine the state of residual stresses, stress measurements were carried out using the Barkhausen effect based on the testing procedure of the technology supplier, NNT. This meter measures the intensity of the Barkhausen effect using a standard probe (with a single core). In order to verify the stress measurement using the Barkhausen method, stress measurements were performed using the XRD sin²ψ technique based on the X’Pert Stress Plus program, which contains a database of material constants necessary for calculations. Structural studies, including phase analysis and crystallographic grain orientation, were performed using the backscattered electron diffraction method with a high-resolution scanning electron microscope and an EBSD (Electron Backscatter Diffraction) detector, as well as EDAX OIM analysis software. In addition, X-ray diffraction testing was performed on a Panalytical X’Pert PRO device using filtered cobalt anode tube radiation (λ = 1.79021 Å). Qualitative X-ray phase analysis of the tested materials was performed in a Bragg–Brentano system using an Xcelerator strip detector. 
The tests showed that the high-frequency peening of each bead did not cause negative results in the required tests during qualification of the S960QL plate-welding technology compared to the test plates in the as-welded and post-stress-relief heat treatment states. Interpass peening of the weld face and HAZ resulted in a reduction in residual stresses after welding at a distance of 15 mm from the joint axis compared to the stress measurement result for the sample in the as-welded condition. This allows for a positive assessment of peening in terms of reducing the crack initiator in the form of the concentration of tensile stresses in the area of the fusion line and HAZ. Full article
(This article belongs to the Special Issue Fusion Bonding/Welding of Metal and Non-Metallic Materials)

12 pages, 584 KB  
Article
Analysis of Operator Expertise in MRI/TRUS Fusion-Guided Prostate Biopsy
by Rouvier Al-Monajjed, Lars Schimmöller, Jale Lakes, Anna Herzum, Anne Hübner, Isabelle Bußhoff, Tim Ullrich, Alexandra Ljimani, Irene Esposito, Peter Albers, Gerald Antoch, Jan Philipp Radtke and Matthias Boschheidgen
Cancers 2025, 17(23), 3811; https://doi.org/10.3390/cancers17233811 - 28 Nov 2025
Viewed by 297
Abstract
Background/Objectives: This study analyzed the impact of operator experience on the detection of PC and csPC using a standardized MRI/TRUS-fusion biopsy protocol in an experienced high-volume center. Methods: Men with mpMRI and subsequent combined TB and SB (2019–2024) using transrectal, software-assisted MRI/TRUS-fusion were retrospectively included. Operators were stratified by experience subgroups (<100 vs. ≥100 procedures). Clinical, MRI, and biopsy data have been assessed. The primary objective was the analysis of the effect of biopsy experience on patient-level PC detection. The secondary objective was the PC detection of PI-RADS and DRE. Results: A total of 683 consecutive patients were included (median age 63 years, median PSA 6.5 ng/mL, and median prostate volume 41 mL). Overall, PC and csPC detection were 67% and 51%, with no significant difference in the operator experience subgroups (p = 0.63; p = 0.23). There were no significant differences for additional csPC detection by SB (7% vs. 5%; p = 0.31) or TB (9% vs. 10%; p = 0.93) in both subgroups. DRE showed limited diagnostic value (SEN 32%, SPE 88%, PPV 74%, NPV 55%) with no significant variation regarding the experience (p = 0.12–1.0). Limitations include a single-center, retrospective design and a lack of a radical prostatectomy specimen. Conclusions: In a standardized MRI-targeted biopsy setting, operator experience seems to have a lower influence on PC or csPC detection. High csPC detection in PI-RADS 4–5 supports a TB-only approach, while low rates in PI-RADS 3 suggest follow-up MRI over immediate biopsy. Limited DRE accuracy highlights its declining role in PC assessment. Full article
(This article belongs to the Section Cancer Causes, Screening and Diagnosis)
30 pages, 7547 KB  
Review
Artificial Intelligence Applications in Interventional Radiology
by Carolina Lanza, Salvatore Alessio Angileri, Serena Carriero, Sonia Triggiani, Velio Ascenti, Simone Raul Mortellaro, Marco Ginolfi, Alessia Leo, Francesca Arnone, Pierluca Torcia, Pierpaolo Biondetti, Anna Maria Ierardi and Gianpaolo Carrafiello
J. Pers. Med. 2025, 15(12), 569; https://doi.org/10.3390/jpm15120569 - 28 Nov 2025
Abstract
This review is a brief overview of the current status and the potential role of artificial intelligence (AI) in interventional radiology (IR). The literature published in the last decades was reviewed, and technical developments in radiomics, virtual reality, robotics, fusion imaging, cone-beam computed tomography (CBCT), and imaging guidance software were analyzed. The evidence shows that AI significantly improves pre-procedural planning, intra-procedural navigation, and post-procedural assessment. Radiomics extracts quantitative features from medical images to support personalized treatment strategies. Virtual reality offers innovative tools, especially for training and procedural simulation. Robotic systems, combined with AI, could enhance the precision and reproducibility of IR procedures while reducing operator exposure to X-rays. Fusion imaging and CBCT, augmented by AI software, improve real-time guidance and procedural outcomes. Full article

26 pages, 49356 KB  
Article
A Methodology to Detect Changes in Water Bodies by Using Radar and Optical Fusion of Images: A Case Study of the Antioquia near East in Colombia
by César Olmos-Severiche, Juan Valdés-Quintero, Jean Pierre Díaz-Paz, Sandra P. Mateus, Andres Felipe Garcia-Henao, Oscar E. Cossio-Madrid, Blanca A. Botero and Juan C. Parra
Appl. Sci. 2025, 15(23), 12559; https://doi.org/10.3390/app152312559 - 27 Nov 2025
Abstract
This study presents a novel methodology for the detection and monitoring of changes in surface water bodies, with a particular emphasis on the near-eastern region of Antioquia, Colombia. The proposed approach integrates remote sensing and artificial intelligence techniques through the fusion of multi-source imagery, specifically Synthetic Aperture Radar (SAR) and optical data. The framework is structured in several stages. First, radar imagery is pre-processed using an autoencoder-based despeckling model, which leverages deep learning to reduce noise while preserving structural information critical for environmental monitoring. Concurrently, optical imagery is processed through the computation of normalized spectral indices, including NDVI, NDWI, and NDBI, capturing essential characteristics related to vegetation, water presence, and surrounding built-up areas. These complementary sources are subsequently fused into synthetic RGB composite representations, ensuring spatial and spectral consistency between radar and optical domains. To operationalize this methodology, a standardized and reproducible workflow was implemented for automated image acquisition, preprocessing, fusion, and segmentation. The Segment Anything Model (SAM) was integrated into the process to generate semantically interpretable classes, enabling more precise delineation of hydrological features, flood-prone areas, and urban expansion near waterways. This automated system was embedded in a software prototype, allowing local users to manage large volumes of satellite data efficiently and consistently. The results demonstrate that the combination of SAR and optical datasets provides a robust solution for monitoring dynamic hydrological environments, particularly in tropical mountainous regions with persistent cloud cover. The fused products enhanced the detection of small streams and complex hydrological patterns that are typically challenging to monitor using optical imagery alone. 
By integrating these technical advancements, the methodology supports improved environmental monitoring and provides actionable insights for decision-makers. At the local scale, municipal governments can use these outputs for urban planning and flood risk mitigation; at the regional level, environmental and territorial authorities can strengthen water resource management and conservation strategies; and at the national level, risk management institutions can incorporate this information into early warning systems and disaster preparedness programs. Overall, this research delivers a scalable and automated tool for surface water monitoring, bridging the gap between scientific innovation and operational decision-making to support sustainable watershed management under increasing pressures from climate change and urbanization. Full article
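The index-computation and fusion stages described above can be sketched as follows, assuming the despeckled SAR backscatter and the optical bands (green, red, NIR, SWIR) arrive as co-registered NumPy arrays; the channel assignment in the synthetic RGB composite is an illustrative choice, not the paper's exact recipe:

```python
import numpy as np

def normalized_index(a, b, eps=1e-6):
    """Generic normalized difference index: (a - b) / (a + b)."""
    return (a - b) / (a + b + eps)

def rescale(x):
    """Linearly rescale an array to [0, 1] so channels are comparable."""
    return (x - x.min()) / (x.max() - x.min() + 1e-6)

def fuse_to_rgb(sar, green, red, nir, swir):
    """Build a synthetic RGB composite from despeckled SAR and optical indices."""
    ndwi = normalized_index(green, nir)   # water presence
    ndvi = normalized_index(nir, red)     # vegetation
    ndbi = normalized_index(swir, nir)    # built-up areas (kept for later use)
    # Stack SAR backscatter and two indices as the three display channels.
    rgb = np.stack([rescale(sar), rescale(ndwi), rescale(ndvi)], axis=-1)
    return rgb, ndbi
```

A composite built this way can then be handed to a promptable segmenter such as SAM to delineate water classes, flood-prone areas, and built-up zones near waterways.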
