
Search Results (1,421)

Search Parameters:
Keywords = sensor modalities

16 pages, 2156 KB  
Article
Research on Pedestrian Detection Method Based on Dual-Branch YOLOv8 Network of Visible Light and Infrared Images
by Zhuomin He and Xuewen Chen
World Electr. Veh. J. 2026, 17(4), 177; https://doi.org/10.3390/wevj17040177 - 26 Mar 2026
Abstract
In complex traffic environments such as low light, strong glare, occlusion, and nighttime driving, systems that rely solely on a single visible light sensor for pedestrian detection suffer from low detection accuracy and poor robustness. Based on the YOLOv8 convolutional network, this paper adopts a dual-branch structure to process visible light and infrared images simultaneously, fully utilizing feature information at different scales to effectively detect pedestrian targets in complex and changeable environments. To address the issues of insufficient interaction of modal feature information and fixed fusion weights, a cross-modal feature interaction and enhancement mechanism was introduced. A modal-channel interaction block (MCI-Block) was designed, in which residual connection structures and weight interaction achieve feature enhancement and filter out noise. A dynamic weighted feature fusion strategy was also introduced to adaptively adjust the contribution of each modality during fusion, enhancing the discriminability of key pedestrian regions. The network was trained and tested on the visible light and infrared pedestrian detection datasets LLVIP and KAIST, and the dual-branch model was further verified in actual traffic scenarios. The results show that the dual-branch YOLOv8 network for visible light and infrared images constructed in this paper reliably improves pedestrian detection performance in complex traffic environments, including accuracy, recall, and mAP@0.5, thereby improving the robustness of pedestrian detection. Full article
(This article belongs to the Section Vehicle and Transportation Systems)
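The adaptive weighting idea in this abstract can be illustrated with a toy sketch; the scoring of each modality by its mean activation and the softmax weighting are invented stand-ins, not the authors' MCI-Block:

```python
import math

def dynamic_fusion(feat_vis, feat_ir):
    """Fuse visible-light and infrared feature vectors with adaptive
    weights. Toy version: each modality is scored by its mean
    activation, and a softmax turns the scores into fusion weights."""
    s_vis = sum(feat_vis) / len(feat_vis)
    s_ir = sum(feat_ir) / len(feat_ir)
    m = max(s_vis, s_ir)                      # subtract max for stability
    e_vis, e_ir = math.exp(s_vis - m), math.exp(s_ir - m)
    w_vis, w_ir = e_vis / (e_vis + e_ir), e_ir / (e_vis + e_ir)
    fused = [w_vis * v + w_ir * r for v, r in zip(feat_vis, feat_ir)]
    return fused, (w_vis, w_ir)
```

A real detector would compute such weights per spatial location from learned quality estimates rather than from a global mean.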

25 pages, 1648 KB  
Review
Freezing of Gait in Parkinson’s Disease: A Scoping Review on the Path Towards Real-Time Therapies
by Meenakshi Singhal, Christina Grannie, Margaret Burnette, Manuel E. Hernandez and Samar A. Hegazy
Sensors 2026, 26(7), 2042; https://doi.org/10.3390/s26072042 - 25 Mar 2026
Abstract
Background: Freezing of gait (FoG) is a common symptom of Parkinson’s disease, especially in its later stages. Characterized by involuntary stopping during normal gait, FoG greatly increases fall risk, reducing quality of life. Given the complex presentation and etiology of FoG, current treatments have proven ineffective in managing episodes. In recent years, machine learning algorithms have been leveraged to derive actionable clinical insights from biomedical datasets. As a manifestation of neuromechanical dysfunction, impending FoG episodes may be characterized through data collected by wearable devices and sensors. Objective: This scoping review evaluates the current landscape of machine and deep learning-derived biomarkers to enhance the personalized management of FoG. Methods: This scoping review was conducted using established methodological frameworks for scoping reviews and is reported in accordance with the PRISMA-ScR checklist. Three databases were queried, with screening yielding 60 studies. Results: Thirty-nine papers reported on deep learning techniques, with the most common architectures being convolutional neural networks and long short-term memory models. Conclusions: Inertial measurement units, which can be worn at various body locations, may be a promising modality for practical implementation. To generate closed-loop FoG therapies, algorithms can be integrated into real-time systems such as robotic exoskeletons or adaptive deep brain stimulation. Future work on generating datasets from ambulatory devices, as well as distributed computing strategies, may lead to real-time FoG management. Full article
(This article belongs to the Special Issue Flexible Wearable Sensors for Biomechanical Applications)
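Wearable FoG detectors of the kind surveyed here typically begin by segmenting the IMU stream into overlapping windows before feature extraction or a CNN/LSTM; a minimal sketch, where the window length, step, and hand-crafted features are illustrative choices rather than anything from the review:

```python
def sliding_windows(signal, win, step):
    """Segment a 1-D sensor stream into overlapping windows, the usual
    preprocessing step before a window-level FoG classifier."""
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, step)]

def window_features(w):
    """Simple hand-crafted features; a deep model would learn its own."""
    mean = sum(w) / len(w)
    var = sum((x - mean) ** 2 for x in w) / len(w)
    return mean, var
```

For real-time use, each new window would be featurized (or fed raw to a trained network) as it completes, so detection latency is bounded by the window step.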

25 pages, 3612 KB  
Article
Learning Modality Complementarity for RGB-D Salient Object Detection via Dynamic Neural Network
by Yuanhao Li, Jia Song, Chenglizhao Chen and Xinyu Liu
Electronics 2026, 15(7), 1361; https://doi.org/10.3390/electronics15071361 - 25 Mar 2026
Abstract
RGB-D salient object detection (RGB-D SOD) aims to accurately localize and segment visually salient objects by jointly leveraging RGB images and depth maps. Some existing methods rely on static fusion strategies with fixed paths and weights, which treat all regions equally and fail to capture the varying importance of different regions and modalities. Although some attention-based methods alleviate the limitations of static fusion by assigning adaptive weights to different regions and modalities, the quality of RGB and depth data may degrade in real-world scenarios due to sensor noise, illumination changes, or environmental interference. These attention-based methods often overlook inter-modality quality differences and complementarity, making them prone to over-relying on a certain modality, which can lead to noise introduction, feature conflicts, and performance degradation. To address these limitations, this paper proposes a novel dynamic feature routing and fusion framework for RGB-D SOD, which adaptively adjusts the fusion strategy according to the quality of input modalities. To enable modality quality awareness, the proposed method characterizes the modality complementarity between RGB and depth features in a task-driven manner inspired by information-theoretic principles. We introduce a task-relevance scoring function which is integrated with a mutual information estimator to quantify such complementarity, and emphasizes task-relevant features while suppressing redundancy. A dynamic routing module is then designed to perform feature selection guided by the captured complementarity. In addition, we propose a novel cross-modal fusion module to adaptively fuse the features selected by the dynamic routing module, which effectively enhances complementary representations while suppressing redundant features and noise interference. Extensive experiments conducted on seven public RGB-D SOD benchmark datasets demonstrate that the proposed method consistently achieves competitive performance, outperforming existing methods by an average of approximately 1% across multiple evaluation metrics. Notably, in challenging scenarios with severe modality quality degradation, the proposed method outperforms existing best-performing methods by up to 1.8%, demonstrating strong robustness against cluttered backgrounds, complex object structures, and diverse object scales. Overall, the proposed dynamic fusion framework provides a novel solution to modality quality imbalance in RGB-D salient object detection. Full article
(This article belongs to the Section Artificial Intelligence)
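The mutual-information side of the complementarity score can be illustrated with a plug-in histogram estimator over discretized features; this is a generic textbook estimator, not the paper's learned task-relevance module:

```python
from collections import Counter
import math

def mutual_information(xs, ys):
    """Plug-in (histogram) estimate of I(X;Y) in bits for two discrete
    sequences of equal length."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        pj = c / n
        mi += pj * math.log2(pj / ((px[x] / n) * (py[y] / n)))
    return mi
```

Low mutual information between RGB and depth features would indicate complementary (non-redundant) information, which is the signal such a routing module could exploit.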

30 pages, 7541 KB  
Article
Spatiotemporal Ergonomic Fatigue Analysis in Seated Postures Using a Multimodal Smart-Skin System: A Comparative Study Between Mannequin and Human Measurements
by Giva Andriana Mutiara, Muhammad Rizqy Alfarisi, Paramita Mayadewi, Lisda Meisaroh and Periyadi
Appl. Syst. Innov. 2026, 9(4), 67; https://doi.org/10.3390/asi9040067 - 24 Mar 2026
Abstract
Continuous monitoring of sitting posture is crucial for ergonomic assessment and fatigue prevention, yet many existing approaches rely on vision-based systems or single-modality sensing that are limited in capturing spatial and temporal biomechanical dynamics. This paper presents a multimodal smart-skin sensing system for spatial and temporal ergonomic fatigue analysis in sitting postures. The proposed platform integrates 42 distributed pressure, temperature, and vibration sensors arranged in 14 trimodal sensing nodes embedded across anatomical seating and back regions to enable real-time multimodal acquisition of human–chair interaction patterns. The study introduces an analytical framework combining anatomical heatmap visualization, temporal evolution analysis, delta pressure mapping, fatigue intensity estimation, and hotspot detection to characterize dynamic pressure redistribution during prolonged sitting. Experimental evaluations were conducted using a biomechanical mannequin and a single human participant with identical anthropometric characteristics (165 cm height and 62 kg body mass) across nine seated conditions, including neutral sitting, reclining, leaning, periodic shifting, and vibration-induced motion. Each posture condition was recorded as a time-series session and segmented into temporal phases to analyze fatigue evolution during prolonged sitting. Statistical analysis of pressure redistribution dynamics indicates significantly higher pressure drift in human measurements compared with the mechanically stable mannequin baseline (p < 0.001). The proposed framework provides a scalable sensing approach for ergonomic monitoring, intelligent seating systems, and human–machine interface applications. Full article
(This article belongs to the Section Human-Computer Interaction)
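The delta pressure mapping and hotspot detection steps described in the abstract can be sketched as follows; the threshold and flat node list are invented simplifications (the real system carries 14 trimodal nodes with pressure, temperature, and vibration channels):

```python
def delta_pressure(phase_a, phase_b):
    """Per-node pressure change between two temporal phases of a
    sitting session (positive = pressure increased)."""
    return [b - a for a, b in zip(phase_a, phase_b)]

def hotspots(deltas, threshold):
    """Indices of nodes whose pressure drift exceeds the threshold,
    candidates for fatigue-related redistribution."""
    return [i for i, d in enumerate(deltas) if abs(d) > threshold]
```

Tracking these deltas over successive phases is what distinguishes a drifting human sitter from the mechanically stable mannequin baseline.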

23 pages, 1109 KB  
Review
Strategies for Class-Imbalanced Learning in Multi-Sensor Medical Imaging
by Da Zhou, Song Gao and Xinrui Huang
Sensors 2026, 26(6), 1998; https://doi.org/10.3390/s26061998 - 23 Mar 2026
Abstract
This narrative critical review addresses class imbalance in medical imaging, which, particularly within multi-sensor and multi-modal environments, poses a critical challenge to developing reliable AI diagnostic systems. The integration of heterogeneous data from sources such as CT, MRI, and PET presents a unique opportunity to address data scarcity for rare conditions through fusion techniques. This review provides a structured analysis of strategies to tackle class imbalance, categorizing them into data-centric (e.g., advanced resampling like SMOTE-ENC for mixed data types, GAN-based synthesis) and model-centric (e.g., loss function engineering, transfer learning, and ensemble methods) approaches. Crucially, we highlight how multi-sensor feature fusion and decision-level fusion paradigms can inherently enrich representations for minority classes, offering a powerful frontier beyond single-modality learning. We evaluate each method’s merits, clinical viability, and compliance considerations (e.g., FDA). Finally, we identify emerging trends where imbalance-aware learning synergizes with multi-sensor fusion frameworks, federated learning, and explainable AI, charting a roadmap toward robust, equitable, and clinically deployable diagnostic tools. Our quantitative synthesis shows that data-centric strategies can improve minority class recall by 12–35% in datasets with imbalance ratios (majority:minority) ≥10:1, while model-centric strategies achieve an average AUC improvement of 0.08–0.21 in multi-sensor medical imaging tasks with sample sizes ranging from 50 to 50,000. Full article
(This article belongs to the Special Issue Multi-sensor Fusion in Medical Imaging, Diagnosis and Therapy)
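The simplest data-centric strategy in the taxonomy above, random oversampling of minority classes, can be sketched as below; SMOTE-style methods would interpolate synthetic samples instead of duplicating existing ones:

```python
import random

def oversample_minority(samples, labels, seed=0):
    """Balance classes by duplicating randomly chosen minority-class
    samples until every class matches the majority count."""
    rng = random.Random(seed)
    by_cls = {}
    for s, y in zip(samples, labels):
        by_cls.setdefault(y, []).append(s)
    target = max(len(g) for g in by_cls.values())
    out_s, out_y = [], []
    for y, group in by_cls.items():
        extra = [rng.choice(group) for _ in range(target - len(group))]
        out_s.extend(group + extra)
        out_y.extend([y] * target)
    return out_s, out_y
```

In imaging pipelines the duplicated samples are usually combined with augmentation so the model does not simply memorize the repeated minority images.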

22 pages, 14276 KB  
Article
DualFOD: A Dual-Modality Deep Learning Framework for UAS-Based Foreign Object Debris Detection Using Thermal and RGB Imagery
by Owais Ahmed, Caleb S. Caldwell and Adeel Khalid
Drones 2026, 10(3), 225; https://doi.org/10.3390/drones10030225 - 23 Mar 2026
Abstract
Foreign Object Debris (FOD) poses critical risks to aircraft during takeoff and landing, resulting in billions of dollars in losses annually due to infrastructure damage and flight delays. Advancements in automated inspection technologies have enabled the use of Unmanned Aerial Systems (UAS) combined with Artificial Intelligence (AI) for rapid FOD identification. While prior research has extensively evaluated optical sensors such as RGB imaging and radar, limited work has investigated the potential of thermal imaging for improved FOD visibility under challenging environmental conditions. This study proposes DualFOD, a dual-modality detection framework that integrates a supervised YOLO12-based RGB detector with an unsupervised thermal anomaly extraction pipeline for identifying debris on runway surfaces. A decision-level fusion algorithm combines detections from both branches using spatial proximity matching to produce a unified FOD inventory. The RGB branch achieves a precision of 0.954 and mAP@0.5 of 0.890 on the held-out test set. Cross-site validation at the Cobb County Sport Aviation Complex demonstrates that thermal detection recovers debris missed by RGB at higher altitudes, with the fused output consistently outperforming either single-modality branch. This research contributes toward scalable autonomous FOD monitoring that enhances operational safety in aviation environments. Full article
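Decision-level fusion by spatial proximity matching can be illustrated with centroid distances; the 25-pixel radius and point-style detections here are invented simplifications of the DualFOD pipeline:

```python
def fuse_detections(rgb_dets, thermal_dets, max_dist=25.0):
    """Merge detections from two branches: a thermal detection whose
    centroid lies within max_dist of some RGB detection is treated as
    the same object; unmatched thermal detections are kept, so the
    thermal branch can recover debris the RGB branch missed."""
    fused = list(rgb_dets)
    for tx, ty in thermal_dets:
        matched = any(((tx - rx) ** 2 + (ty - ry) ** 2) ** 0.5 <= max_dist
                      for rx, ry in rgb_dets)
        if not matched:
            fused.append((tx, ty))
    return fused
```

A production fuser would also reconcile confidence scores and bounding boxes, but the keep-the-union behavior is what makes the fused output at least as complete as either branch.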

36 pages, 19343 KB  
Article
HMI Design of Intelligent Vehicles Based on Multimodal Experiments of Driver Emotions
by Tongyue Sun, Yongjia Li and Xihui Yang
Multimodal Technol. Interact. 2026, 10(3), 33; https://doi.org/10.3390/mti10030033 - 21 Mar 2026
Abstract
Negative driving emotions constitute a significant factor compromising road safety. Current intelligent vehicle human-machine interaction (HMI) systems predominantly focus on functional implementation, lacking the capability to perceive and adapt to the driver’s psychological state. To address this issue, this study investigates the intrinsic relationship between driving emotions and HMI through multimodal experiments. Experiment One reveals the distribution patterns of drivers’ visual attentional scope under different emotional states. Experiment Two establishes a color preference model for HMI interfaces corresponding to specific emotions. Experiment Three quantitatively analyzes the impact of emotional variations on the perceptual efficiency of auditory warnings. Based on the experimental data, an interaction design principle matching “Emotion-Scene-Modality” is formulated, guiding the design of a data-driven, emotion-adaptive HMI prototype system. This system can perceive the driver’s emotional state in real time via multimodal sensors and dynamically adjust interface color themes, information layout, warning sound effects, and voice interaction style according to predefined interaction strategies. Usability testing demonstrates that, compared to traditional static HMI, this emotion-adaptive system effectively mitigates the driver’s negative emotional load and provides alerts that are more perceptible and less likely to cause irritation during critical moments. Consequently, it offers a significant theoretical foundation and practical reference for constructing a safer and more comfortable next-generation intelligent vehicle cockpit interaction paradigm. Full article

18 pages, 1895 KB  
Article
Multimodal Remote Sensing Image Clustering on Superpixel Manifolds
by Shujun Liu, Yuhong Yao and Luxi Xiao
Remote Sens. 2026, 18(6), 939; https://doi.org/10.3390/rs18060939 - 19 Mar 2026
Abstract
Despite offering rich complementary information, multimodal remote sensing images collected by diverse sensors increase the computational burden in clustering. To alleviate this issue, we devise an efficient multimodal clustering approach (MCSM) on superpixel manifolds formed by superpixel segmentation. The MCSM jointly learns cluster representation of all modalities and a consensus cluster membership graph that fuses the multimodal representation to yield clusters. To capture the local geometric structure of the superpixel manifolds, the optimization is constrained by manifold regularization of the consensus graph. In contrast to vanilla multiview subspace clustering techniques, the proposed approach does not rely on spectral clustering, and only involves element-wise product and multiplication on small-scale matrices. In addition, we prove that the MCSM is a special case of classic low-rank subspace clustering models, providing a perspective for understanding the learned cluster graphs. Extensive experiments are conducted on three popular multimodal remote sensing datasets, showing that the proposed method achieves competitive clustering performance compared to state-of-the-art methods, and significantly outperforms the latter in computational efficiency. Full article
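The element-wise fusion idea can be caricatured as a Hadamard product of per-modality affinity matrices followed by row normalization; this toy deliberately ignores the joint optimization and manifold regularization that MCSM actually performs:

```python
def consensus_graph(graphs):
    """Fuse per-modality superpixel affinity matrices by element-wise
    (Hadamard) product, then normalize each row so it sums to 1."""
    n = len(graphs[0])
    fused = [[1.0] * n for _ in range(n)]
    for g in graphs:                      # element-wise product across modalities
        for i in range(n):
            for j in range(n):
                fused[i][j] *= g[i][j]
    for row in fused:                     # row-stochastic normalization
        s = sum(row)
        if s > 0:
            for j in range(n):
                row[j] /= s
    return fused
```

Because the product keeps an affinity high only when all modalities agree, the fused graph naturally suppresses modality-specific noise.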

31 pages, 3479 KB  
Article
MV-S2CD: A Modality-Bridged Vision Foundation Model-Based Framework for Unsupervised Optical–SAR Change Detection
by Yongqi Shi, Ruopeng Yang, Changsheng Yin, Yiwei Lu, Bo Huang, Yongqi Wen, Yihao Zhong and Zhaoyang Gu
Remote Sens. 2026, 18(6), 931; https://doi.org/10.3390/rs18060931 - 19 Mar 2026
Abstract
Unsupervised change detection (UCD) from heterogeneous bitemporal optical–SAR imagery is challenging due to modality discrepancy, speckle/illumination variations, and the absence of change annotations. We propose MV-S2CD, a vision foundation model (VFM)-based framework that learns a modality-bridged latent space and produces dense change maps in a fully unsupervised manner. To robustly adapt pretrained VFM priors to heterogeneous inputs with minimal task-specific parameters, MV-S2CD incorporates lightweight modality-specific adapters and parameter-efficient low-rank adaptation (LoRA) in high-level layers. A shared projector embeds the two observations into a common geometry, enabling consistent cross-modal comparison and reducing sensor-induced domain shift. Building on the bridged representation, we design a dual-branch change reasoning module that decouples structure-sensitive cues from semantic-consistency cues: a structure pathway preserves fine boundaries and local variations, while a semantic-consistency pathway employs reliability gating and multi-scale context aggregation to suppress pseudo-changes caused by modality-specific nuisances and residual misregistration. For label-free optimization, we develop a difference-centric self-supervision scheme with two perturbation views and reliability-guided pseudo-partitioning, jointly enforcing pseudo-unchanged invariance, pseudo-changed/unchanged separability, and sparsity and edge-preserving regularization. Experiments on three heterogeneous optical–SAR benchmarks demonstrate that MV-S2CD consistently improves the Precision–Recall trade-off and achieves state-of-the-art performance among unsupervised baselines, while remaining backbone-flexible and efficient. Full article
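The LoRA component mentioned above is the standard low-rank reparameterization y = x @ (W + s * A @ B), where the pretrained weight W stays frozen and only the small factors A and B are trained; a pure-Python sketch with illustrative shapes and scale:

```python
def lora_forward(x, W, A, B, scale=1.0):
    """Forward pass through a linear layer with a LoRA update.
    W: frozen weight (d_in x d_out); A: d_in x r; B: r x d_out,
    with rank r much smaller than d_in or d_out."""
    def matmul(P, Q):
        return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
                 for j in range(len(Q[0]))] for i in range(len(P))]
    delta = matmul(A, B)                          # low-rank update A @ B
    W_eff = [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
             for i in range(len(W))]
    return matmul(x, W_eff)
```

Training only A and B (plus the lightweight adapters) is what lets such a framework adapt a vision foundation model to SAR inputs with minimal task-specific parameters.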

25 pages, 36715 KB  
Article
Development of an Autonomous UAV for Multi-Modal Mapping of Underground Mines
by Luis Escobar, David Akhihiero, Jason N. Gross and Guilherme A. S. Pereira
Robotics 2026, 15(3), 63; https://doi.org/10.3390/robotics15030063 - 19 Mar 2026
Abstract
Underground mine inspection is a critical operation for safety and resource management. It presents unique challenges, including confined spaces, harsh environments, and the lack of reliable positioning systems. This paper presents the design, development, and evaluation of an Unmanned Aerial Vehicle (UAV) specifically engineered for supervised autonomous inspection in subterranean scenarios. Key technical contributions include mechanical adaptations for collision tolerance, an optimized sensor-actuator selection for navigation, and the deployment of a mission-governing state machine for seamless autonomous acquisition. Furthermore, we detail the data treatment workflow, employing a multi-modal point cloud registration technique that successfully integrates high-resolution visual-depth scans of critical mine pillars into a comprehensive, globally referenced map derived from Light Detection and Ranging (LiDAR) data of the entire workspace. We show experiments that illustrate and validate our approach in two real-world scenarios: a simulated coal mine used to train mine rescue teams and an operating limestone mine. Full article
(This article belongs to the Special Issue Localization and 3D Mapping of Intelligent Robotics)
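A first step in registering the visual-depth scans into the LiDAR map is aligning cloud centroids to estimate translation; this is a generic sketch of that step only (a full pipeline would also estimate rotation, e.g. via ICP or feature matching):

```python
def align_translation(source, target):
    """Estimate the translation moving `source`'s centroid onto
    `target`'s centroid, and apply it to every source point.
    Points are (x, y, z) tuples."""
    def centroid(pts):
        n = len(pts)
        return tuple(sum(p[d] for p in pts) / n for d in range(3))
    cs, ct = centroid(source), centroid(target)
    t = tuple(ct[d] - cs[d] for d in range(3))
    moved = [tuple(p[d] + t[d] for d in range(3)) for p in source]
    return moved, t
```

Centroid alignment gives the coarse initialization that iterative methods such as ICP then refine into a full rigid transform.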

23 pages, 5045 KB  
Article
A Wearable Multi-Modal Measurement System with Self-Developed IMUs and Plantar Pressure Sensors for Real-Time Gait Recognition
by Xiuyu Li, Yunong Gao, Guanzhong Chen, Meiyan Zhang, Jingxiao Liao, Zhaoyun Wang and Jinwei Sun
Micromachines 2026, 17(3), 371; https://doi.org/10.3390/mi17030371 - 19 Mar 2026
Abstract
To address the limitations of existing wearable gait recognition, such as drift in static actions and difficulty in recognizing transition states, this paper proposes a gait recognition system based on the data fusion of MEMS Inertial Measurement Units (IMUs) and flexible plantar pressure sensors. A low-power wearable device comprising four inertial and two pressure sensing nodes was developed to achieve synchronized multi-source data collection. Regarding the algorithm, a sensor-characteristic-based two-stage hierarchical framework was constructed. The first stage utilized plantar pressure features to efficiently decouple static postures from dynamic gaits. The second stage employed a lightweight Support Vector Machine combined with a Finite State Machine for static and transitional actions, while an ensemble learning model based on Soft Voting was used for complex dynamic gaits. Experimental results under Leave-One-Out Cross-Validation demonstrate a comprehensive recognition accuracy of 96.17%, with 100% accuracy for standing and 97% for sit-to-stand transitions. These findings validate the significant advantages of the multi-modal fusion approach in enhancing the robustness and generalization capabilities of gait recognition. Full article
(This article belongs to the Special Issue Flexible and Wearable Electronics for Biomedical Applications)
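The stage-1 routing can be illustrated as a variance threshold on a plantar-pressure window; the threshold and branch labels here are invented, with the real branches being an SVM plus finite state machine (static/transitional) and a soft-voting ensemble (dynamic):

```python
def pressure_variance(window):
    """Variance of a plantar-pressure window; low variance suggests a
    static posture, high variance suggests dynamic gait."""
    mean = sum(window) / len(window)
    return sum((x - mean) ** 2 for x in window) / len(window)

def route_gait(window, thresh=5.0):
    """Stage-1 router of a two-stage hierarchy (threshold invented):
    dispatch to the SVM + FSM branch for static/transitional actions,
    or to the soft-voting ensemble for complex dynamic gaits."""
    return "svm_fsm" if pressure_variance(window) < thresh else "ensemble"
```

Routing on a cheap pressure statistic first keeps the expensive ensemble off the common static cases, which also helps with the drift problem the abstract mentions.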

17 pages, 2684 KB  
Article
Semantic-Enhanced Bidirectional Multimodal Fusion for 3D Object Detection Under Adverse Weather
by Tianzhe Jiao, Yuming Chen, Xiaoyue Feng, Chaopeng Guo and Jie Song
Appl. Sci. 2026, 16(6), 2943; https://doi.org/10.3390/app16062943 - 18 Mar 2026
Abstract
Multimodal fusion methods leveraging various sensors provide strong support for 3D object detection. However, under adverse weather conditions such as rain, fog, snow, and intense glare, complex environmental factors can degrade sensor data quality, leading to increased false positives and missed detections. In addition, sensor modalities (e.g., LiDAR and cameras) inherently vary in information density, and directly fusing them can cause critical details in high-density data to be diluted by low-density data, thereby increasing errors. To address these issues, we propose a Semantic-Enhanced Bidirectional Multimodal Fusion (SeBFusion) framework. By introducing a semantic enhancement mechanism and a bidirectional fusion strategy, SeBFusion mitigates the impact of noise under adverse weather and alleviates information dilution in multimodal fusion. Specifically, SeBFusion first employs a virtual point generation and camera semantic injection module to selectively map image semantic features into 3D space, producing semantically enhanced LiDAR features to compensate for the sparsity of the raw LiDAR point cloud. Then, during cross-modal interaction, we design a bidirectional cross-attention fusion module. This module estimates the confidence of each modality and adaptively reweights the bidirectional information flow, thereby reducing the risk of noise propagation across modalities and improving the robustness and accuracy of 3D object detection in complex environments. Experiments on adverse-weather versions of datasets such as KITTI-C and nuScenes-C validate the effectiveness and superiority of the proposed method. On the nuScenes-C dataset, it achieves 66.2% mAP and 66.6% mAP under fog and snow conditions, respectively. Full article
(This article belongs to the Special Issue Deep Learning-Based Computer Vision Technology and Its Applications)

23 pages, 1512 KB  
Review
Antitumor Mechanisms of Pulsed Electromagnetic Fields in Cancer Cells: A Review of Molecular and Cellular Evidence
by Jesús Antonio Lara-Reyes, Libia Xamanek Cortijo-Palacios, María Elena Hernández-Aguilar, Gonzalo E. Aranda-Abreu and Fausto Rojas-Durán
Radiation 2026, 6(1), 12; https://doi.org/10.3390/radiation6010012 - 18 Mar 2026
Abstract
Cancer remains a significant global health burden, often requiring conventional treatments characterized by considerable side effects and limited tumor specificity. This review addresses the critical gap in understanding the non-thermal mechanisms by which pulsed electromagnetic fields (PEMFs) exert selective anti-tumor effects. Our primary objective is to analyze the molecular and cellular events through which low-intensity PEMFs trigger stress responses and apoptosis in neoplastic cells without impacting normal cell viability. This comprehensive review synthesizes current evidence on the biological effects of PEMFs. Findings indicate that PEMFs disrupt intracellular homeostasis, induce reactive oxygen species-mediated oxidative stress, and activate endoplasmic reticulum stress, collectively driving malignant cells towards apoptosis or cell cycle arrest. Importantly, these effects are preferentially observed in cancer cells due to their inherent biophysical vulnerabilities, such as depolarized membrane potentials, and depend critically on specific PEMF parameters. In conclusion, PEMFs act as a multifaceted disruptor of cancer cell homeostasis, representing a promising non-invasive therapeutic modality. Further research is essential to optimize dosimetry and identify primary molecular sensors, such as radical pair dynamics, to enhance clinical application and explore synergistic combinations with existing therapies. Full article

11 pages, 930 KB  
Article
Quantitative Comparative Analysis of Annual Training Volume and Intensity Distribution of Male Biathlon National Team and University Athletes Using Global Positioning Systems and Wearable Devices
by Guanmin Zhang, Qiuju Hu, Yonghwan Kim and Yongchul Choi
Sensors 2026, 26(6), 1910; https://doi.org/10.3390/s26061910 - 18 Mar 2026
Abstract
Background: Wearable sensors and global positioning systems (GPS) can enable objective monitoring of training loads in outdoor endurance sports. In biathlons, comparing training characteristics across developmental stages can help identify structural gaps and support evidence-informed progression within long-term athlete development (LTAD). This study aimed to quantitatively compare the annual training characteristics of Korean male biathlon national team (NT) and university (UNV) athletes. Methods: Annual physical training data (2022–2024) from NT (n = 6) and UNV (n = 6) athletes were collected using Catapult Vector S7 GPS devices and Polar H10 heart rate monitors. Training volume, intensity distribution (zones 1–3 based on %HRmax), modality (skiing vs. running), and periodization were compared using Mann–Whitney U tests with rank-biserial correlation (r_rb). Results: NT athletes accumulated a higher annual training time and distance than UNV athletes (812 vs. 606 h; 6359 vs. 4130 km; p = 0.002, r_rb = 1.000 for both). The NT athletes spent a lower proportion of time on low-intensity training and a higher proportion on mid and high intensities than UNV athletes (p ≤ 0.015). During high-intensity training, NT athletes maintained a higher proportion of ski-specific training, whereas UNV athletes relied more on running (skiing: 78.5% vs. 46.4%; running: 21.5% vs. 53.6%; both p < 0.001, r_rb = 1.000). The UNV group also showed a more concentrated structure during competition periods than NT athletes (COMP: 28.3% vs. 14.6%; p < 0.05). The absolute annual strength training time did not differ, but UNV athletes showed a higher strength ratio (23.3% vs. 16.8%; p < 0.001, r_rb = 1.000). Conclusion: UNV athletes exhibited a lower total volume, more low-intensity-skewed distribution, and reduced ski-specific exposure during high-intensity training compared with NT athletes. 
These observed structural gaps can provide empirical benchmarks that may help coaches plan stage-appropriate progression, and they illustrate the practical value of GPS- and wearable-based monitoring for identifying training divergences across developmental stages. Full article
(This article belongs to the Section Wearables)
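The effect sizes quoted above (e.g., r_rb = 1.000 for complete separation between the NT and UNV groups) follow from the standard rank-biserial formula r_rb = 2U/(n1·n2) − 1, where U is the Mann-Whitney statistic. A minimal pure-Python sketch of this computation, using illustrative (not measured) annual training-hour values:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x (midranks assigned to ties)."""
    pooled = sorted(x + y)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        # average of 1-indexed ranks i+1 .. j for this tied block
        ranks[pooled[i]] = (i + 1 + j) / 2
        i = j
    r1 = sum(ranks[v] for v in x)          # rank sum of sample x
    n1 = len(x)
    return r1 - n1 * (n1 + 1) / 2          # U for sample x

def rank_biserial(x, y):
    """r_rb = 2U/(n1*n2) - 1; +1 means every x value exceeds every y value."""
    u = mann_whitney_u(x, y)
    return 2 * u / (len(x) * len(y)) - 1

# Illustrative values: every NT athlete above every UNV athlete -> r_rb = 1.0
nt = [812, 805, 790, 770, 760, 750]
unv = [650, 640, 620, 610, 600, 590]
print(rank_biserial(nt, unv))  # 1.0
```

With n = 6 per group, as in the study, complete separation yields r_rb = 1.0, matching the reported values.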
31 pages, 3578 KB  
Review
Measurement of Percentage Depth–Dose Distributions in Clinical Dosimetry: Conventional Techniques and Emerging Sensor Technologies
by Giada Petringa, Luigi Raffaele, Giacomo Cuttone, Mariacristina Guarrera, Alma Kurmanova, Roberto Catalano and Giuseppe Antonio Pablo Cirrone
Sensors 2026, 26(6), 1908; https://doi.org/10.3390/s26061908 - 18 Mar 2026
Viewed by 205
Abstract
Percentage depth–dose (PDD) distributions are fundamental to characterizing radiation beams in radiotherapy. This review provides an overview of both methods and sensor technologies for measuring PDD in photon, electron, proton, and carbon-ion beams. We summarize conventional dosimetry techniques, including water-phantom scanning with ionization [...] Read more.
Percentage depth–dose (PDD) distributions are fundamental to characterizing radiation beams in radiotherapy. This review provides an overview of both methods and sensor technologies for measuring PDD in photon, electron, proton, and carbon-ion beams. We summarize conventional dosimetry techniques, including water-phantom scanning with ionization chambers (cylindrical and parallel-plate) and radiochromic film, and discuss their strengths (established accuracy, calibration traceability) and limitations (volume averaging, delayed readout). We then examine emerging sensor technologies designed to improve spatial resolution, speed, and radiation hardness: multi-layer ionization chambers and Faraday cups for one-shot PDD acquisition; scintillator-based detectors (liquid, plastic, and fiber-optic) enabling real-time and high-resolution depth–dose measurements; advanced semiconductor detectors including silicon carbide diodes; as well as novel approaches such as ionoacoustic range sensing for proton beams. For each modality and detector type, we emphasize clinical relevance, measurement accuracy, spatial resolution, radiation durability, and suitability for high dose-per-pulse environments (e.g., FLASH radiotherapy). Current challenges, such as detector response in regions of steep dose gradient, saturation or recombination at ultra-high dose rates, and energy-dependent sensitivity in mixed radiation fields, are analyzed in detail. We also highlight the limitations of each technique and discuss ongoing improvements and prospects for clinical implementation. In summary, no single detector technology fully satisfies all requirements for fast, high-accuracy, high-resolution, radiation-hard PDD measurement, but the integration of emerging sensor innovations into clinical dosimetry promises to enhance the precision and efficiency of radiotherapy quality assurance. Full article
(This article belongs to the Special Issue Advanced Sensors for Human Health Management)
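A PDD curve is, by definition, the dose at each depth normalized to the maximum dose along the beam axis: PDD(z) = 100 · D(z)/D_max. A minimal sketch of this normalization, with purely illustrative dose readings rather than measured detector data:

```python
def percentage_depth_dose(doses):
    """Normalize a depth-dose curve to its maximum: PDD(z) = 100 * D(z) / D_max."""
    d_max = max(doses)
    return [100.0 * d / d_max for d in doses]

# Illustrative readings along the beam axis (arbitrary units)
readings = [0.5, 1.0, 0.8, 0.6]
print(percentage_depth_dose(readings))  # [50.0, 100.0, 80.0, 60.0]
```

In practice each reading would come from one of the detectors discussed above (ionization chamber, scintillator, diode) after the relevant correction factors are applied; the normalization step itself is detector-independent.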