Search Results (215)

Search Parameters:
Keywords = dynamic-static interferences

27 pages, 20749 KB  
Article
A Multi-Factor Constrained Autonomous Decision-Making Method for Ship Maneuvering in Complex Shallow Water Areas
by Ke Zhang, Jie Wen, Xiongfei Geng, Chunxu Li, Xingya Zhao, Kexin Xu and Yucheng Zhou
J. Mar. Sci. Eng. 2026, 14(7), 603; https://doi.org/10.3390/jmse14070603 (registering DOI) - 25 Mar 2026
Abstract
The navigation of ships in complex shallow water areas is constrained by various factors such as water depth, channel boundaries, and environmental interference. Therefore, it is crucial to improve the adaptability and effectiveness of collision avoidance decisions for ships in complex shallow water scenarios. To address these issues, this paper proposes a multi-factor constrained autonomous decision-making method for complex shallow water vessel maneuvering. Firstly, a digital transportation environment was constructed by combining dynamic and static information, such as water depth, tides, channel boundaries, changes in maneuvering characteristics, and navigation rules, and a navigable water area model that was suitable for shallow water was proposed. Then, considering the constraints of ship maneuverability and the navigation environment, a shallow water ship motion model affected by wind flow was developed. A complex shallow water adaptive maneuvering coupled decision-making method was constructed, considering the influence of ship navigation rules and channel constraints. This method utilizes the Kalman filtering algorithm to correct residuals and predict the maneuvering of the target vessel. Integrated improved heading control and guidance algorithms achieved automatic heading control and future position prediction. Through testing and verification in the complex waters of the Yangtze River estuary, the results show that the autonomous collision avoidance decision-making method proposed in this paper can effectively make collision avoidance decisions in complex multi-ship shallow water areas. This study can provide innovative and practical solutions for the technological development of autonomous ship collision avoidance decision-making. Full article
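The decision pipeline above leans on Kalman filtering to correct residuals and predict the target vessel's motion. A minimal constant-velocity sketch of that idea follows; the 1-D state, noise levels, and measurement sequence are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=0.01, r=0.25):
    """One predict/update cycle for state x = [position, velocity].
    Noise levels q (process) and r (measurement) are assumed values."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    H = np.array([[1.0, 0.0]])              # only position is observed
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: the innovation y is the "residual" being corrected
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track a target vessel moving at a steady 0.5 units per step.
x = np.array([0.0, 0.0])
P = np.eye(2)
for k in range(1, 21):
    z = np.array([0.5 * k])                 # noiseless measurements here
    x, P = kalman_step(x, P, z)
print(round(float(x[1]), 2))                # estimated velocity, ≈ 0.5
```

The predicted state `F @ x` is what supports "future position prediction"; the innovation term is the residual correction the abstract mentions.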

25 pages, 3612 KB  
Article
Learning Modality Complementarity for RGB-D Salient Object Detection via Dynamic Neural Network
by Yuanhao Li, Jia Song, Chenglizhao Chen and Xinyu Liu
Electronics 2026, 15(7), 1361; https://doi.org/10.3390/electronics15071361 - 25 Mar 2026
Abstract
RGB-D salient object detection (RGB-D SOD) aims to accurately localize and segment visually salient objects by jointly leveraging RGB images and depth maps. Some existing methods rely on static fusion strategies with fixed paths and weights, which treat all regions equally and fail to capture the varying importance of different regions and modalities. Although some attention-based methods alleviate the limitations of static fusion by assigning adaptive weights to different regions and modalities, the quality of RGB and depth data may degrade in real-world scenarios due to sensor noise, illumination changes, or environmental interference. These attention-based methods often overlook inter-modality quality differences and complementarity, making them prone to over-relying on a certain modality, which can lead to noise introduction, feature conflicts, and performance degradation. To address these limitations, this paper proposes a novel dynamic feature routing and fusion framework for RGB-D SOD, which adaptively adjusts the fusion strategy according to the quality of input modalities. To enable modality quality awareness, the proposed method characterizes the modality complementarity between RGB and depth features in a task-driven manner inspired by information-theoretic principles. We introduce a task-relevance scoring function which is integrated with a mutual information estimator to quantify such complementarity, and emphasizes task-relevant features while suppressing redundancy. A dynamic routing module is then designed to perform feature selection guided by the captured complementarity. In addition, we propose a novel cross-modal fusion module to adaptively fuse the features selected by the dynamic routing module, which effectively enhances complementary representations while suppressing redundant features and noise interference. 
Extensive experiments conducted on seven public RGB-D SOD benchmark datasets demonstrate that the proposed method consistently achieves competitive performance, outperforming existing methods by an average of approximately 1% across multiple evaluation metrics. Notably, in challenging scenarios with severe modality quality degradation, the proposed method outperforms existing best-performing methods by up to 1.8%, demonstrating strong robustness against cluttered backgrounds, complex object structures, and diverse object scales. Overall, the proposed dynamic fusion framework provides a novel solution to modality quality imbalance in RGB-D salient object detection. Full article
(This article belongs to the Section Artificial Intelligence)

18 pages, 6029 KB  
Article
tKeima: A Large-Stokes-Shift Platform for Metal Ion Detection
by Yun Gyo Seo, Dan-Gyeong Han and In Jung Kim
Biosensors 2026, 16(3), 178; https://doi.org/10.3390/bios16030178 - 22 Mar 2026
Abstract
Detection of metal ions under complex and heterogeneous conditions is crucial for food safety, environmental monitoring, and cellular studies. Fluorescent proteins (FPs) are attractive biosensors due to their ease of expression, strong emission without external cofactors, and fluorescence quenching upon metal binding. tKeima features a large Stokes shift, pH sensitivity, and spectral stability, reducing background interference and enabling metal detection in complex samples. Here, we examined tKeima quenching toward biologically relevant metal ions (Fe²⁺, Fe³⁺, and Cu²⁺). Metal titration fitted to the Langmuir isotherm yielded dissociation constants (Kd) of 2710.7 ± 178.6 μM (Fe²⁺), 3112.0 ± 176.7 μM (Fe³⁺), and 881.9 ± 76.2 μM (Cu²⁺), with maximum quenching capacities (Bmax) of 133.8 ± 2.4%, 128.3 ± 2.5%, and 109.2 ± 1.2%, respectively. Limits of detection were 396.0 μM (Fe²⁺), 428.6 μM (Fe³⁺), and 457.7 μM (Cu²⁺), and linear quenching responses were observed up to ~1000, 1500, and 1000 μM, respectively. Sphere-of-action combined with Stern–Volmer analysis indicated primarily dynamic quenching for Fe²⁺ and Cu²⁺, whereas Fe³⁺ showed a stronger static component. tKeima showed partial fluorescence restoration with ethylenediaminetetraacetic acid and moderate selectivity against interfering ions. These findings clarify tKeima’s metal-quenching mechanism and support its use as a platform for metal-responsive biosensors. Full article
(This article belongs to the Special Issue Fluorescent Sensors for Biological and Chemical Detection)
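Dissociation constants and maximum quenching capacities like the Kd/Bmax values reported above can be recovered by fitting titration data to the Langmuir isotherm. A minimal sketch using the double-reciprocal linearization, 1/F = (Kd/Bmax)(1/c) + 1/Bmax; the parameter values below are invented for illustration, not the paper's measurements:

```python
import numpy as np

def langmuir(conc, bmax, kd):
    """Fractional quenching (%) at metal concentration conc (uM)."""
    return bmax * conc / (kd + conc)

# Assumed "Cu2+-like" parameters for the synthetic titration (not the paper's).
true_bmax, true_kd = 110.0, 900.0
conc = np.linspace(100.0, 3000.0, 15)      # titration points, uM
signal = langmuir(conc, true_bmax, true_kd)

# Linear least squares on the double-reciprocal form:
# slope = Kd/Bmax, intercept = 1/Bmax.
slope, intercept = np.polyfit(1.0 / conc, 1.0 / signal, 1)
bmax_fit = 1.0 / intercept
kd_fit = slope * bmax_fit
print(round(bmax_fit, 1), round(kd_fit, 1))  # recovers 110.0, 900.0
```

With real noisy data a direct nonlinear fit of the isotherm is usually preferred, since the reciprocal transform amplifies error at low concentrations.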

17 pages, 3940 KB  
Article
Unsteady Internal Flow and Cavitation Characteristics of a Hydraulic Dynamometer for Measuring High-Power Gas Turbines
by Ye Yuan, Zhenyang Liu and Qirui Chen
Machines 2026, 14(3), 342; https://doi.org/10.3390/machines14030342 - 18 Mar 2026
Abstract
A hydraulic dynamometer is the key equipment for measuring the dynamic performance of high-power gas and steam turbines, with its internal flow characteristics directly influencing measurement accuracy and service life. This paper focuses on the power absorption performance and internal flow characteristics of a hydraulic dynamometer with a perforated-disk rotor. A hydraulic test platform is established to measure the power absorption performance of megawatt-level hydraulic dynamometers. When the rotor speed reaches a certain value under the full-water condition, the power absorption of the hydraulic dynamometer reaches its limit. Numerical simulations are applied to study the internal flow characteristics and cavitation evolution features of the perforated-disk-type hydraulic dynamometer. The flow within the outermost rotor pores is the primary factor influencing unsteady flow behaviour, with dynamic–static interference playing a key role in inducing flow excitation. Moreover, cavitation mainly occurs in the flow passages of the end rotor and the outermost flow pores of the middle rotor, where the development and collapse of cavitation bubbles lead to flow instability. As the rotation speed decreases, the power absorption performance significantly decreases under cavitation conditions. These findings provide a theoretical basis for the structural optimization and engineering application of high-power hydraulic dynamometers. Full article

24 pages, 9694 KB  
Article
Traceable Suppression of Vehicle-Induced Dust in Industrial Sheds Through Dynamic–Static Feature Enhancement
by Kun Chen, Xujie Zhang, Yan Shao, Hang Xiao, Di Zheng, Zijie Jiang and Siwei Lou
Processes 2026, 14(6), 952; https://doi.org/10.3390/pr14060952 - 17 Mar 2026
Abstract
Existing intelligent monitoring methods are limited by insufficient training samples and target-feature degradation in complex environments. To address these issues, an industrial visual inspection scheme with dual verification is proposed for material sheds. The scheme integrates sample enhancement preprocessing based on a Dynamic Enhanced Generative Adversarial Network (DEGAN) with an Attention-Enhanced YOLO-SLOWFAST (AE-YOLO-SLOWFAST) model for target and behavior detection, enabling feature enhancement, real-time dust monitoring, and timely dust suppression. A dynamic enhancement module is first introduced into a GAN, creating DEGAN to generate high-quality samples and augment the training dataset. An AE-YOLO model is then developed to improve static feature extraction under low illumination and enhance small-target detection. The objective function is refined to improve recognition of hard-to-distinguish samples during training. AE-YOLO is combined with SLOWFAST to recognize vehicle behaviors. Dual verification is performed using dust and vehicle detection results together with action recognition outputs, enabling precise control of dust suppression equipment for targeted water mist spraying. The improved AE-YOLO model achieves an mAP@50 of 94.4%. The proposed method delivers a vehicle–dust association matching accuracy of up to 97.2%, which enables all-weather, intelligent, traceable dust suppression in material sheds, reduces false recognition interference, and ensures timely suppression in areas where vehicles are operating. Full article
(This article belongs to the Special Issue Fault Detection and Identification in Process Systems)

19 pages, 1360 KB  
Article
Workload-Aware Adaptive Duplex Mode Selection for Mobile Ad Hoc Networks: A Workload Zone Estimation Approach
by Zhipeng Feng, Changhao Du and Hongru Zhang
Electronics 2026, 15(6), 1143; https://doi.org/10.3390/electronics15061143 - 10 Mar 2026
Abstract
Full-duplex (FD) technology holds great promise for enhancing the spectral efficiency of Mobile Ad Hoc Networks (MANETs) and Wireless Sensor Networks (WSNs). However, the practical performance gain of FD over Half-Duplex (HD) is highly sensitive to the dynamic nature of traffic loads and residual self-interference. Existing Optimal Dynamic Selection Strategies (ODSS) often rely on static workload assumptions within a single time window, failing to capture long-term traffic fluctuations. Consequently, applying instantaneous switching strategies in highly bursty environments necessitates excessively frequent mode switching (e.g., the switching frequency can approach the total number of time windows), incurring prohibitive signaling overhead and non-negligible MAC-layer adaptation delays. To overcome these concrete bottlenecks, this paper proposes a comprehensive traffic-aware adaptive duplex mode selection framework. First, we model the multi-scale dynamic workload using Dynamic Activated Probability in Short-term (DAPS) and Long-term (DAPL), effectively characterizing both bursty traffic (via Beta distribution) and Markov-modulated stable traffic. Second, by integrating physical layer performance analysis, we define the Break-even Workload Point (BWP) to partition traffic into Oversaturated (OZ) and Unsaturated (UZ) Workload Zones (WZs). Furthermore, to handle unknown future traffic with low complexity, we propose the Pre-scheduling Duplex selection based on the Workload zone Estimation (PDWE) algorithm. PDWE leverages a Hidden Markov Model (HMM) combined with a Rollout algorithm to estimate hidden traffic states and adaptively pre-schedule duplex modes. Simulation results demonstrate that the proposed strategy achieves near-optimal throughput (approximately 91% of the ideal ODSS) while reducing the duplex switching frequency by two orders of magnitude compared to instantaneous switching strategies. 
This approach offers a robust cross-layer solution for next-generation self-organizing networks. Full article
(This article belongs to the Special Issue Technology of Mobile Ad Hoc Networks)
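The Break-even Workload Point idea (switch to full duplex only when the extra traffic served outweighs the residual self-interference penalty) can be sketched with a toy throughput model; the linear model and the efficiency factor `eta` are assumptions for illustration, not the paper's analysis:

```python
# FD serves both directions per slot but pays a residual self-interference
# penalty eta < 1; HD serves only the forward direction. All values assumed.

def throughput_hd(p_fwd: float) -> float:
    """Expected packets/slot in HD: only the forward link is served."""
    return p_fwd

def throughput_fd(p_fwd: float, p_rev: float, eta: float = 0.8) -> float:
    """Expected packets/slot in FD: both directions, SI-degraded."""
    return (p_fwd + p_rev) * eta

def break_even_reverse_load(p_fwd: float, eta: float = 0.8) -> float:
    """Reverse-direction load at which FD and HD throughput are equal:
    (p_fwd + p_rev) * eta = p_fwd  =>  p_rev = p_fwd * (1 - eta) / eta."""
    return p_fwd * (1.0 - eta) / eta

def select_mode(p_fwd: float, p_rev: float, eta: float = 0.8) -> str:
    """Pick the duplex mode for the estimated workload zone."""
    return "FD" if p_rev > break_even_reverse_load(p_fwd, eta) else "HD"

# Light reverse traffic: the SI penalty outweighs the FD gain.
print(select_mode(p_fwd=0.6, p_rev=0.1))   # HD
# Symmetric heavy traffic: the oversaturated zone favours FD.
print(select_mode(p_fwd=0.6, p_rev=0.6))   # FD
```

In the paper's framework the load estimates come from the HMM/rollout predictor rather than being known, which is what makes pre-scheduling (instead of instantaneous switching) attractive.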

28 pages, 4247 KB  
Article
BiMS-Pose: Enhancing Human Pose Estimation in Orchard Spraying Scenarios via Bidirectional Multi-Scale Collaboration
by Yuhang Ren, Zichen Yang, Hanxin Chen, Zhuochao Chen and Daojin Yao
Agriculture 2026, 16(5), 606; https://doi.org/10.3390/agriculture16050606 - 6 Mar 2026
Abstract
Most 2D human pose estimation frameworks utilize static designs for multi-scale feature fusion, where information from various scales is integrated using fixed weights. A drawback of these approaches is that they often lead to localization biases in complex scenarios. This paper addresses the issues of multi-scale feature mismatch and joint localization biases in pose estimation. From the perspective of feature processing, multi-scale weights must be adapted to the size and position of joints, while joint predictions should adhere to human anatomical constraints. Existing methods lack effective dynamic adaptation, structural constraints, and bidirectional complementarity between high-level semantics and low-level details. They often experience localization biases in occluded scenarios, and the peaks of their heatmaps demonstrate insufficient consistency with the actual positions of the joints. Through theoretical analysis, we identify the causes of performance gaps and propose directions for narrowing them. We propose Bidirectional Multi-Scale Collaborative Pose Estimation (BiMS-Pose), a framework that introduces dynamic weights to adjust feature proportions, establishes bidirectional topological constraints for joint relationships, and integrates a bidirectional attention flow. The framework filters key information from three dimensions, adjusts filtering strategies in real time, and is enhanced by heatmap optimization to improve localization accuracy. Extensive experiments conducted on COCO, MPII, and our self-built Orchard Spraying Pose Dataset (OSPD) demonstrate the effectiveness of BiMS-Pose. In general scenarios, it achieves a significant 1.2 percentage-point increase in average precision (AP) on the COCO val2017 dataset compared to ViTPose while utilizing the same backbone. 
In agricultural orchard spraying scenarios, it effectively addresses interference factors such as changes in illumination, occlusion, and varying shooting distances, achieving 75.4% average precision (AP) and 90.7% percent of correct keypoints (PCKh@0.5) on the OSPD dataset. Additionally, it maintains an average frame rate of 18.3 FPS on embedded devices, effectively meeting the requirements for real-time monitoring. This highlights the model’s potential for precise, stable, and practical human pose estimation in both general and agricultural application scenarios. Full article
(This article belongs to the Special Issue Application of Smart Technologies in Orchard Management)
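Heatmap-peak consistency, which the abstract identifies as a weakness of prior methods, is commonly handled by decoding joint coordinates with a soft-argmax over the heatmap. A toy sketch of that decoder (not BiMS-Pose's actual scheme; grid size and temperature `beta` are illustrative):

```python
import numpy as np

def soft_argmax_2d(heatmap, beta=10.0):
    """Subpixel joint location as the softmax-weighted mean of grid coords.
    beta sharpens the distribution around the heatmap peak (assumed value)."""
    h, w = heatmap.shape
    logits = beta * heatmap.ravel()
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    x = float((probs * xs.ravel()).sum())
    y = float((probs * ys.ravel()).sum())
    return x, y

heatmap = np.zeros((8, 8))
heatmap[3, 5] = 1.0                         # a sharp peak at (x=5, y=3)
x, y = soft_argmax_2d(heatmap)
print(round(x, 2), round(y, 2))
```

Unlike a hard argmax, this decoding is differentiable, so the loss can directly penalize disagreement between the heatmap peak and the annotated joint position.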

37 pages, 3787 KB  
Article
PDGV-DETR: Object Detection for Secure On-Site Weapon and Personnel Location Based on Dynamic Convolution and Cross-Scale Semantic Fusion
by Nianfeng Li, Peizeng Xin, Jia Tian, Xinlu Bai, Hongjie Ding, Zhiguo Xiao and Qian Liu
Sensors 2026, 26(5), 1542; https://doi.org/10.3390/s26051542 - 28 Feb 2026
Abstract
In public safety scenarios, the precise detection and positioning of prohibited weapons such as firearms and knives along with the involved personnel are the core pre-requisite technologies for violent risk warning and emergency response. However, in security surveillance scenarios, there are common problems such as object occlusion, difficulty in capturing small-sized weapons, and complex background interference, which lead to the shortcomings of existing general object detection models in the tasks of detecting and locating security-related objects, including poor adaptability, low detection accuracy, and insufficient robustness in complex scenarios. Therefore, this paper proposes a threat object detection framework for security scenarios (PDGV-DETR) based on adaptive dynamic convolution and cross-scale semantic fusion, specifically optimized for the detection and positioning tasks of weapons and personnel objects in static security surveillance images. This research focuses on category recognition at the object level and pixel-level spatial positioning, and does not involve the classification and identification of violent behaviors based on temporal information. There are clear technical boundaries and scene limitations between the two. This framework is optimized through three core modules: designing a dynamic hierarchical channel interaction convolution module to reduce computational complexity while enhancing the ability to detect occluded and incomplete objects; constructing an improved bidirectional hybrid feature pyramid network, combining the cross-scale fusion module to strengthen multi-scale feature expression, and adapting to the simultaneous detection requirements of small weapon objects and large personnel objects; and introducing a global semantic weaving and elastic feature alignment network to solve the problem of low discrimination between objects and complex backgrounds. 
Under the same experimental configuration, the proposed model is verified against current mainstream models on typical datasets: on a dataset of 2421 conflict-scene images depicting personnel violence, the peak average precision mAP50 of PDGV-DETR reached 85.9%. Through statistical verification, compared with the baseline model RT-DETR with an average value ± standard deviation of 0.840 ± 0.007, the average value ± standard deviation of PDGV-DETR reached 0.858 ± 0.004, demonstrating statistically significant performance improvement, with a p-value less than 0.01. This model can accurately complete the task of locating the object area of personnel, and compared with Deformable DETR, the accuracy improvement rate reached 15.1%; on the weapon-specific dataset OD-WeaponDetection, the mAP for gun and knife detection reached 93.0%, improving by 2.2% compared to RT-DETR. Compared to the performance fluctuations of other general object detection models in complex security scenarios, PDGV-DETR not only has better detection and positioning accuracy for security-related objects, but also significantly improves the generalization and stability of the model. The results show that PDGV-DETR effectively balances positioning accuracy, detection accuracy, and computational efficiency, accurately completing end-to-end detection and positioning of weapon and personnel objects in static security surveillance images and demonstrating highly competitive performance on security-related objects in security scenes. It provides core object-level pre-processing support for scenarios such as public-area monitoring, intelligent video surveillance, and early warning of violent risks, and supplies basic data for subsequent violent-behavior recognition based on temporal data. Full article

17 pages, 2129 KB  
Article
A-SNNMS: An Attentive Shared Neural Normalized Min-Sum Decoder for LDPC Codes
by Fengquan Zheng, Liqian Wang, Kunfeng Liu and Zhiguo Zhang
Electronics 2026, 15(5), 1023; https://doi.org/10.3390/electronics15051023 - 28 Feb 2026
Abstract
To address the limitations of static message aggregation and training instability in the existing Shared Neural Normalized Min-Sum (SNNMS) algorithm, this paper proposes A-SNNMS, an attentive deep LDPC decoding network with adaptive training. First, an attention mechanism is introduced into the variable node update phase to dynamically weight incoming messages based on their reliability, effectively suppressing noise interference. Second, a collaborative training scheme incorporating an exponential decay adaptive learning rate and L2 regularization is designed to mitigate convergence oscillation and overfitting in long-code training. Simulation results for IEEE 802.16e standard codes demonstrate that A-SNNMS achieves a net coding gain of approximately 0.4 dB over the baseline SNNMS at a Bit Error Rate (BER) of 10⁻³. Furthermore, it achieves comparable performance with only 50% of the iterations required by the baseline. In conclusion, the A-SNNMS decoder significantly improves both decoding efficiency and system robustness, offering a promising solution for high-reliability communications. Full article
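The normalized min-sum update that SNNMS and A-SNNMS build on is compact enough to sketch. The check-node rule below is standard NMS; the reliability-weighted variable-node update is only a loose stand-in for the learned attention of A-SNNMS, with illustrative weights:

```python
# Standard normalized min-sum (NMS) check-node update: the message out on
# each edge is the product of the other edges' signs times the minimum of
# their magnitudes, scaled by a normalization factor alpha (assumed 0.8).

def check_node_update(msgs, alpha=0.8):
    """One extrinsic LLR per incoming variable-to-check message."""
    out = []
    for i in range(len(msgs)):
        others = msgs[:i] + msgs[i + 1:]
        sign = 1.0
        for m in others:
            sign = -sign if m < 0 else sign
        out.append(alpha * sign * min(abs(m) for m in others))
    return out

def variable_node_update(channel_llr, chk_msgs, weights):
    """Illustrative attention-style update: per-message reliability weights
    scale the incoming check messages (a stand-in for A-SNNMS's attention)."""
    return channel_llr + sum(w * m for w, m in zip(weights, chk_msgs))

msgs = [2.0, -1.5, 0.5, 3.0]
print([round(m, 2) for m in check_node_update(msgs)])  # [-0.4, 0.4, -1.2, -0.4]
```

In the neural decoder the scalar `alpha` and the variable-node weights are trainable parameters shared across edges, which is what the "shared" in SNNMS refers to.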

26 pages, 3681 KB  
Article
Intelligent Acquisition of Dynamic Targets via Multi-Source Information: A Fusion Framework Integrating Deep Reinforcement Learning with Evidence Theory
by Jiyao Yu, Bin Zhu, Yi Chen, Bo Xie, Xuanling Feng, Hongfei Yan, Jian Zeng and Runhua Wang
Remote Sens. 2026, 18(5), 689; https://doi.org/10.3390/rs18050689 - 26 Feb 2026
Abstract
Accurate acquisition of low-observable targets with a minimal radar cross-section (RCS) poses a significant challenge for multi-source remote sensing systems, such as integrated radar–electro-optical (REO) platforms, particularly in complex electromagnetic environments characterized by strong noise interference and a high false-alarm rate. Conventional methods, which often treat data association and fusion from heterogeneous sensors as separate, offline processes, struggle with the dynamic uncertainties and real-time decision requirements of such scenarios. To address these limitations, this paper proposes a novel Evidence–Reinforcement Learning-based Decision and Control (ERL-DC) framework. It operates through a closed-loop architecture consisting of three core modules: A static assessment model for initial target prioritization, a Dempster–Shafer (D–S) evidence-based multi-source data decision generator for dynamic information fusion and uncertainty-aware target selection, and a Deep Reinforcement Learning (DRL) controller for noise-robust sensor steering. A high-fidelity simulation environment was developed to model the multi-source data stream, encompassing radar detection with clutter and false targets, as well as the physical constraints of the electro-optical (EO) servo system. Based on the averaged results from multiple Monte Carlo simulations, the proposed ERL-DC framework reduced the Average Decision Time (ADT) from 7.51 s to 4.53 s, corresponding to an absolute reduction of 2.98 s when compared to the conventional method integrating threshold logic with Model Predictive Control (MPC). Furthermore, the Net Discrimination Accuracy (NDA), derived from the statistical outcomes across all the simulation runs, exhibited an absolute increase of 37.8 percentage points, rising from 57.8% to 95.6%. These results indicate that ERL-DC achieves a more favorable trade-off in terms of scheduling efficiency, decision robustness, and resource utilization. 
The primary contribution is an intelligent, closed-loop architecture that tightly couples high-level evidential reasoning for multi-source data fusion with low-level adaptive control. Within the simulated environment characterized by clutter, false targets, and angular measurement noise, ERL-DC demonstrates improved target discrimination accuracy and decision efficiency compared to conventional methods. Future work will focus on online parameter adaptation and validation on physical platforms. Full article
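The Dempster–Shafer fusion step at the heart of the decision generator can be sketched over a two-element frame of discernment; the {target, clutter} frame and the mass values below are illustrative, not the paper's:

```python
# Dempster's rule of combination: products of agreeing masses accumulate on
# the intersection of their focal sets; conflicting mass is renormalized away.

def dempster_combine(m1, m2):
    """Combine two mass functions whose focal sets are frozensets."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    norm = 1.0 - conflict
    return {s: v / norm for s, v in combined.items()}

T, C = frozenset({"target"}), frozenset({"clutter"})
TC = T | C  # mass on the whole frame expresses ignorance
radar = {T: 0.6, C: 0.1, TC: 0.3}   # radar return: probably a real target
eo    = {T: 0.5, C: 0.2, TC: 0.3}   # EO sensor: weaker agreement
fused = dempster_combine(radar, eo)
print(round(fused[T], 3))           # belief in "target" rises to ≈ 0.759
```

Because agreeing evidence reinforces while ignorance mass stays uncommitted, the fused belief in "target" exceeds either sensor's individual mass, which is the uncertainty-aware behavior the framework exploits for target selection.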

25 pages, 896 KB  
Article
Sequential Deep Learning with Feature Compression and Optimal State Estimation for Indoor Visible Light Positioning
by Negasa Berhanu Fite, Getachew Mamo Wegari and Heidi Steendam
Photonics 2026, 13(2), 211; https://doi.org/10.3390/photonics13020211 - 23 Feb 2026
Abstract
Visible Light Positioning (VLP) is widely regarded as a promising technology for high-precision indoor localization due to its immunity to radio-frequency interference and compatibility with existing Light-Emitting Diode (LED) lighting infrastructure. Despite recent progress, current VLP systems remain fundamentally limited by nonlinear received signal strength (RSS) characteristics, unknown transmitter orientations, and dynamic indoor disturbances. Existing solutions typically address these challenges in isolation, resulting in limited robustness and scalability. This paper proposes SCENE-VLP (Sequential Deep Learning with Feature Compression and Optimal State Estimation), a structured positioning framework that integrates feature compression, temporal sequence modeling, and probabilistic state refinement within a unified estimation pipeline. Specifically, SCENE-VLP combines Principal Component Analysis (PCA) and Denoising Autoencoders (DAE) for linear and nonlinear observation conditioning, Gated Recurrent Units (GRU) for modeling temporal dependencies in RSS sequences, and Kalman-based filtering (KF/EKF) for recursive state-space refinement. The framework is formulated as a hierarchical approximation of the nonlinear observation model, linking data-driven measurement learning with Bayesian state estimation. A systematic ablation study across multiple scenarios, including same-dataset evaluation and cross-dataset generalization, demonstrates that each component provides complementary benefits. Feature compression reduces redundancy while preserving dominant signal structure; GRU significantly improves robustness over static regression; and recursive filtering consistently reduces positioning error compared to unfiltered predictions. While both KF and EKF improve performance, EKF provides incremental refinement under mild nonlinearities. 
Extensive simulations conducted on an indoor dataset collected from a realistic deployment with eight ceiling-mounted LEDs and a single photodetector (PD) show that SCENE-VLP achieves sub-decimeter localization accuracy, with P50 and P95 errors of 1.84 cm and 6.52 cm, respectively. Cross-scenario evaluation further confirms stable generalization and statistically consistent improvements. These results demonstrate that the structured integration of observation conditioning, temporal modeling, and Bayesian refinement yields measurable gains beyond partial pipeline configurations, establishing SCENE-VLP as a robust and scalable solution for next-generation indoor visible light positioning systems. Full article
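The feature-compression stage (PCA ahead of the sequence model) can be sketched with a plain SVD; the 8-LED, 3-component setup and the synthetic RSS data below are assumptions for illustration, not the paper's dataset:

```python
import numpy as np

# Synthetic RSS snapshots: 200 observations from 8 LEDs that lie mostly in
# a 3-D subspace plus small noise (dimensions assumed, not the paper's).
rng = np.random.default_rng(1)
basis = rng.normal(size=(3, 8))
rss = rng.normal(size=(200, 3)) @ basis + 0.01 * rng.normal(size=(200, 8))

# PCA via SVD of the mean-centered data matrix.
mean = rss.mean(axis=0)
centered = rss - mean
_, s, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:3]                       # top 3 principal axes
compressed = centered @ components.T      # 8-D observations -> 3-D features
explained = (s[:3] ** 2).sum() / (s ** 2).sum()
print(compressed.shape, round(float(explained), 3))
```

The compressed sequence is what a GRU would consume in the SCENE-VLP pipeline; nonlinear residual structure is the part the denoising autoencoder is meant to handle.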

22 pages, 4286 KB  
Article
Symmetry-Enhanced Indoor Occupant Locating and Motionless Alarm System: Fusion of BP Neural Network and DS-TWR Technology
by Li Wang, Zhe Wang, Xinhe Meng, Wentao Chen and Aijun Sun
Symmetry 2026, 18(2), 376; https://doi.org/10.3390/sym18020376 - 18 Feb 2026
Abstract
To address the critical demand for real-time dynamic tracking of personnel in complex buildings during emergency rescue, a novel system was proposed integrating Back Propagation (BP) neural networks with Double-Sided Two-Way Ranging (DS-TWR) technology to achieve precise indoor localization and motionless detection. Comprising hardware (positioning base stations, tags, POE switches, routers, and a computer) and software (developed on LabVIEW), the system leverages the symmetric signal transmission of DS-TWR and the adaptive learning capability of BP neural networks to effectively mitigate multipath interference, enhancing positioning consistency and accuracy. Thresholds on elapsed time and movement distance were set to determine whether an occupant was trapped. When tested in several common building structures, the system demonstrated good stability and high accuracy: the average RMSE of the positioning system was within 0.012–0.018 m in the static state and 0.048–0.065 m in the dynamic state. Furthermore, the system could monitor and display each person's movement trajectory in real time and automatically raise an alarm when anyone was trapped at a fire scene. Rescue measures can therefore be taken promptly according to the alarm information provided by the system, effectively ensuring the safety of personnel and improving the efficiency of fire rescue work. The proposed approach provides a symmetry-driven framework for intelligent building safety. Full article
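The "symmetric signal transmission of DS-TWR" refers to the standard double-sided exchange in which each side measures a round-trip time and a reply delay, and the four timestamps are combined so that clock offsets largely cancel. A minimal sketch of that standard estimator follows; the timing values are illustrative, not from the paper:

```python
def ds_twr_tof(t_round1, t_reply1, t_round2, t_reply2):
    """Standard DS-TWR one-way time-of-flight estimate from the two
    round-trip times and the two reply delays of the exchange."""
    return (t_round1 * t_round2 - t_reply1 * t_reply2) / (
        t_round1 + t_round2 + t_reply1 + t_reply2
    )

C = 299_792_458.0                    # speed of light, m/s
tof_true = 10e-9                     # 10 ns true flight time (~3 m)
reply1, reply2 = 300e-6, 280e-6      # asymmetric reply delays
round1 = 2 * tof_true + reply1       # round trip measured at the tag
round2 = 2 * tof_true + reply2       # round trip measured at the anchor
est_tof = ds_twr_tof(round1, reply1, round2, reply2)
distance_m = est_tof * C             # one-way range, ~3 m here
```

Because the two reply delays enter the formula symmetrically, the estimate remains accurate even when the delays differ, which is the main advantage of double-sided over single-sided two-way ranging.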

25 pages, 7838 KB  
Review
Optical Biosensors for Blood Coagulation Monitoring: Advantages, Limitations, and Translational Potential
by Zichen Wang, Gaohong Di and Jing Wang
Biosensors 2026, 16(2), 123; https://doi.org/10.3390/bios16020123 - 16 Feb 2026
Viewed by 487
Abstract
Dynamic monitoring of hemostatic equilibrium is indispensable for clinical safety in high-risk scenarios, yet current clinical methods are limited by sample volume, detection speed, and physiological relevance. These shortcomings underscore the demand for novel sensing platforms. Optical biosensors, leveraging label-free detection, rapid response, and multi-level characterization, could serve as a transformative solution for decentralized and point-of-care monitoring. This review systematically summarizes advances in optical coagulation testing, encompassing light transmission aggregometry, laser speckle rheology, optical coherence tomography/elastography, optic–acoustic coupled methods, and fluorescence biosensing. These technologies complementarily capture the structural and mechanical, and in part the molecular and cellular, dynamics of coagulation, bridging gaps in traditional assays. Despite promising preclinical and clinical correlations, translational barriers persist, including the lack of standardized metrics, insufficient interference mitigation, and the need for multi-center validation in diverse patient cohorts. Future development of optical biosensing platforms for coagulation testing should focus on modular integration, AI-aided interference correction, and microfluidic miniaturization to realize actionable, real-time coagulation assessment. Optical biosensors hold unparalleled potential to transform hemostatic monitoring from static endpoint testing to dynamic, interpretable evaluation, guiding personalized clinical decisions. Full article
(This article belongs to the Section Optical and Photonic Biosensors)

19 pages, 3066 KB  
Article
Dubins-CPSO: A Hybrid Static–Dynamic Method for Coordinated Trajectory Planning of Multiple UAVs
by Xinyu Liu, Yu Fan and Mingrui Hao
Appl. Sci. 2026, 16(4), 1880; https://doi.org/10.3390/app16041880 - 13 Feb 2026
Viewed by 250
Abstract
For the problem of multi-UAV cooperative trajectory planning, this study proposes an integrated static–dynamic trajectory optimization method based on a Dubins-CPSO algorithm. An improved Dubins static path planning method utilizing virtual “Intermediate Points” is introduced, and the reference trajectory generated by this method is employed to design the fitness function for the CPSO algorithm. Within the CPSO-based dynamic optimization framework, real-time local trajectory adjustments are performed by incorporating the UAV’s current state and multi-dimensional physical constraints. This approach combines the high reliability and low command variation rate of conventional algorithms with the flexibility and strong disturbance robustness of intelligent algorithms, achieving complementary advantages. The result is a flight trajectory planning method that is more compatible with the physical mechanisms of the aircraft while possessing a degree of autonomy and intelligence. The simulation results demonstrate that the proposed algorithm can adapt to uncertain initial conditions in the studied scenarios. Furthermore, under interference, it exhibits superior real-time regulation capability compared with traditional algorithms alone and greater robustness and practicality than standalone intelligent algorithms. This provides a more implementable trajectory planning solution for UAVs with strict physical constraints in engineering applications. Full article
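CPSO builds on the canonical particle swarm update, in which each particle's velocity is pulled toward its personal best and the swarm's global best. The paper's chaotic variant and its Dubins-derived fitness function are not reproduced here; the sketch below is a generic minimal PSO on a toy quadratic fitness, with all parameters illustrative:

```python
import random

def pso_minimize(f, dim, n_particles=12, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer using the canonical update rule."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                     # personal bests
    pbest_f = [f(x) for x in xs]
    g = pbest[min(range(n_particles), key=lambda i: pbest_f[i])][:]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])   # cognitive pull
                            + c2 * r2 * (g[d] - xs[i][d]))         # social pull
                xs[i][d] += vs[i][d]
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], fx
                if fx < f(g):
                    g = xs[i][:]
    return g

# toy fitness: squared distance to a hypothetical reference waypoint (3, -1)
best = pso_minimize(lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2, dim=2)
```

In the paper's setting the fitness would instead score deviation from the Dubins reference trajectory under the UAV's physical constraints; the update mechanics remain the same.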

29 pages, 33196 KB  
Article
Robust Autonomous Perception for Indoor Service Machines via Geometry-Aware RGB-D SLAM and Probabilistic Dynamic Modeling
by Zhiyu Wang, Weili Ding and Wenna Wang
Machines 2026, 14(2), 222; https://doi.org/10.3390/machines14020222 - 12 Feb 2026
Viewed by 297
Abstract
Reliable autonomous perception is essential for indoor service machines operating in human-centered environments, where weak textures, repetitive structures, and frequent dynamic interference often degrade localization stability. Conventional RGB-D SLAM systems typically rely on static-scene assumptions or binary semantic masking, which are insufficient for handling persistent and fine-grained environmental dynamics. This paper presents a robust autonomous perception framework based on geometry-aware RGB-D SLAM, with a particular emphasis on probabilistic dynamic modeling at the feature level. The proposed system integrates multi-granularity geometric representations, including point features, parallel-line structures, and planar regions, to enhance geometric observability in low-texture indoor environments. On this basis, a probabilistic dynamic model is introduced to explicitly characterize feature reliability under motion, where dynamic probabilities are initialized by object detection and continuously updated through temporal consistency, spatial propagation, and multi-view geometric verification. Large-scale planar structures further serve as stable anchors to support robust pose estimation. Experimental results on the TUM RGB-D dynamic benchmark demonstrate that the proposed method significantly improves localization robustness, reducing the average ATE RMSE by approximately 66% compared with representative dynamic SLAM baselines. Additional evaluations on a real-world indoor dataset further validate its effectiveness for long-term autonomous perception under dense motion and frequent occlusions. Full article
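The idea of initializing a per-feature dynamic probability from detection and then refining it with new evidence can be illustrated with a generic log-odds Bayesian update of the kind used in occupancy mapping. This is a sketch, not the paper's model, and the `p_hit`/`p_miss` likelihoods are hypothetical:

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def update_dynamic_prob(p_prev, observed_moving, p_hit=0.7, p_miss=0.4):
    """Log-odds update of a feature's probability of being dynamic,
    given one observation of geometric (in)consistency across views."""
    l = logit(p_prev) + logit(p_hit if observed_moving else p_miss)
    return 1.0 / (1.0 + math.exp(-l))    # back to probability space

p_dyn = 0.5                              # ambiguous prior from object detection
for _ in range(3):                       # feature reprojects consistently 3 times
    p_dyn = update_dynamic_prob(p_dyn, observed_moving=False)
# repeated static evidence drives the dynamic probability down
```

Accumulating evidence in log-odds keeps each update a cheap addition, and repeated consistent (or inconsistent) observations drive the probability monotonically toward the static or dynamic extreme, mirroring the temporal-consistency updating the abstract describes.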
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
