Search Results (565)

Search Parameters:
Keywords = induced deep learning

27 pages, 23758 KB  
Article
Terrain-Aware Self-Supervised Representation Learning for Tree Species Mapping in Mountainous Regions Under Limited Field Samples
by Li He, Leiguang Wang, Liang Hong, Qinling Dai, Wei Gu, Xingyue Du, Mingqi Yang, Juanjuan Liu and Yaoming Feng
Remote Sens. 2026, 18(6), 951; https://doi.org/10.3390/rs18060951 (registering DOI) - 21 Mar 2026
Abstract
Accurate tree species mapping is critical for forest inventory, biodiversity assessment, and ecosystem management. In mountainous regions, terrain-induced radiometric non-stationarity and limited field access often produce scarce, clustered, and environmentally biased samples, limiting model generalization. To address this issue, this study proposes a terrain-aware self-supervised representation learning framework for tree species classification under small-sample conditions. The framework integrates terrain information into representation learning and adopts a hybrid contrastive–generative self-supervised strategy to learn discriminative and terrain-robust features from large volumes of unlabeled multi-source remote sensing data. These learned representations are subsequently combined with limited field samples to produce regional-scale tree species maps. Experiments conducted across Yunnan Province, China, using Sentinel-1, Sentinel-2, and Landsat time-series data show that the proposed framework substantially improves class separability and classification robustness in complex mountainous environments. The framework achieves an overall accuracy of 75.8%, significantly outperforming conventional feature engineering (38.3–40.6%) and supervised deep learning models (37.3–47.8%). Species with relatively homogeneous structure and strong ecological niche dependence can be accurately mapped with limited training samples, whereas structurally complex forest communities require broader environmental sample coverage. Overall, the results highlight the potential of terrain-aware self-supervised representation learning as a scalable and data-efficient paradigm for forest mapping in mountainous and environmentally heterogeneous regions. Full article
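The hybrid contrastive–generative strategy described above relies, on its contrastive side, on pulling embeddings of matched views of the same sample together while pushing other samples apart. The sketch below is a minimal InfoNCE-style loss in NumPy; the embedding size, temperature, and pairing scheme are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """Minimal InfoNCE-style contrastive loss: row i of z1 and row i of z2
    are embeddings of two augmented views of the same sample (assumed setup)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                     # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))   # positives sit on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(16, 8))                     # 16 samples, 8-dim embeddings
loss_aligned = info_nce(z, z)                    # views agree: loss is low
loss_random = info_nce(z, rng.normal(size=(16, 8)))  # unrelated views: loss roughly log(16)
```

The loss falls as matched views agree and rises toward log(N) when the two view sets are unrelated, which is what drives the representation toward terrain-robust features.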
16 pages, 2627 KB  
Article
Deep Learning-Based Calibration of a Multi-Point Thin-Film Thermocouple Array for Temperature Field Measurement
by Zewang Zhang, Shigui Gong, Jiajie Ye, Chengfei Zhang, Jun Chen, Zhixuan Su, Heng Wang, Zhichun Liu and Zhenyin Hai
Sensors 2026, 26(6), 1956; https://doi.org/10.3390/s26061956 - 20 Mar 2026
Abstract
Multi-point array thin-film thermocouples have strong potential for high-precision, wide-range temperature monitoring in applications such as aircraft engine thermal condition assessment and industrial process control. However, conventional single-point thin-film thermocouples cannot satisfy the distributed measurement requirements of large-area temperature fields, and the accuracy of multi-point arrays is often degraded by coupling effects among sensing nodes, which hinders their engineering deployment. In this work, a multi-point array thin-film thermocouple is fabricated via precision welding, and an insulating layer is deposited on the sensor surface using electrospray atomization to establish a multi-point temperature-sensing hardware system. To compensate for coupling-induced deviations, a deep learning–based calibration method is developed: measurements from the array and reference thermocouples are synchronously collected to build the dataset, outliers are removed using the interquartile range (IQR) method, and a three-hidden-layer multilayer perceptron (MLP) is trained for each node independently using the Adam optimizer (learning rate 0.001) with an 8:2 train–test split. Performance is quantified by MAE, MSE, and R2, and the results show that the proposed approach markedly reduces measurement errors and improves the accuracy of the array thermocouples, demonstrating reliable performance and practical applicability for precise large-area temperature-field monitoring. Full article
(This article belongs to the Section Sensors Development)
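The calibration recipe in the abstract (IQR outlier removal, a three-hidden-layer MLP per node trained with Adam at learning rate 0.001 and an 8:2 split, scored by MAE/MSE/R²) can be sketched on synthetic data. The synthetic node signal, the hidden-layer widths, and the choice to apply the IQR fence to the raw-minus-reference residual are assumptions for illustration; the paper's exact preprocessing may differ.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def iqr_filter(raw, ref):
    """Drop sample pairs whose raw-minus-reference residual falls outside
    [Q1 - 1.5*IQR, Q3 + 1.5*IQR] (the interquartile-range rule)."""
    resid = raw - ref
    q1, q3 = np.percentile(resid, [25, 75])
    fence = 1.5 * (q3 - q1)
    mask = (resid >= q1 - fence) & (resid <= q3 + fence)
    return raw[mask], ref[mask]

# Synthetic stand-in for one sensing node: noisy readings against a
# reference thermocouple, with two injected gross outliers.
rng = np.random.default_rng(0)
ref = np.linspace(20.0, 500.0, 400)
raw = ref + rng.normal(0.0, 2.0, 400)
raw[[10, 50]] += 300.0

raw_f, ref_f = iqr_filter(raw, ref)

# One MLP per node: three hidden layers, Adam at lr 0.001, 8:2 split.
X_tr, X_te, y_tr, y_te = train_test_split(
    raw_f.reshape(-1, 1) / 500.0, ref_f, test_size=0.2, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(32, 32, 32), solver="adam",
                   learning_rate_init=0.001, max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)
pred = mlp.predict(X_te)
mae = mean_absolute_error(y_te, pred)
mse = mean_squared_error(y_te, pred)
r2 = r2_score(y_te, pred)
```

Filtering on the residual rather than the raw reading matters here: the raw signal spans the full temperature range, so its own IQR fence would never flag a coupling-induced spike.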
29 pages, 5347 KB  
Article
Optimized Reinforcement Learning-Driven Model for Remote Sensing Change Detection
by Yan Zhao, Zhiyun Xiao, Tengfei Bao and Yulong Zhou
J. Imaging 2026, 12(3), 139; https://doi.org/10.3390/jimaging12030139 - 19 Mar 2026
Abstract
In recent years, deep learning has driven remarkable progress in remote sensing change detection (CD); however, practical deployment is still hindered by two limitations. First, CD results are easily degraded by imaging-induced uncertainties—mixed pixels and blurred boundaries, radiometric inconsistencies (e.g., shadows and seasonal illumination changes), and slight residual misregistration—leading to pseudo-changes and fragmented boundaries. Second, prevailing methods follow a static one-pass inference paradigm and lack an explicit feedback mechanism for adaptive error correction, which weakens generalization in complex or unseen scenes. To address these issues, we propose a feedback-driven CD framework that integrates a dual-branch U-Net with deep reinforcement learning (RL) for pixel-level probabilistic iterative refinement of an initial change probability map. The backbone produces a preliminary posterior estimate of change likelihood from multi-scale bi-temporal features, while a PPO-based RL agent formulates refinement as a Markov decision process. The agent leverages a state representation that fuses multi-scale features, prediction confidence/uncertainty, and spatial consistency cues (e.g., neighborhood coherence and edge responses) to apply multi-step corrective actions. From an imaging and interpretation perspective, the RL module can be viewed as a learnable, self-adaptive imaging optimization mechanism: for high-risk regions affected by blurred boundaries, radiometric inconsistencies, and local misalignment, the agent performs feedback-driven multi-step corrections to improve boundary fidelity and spatial coherence while suppressing pseudo-changes caused by shadows and illumination variations. Experiments on four datasets (CDD, SYSU-CD, PVCD, and BRIGHT) verify consistent improvements. 
Using SiamU-Net as an example, the proposed RL refinement increases mIoU by 3.07, 2.54, 6.13, and 3.1 points on CDD, SYSU-CD, PVCD, and BRIGHT, respectively, with similarly consistent gains observed when the same RL module is integrated into other representative CD backbones. Full article
(This article belongs to the Section AI in Imaging)
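The mIoU gains quoted above are differences of mean intersection-over-union values expressed in points (×100). A minimal sketch of how such a gain is computed from binary change maps; the toy maps below are invented for illustration.

```python
import numpy as np

def miou(pred, gt, num_classes=2):
    """Mean intersection-over-union over label maps (0 = unchanged, 1 = changed)."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

gt      = np.array([[0, 0, 1, 1],
                    [0, 1, 1, 1]])
initial = np.array([[0, 1, 1, 0],
                    [0, 0, 1, 1]])   # one-pass backbone prediction
refined = np.array([[0, 0, 1, 1],
                    [0, 0, 1, 1]])   # after iterative RL correction

gain_points = 100 * (miou(refined, gt) - miou(initial, gt))
```

A "3.07-point" improvement in the abstract's terms is exactly this quantity computed over a full test set rather than a toy map.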
22 pages, 18423 KB  
Article
Quantitative Stability Assessment of Landslides Following the 2024 Zixing Rainstorm Using Time-Series InSAR
by Bing Sui, Yu Fang, Dongdong Li, Zhengjia Zhang, Leishi Chen, Dongsheng Du and Tianying Wang
Remote Sens. 2026, 18(6), 929; https://doi.org/10.3390/rs18060929 - 19 Mar 2026
Abstract
In July 2024, a major rainfall-induced landslide disaster occurred in Zixing County, Hunan Province, triggering more than 4000 landslides with a total area exceeding 21 km². The scale of this hazard underscores a critical need for long-term stability assessment of the affected slopes. While previous studies have primarily used optical remote sensing to map landslide distributions, quantitative evaluation of post-failure movement dynamics remains limited. This study developed an integrated monitoring framework that combines time-series SBAS-InSAR displacement measurements (using Sentinel-1 data from August 2024 to September 2025) with deep learning-based optical interpretation, rainfall analysis, and geological data. Our approach enables the quantitative, region-scale stability assessment of the Zixing landslide cluster one year after the initial event. Experimental results reveal sustained surface displacement with rates ranging from −30 to 30 mm/year, and localized displacements exceeding 40 mm/year. Notably, over 48% of the mapped landslides are classified as active or critically active, indicating widespread, ongoing instability. Correlation analysis further establishes precipitation as a key driver of accelerated movement. Beyond the Zixing case, this work provides a transferable methodology for assessing long-term post-disaster landslide behavior, offering direct value for regional hazard management and early-warning systems. Full article
44 pages, 10334 KB  
Article
Yixin Yangshen Granules Target HIF-1 Signaling to Modulate the Neuroimmune Microenvironment in Alzheimer’s Disease: Insights from Integrative Multi-Omics and Deep Learning
by Zhihao Wang, Linshuang Wang, Yusheng Zhang, Sixia Yang, Bo Shi, Dasheng Liu, Han Zhang, Wan Xiao, Junying Zhang, Xuejie Han and Dongfeng Wei
Pharmaceuticals 2026, 19(3), 502; https://doi.org/10.3390/ph19030502 - 18 Mar 2026
Abstract
Background/Objectives: Alzheimer’s disease (AD) involves amyloid and tau pathology with neuroimmune dysregulation, and Yixin Yangshen Granules (YXYS) shows neuroprotective promise, though mechanisms remain unclear. This study aimed to elucidate the multi-target mechanisms of YXYS in AD. Methods: The study began by analyzing a public human AD hippocampal snRNA-seq dataset to identify cell-type-specific pathological pathways and profiled YXYS constituents by UPLC-QTOF-MS. In vitro, YXYS cytoprotection against mitochondrial dysfunction and oxidative stress was tested in Aβ25–35-challenged HT22 cells; in vivo efficacy was assessed in Aβ1–42-induced mice via behavioral and histopathological analyses. Integrated transcriptomic and proteomic profiling of brain tissue, with ELISA, qRT-PCR, and Western blot validation, confirmed pathway targets. Using the intersection of transcriptomic and proteomic targets as biological input, the DTIAM deep learning framework was employed to prioritize active YXYS constituents. Finally, molecular docking and 100-ns dynamics simulations demonstrated direct binding of Ganosporelactone A to HIF-1α. Results: AD snRNA-seq analysis highlighted HIF-1 and AGE-RAGE signaling as prominent pathways in the AD hippocampus, particularly enriched in brain microvascular endothelial cells, implicating neurovascular hypoxic and inflammatory stress. In Aβ-induced mice, YXYS improved cognition, reduced Aβ pathology, suppressed neuroinflammation, and promoted neuronal survival, consistent with in vitro evidence of restored mitochondrial function. Multi-omics confirmed convergence on HIF-1 and AGE-RAGE pathways, with YXYS rebalancing the neuroimmune microenvironment by reducing pro-inflammatory M0 macrophages. Screening against these consensus signaling hubs, deep learning analysis prioritized Ganosporelactone A as the top-ranked modulator, and molecular dynamics simulations further demonstrated the stable binding of Ganosporelactone A to HIF-1α, linking YXYS to mitigation of hypoxic stress. Conclusions: Guided by multi-omics and deep learning, our findings suggest that YXYS may alleviate AD-related phenotypes through multi-target modulation of the HIF-1 and AGE-RAGE pathways, with associated improvements in neuro-immune homeostasis and reductions in oxidative stress, neuroinflammation, and hypoxia. Full article
26 pages, 4173 KB  
Article
Physics-Guided Variational Causal Intervention Network for Few-Shot Radar Jamming Recognition
by Dong Xia, Liming Lv, Youjian Zhang, Yanxi Lu, Fang Li, Lin Liu, Xiang Liu, Yajun Zeng and Zhan Ge
Sensors 2026, 26(6), 1900; https://doi.org/10.3390/s26061900 - 18 Mar 2026
Abstract
Rapid and accurate recognition of radar active jamming is a prerequisite for cognitive electronic countermeasures. However, under complex electromagnetic environments with scarce training samples, existing deep learning models are prone to capturing spurious correlations induced by environmental confounders, resulting in notable performance degradation. To address this causal confounding issue, we propose a physics-guided variational causal intervention network (PG-VCIN). First, we reconstruct a structured causal model of jamming signal generation, decoupling observations into robust physical statistical features and sensitive time–frequency image representations. Physical priors are then leveraged to perform dynamic precision-weighted modulation of visual feature extraction, enforcing physical consistency at the representation learning stage. Second, we formulate deconfounding within an active inference framework and introduce a variational information bottleneck to optimize mutual information, thereby filtering out high-complexity redundant information attributable to confounders while preserving the essential causal semantics. Finally, we numerically approximate the causal effect by imposing dual intervention constraints in the latent space, including intra-class invariance and confounder invariance. Experiments on a semi-physical simulation dataset demonstrate that the proposed method achieves substantially higher recognition accuracy than several representative few-shot baselines in extremely low-sample regimes, validating the effectiveness of integrating physical mechanisms with causal inference. Full article
(This article belongs to the Section Radar Sensors)
24 pages, 9297 KB  
Article
AI-Enabled Frequency Diverse Array Spaceborne Surveillance Radar for Space Debris and Threat Detection Under Resource Constraints
by Dayan Guo, Tianyao Huang, Zijian Lin, Jie He and Yue Qi
Remote Sens. 2026, 18(6), 908; https://doi.org/10.3390/rs18060908 - 16 Mar 2026
Abstract
Ensuring space environment security through the detection of space debris and non-cooperative threat objects has become a critical mission for next-generation spaceborne surveillance systems. Frequency diversity array (FDA) radar, with its unique range angle-dependent beampattern, offers a transformative capability to distinguish closely-spaced space threats from intense background clutter. However, the operational deployment of spaceborne FDA is inherently hindered by stringent platform resource constraints, including limited power supply, high hardware complexity, and restricted data transmission bandwidth. These physical limitations inevitably lead to incomplete signal observations, resulting in elevated sidelobes that can obscure small, high-speed space debris. To bridge the gap between hardware constraints and high-fidelity surveillance, this paper proposes an AI-enabled data recovery framework based on deep matrix factorization. Specifically designed to process the complex-valued nature of radar echoes, the proposed framework introduces two specialized architectures: a real-valued representation-based method (DMF-Rr) and a native complex-valued deep matrix factorization (CDMF) network that preserves vital phase coherence. By leveraging deep learning to “enable” sparse-sampled systems, the proposed method effectively reconstructs missing observations without requiring prior knowledge of the signal rank. Numerical results demonstrate that the AI-powered CDMF significantly suppresses the high sidelobes induced by resource-limited sampling, enabling the reliable identification and localization of weak threat objects. This study demonstrates the power of AI in overcoming the physical bottlenecks of spaceborne hardware, providing a robust solution for enhancing space situational awareness in an increasingly crowded orbital environment. Full article
(This article belongs to the Special Issue Advanced Techniques of Spaceborne Surveillance Radar)
16 pages, 310 KB  
Article
A Regularized Backbone-Level Cross-Modal Interaction Framework for Stable Temporal Reasoning in Video-Language Models
by Geon-Woo Kim and Ho-Young Jung
Mathematics 2026, 14(6), 996; https://doi.org/10.3390/math14060996 - 15 Mar 2026
Abstract
Deep learning approaches for egocentric video understanding often lack a principled theoretical treatment of stability, particularly when dealing with the sparse, noisy, and temporally ambiguous observations characteristic of first-person imaging. In this work, we frame egocentric video question answering not merely as a classification task, but as an ill-posed inverse problem aimed at reconstructing latent semantic intent from stochastically perturbed visual signals. To address the instability inherent in standard dual-encoder architectures, we present a framework with a mathematical interpretation that incorporates gated cross-modal interaction within the transformer backbone. Formally, the video-side update analyzed in this work is defined as a learnable convex combination of unimodal feature representations and cross-modal attention residuals; the full implementation applies analogous gated cross-modal updates bidirectionally. From a regularization perspective, the gating mechanism can be interpreted as an adaptive parameter that balances data fidelity against language-conditioned structural constraints during feature reconstruction. We provide the Bounded Update Property (Lemma 1) and an analytical layer-wise sensitivity bound and empirically demonstrate that the proposed framework achieves measurable improvements in both accuracy and stability on the EgoTaskQA and MSR-VTT benchmarks. On EgoTaskQA, our model improves accuracy from 27.0% to 31.7% (+4.7 pp) and reduces the accuracy drop under 50% frame drop from 3.93 pp to 0.94 pp. On MSR-VTT, our model improves accuracy by 13.0 pp over the dual-encoder baseline. Under severe perturbation (50% frame drop) on MSR-VTT, our model retains 97.7% of its clean performance, whereas the baseline exhibits a near-zero drop accompanied by majority-class behavior. These results provide empirical evidence that the proposed interaction induces stable behavior under perturbations in an ill-posed multimodal inference setting, mitigating sensitivity to sampling variability while preserving query-relevant temporal structure. Furthermore, an entropy-based analysis indicates that the gating mechanism prevents excessive diffusion of attention, promoting coherent temporal reasoning. Overall, this work offers a mathematically informed perspective on designing interaction mechanisms for stable multimodal systems, with a focus on robust reasoning under temporal ambiguity. Full article
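The video-side update — a learnable convex combination of the unimodal features and the cross-modal attention residual — can be sketched with a scalar gate. The per-token or per-channel gating and the exact residual form used in the paper are not reproduced here; the equality below is the degenerate scalar case of the bounded-update intuition behind Lemma 1.

```python
import numpy as np

def gated_update(v, residual, g):
    """Convex combination of unimodal features v and the cross-modal
    attention residual, with gate g in (0, 1)."""
    return g * v + (1.0 - g) * residual

rng = np.random.default_rng(1)
v = rng.normal(size=(8, 16))   # video-token features (8 tokens, dim 16, invented sizes)
r = rng.normal(size=(8, 16))   # cross-modal attention residual
g = 0.9                        # gate near 1 => small, stable per-layer updates

v_new = gated_update(v, r, g)

# With a scalar gate the update step is exactly (1 - g) times the distance
# between the two operands, so the gate bounds how far any layer can move.
step = np.linalg.norm(v_new - v)
bound = (1.0 - g) * np.linalg.norm(r - v)
```

The design choice is that stability comes for free from convexity: however large the cross-modal residual, the per-layer displacement is scaled down by (1 − g).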
35 pages, 3555 KB  
Article
Adaptive Load Optimization and Precision Control Scheme for Vertical Landing Rockets with Sparse Sensing Data
by Chenxiao Fan, Wei He, Yang Zhao, Hutao Cui and Guangsheng Zhu
Aerospace 2026, 13(3), 255; https://doi.org/10.3390/aerospace13030255 - 9 Mar 2026
Abstract
High-altitude wind is a critical factor affecting the recovery safety of reusable rockets, significantly altering aerodynamic loads, flight attitudes, and trajectories—especially during the aerodynamic deceleration phase (engine shutdown) of reentry, posing severe challenges to high-precision guidance and stable control. Currently, accurate advance prediction of landing-site wind fields is difficult and suffers from poor real-time performance, necessitating a real-time estimation and prediction method independent of additional measurement equipment. This study addresses this gap by proposing a deep learning-based approach for wind field estimation and prediction, using directly measurable attitude angles and apparent acceleration deviations of the rocket as inputs to train a dedicated deep neural network. Furthermore, to solve the attitude control problem of Reusable Launch Vehicles (RLVs) during recovery, a non-recursive simplified high-order sliding mode control method with online wind disturbance compensation is designed to achieve finite-time convergence. First, a dynamic model for the attitude control of RLVs during recovery is established; second, based on homogeneity theory, a non-recursive simplified homogeneous high-order sliding mode controller is developed to realize finite-time tracking control during RLV recovery with uncertainties, effectively suppressing the chattering inherent in sliding mode control; finally, simulation results verify the effectiveness and engineering feasibility of the proposed method. The combined approach significantly reduces wind-induced disturbance torque and required control torque, enhancing the adaptability and control robustness of vertically recoverable rockets to wind fields. Full article
22 pages, 5753 KB  
Article
LiDAR-Referenced Inflow Wind Condition Estimation from SCADA Data Using a Deep Learning Model
by Shukai He, Hangyu Wang, Jie Yan, Kaibo Wang, Yongqian Liu, Jian Yue, Bo Xu and Guoqing Li
Energies 2026, 19(5), 1373; https://doi.org/10.3390/en19051373 - 8 Mar 2026
Abstract
Accurate inflow wind conditions are essential for operational wind farms. However, wind conditions from the Supervisory Control and Data Acquisition (SCADA) system are significantly affected by rotor-induced disturbances and thus cannot reliably represent the true inflow. Although LiDAR can directly measure inflow wind conditions, its data availability is highly sensitive to environmental conditions, frequently leading to insufficient valid samples. Existing studies generally apply the Nacelle Transfer Function (NTF) to empirically correct SCADA wind speed, yet its accuracy remains limited. Consequently, this study proposes a deep learning model for LiDAR-referenced inflow wind condition estimation from SCADA data. First, variations in LiDAR data availability and their influencing factors are systematically analyzed. The deviations and correlations between SCADA data and LiDAR measurements are quantitatively characterized. Subsequently, a deep learning model is developed, employing a time–frequency dual-branch residual network to extract features from SCADA data, while incorporating the Gram matrix as an additional input to provide auxiliary information. Finally, the proposed method is validated using measurements from two offshore turbines with different rated capacities. The results demonstrate that the proposed approach outperforms comparative methods, enabling more accurate estimation of inflow wind speed and direction. Full article
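The Gram-matrix auxiliary input mentioned above summarizes pairwise channel correlations of a SCADA window in a fixed-size matrix the network can consume. A minimal sketch; the five channels, the window length, and the 1/T normalization are assumptions, not the paper's exact formulation.

```python
import numpy as np

def gram_matrix(window):
    """Gram matrix of a multichannel SCADA window (channels x time):
    pairwise channel inner products, normalized by window length."""
    c, t = window.shape
    return window @ window.T / t

rng = np.random.default_rng(0)
window = rng.normal(size=(5, 600))   # e.g. 5 SCADA channels, 600 time steps
g = gram_matrix(window)              # (5, 5), symmetric positive semidefinite
```

Because the Gram matrix is symmetric and positive semidefinite regardless of window content, it provides a stable auxiliary representation even when individual channels are noisy.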
26 pages, 38449 KB  
Article
Dual-Stream Difference Modeling with Deep-Guided Multiscale Fusion for Mangrove Change Detection
by Xin Wang, Shuai Tang, Qin Qin, Shunqi Yuan and Xiansheng Liang
Sensors 2026, 26(5), 1701; https://doi.org/10.3390/s26051701 - 8 Mar 2026
Abstract
Accurate mangrove change detection is important for coastal ecosystem monitoring but remains challenging due to tidal disturbances, unstable land–water boundaries, and multi-scale distribution variability. Tidal fluctuations introduce spectral variations that obscure real changes. As a result, existing deep learning methods face difficulties in distinguishing tide-induced pseudo-changes while balancing semantic consistency and boundary accuracy. To address these issues, we propose DSDGMNet, which incorporates Dual-Stream Difference Modeling and Deep-Guided Multiscale Fusion. The dual-stream difference-driven strategy is designed to reduce tidal interference and improve sensitivity to true structural changes, and the deep-guided multiscale fusion module integrates global context with fine boundary details. Experiments on the GBCNR dataset show that DSDGMNet achieves an F1-score of 71.36% compared to 68.87% by SNUNet (Siamese Densely Connected UNet) and 66.39% by ChangeFormer. On the WHU-CD dataset, DSDGMNet yields an F1-score of 91.38%, in comparison with 89.85% for DDLNet and 88.82% for ChangeFormer. These results suggest the method’s effectiveness for mangrove change detection in complex intertidal environments. Full article
(This article belongs to the Section Remote Sensors)
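The F1-scores reported above are the harmonic mean of precision and recall on the "changed" class. A one-function sketch; the pixel counts below are invented for illustration.

```python
def f1_score(tp, fp, fn):
    """F1 for the positive ('changed') class from pixel counts:
    harmonic mean of precision tp/(tp+fp) and recall tp/(tp+fn)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Invented counts: 80 changed pixels detected correctly, 20 false alarms
# (e.g. tide-induced pseudo-changes), 30 missed changes.
score = f1_score(tp=80, fp=20, fn=30)
```

Because F1 ignores true negatives, it is a natural score for change detection, where the unchanged class dominates the scene.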
19 pages, 2661 KB  
Article
Two-Stage Microseismic P-Wave Arrival Picking via STA/LTA-Guided Lightweight U-Net
by Jiancheng Jin, Gang Wang, Yuanhang Qiu, Siyuan Gong and Bo Ren
Sensors 2026, 26(5), 1693; https://doi.org/10.3390/s26051693 - 7 Mar 2026
Abstract
Accurate picking of microseismic P-wave arrival times is essential for the localization and monitoring of mining-induced seismic events. Conventional Short-Term Average/Long-Term Average (STA/LTA) detectors, while computationally efficient, are highly susceptible to noise interference. Conversely, deep learning approaches exhibit superior noise robustness but often involve substantial computational redundancy and compromised real-time performance. To address these limitations, we propose a novel two-stage picking framework that integrates STA/LTA with a lightweight U-Net, enabling rapid preliminary detection followed by fine-grained refinement. In the first stage, STA/LTA rapidly scans continuous waveforms to identify candidate windows potentially containing P-wave arrivals. In the second stage, a lightweight U-Net performs sample-level regression within each candidate window to refine arrival-time estimates with high precision. This coarse-to-fine paradigm effectively balances computational efficiency and picking accuracy. Experimental validation on 500 Hz microseismic data acquired from a coal mine in Gansu Province demonstrates that the proposed method achieves a hit rate of 63.21% within a tolerance window of ±0.01 s. This represents performance improvements of 25.42% and 40.47% over convolutional neural network (CNN) and STA/LTA methods, respectively, while reducing the mean absolute error to 0.0130 s. Furthermore, the model exhibits consistent performance on independent test sets, confirming its generalization capability and noise robustness. By combining the computational efficiency of STA/LTA with the representational power of deep learning, the proposed approach demonstrates significant potential for real-time industrial deployment. Full article
(This article belongs to the Section Environmental Sensing)
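The STA/LTA stage above compares short- and long-term energy averages and triggers when their ratio crosses a threshold. Below is one common windowed variant on a synthetic 500 Hz trace; the window lengths, threshold, and onset model are illustrative choices, and production pickers (e.g. ObsPy's classic or recursive STA/LTA) differ in detail.

```python
import numpy as np

def sta_lta(x, n_sta, n_lta):
    """Windowed STA/LTA characteristic function on the squared trace:
    STA over the trailing n_sta samples, LTA over the n_lta samples
    immediately preceding the STA window."""
    e = x.astype(float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(e)))
    ratio = np.zeros(len(x))
    for i in range(n_sta + n_lta, len(x)):
        sta = (csum[i + 1] - csum[i + 1 - n_sta]) / n_sta
        lta = (csum[i + 1 - n_sta] - csum[i + 1 - n_sta - n_lta]) / n_lta
        if lta > 0:
            ratio[i] = sta / lta
    return ratio

fs = 500                                   # Hz, matching the paper's data rate
rng = np.random.default_rng(2)
trace = rng.normal(0.0, 1.0, 2 * fs)       # 2 s of unit-variance noise
onset = fs + fs // 2                       # synthetic P arrival at 1.5 s
trace[onset:] += 8.0 * rng.normal(size=len(trace) - onset)

ratio = sta_lta(trace, n_sta=int(0.02 * fs), n_lta=int(0.2 * fs))
picked = int(np.argmax(ratio > 5.0))       # first threshold crossing
```

This coarse pick is what the paper's second stage then refines: the U-Net regresses the arrival time at sample level inside a window around the crossing.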
30 pages, 4600 KB  
Article
Fault-Resilient Flat-Top Current Control for Large-Scale Electromagnetic Forming Using Staged-DQN
by Manli Huang, Xiaokang Sun, Jiqiang Wang, Jiajie Chen and Feifan Yu
Appl. Sci. 2026, 16(5), 2478; https://doi.org/10.3390/app16052478 - 4 Mar 2026
Abstract
Quasi-Static Electromagnetic Forming (QSEF) technology utilizes stable magnetic fields generated by long-pulse flat-top currents to achieve non-contact, high-precision forming of large-scale integral aerospace components. To meet the immense energy demands of large-scale component forming, the drive system requires instantaneous power output capabilities at the Gigawatt level. Consequently, the precise regulation of ultra-high flat-top current waveforms becomes a critical challenge for ensuring forming quality. However, traditional meta-heuristic methods, such as Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO), exhibit limited adaptability and robustness when addressing strong geometric nonlinearities induced by workpiece deformation and the performance degradation of pulsed power modules. To address engineering challenges such as capacitor degradation, inductance drift, and module failures, this paper proposes a Staged Deep Reinforcement Learning (Staged-DQN) adaptive current control framework. This framework decouples the discharge scheduling into “heuristic rapid rise” and “DQN fine compensation” stages, adaptively optimizing triggering timing to suppress plateau oscillations and compensate for energy deficits caused by faults. Simulation results demonstrate that under typical high-energy operating conditions, the proposed method achieves superior tracking accuracy compared to traditional PSO in fault-free scenarios. In extreme scenarios involving 25 faulty modules, the Mean Absolute Percentage Error (MAPE) is maintained between 1.13% and 1.80%, significantly lower than the 2.65–3.52% of the baseline DQN. This study validates the effectiveness of the proposed method in enhancing waveform quality and system fault tolerance, offering a reliable intelligent control solution for large-scale electromagnetic manufacturing equipment. Full article
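The MAPE figures above measure flat-top tracking error as a percentage of the reference current. A one-function sketch; the 100 kA plateau and ripple values below are invented for illustration.

```python
import numpy as np

def mape(target, actual):
    """Mean Absolute Percentage Error of a current waveform against its
    flat-top reference, in percent."""
    target = np.asarray(target, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return 100.0 * np.mean(np.abs((actual - target) / target))

# Hypothetical flat-top plateau of 100 kA with small tracking ripple.
target = np.full(5, 100.0)                      # kA
actual = np.array([99.0, 100.5, 101.0, 98.5, 100.0])
error_pct = mape(target, actual)
```

A reported MAPE of 1.13–1.80% therefore means the plateau current stays, on average, within roughly 1–2 kA of a 100 kA reference.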

20 pages, 77395 KB  
Article
Underwater Moving Target Localization Based on High-Density Pressure Array Sensing
by Jiamin Chen, Yilin Li, Ruixin Chen, Wenjun Li, Keqiang Yue and Ruixue Li
J. Mar. Sci. Eng. 2026, 14(5), 484; https://doi.org/10.3390/jmse14050484 - 3 Mar 2026
Abstract
The artificial lateral line sensing principle provides a promising approach for underwater target perception and the navigation of underwater vehicles in complex flow environments. However, the highly nonlinear hydrodynamic mechanisms in complex flow fields make it difficult to establish accurate analytical models, which limits the development of high-precision perception and localization methods for underwater moving targets. In this study, a high-fidelity simulation model is established to characterize the pressure field variations induced by a moving source on an artificial lateral line pressure array. The influences of source velocity and sensing distance on the sensitivity and discretization characteristics of the pressure array are systematically investigated. Simulation results indicate that the sensor density of the pressure array is strongly correlated with the spatial resolution of the acquired pressure data, and a density of 50 sensors per meter is selected as the best-performing configuration, balancing sensing accuracy against sensor count. Under this configuration, the pressure distribution induced by the moving source exhibits clear and distinguishable spatiotemporal features, making it suitable for deep learning-based modeling. Furthermore, a large-scale temporal pressure dataset is constructed from high-fidelity simulations under multiple motion directions and velocity conditions, and a spatiotemporal neural network is employed to predict the position of the underwater moving source. Experimental results demonstrate that, for straight-line underwater motion scenarios, the average localization error is within 7 cm, and a classification accuracy of 71% is achieved in practical engineering experiments. These results indicate that the proposed artificial lateral line pressure array design and deep learning-based prediction framework provide a feasible and effective solution for underwater target perception and localization in complex flow environments. Full article
(This article belongs to the Section Ocean Engineering)
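The 7 cm figure above is a mean Euclidean localization error between predicted and ground-truth source positions. A minimal sketch of that evaluation, with hypothetical predictions and ground-truth positions (not from the paper):

```python
import math

def mean_localization_error(pred, true):
    """Mean Euclidean distance (in the input units) between predicted
    and ground-truth 2D source positions."""
    assert pred and len(pred) == len(true)
    return sum(math.dist(p, t) for p, t in zip(pred, true)) / len(pred)

# Hypothetical network predictions vs. ground truth, in metres
pred = [(0.10, 0.52), (0.35, 0.48)]
true = [(0.13, 0.56), (0.30, 0.48)]
print(round(mean_localization_error(pred, true), 3))  # 0.05
```

Here the mean error of 0.05 m (5 cm) would fall within the paper's reported 7 cm bound; `math.dist` requires Python 3.8+.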

22 pages, 10242 KB  
Article
Cross-Modality Whole-Heart MRI Reconstruction with Deep Motion Correction and Super-Resolution
by Jinwei Dong, Wenhao Ke, Wangbin Ding, Liqin Huang and Mingjing Yang
Sensors 2026, 26(5), 1565; https://doi.org/10.3390/s26051565 - 2 Mar 2026
Abstract
Magnetic resonance imaging (MRI) inherently suffers from motion artifacts and inter-slice misalignment, primarily due to sequential slice acquisition and the prolonged scanning time required for dynamic cardiac motion. These acquisition-induced inconsistencies often lead to anatomically implausible representations of cardiac structures, impairing subsequent clinical analyses such as 3D reconstruction and regional functional assessment. Moreover, acquiring high-resolution MRI demands extended scan durations that increase patient burden and potential health risks. To address these challenges, we propose a deep motion correction and super-resolution whole-heart reconstruction (DeepWHR) framework. It learns prior knowledge of cardiac structure from computed tomography (CT) data and transfers it to reconstruct cardiac structure from conventional misaligned, thick-slice MRI images. Specifically, DeepWHR uses CT anatomy data to train a deep motion correction model that enables the network to capture structurally coherent and anatomically consistent representations, while fine-tuning on MRI preserves modality-specific spatial characteristics, ensuring that the reconstructed results retain the intrinsic MRI data distribution. Furthermore, DeepWHR introduces an implicit neural representation module that models continuous spatial fields, enabling multi-scale super-resolution structure reconstruction. Experiments on the CARE2024 WHS dataset validate that our method not only restores the spatial coherence of MRI-derived anatomical structures but also generates high-fidelity label representations suitable for downstream cardiac applications. This study demonstrates that DeepWHR transforms sparse, misaligned 2D label stacks into anatomically coherent, high-resolution 3D models, enhancing their reliability for clinical applications. Full article
(This article belongs to the Special Issue Emerging MRI Techniques for Enhanced Disease Diagnosis and Monitoring)
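The implicit neural representation module described above models the heart volume as a continuous function of spatial coordinates, so it can be queried at arbitrary resolution. A toy illustration of that continuous-field idea, with the learned coordinate network replaced by linear interpolation over hypothetical thick-slice samples (values not from the paper):

```python
def query_field(samples, z):
    """Evaluate a discretely sampled 1D field at a continuous
    coordinate z in [0, len(samples) - 1] by linear interpolation.
    An implicit neural representation replaces this interpolant
    with a learned network f(z) fitted to the samples."""
    lo = min(int(z), len(samples) - 2)  # index of the slice below z
    frac = z - lo                        # fractional position between slices
    return (1 - frac) * samples[lo] + frac * samples[lo + 1]

# Thick-slice intensities acquired at z = 0, 1, 2; query between slices
coarse = [0.0, 2.0, 4.0]
print(query_field(coarse, 0.5))   # 1.0
print(query_field(coarse, 1.25))  # 2.5
```

The design point is that the output resolution is decoupled from the acquisition resolution: once a continuous field is available, any through-plane spacing can be sampled, which is what enables multi-scale super-resolution.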
