Search Results (586)

Search Parameters:
Keywords = squeeze and excitation

13 pages, 1780 KB  
Article
Dual-Branch CNN for Direction-of-Arrival and Number-of-Sources Estimation
by Yufeng Jiang and Lin Zou
Sensors 2026, 26(3), 809; https://doi.org/10.3390/s26030809 - 26 Jan 2026
Abstract
Although numerous conventional direction-of-arrival (DOA) estimation methods exist, they often ignore the relationship between the number of sources (NOS) and DOA, which could provide useful estimation information. Therefore, a dual-branch Convolutional Neural Network (CNN) integrated with squeeze-and-excitation (SE) blocks that can perform DOA and NOS estimation simultaneously is proposed to address this limitation. Extensive simulations demonstrate the superiority of the proposed model over several traditional algorithms, especially under low signal-to-noise ratio (SNR) conditions, with limited snapshots, and in closely spaced incident angle scenarios. Full article
(This article belongs to the Section Radar Sensors)
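Since squeeze-and-excitation blocks recur throughout these results, a minimal NumPy sketch of the canonical SE recipe (global average pooling, a reduction bottleneck, and a sigmoid channel gate) may help. The weights here are random stand-ins for learned parameters, and `se_block` is a hypothetical name, not code from the paper.

```python
import numpy as np

def se_block(x, reduction=4, rng=np.random.default_rng(0)):
    """Squeeze-and-Excitation over a (C, H, W) feature map.

    Sketch only: W1/W2 are random here but learned in a real network.
    Recipe: global-average-pool -> FC (reduce) -> ReLU -> FC (restore)
    -> sigmoid -> per-channel rescaling.
    """
    c = x.shape[0]
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    z = x.mean(axis=(1, 2))                  # squeeze: (C,)
    s = np.maximum(w1 @ z, 0.0)              # excitation, hidden layer
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))   # per-channel gate in (0, 1)
    return x * gate[:, None, None]           # recalibrate channels

x = np.ones((8, 4, 4))
y = se_block(x)
print(y.shape)  # spatial map unchanged, channels rescaled
```

The spatial dimensions pass through untouched; only the per-channel scale changes, which is why SE adds so few parameters.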

19 pages, 7177 KB  
Article
MFF-Net: A Study on Soil Moisture Content Inversion in a Summer Maize Field Based on Multi-Feature Fusion of Leaf Images
by Jianqin Ma, Jiaqi Han, Bifeng Cui, Xiuping Hao, Zhengxiong Bai, Yijian Chen, Yan Zhao and Yu Ding
Agriculture 2026, 16(3), 298; https://doi.org/10.3390/agriculture16030298 - 23 Jan 2026
Abstract
Current agricultural irrigation management practices are often extensive, and traditional soil moisture content (SMC) monitoring methods are inefficient, so there is a pressing need for innovative approaches in precision irrigation. This study proposes a Multi-Feature Fusion Network (MFF-Net) for SMC inversion. The model uses a designed Channel-Changeable Residual Block (ResBlockCC) to construct a multi-branch feature extraction and fusion architecture. Integrating the Channel Squeeze and Spatial Excitation (sSE) attention module with U-Net-like skip connections, MFF-Net inverts root-zone SMC from summer maize leaf images. Field experiments were conducted in Zhengzhou, Henan Province, China, from 2024 to 2025, under three irrigation treatments: 60–70% θfc, 70–90% θfc, and 60–90% θfc (θfc denotes field capacity). This study shows that (1) MFF-Net achieved its smallest inversion error under the 60–70% θfc treatment, suggesting the inversion was most effective when SMC variation was small and relatively low; (2) MFF-Net demonstrated superior performance to several benchmark models, achieving an R2 of 0.84; and (3) the ablation study confirmed that each feature branch and the sSE attention module contributed positively to model performance. MFF-Net thus offers a technological reference for real-time precision irrigation and shows promise for field SMC inversion in summer maize. Full article
(This article belongs to the Section Agricultural Soils)
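The sSE module used above works in the transposed direction of classic SE: it squeezes channels into a single spatial map and excites pixel locations. A hedged NumPy sketch, with a random weight vector standing in for the learned 1x1 convolution and `spatial_se` an illustrative name:

```python
import numpy as np

def spatial_se(x, rng=np.random.default_rng(1)):
    """Channel-squeeze, spatial-excitation (sSE) over a (C, H, W) map.

    Sketch only: w plays the role of a learned 1x1 convolution that
    collapses all channels into one spatial saliency map.
    """
    c = x.shape[0]
    w = rng.standard_normal(c) * 0.1
    q = np.tensordot(w, x, axes=1)       # channel squeeze -> (H, W)
    gate = 1.0 / (1.0 + np.exp(-q))      # spatial gate in (0, 1)
    return x * gate[None, :, :]          # reweight every location

x = np.ones((8, 4, 4))
y = spatial_se(x)
print(y.shape)
```

Where classic SE asks "which channels matter", sSE asks "which locations matter"; the two are complementary and often combined.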

19 pages, 5390 KB  
Article
Multilevel Modeling and Validation of Thermo-Mechanical Nonlinear Dynamics in Flexible Supports
by Xiangyu Meng, Qingyu Zhu, Qingkai Han and Junzhe Lin
Machines 2026, 14(1), 131; https://doi.org/10.3390/machines14010131 - 22 Jan 2026
Abstract
Prediction accuracy for complex flexible support systems is often limited by insufficiently characterized thermo-mechanical couplings and nonlinearities. To address this, we propose a multilevel hybrid parallel–serial model that integrates the thermo-viscous effects of a Squeeze Film Damper (SFD) via a coupled Reynolds–Walther equation, the structural flexibility of a squirrel-cage support using Finite Element analysis, and the load-dependent stiffness of a four-point contact ball bearing based on Hertzian theory. The resulting state-dependent system is solved using a force-controlled iterative numerical algorithm. For validation, a dedicated bidirectional excitation test rig was constructed to decouple and characterize the support’s dynamics via frequency-domain impedance identification. Experimental results indicate that equivalent damping is temperature-sensitive, decreasing by approximately 50% as the lubricant temperature rises from 30 °C to 100 °C. In contrast, the system exhibits pronounced stiffness hardening under increasing loads. Theoretical analysis attributes this nonlinearity primarily to the bearing’s Hertzian contact mechanics, which accounts for a stiffness increase of nearly 240%. This coupled model offers a distinct advancement over traditional linear approaches, providing a validated framework for the design and vibration control of aero-engine flexible supports. Full article

16 pages, 1206 KB  
Article
HASwinNet: A Swin Transformer-Based Denoising Framework with Hybrid Attention for mmWave MIMO Systems
by Xi Han, Houya Tu, Jiaxi Ying, Junqiao Chen and Zhiqiang Xing
Entropy 2026, 28(1), 124; https://doi.org/10.3390/e28010124 - 20 Jan 2026
Abstract
Millimeter-wave (mmWave) massive multiple-input, multiple-output (MIMO) systems are a cornerstone technology for integrated sensing and communication (ISAC) in sixth-generation (6G) mobile networks. These systems provide high-capacity backhaul while simultaneously enabling high-resolution environmental sensing. However, accurate channel estimation remains highly challenging due to intrinsic noise sensitivity and clustered sparse multipath structures. These challenges are particularly severe under limited pilot resources and low signal-to-noise ratio (SNR) conditions. To address these difficulties, this paper proposes HASwinNet, a deep learning (DL) framework designed for mmWave channel denoising. The framework integrates a hierarchical Swin Transformer encoder for structured representation learning. It further incorporates two complementary branches. The first branch performs sparse token extraction guided by angular-domain significance. The second branch focuses on angular-domain refinement by applying discrete Fourier transform (DFT), squeeze-and-excitation (SE), and inverse DFT (IDFT) operations. This generates a mask that highlights angularly coherent features. A decoder combines the outputs of both branches with a residual projection from the input to yield refined channel estimates. Additionally, we introduce an angular-domain perceptual loss during training. This enforces spectral consistency and preserves clustered multipath structures. Simulation results based on the Saleh–Valenzuela (S–V) channel model demonstrate that HASwinNet achieves significant improvements in normalized mean squared error (NMSE) and bit error rate (BER). It consistently outperforms convolutional neural network (CNN), long short-term memory (LSTM), and U-Net baselines. Furthermore, experiments with reduced pilot symbols confirm that HASwinNet effectively exploits angular sparsity. The model retains a consistent advantage over baselines even under pilot-limited conditions. These findings validate the scalability of HASwinNet for practical 6G mmWave backhaul applications. They also highlight its potential in ISAC scenarios where accurate channel recovery supports both communication and sensing. Full article
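The DFT-SE-IDFT branch exploits the fact that a clustered-sparse mmWave channel concentrates its energy in a few angular bins while white noise spreads evenly across all of them. A toy NumPy sketch with a hard top-k mask standing in for the learned SE gate (bin positions, amplitudes, and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 16

# Ground truth: a clustered-sparse channel with 2 active angular bins.
a_true = np.zeros(n, dtype=complex)
a_true[3], a_true[11] = 1.0, 0.5
h_true = np.fft.ifft(a_true, norm="ortho")   # angular -> antenna domain

# Noisy antenna-domain observation.
noise = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
h_noisy = h_true + noise

# DFT -> keep dominant angular bins (stand-in for the SE gate) -> IDFT.
a = np.fft.fft(h_noisy, norm="ortho")
mask = np.zeros(n)
mask[np.argsort(np.abs(a))[-2:]] = 1.0
h_hat = np.fft.ifft(a * mask, norm="ortho")

err_before = np.linalg.norm(h_noisy - h_true)
err_after = np.linalg.norm(h_hat - h_true)
print(err_after < err_before)  # noise outside the active bins is removed
```

The unitary (norm="ortho") DFT keeps the noise white in the angular domain, so masking out the empty bins discards most of the noise and none of the signal.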

19 pages, 5706 KB  
Article
Research on a Unified Multi-Type Defect Detection Method for Lithium Batteries Throughout Their Entire Lifecycle Based on Multimodal Fusion and Attention-Enhanced YOLOv8
by Zitao Du, Ziyang Ma, Yazhe Yang, Dongyan Zhang, Haodong Song, Xuanqi Zhang and Yijia Zhang
Sensors 2026, 26(2), 635; https://doi.org/10.3390/s26020635 - 17 Jan 2026
Abstract
To address the limitations of traditional lithium battery defect detection—low efficiency, high missed detection rates for minute/composite defects, and inadequate multimodal fusion—this study develops an improved YOLOv8 model based on multimodal fusion and attention enhancement for unified full-lifecycle multi-type defect detection. Integrating visible-light and X-ray modalities, the model incorporates a Squeeze-and-Excitation (SE) module to dynamically weight channel features, suppressing redundancy and highlighting cross-modal complementarity. A Multi-Scale Fusion Module (MFM) is constructed to amplify subtle defect expression by fusing multi-scale features, building on established feature fusion principles. Experimental results show that the model achieves an mAP@0.5 of 87.5%, a minute defect recall rate (MRR) of 84.1%, and overall industrial recognition accuracy of 97.49%. It operates at 35.9 FPS (server) and 25.7 FPS (edge) with end-to-end latency of 30.9–38.9 ms, meeting high-speed production line requirements. Exhibiting strong robustness, the lightweight model outperforms YOLOv5/7/8/9-S in core metrics. Large-scale verification confirms stable performance across the battery lifecycle, providing a reliable solution for industrial defect detection and reducing production costs. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)

19 pages, 1722 KB  
Article
Light-YOLO-Pepper: A Lightweight Model for Detecting Missing Seedlings
by Qiang Shi, Yongzhong Zhang, Xiaoxue Du, Tianhua Chen and Yafei Wang
Agriculture 2026, 16(2), 231; https://doi.org/10.3390/agriculture16020231 - 15 Jan 2026
Abstract
This study aimed to meet the demand for accurate, real-time detection of missing seedlings in large-scale seedling production and to address the low precision of traditional models and the limited adaptability of mainstream lightweight models. To this end, a Light-YOLO-Pepper missing-seedling detection model was proposed based on an improved YOLOv8n. The SE (Squeeze-and-Excitation) attention module was introduced to dynamically suppress interference from the nutrient-soil background and enhance the features of missing-seedling areas. Depthwise separable convolution (DSConv) replaced standard convolution, reducing computational redundancy while retaining core features. Customized anchor boxes were generated with K-means clustering to fit the hole sizes of 72-unit (large) and 128-unit (small, high-density) seedling trays. The results show that the overall mAP@0.5, accuracy, and recall rate of the Light-YOLO-Pepper model were 93.6 ± 0.5%, 94.6 ± 0.4%, and 93.2 ± 0.6%, which were 3.3%, 3.1%, and 3.4% higher than those of the YOLOv8n model, respectively. The parameter size of Light-YOLO-Pepper was only 1.82 M, its computational cost was 3.2 G FLOPs, and its inference speeds on GPU and CPU were 168.4 FPS and 28.9 FPS, respectively. The model outperformed mainstream models in lightweight design and real-time performance. The precision difference between the two tray sizes was only 1.2%, and the precision retention rate in high-density scenes was 98.73%. The model achieves the best balance of detection accuracy, lightweight performance, and scene adaptability, efficiently meeting the needs of embedded equipment and real-time detection in large-scale seedling production and providing technical support for replanting automation. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
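Anchor customization via K-means, as used above, amounts to clustering the ground-truth box sizes. A self-contained sketch on synthetic data (plain Euclidean Lloyd's k-means; the paper's exact distance metric and data are not given, and all sizes and names here are invented — YOLO variants often cluster on 1 - IoU instead):

```python
import numpy as np

def kmeans_anchors(wh, k=3, iters=50, rng=np.random.default_rng(3)):
    """Cluster (width, height) pairs into k anchor boxes."""
    centers = wh[rng.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        # Assign each box to its nearest center, then recompute centers.
        d = np.linalg.norm(wh[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = wh[labels == j].mean(axis=0)
    return centers[np.argsort(centers[:, 0] * centers[:, 1])]  # sort by area

# Synthetic hole sizes: small 128-cell trays vs. large 72-cell trays.
rng = np.random.default_rng(3)
small = rng.normal([20.0, 20.0], 2.0, size=(50, 2))
large = rng.normal([40.0, 40.0], 2.0, size=(50, 2))
anchors = kmeans_anchors(np.vstack([small, large]), k=2)
print(anchors.round(1))
```

Sorting by area makes the anchor-to-detection-scale assignment deterministic, which matters when anchors are matched to feature-pyramid levels.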

20 pages, 3459 KB  
Article
Green-Making Stage Recognition of Tieguanyin Tea Based on Improved MobileNet V3
by Yuyan Huang, Shengwei Xia, Wei Chen, Jian Zhao, Yu Zhou and Yongkuai Chen
Sensors 2026, 26(2), 511; https://doi.org/10.3390/s26020511 - 12 Jan 2026
Abstract
The green-making stage is crucial for forming the distinctive aroma and flavor of Tieguanyin tea. Current green-making stage recognition relies on tea makers’ sensory experience, which is labor-intensive and time-consuming. To address these issues, this paper proposes a lightweight automatic recognition model named T-GSR for the accurate and objective identification of Tieguanyin tea green-making stages. First, an extensive set of Tieguanyin tea images at different green-making stages was collected. Subsequently, preprocessing techniques, i.e., multi-color-space fusion and morphological filtering, were applied to enhance the representation of target tea features. Furthermore, three targeted improvements were implemented based on the MobileNet V3 backbone network: (1) an adaptive residual branch was introduced to strengthen feature propagation; (2) the Rectified Linear Unit (ReLU) activation function was replaced with the Gaussian Error Linear Unit (GELU) to improve gradient propagation efficiency; and (3) an Improved Coordinate Attention (ICA) mechanism was adopted to replace the original Squeeze-and-Excitation (SE) module, enabling more accurate capture of complex tea features. Experimental results demonstrate that the T-GSR model outperforms the original MobileNet V3 in both classification performance and model complexity, achieving a recognition accuracy of 93.38% and an F1-score of 93.33% with only 3.025 M parameters and 0.242 G FLOPs. The proposed model offers an effective solution for the intelligent recognition of Tieguanyin tea green-making stages, facilitating online monitoring and supporting automated tea production. Full article
(This article belongs to the Section Smart Agriculture)

23 pages, 91075 KB  
Article
Improved Lightweight Marine Oil Spill Detection Using the YOLOv8 Algorithm
by Jianting Shi, Tianyu Jiao, Daniel P. Ames, Yinan Chen and Zhonghua Xie
Appl. Sci. 2026, 16(2), 780; https://doi.org/10.3390/app16020780 - 12 Jan 2026
Abstract
Marine oil spill detection using Synthetic Aperture Radar (SAR) is crucial but challenged by dynamic marine conditions, diverse spill scales, and limitations in existing algorithms regarding model size and real-time performance. To address these challenges, we propose LSFE-YOLO, a YOLOv8s-optimized (You Only Look Once version 8) lightweight model with an original, domain-tailored synergistic integration of FasterNet, GN-LSC Head (GroupNorm Lightweight Shared Convolution Head), and C2f_MBE (C2f Mobile Bottleneck Enhanced). FasterNet serves as the backbone (25% neck width reduction), leveraging partial convolution (PConv) to minimize memory access and redundant computations—overcoming traditional lightweight backbones’ high memory overhead—laying the foundation for real-time deployment while preserving feature extraction. The proposed GN-LSC Head replaces YOLOv8’s decoupled head: its shared convolutions reduce parameter redundancy by approximately 40%, and GroupNorm (Group Normalization) ensures stable accuracy under edge computing’s small-batch constraints, outperforming BatchNorm (Batch Normalization) in resource-limited scenarios. The C2f_MBE module integrates EffectiveSE (Effective Squeeze and Excitation)-optimized MBConv (Mobile Inverted Bottleneck Convolution) into C2f: MBConv’s inverted-residual design enhances multi-scale feature capture, while lightweight EffectiveSE strengthens discriminative oil spill features without extra computation, addressing the original C2f’s scale variability insufficiency. Additionally, an SE (Squeeze and Excitation) attention mechanism embedded upstream of SPPF (Spatial Pyramid Pooling Fast) suppresses background interference (e.g., waves, biological oil films), synergizing with FasterNet and C2f_MBE to form a cascaded feature optimization pipeline that refines representations throughout the model. Experimental results show that LSFE-YOLO improves mAP (mean Average Precision) by 1.3% and F1 score by 1.7% over YOLOv8s, while achieving substantial reductions in model size (81.9%), parameter count (82.9%), and computational cost (84.2%), alongside a 20 FPS (Frames Per Second) increase in detection speed. LSFE-YOLO offers an efficient and effective solution for real-time marine oil spill detection. Full article
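EffectiveSE, mentioned above, differs from classic SE mainly in dropping the channel-reduction bottleneck: a single C-to-C projection (commonly followed by a hard sigmoid) replaces the two-layer squeeze, so no channel information is lost to the reduction. A hedged NumPy sketch with random stand-in weights:

```python
import numpy as np

def effective_se(x, rng=np.random.default_rng(4)):
    """Effective Squeeze-and-Excitation sketch over a (C, H, W) map.

    Unlike classic SE's C -> C/r -> C bottleneck, eSE uses one C -> C
    projection. Weights are random here; learned in the real module.
    """
    c = x.shape[0]
    w = rng.standard_normal((c, c)) * 0.1
    z = x.mean(axis=(1, 2))                      # squeeze
    gate = np.clip((w @ z) / 6.0 + 0.5, 0, 1)    # hard-sigmoid excitation
    return x * gate[:, None, None]

x = np.ones((8, 4, 4))
y = effective_se(x)
print(y.shape)
```

The hard sigmoid `clip(x/6 + 0.5, 0, 1)` is a cheap piecewise-linear stand-in for the sigmoid, common in mobile-oriented architectures.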

23 pages, 6446 KB  
Article
Lightweight GAFNet Model for Robust Rice Pest Detection in Complex Agricultural Environments
by Yang Zhou, Wanqiang Huang, Benjing Liu, Tianhua Chen, Jing Wang, Qiqi Zhang and Tianfu Yang
AgriEngineering 2026, 8(1), 26; https://doi.org/10.3390/agriengineering8010026 - 10 Jan 2026
Abstract
To address challenges such as small target size, high density, severe occlusion, complex background interference, and edge device computational constraints, a lightweight model, GAFNet, is proposed based on YOLO11n, optimized for rice pest detection in field environments. To improve feature perception, we propose the Global Attention Fusion and Spatial Pyramid Pooling (GAM-SPP) module, which captures global context and aggregates multi-scale features. Building on this, we introduce the C3-Efficient Feature Selection Attention (C3-EFSA) module, which refines feature representation by combining depthwise separable convolutions (DWConv) with lightweight channel attention to enhance background discrimination. The model’s detection head, Enhanced Ghost Detect (EGDetect), integrates Enhanced Ghost Convolution (EGConv), Squeeze-and-Excitation (SE), and Sigmoid-Weighted Linear Unit (SiLU) activation, which reduces redundancy. Additionally, we propose the Focal-Enhanced Complete-IoU (FECIoU) loss function, incorporating stability and hard-sample weighting for improved localization. Compared to YOLO11n, GAFNet improves Precision, Recall, and mean Average Precision (mAP) by 3.5%, 4.2%, and 1.6%, respectively, while reducing parameters and computation by 5% and 21%. GAFNet can be deployed on edge devices, providing farmers with instant pest alerts. Further, GAFNet is evaluated on the AgroPest-12 dataset, demonstrating enhanced generalization and robustness across diverse pest detection scenarios. Overall, GAFNet provides an efficient, reliable, and sustainable solution for early pest detection, precision pesticide application, and eco-friendly pest control, advancing the future of smart agriculture. Full article

15 pages, 5995 KB  
Article
A Multi-Scale Soft-Thresholding Attention Network for Diabetic Retinopathy Recognition
by Xin Ma, Linfeng Sui, Ruixuan Chen, Taiyo Maeda and Jianting Cao
Appl. Sci. 2026, 16(2), 685; https://doi.org/10.3390/app16020685 - 8 Jan 2026
Abstract
Diabetic retinopathy (DR) is a major cause of preventable vision loss, and its early detection is essential for timely clinical intervention. However, existing deep learning-based DR recognition methods still face two fundamental challenges: substantial lesion-scale variability and significant background noise in retinal fundus images. To address these issues, we propose a lightweight framework named Multi-Scale Soft-Thresholding Attention Network (MSA-Net). The model integrates three components: (1) parallel multi-scale convolutional branches to capture lesions of different spatial sizes; (2) a soft-thresholding attention module to suppress noise-dominated responses; and (3) hierarchical feature fusion to enhance cross-layer representation consistency. A squeeze-and-excitation module is further incorporated for channel recalibration. On the APTOS 2019 dataset, MSA-Net achieves 97.54% accuracy and 0.991 AUC-ROC for binary DR recognition. We further evaluate five-class DR grading on APTOS2019 with 5-fold stratified cross-validation, achieving 82.71 ± 1.25% accuracy and 0.8937 ± 0.0142 QWK, indicating stable performance for ordinal severity classification. With only 4.54 M parameters, MSA-Net remains lightweight and suitable for deployment in resource-constrained DR screening environments. Full article
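Soft-thresholding, the denoising primitive behind MSA-Net's attention module, zeroes responses whose magnitude falls below a threshold and shrinks the rest toward zero. In the network the threshold is produced by an attention branch; it is a fixed constant in this sketch, and `soft_threshold` is an illustrative name:

```python
import numpy as np

def soft_threshold(x, tau):
    """Shrinkage operator: zero out |x| <= tau, pull the rest toward 0.

    MSA-Net-style modules learn tau per feature from an attention
    branch; here it is fixed purely for illustration.
    """
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

x = np.array([-2.0, -0.3, 0.1, 0.5, 3.0])
y = soft_threshold(x, 0.4)  # small entries vanish, large ones shrink by 0.4
print(y)
```

Unlike a hard threshold, the operator is continuous, so gradients flow through the surviving responses during training.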

16 pages, 1443 KB  
Article
DCRDF-Net: A Dual-Channel Reverse-Distillation Fusion Network for 3D Industrial Anomaly Detection
by Chunshui Wang, Jianbo Chen and Heng Zhang
Sensors 2026, 26(2), 412; https://doi.org/10.3390/s26020412 - 8 Jan 2026
Abstract
Industrial surface defect detection is essential for ensuring product quality, but real-world production lines often provide only a limited number of defective samples, making supervised training difficult. Multimodal anomaly detection with aligned RGB and depth data is a promising solution, yet existing fusion schemes tend to overlook modality-specific characteristics and cross-modal inconsistencies, so that defects visible in only one modality may be suppressed or diluted. In this work, we propose DCRDF-Net, a dual-channel reverse-distillation fusion network for unsupervised RGB–depth industrial anomaly detection. The framework learns modality-specific normal manifolds from nominal RGB and depth data and detects defects as deviations from these learned manifolds. It consists of three collaborative components: a Perlin-guided pseudo-anomaly generator that injects appearance–geometry-consistent perturbations into both modalities to enrich training signals; a dual-channel reverse-distillation architecture with guided feature refinement that denoises teacher features and constrains RGB and depth students towards clean, defect-free representations; and a cross-modal squeeze–excitation gated fusion module that adaptively combines RGB and depth anomaly evidence based on their reliability and agreement. Extensive experiments on the MVTec 3D-AD dataset show that DCRDF-Net achieves 97.1% image-level I-AUROC and 98.8% pixel-level PRO, surpassing current state-of-the-art multimodal methods on this benchmark. Full article
(This article belongs to the Section Sensor Networks)

24 pages, 2236 KB  
Article
Radar HRRP Sequence Target Recognition Based on a Lightweight Spatiotemporal Fusion Network
by Xiang Li, Yitao Su, Xiaobin Zhao, Junjun Yin and Jian Yang
Sensors 2026, 26(1), 334; https://doi.org/10.3390/s26010334 - 4 Jan 2026
Abstract
High-resolution range profile (HRRP) sequence recognition in radar automatic target recognition faces several practical challenges, including severe category imbalance, degradation of robustness under complex and variable operating conditions, and strict requirements for lightweight models suitable for real-time deployment on resource-limited platforms. To address these problems, this paper proposes a lightweight spatiotemporal fusion-based (LSTF) HRRP sequence target recognition method. First, a lightweight Transformer encoder based on group linear transformations (TGLT) is designed to effectively model temporal dynamics while significantly reducing parameter size and computation, making it suitable for edge-device applications. Second, a transform-domain spatial feature extraction network is introduced, combining the fractional Fourier transform with an enhanced squeeze-and-excitation fully convolutional network (FSCN). This design fully exploits multi-domain spatial information and enhances class separability by leveraging discriminative scattering-energy distributions at specific fractional orders. Finally, an adaptive focal loss with label smoothing (AFL-LS) is constructed to dynamically adjust class weights for improved performance on long-tail classes, while label smoothing alleviates overfitting and enhances generalization. Experiments on the MSTAR and CVDomes datasets demonstrate that the proposed method consistently outperforms existing baseline approaches across three representative scenarios. Full article
(This article belongs to the Special Issue Radar Target Detection, Imaging and Recognition)
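The AFL-LS loss described above combines two standard ingredients: focal down-weighting of easy examples and label smoothing of the target distribution. A sketch of that general recipe (the paper's adaptive class weighting is omitted; the function name and constants are illustrative, not from the paper):

```python
import numpy as np

def focal_ls_loss(probs, target, gamma=2.0, eps=0.1):
    """Focal loss with label smoothing.

    probs: (K,) predicted class probabilities; target: true class index.
    gamma down-weights confident predictions; eps spreads a little
    target mass over all K classes to discourage overconfidence.
    """
    k = len(probs)
    q = np.full(k, eps / k)           # smoothed target distribution
    q[target] += 1.0 - eps
    focal = (1.0 - probs) ** gamma    # (1 - p)^gamma focal modulation
    return float(-np.sum(q * focal * np.log(probs + 1e-12)))

confident = np.array([0.9, 0.05, 0.05])
uncertain = np.array([0.4, 0.3, 0.3])
# A confidently correct prediction incurs much less loss.
print(focal_ls_loss(confident, 0), focal_ls_loss(uncertain, 0))
```

Because the focal factor shrinks as the prediction grows confident, gradient mass shifts toward hard, long-tail examples, which is the imbalance behavior the abstract targets.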

25 pages, 4974 KB  
Article
Physics-Constrained Deep Learning with Adaptive Z-R Relationship for Accurate and Interpretable Quantitative Precipitation Estimation
by Ting Shu, Huan Zhao, Kanglong Cai and Zexuan Zhu
Remote Sens. 2026, 18(1), 156; https://doi.org/10.3390/rs18010156 - 3 Jan 2026
Abstract
Quantitative precipitation estimation (QPE) from radar reflectivity is fundamental for weather nowcasting and water resource management. Conventional Z-R relationship formulas, derived from Rayleigh scattering theory, rely heavily on empirical parameter fitting, which limits the estimation accuracy and generalization across different precipitation regimes. Recent deep learning (DL)-based QPE methods can capture the complex nonlinear relationships between radar reflectivity and rainfall. However, most of them overlook fundamental physical constraints, resulting in reduced robustness and interpretability. To address these issues, this paper proposes FusionQPE, a novel Physics-Constrained DL framework that integrates an adaptive Z-R formula. Specifically, FusionQPE employs a Dense convolutional neural network (DenseNet) backbone to extract multi-scale spatial features from radar echoes, while a modified squeeze-and-excitation (SE) network adaptively learns the parameters of the Z-R relationship. The final rainfall estimate is obtained through a linear combination of outputs from both the DenseNet backbone and the adaptive Z-R branch, where the trained linear weight and Z-R parameters provide interpretable insights into the model’s physical reasoning. Moreover, a physical-based constraint derived from the Z-R branch output is incorporated into the loss function to further strengthen physical consistency. Comprehensive experiments on real radar and rain gauge observations from Guangzhou, China, demonstrate that FusionQPE consistently outperforms both traditional and state-of-the-art DL-based QPE models across multiple evaluation metrics. The ablation and interpretability analysis further confirms that the adaptive Z-R branch improves both the physical consistency and credibility of the model’s precipitation estimation. Full article
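The Z-R power law that FusionQPE adapts has the form Z = a·R^b, with reflectivity Z in linear units (dBZ = 10·log10 Z) and rain rate R in mm/h. Inverting it with the classic Marshall-Palmer constants a = 200, b = 1.6 (the paper learns these parameters adaptively; the fixed values here are only the textbook baseline) gives a quick feel for the mapping:

```python
import math

def rain_rate(z_dbz, a=200.0, b=1.6):
    """Invert the Z-R power law Z = a * R**b for rain rate R (mm/h).

    z_dbz: radar reflectivity in dBZ. Fixed Marshall-Palmer constants
    stand in for the adaptively learned parameters.
    """
    z = 10.0 ** (z_dbz / 10.0)      # dBZ -> linear reflectivity factor
    return (z / a) ** (1.0 / b)

# Stronger echoes map to much heavier rain under the power law.
print(round(rain_rate(30.0), 2), round(rain_rate(45.0), 2))
```

Because b > 1, rain rate grows sublinearly in Z but still spans an order of magnitude between moderate (30 dBZ) and intense (45 dBZ) echoes, which is why a poorly fitted (a, b) pair degrades heavy-rain estimates most.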

23 pages, 4414 KB  
Article
A Novel Graph Neural Network Method for Traffic State Estimation with Directional Wave Awareness
by Xiwen Lou, Jingu Mou, Boning Wang, Zhengfeng Huang, Hang Yang, Yibing Wang, Hongzhao Dong, Markos Papageorgiou and Pengjun Zheng
Sensors 2026, 26(1), 289; https://doi.org/10.3390/s26010289 - 2 Jan 2026
Abstract
Traffic state estimation (TSE) is crucial for intelligent transportation systems, as it provides unobserved parameters for traffic management and control. In this paper, we propose a novel physics-guided graph neural network for TSE that integrates traffic flow theory into an estimation framework. First, we constructed wave-informed anisotropic temporal graphs to capture the time-delayed correlations across the road network, which were then merged with spatial graphs into a unified spatiotemporal structure for subsequent graph convolution operations. Then, we designed a four-layer diffusion graph convolutional network. Each layer is enhanced with a squeeze-and-excitation attention mechanism to adaptively capture dynamic directional correlations. Furthermore, we introduced the fundamental diagram equation into the loss function, which guided the model toward physically consistent estimations. Experimental evaluations on a real-world highway dataset demonstrated that the proposed model achieved higher accuracy than benchmark methods, confirming its effectiveness in capturing complex traffic dynamics. Full article
(This article belongs to the Section Vehicular Sensing)

27 pages, 7513 KB  
Article
Research on Long-Term Structural Response Time-Series Prediction Method Based on the Informer-SEnet Model
by Yufeng Xu, Qingzhong Quan and Zhantao Zhang
Buildings 2026, 16(1), 189; https://doi.org/10.3390/buildings16010189 - 1 Jan 2026
Abstract
To address the stochastic, nonlinear, and strongly coupled characteristics of multivariate long-term structural response in bridge health monitoring, this study proposes the Informer-SEnet prediction model. The model integrates a Squeeze-and-Excitation (SE) channel attention mechanism into the Informer framework, enabling adaptive recalibration of channel importance to suppress redundant information and enhance key structural response features. A sliding-window strategy is used to construct the datasets, and extensive comparative experiments and ablation studies are conducted on one public bridge-monitoring dataset and two long-term monitoring datasets from real bridges. In the best case, the proposed model achieves improvements of up to 54.67% in MAE, 52.39% in RMSE, and 7.73% in R2. Ablation analysis confirms that the SE module substantially strengthens channel-wise feature representation, while the sparse attention and distillation mechanisms are essential for capturing long-range dependencies and improving computational efficiency. Their combined effect yields the optimal predictive performance. Five-fold cross-validation further evaluates the model’s generalization capability. The results show that Informer-SEnet exhibits smaller fluctuations across folds compared with baseline models, demonstrating higher stability and robustness and confirming the reliability of the proposed approach. The improvement in prediction accuracy enables more precise characterization of the structural response evolution under environmental and operational loads, thereby providing a more reliable basis for anomaly detection and early damage warning, and reducing the risk of false alarms and missed detections. The findings offer an efficient and robust deep learning solution to support bridge structural safety assessment and intelligent maintenance decision-making. Full article
(This article belongs to the Special Issue Recent Developments in Structural Health Monitoring)
