Search Results (164)

Search Parameters:
Keywords = ECA-Net

25 pages, 4958 KB  
Article
ViaNet: Interpretable and Lightweight Deep Hyperspectral Classification of Pepper Seed Viability
by Lei Zhu, Yeminzi Zhou, Yueming Zhu, Ling Zou, Bin Li, Siqiao Tan, Feng Liu and Fuchen Chen
Agriculture 2026, 16(4), 486; https://doi.org/10.3390/agriculture16040486 - 22 Feb 2026
Abstract
Seed viability fundamentally determines crop establishment, stress resilience, and yield stability in pepper (Capsicum annuum L.), yet conventional assessment remains destructive, labor-intensive, and poorly scalable, while existing spectral learning approaches largely lack physiological interpretability, limiting their reliability for industrial seed quality management. Here, we present ViaNet, a lightweight, interpretable deep hyperspectral classification framework for 1038 naturally aged pepper seeds labeled via standardized 14-day germination tests. ROI-averaged hyperspectral reflectance vectors are modeled as a binary classification task, and ViaNet integrates Successive Projections Algorithm (SPA)-based wavelength sparsification with Efficient Channel Attention (ECA)-driven spectral weighting within a compact 1D-CNN architecture, enabling physiologically grounded feature learning under strict computational constraints. The model achieves a recall of 79.75% for germinable seeds and outperforms classical machine learning methods. In addition, ViaNet consistently highlights reproducible spectral bands associated with natural-aging-related biochemical changes as reported in the literature (e.g., carotenoid-related absorption features in the near-UV region). By coupling spectral feature selection with attention-guided wavelength focusing, ViaNet establishes a closed analytical chain from spectral compression to physiologically interpretable inference. This framework balances predictive accuracy, interpretability, and deployability and provides a scalable, non-destructive, and biologically informed paradigm for hyperspectral seed viability assessment. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
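Many of the results in this listing build on the same Efficient Channel Attention (ECA) idea: squeeze each channel to a scalar by global average pooling, run a small 1-D convolution across the channel descriptors, and gate the channels with a sigmoid. The following is a minimal NumPy sketch of that mechanism, not any paper's implementation; the fixed averaging kernel stands in for ECA's learned 1-D weights.

```python
import numpy as np

def eca(feature_map, k=3):
    """Efficient Channel Attention, minimal sketch.

    feature_map: array of shape (C, H, W).
    k: odd kernel size of the 1-D convolution over channel descriptors.
    """
    c = feature_map.shape[0]
    # Global average pooling -> one descriptor per channel.
    desc = feature_map.mean(axis=(1, 2))
    # 1-D convolution across channels with "same" padding.
    weights = np.ones(k) / k  # illustrative fixed kernel; learned in ECA-Net
    padded = np.pad(desc, k // 2, mode="edge")
    conv = np.convolve(padded, weights, mode="valid")[:c]
    # Sigmoid gate, then rescale each channel of the input.
    gate = 1.0 / (1.0 + np.exp(-conv))
    return feature_map * gate[:, None, None]
```

In the original ECA-Net the kernel size is chosen adaptively from the channel count; here it is simply a parameter.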

20 pages, 13497 KB  
Article
Road Slippery State-Aware Adaptive Collision Warning Method for IVs
by Ying Cheng, Yu Zhang, Mingjiang Cai and Wei Luo
Electronics 2026, 15(4), 829; https://doi.org/10.3390/electronics15040829 - 14 Feb 2026
Abstract
To address critical limitations of conventional forward collision warning (FCW) systems, including inadequate road condition detection accuracy, significant warning area prediction errors, and poor environmental adaptability on wet or snow-covered roads, this study develops an adaptive collision warning framework based on real-time road-slipperiness state recognition. An enhanced ED-ResNet50 model is proposed, incorporating grouped convolutions within the backbone network and embedding ECA attention mechanisms after the second and third residual blocks alongside DDS-DA modules after the fourth block, significantly improving discriminative capability for pavement texture analysis under adverse conditions. This vision-based recognition system synchronizes with YOLOv8 for preceding-vehicle detection, enabling the construction of a friction-sensitive safety-distance and time-to-collision model that dynamically calibrates warning thresholds according to instantaneous vehicle velocity and road adhesion coefficients. Real-vehicle validation demonstrates an 8.76% improvement in overall warning accuracy and a 7.29% reduction in lateral and early false alarm rates compared with static-threshold systems, confirming practical efficacy for safety assurance in inclement weather. Full article
(This article belongs to the Special Issue Signal Processing and AI Applications for Vehicles, 2nd Edition)
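The friction-sensitive warning logic described in the abstract can be sketched with textbook kinematics: braking distance scales as v²/(2μg), so a lower adhesion coefficient μ stretches the required safety distance. The function and parameter names below are illustrative assumptions, not the paper's model.

```python
G = 9.81  # gravitational acceleration, m/s^2

def safe_distance(v, mu, t_react=1.0):
    """Friction-sensitive safety distance (illustrative form):
    reaction-time travel plus braking distance v^2 / (2 * mu * g)."""
    return v * t_react + v ** 2 / (2 * mu * G)

def should_warn(gap, v_rel, v_ego, mu, ttc_threshold=2.5):
    """Warn when the gap to the lead vehicle is inside the adaptive
    safety distance, or when time-to-collision (gap / closing speed)
    drops below a threshold."""
    ttc = gap / v_rel if v_rel > 0 else float("inf")
    return gap < safe_distance(v_ego, mu) or ttc < ttc_threshold
```

At 20 m/s, dropping μ from 0.8 (dry) to 0.2 (icy) roughly triples the safety distance, which is the effect a static-threshold FCW system misses.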

23 pages, 5549 KB  
Article
A Precision Weeding System for Cabbage Seedling Stage
by Pei Wang, Weiyue Chen, Qi Niu, Chengsong Li, Yuheng Yang and Hui Li
Agriculture 2026, 16(3), 384; https://doi.org/10.3390/agriculture16030384 - 5 Feb 2026
Abstract
This study developed an integrated vision–actuation system for precision weeding in indoor soil bin environments, with cabbage as a case example. The system integrates lightweight object detection, 3D coordinate mapping, path planning, and a three-axis synchronized conveyor-type actuator to enable precise weed identification and automated removal. By integrating ECA and CBAM attention mechanisms into YOLO11, we developed the YOLO11-WeedNet model, which significantly enhanced detection performance for small-scale weeds under complex lighting and cluttered backgrounds. In experimental evaluation, the best model achieved 96.25% precision, 86.49% recall, a 91.10% F1-score, and a mean Average Precision (mAP@0.5) of 91.50% calculated across two categories (crop and weed). An RGB-D fusion localization method combined with a protected-area constraint enabled accurate mapping of weed spatial positions. Furthermore, an enhanced Artificial Hummingbird Algorithm (AHA+) was proposed to optimize the execution path and shorten the operating trajectory while maintaining real-time performance. Indoor soil bin tests showed positioning errors of less than 8 mm on the X/Y axes, depth control within ±1 mm on the Z-axis, and an average weeding rate of 88.14%. The system achieved zero contact with cabbage seedlings, with a processing time of 6.88 s per weed. These results demonstrate the feasibility of the proposed system for precise and automated weeding at the cabbage seedling stage. Full article
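As a stand-in for the AHA+ optimizer mentioned above, a greedy nearest-neighbour ordering illustrates the underlying goal of shortening the actuator's travel between detected weed positions. This is only a sketch of the idea, not the published algorithm.

```python
import math

def greedy_path(points, start=(0.0, 0.0)):
    """Order 2-D weed coordinates by repeatedly visiting the nearest
    unvisited point, starting from `start`. A simple baseline for
    trajectory shortening; AHA+ searches the ordering space globally."""
    remaining = list(points)
    order, pos = [], start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(pos, p))
        remaining.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order
```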

23 pages, 7288 KB  
Article
ECA-RepNet: A Lightweight Coal–Rock Recognition Network Using Recurrence Plot Transformation
by Jianping Zhou, Zhixin Jin, Hongwei Wang, Wenyan Cao, Xipeng Gu, Qingyu Kong, Jianzhong Li and Zeping Liu
Information 2026, 17(2), 140; https://doi.org/10.3390/info17020140 - 1 Feb 2026
Abstract
Coal and rock recognition is one of the key technologies in mining production, but traditional methods suffer from single-dimensional feature representation, insufficient robustness, and unbalanced lightweight design under noise interference and complex feature conditions. To address these issues, an Efficient Channel Attention Reparameterized Network (ECA-RepNet) based on recurrence plots and the Efficient Channel Attention mechanism is proposed. The one-dimensional vibration signal is mapped into a two-dimensional image space through a recurrence plot (RP), which retains the dynamic characteristics of the time series while capturing complex patterns in the signal. Multi-scale feature extraction and lightweight design are achieved through the reparameterized large-kernel block (RepLK Block) and the depthwise separable convolution (DSConv) module. The ECA module is embedded after multiple convolutional layers; through global average pooling, one-dimensional convolution, and dynamic weight allocation, it enhances the modeling of inter-channel dependencies, improves model robustness, and reduces computational overhead. Experimental results demonstrate that the ECA-RepNet model achieves 97.33% accuracy, outperforming classic models including ResNet, CNN, and MobileNet in parameter efficiency, training time, and inference speed. Full article
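The recurrence-plot transformation the network builds on has a compact definition: R[i, j] = 1 when |x_i − x_j| falls below a threshold ε, else 0. A NumPy sketch follows; the default ε here is an illustrative choice, not the paper's setting.

```python
import numpy as np

def recurrence_plot(signal, eps=None):
    """Map a 1-D signal to a binary 2-D recurrence plot.

    R[i, j] = 1 when |x_i - x_j| < eps. The resulting image encodes
    the signal's recurrent dynamics and can be fed to a 2-D CNN.
    """
    x = np.asarray(signal, dtype=float)
    if eps is None:
        eps = 0.1 * (x.max() - x.min())  # illustrative default threshold
    dist = np.abs(x[:, None] - x[None, :])  # pairwise distance matrix
    return (dist < eps).astype(np.uint8)
```

By construction the plot is symmetric with an all-ones main diagonal.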

31 pages, 11832 KB  
Article
A Visual Navigation Path Extraction Method for Complex and Variable Agricultural Scenarios Based on AFU-Net and Key Contour Point Constraints
by Jin Lu, Zhao Wang, Jin Wang, Zhongji Cao, Jia Zhao and Minjie Zhang
Agriculture 2026, 16(3), 324; https://doi.org/10.3390/agriculture16030324 - 28 Jan 2026
Abstract
In intelligent unmanned agricultural machinery research, navigation line extraction in natural field/orchard environments is critical for autonomous operation. Existing methods still face two prominent challenges: (1) Dynamic shooting perspective shifts caused by natural environmental interference lead to geometric distortion of image features, making it difficult to acquire high-precision navigation features; (2) Symmetric distribution of crop row boundaries hinders traditional algorithms from accurately extracting effective navigation trajectories, resulting in insufficient accuracy and reliability. To address these issues, this paper proposes an environment-adaptive navigation path extraction method for multi-type agricultural scenarios, consisting of two core components: an Attention-Feature-Enhanced U-Net (AFU-Net) for semantic segmentation of navigation feature regions, and a key-point constraint-based adaptive navigation line extraction algorithm. AFU-Net improves the U-Net framework by embedding Efficient Channel Attention (ECA) modules at the ends of Encoders 1–3 to enhance feature expression, and replacing Encoder 4 with a cascaded Semantic Aware Multi-scale Enhancement (SAME) module. Trained and tested on both our KVW dataset and Yu’s field dataset, our method achieves outstanding performance: On the KVW dataset, AFU-Net attains a Mean Intersection over Union (MIoU) of 97.55% and a real-time inference speed of 32.60 FPS with only 3.95 M Params, outperforming state-of-the-art models. On Yu’s field dataset, it maintains an MIoU of 95.20% and 16.30 FPS. Additionally, compared with traditional navigation line extraction algorithms, the proposed adaptive algorithm reduces the mean absolute yaw angle error (mAYAE) to 2.06° in complex scenarios. This research exhibits strong adaptability and robustness, providing reliable technical support for the precise navigation of intelligent agricultural machinery across multiple agricultural scenarios. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

22 pages, 8145 KB  
Article
Research on Greenhouse Eggplant Fruit Detection and Tracking-Based Counting Using an Improved YOLOv5s-DeepSORT
by Jianfei Zhu, Long Bai, Caishan Liu, Chengxu Nian, Keke Zhang and Sibo Yang
Agriculture 2026, 16(2), 253; https://doi.org/10.3390/agriculture16020253 - 19 Jan 2026
Abstract
Accurate fruit counting is essential for yield evaluation and automated management in greenhouse eggplant production. This study presents a lightweight detection and counting method based on an improved YOLOv5s–DeepSORT framework. To reduce computational cost while preserving accuracy, we replace the YOLOv5s backbone with MobileNetV3, insert an Efficient Channel Attention (ECA) module to enhance discriminative fruit features, and substitute the neck C3 block with C2f to strengthen multi-scale feature fusion. Compared with the original YOLOv5s, our improved YOLOv5s increases precision by 2.3% while reducing the number of parameters and FLOPs by 37.0% and 50.9%, respectively. For counting, we integrate DeepSORT with a counting-zone strategy that increments the count once per target when the bounding-box center first enters the counting zone, thereby mitigating identity switches (ID switches) and suppressing duplicate counts. Experimental results demonstrate that the proposed method enables accurate and real-time eggplant fruit counting in complex greenhouse scenes, providing practical support for automated yield assessment on inspection robots. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
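The counting-zone strategy is straightforward to express in code: each DeepSORT track ID is counted at most once, the first time its bounding-box centre enters the zone, which suppresses duplicate counts when a fruit re-enters or its ID briefly flickers. The class and field names below are illustrative, not the paper's implementation.

```python
class ZoneCounter:
    """Count each tracked object at most once when its bounding-box
    centre first enters an axis-aligned counting zone."""

    def __init__(self, zone):
        self.zone = zone          # (x0, y0, x1, y1) in pixels
        self.counted = set()      # track IDs already counted
        self.total = 0

    def update(self, track_id, cx, cy):
        """Feed one tracked detection per frame; returns running total."""
        x0, y0, x1, y1 = self.zone
        inside = x0 <= cx <= x1 and y0 <= cy <= y1
        if inside and track_id not in self.counted:
            self.counted.add(track_id)
            self.total += 1
        return self.total
```

Note that an ID switch inside the zone would still double-count; the paper additionally mitigates ID switches at the tracker level.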

18 pages, 2743 KB  
Article
Axial Solidification Experiments to Mimic Net-Shaped Castings of Aluminum Alloys—Interfacial Heat-Transfer Coefficient and Thermal Diffusivity
by Ravi Peri, Ahmed M. Teamah, Xiaochun Zeng, Mohamed S. Hamed and Sumanth Shankar
Processes 2026, 14(1), 128; https://doi.org/10.3390/pr14010128 - 30 Dec 2025
Abstract
Net-shaped casting processes in the automotive industry have proved difficult to simulate due to the complex interactions among thermal, fluid, and solute transport regimes in the solidifying domain and at the interface. Existing casting simulation software lacks the real-time estimation of thermophysical properties (thermal diffusivity and thermal conductivity) and the interfacial heat-transfer coefficient (IHTC) needed to evaluate the thermal resistances in a casting process and solve for the temperature in the solidifying domain. To address these shortcomings, an axial directional solidification experiment setup was developed to map the thermal data as the melt solidifies unidirectionally from the chill surface under unsteady-state conditions. A Dilute Eutectic Cast Aluminum (DECA) alloy, Al-5Zn-1Mg-1.2Fe-0.07Ti, Eutectic Cast Aluminum (ECA) alloys (A365 and A383), and pure Al (P0303) were used to demonstrate the validity of the experiments in evaluating the thermal diffusivity (α) of both the solid and liquid phases of the solidifying metal using an inverse heat-transfer analysis (IHTA). The thermal diffusivity varied from 0.2 to 1.9 cm²/s, while the IHTC ranged from 9500 down to 200 W/(m²·K) for the different alloys in the solid and liquid phases. The heat flux was estimated from the chill side, with transient temperature distributions estimated from IHTA on either side of the mold–metal interface used as input to compute the IHTC. The results demonstrate the reliability of the axial solidification apparatus in accurately providing input to casting simulation software, aiding the efficient reproduction of numerical casting simulation models. Full article
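The two quantities the apparatus recovers have simple defining relations: thermal diffusivity α = k / (ρ·c_p), and the IHTC h = q / (T_casting − T_mold) from the chill-side heat flux and the interface temperatures. A sketch in SI units, with illustrative function names (the paper obtains these via inverse heat-transfer analysis, not direct measurement):

```python
def thermal_diffusivity(k, rho, cp):
    """alpha = k / (rho * cp) in m^2/s, from conductivity k (W/(m*K)),
    density rho (kg/m^3), and specific heat cp (J/(kg*K))."""
    return k / (rho * cp)

def ihtc(q, t_casting, t_mold):
    """Interfacial heat-transfer coefficient h = q / (T_casting - T_mold)
    in W/(m^2*K), from interfacial heat flux q (W/m^2) and the metal- and
    mold-side interface temperatures (K or degrees C, consistently)."""
    return q / (t_casting - t_mold)
```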

15 pages, 3365 KB  
Article
Lightweight YOLO-Based Online Inspection Architecture for Cup Rupture Detection in the Strip Steel Welding Process
by Yong Qin and Shuai Zhao
Machines 2026, 14(1), 40; https://doi.org/10.3390/machines14010040 - 29 Dec 2025
Abstract
Cup rupture failures in strip steel welds can lead to strip breakage, resulting in unplanned downtime of high-speed continuous rolling mills and scrap steel losses. Manual visual inspection suffers from a high false positive rate and cannot meet the production cycle time requirements. This paper proposes a lightweight online cup rupture visual inspection method based on an improved YOLOv10 algorithm. The backbone feature extraction network is replaced with ShuffleNetV2 to reduce the model’s parameter count and computational complexity. An ECA attention mechanism is incorporated into the backbone network to enhance the model’s focus on cup rupture micro-cracks. A Slim-Neck design is adopted, utilizing a dual optimization with GSConv and VoV-GSCSP, significantly improving the balance between real-time performance and accuracy. Based on the results, the optimized model achieves a precision of 98.8% and a recall of 99.2%, with a mean average precision (mAP) of 99.5%—an improvement of 0.2 percentage points over the baseline. The model has a computational load of 4.4 GFLOPs and a compact size of only 3.24 MB, approximately half that of the original model. On embedded devices, it achieves a real-time inference speed of 122 FPS, which is about 2.5, 11, and 1.8 times faster than SSD, Faster R-CNN, and YOLOv10n, respectively. Therefore, the lightweight model based on the improved YOLOv10 not only enhances detection accuracy but also significantly reduces computational cost and model size, enabling efficient real-time cup rupture detection in industrial production environments on embedded platforms. Full article
(This article belongs to the Section Advanced Manufacturing)

21 pages, 66751 KB  
Article
Real-Time Panoramic Surveillance Video Stitching Method for Complex Industrial Environments
by Jiuteng Zhu, Jianyu Guo, Kailun Ding, Gening Wang, Youxuan Zhou and Wenhong Li
Sensors 2026, 26(1), 186; https://doi.org/10.3390/s26010186 - 26 Dec 2025
Abstract
In complex industrial environments, surveillance videos often exhibit large parallax, low illumination, low texture, and low overlap rates, making it difficult to extract reliable image feature points and consequently leading to suboptimal video stitching performance. To address these challenges, this study proposes a real-time panoramic surveillance video stitching method designed specifically for complex industrial scenarios. In the image registration stage, Efficient Channel Attention (ECA) and Channel Attention (CA) modules are integrated with ResNet to enhance the feature extraction layers of the UDIS algorithm, improving feature extraction and matching accuracy. A loss function incorporating a similarity loss Lsim and a smoothness loss Lsmooth is designed to optimize registration errors. In the image fusion stage, gradient and motion terms are introduced to improve the energy function of the optimal seam line, enabling the seam to avoid moving objects in overlapping regions and thus achieve seamless video stitching. Experimental validation compares the proposed image registration method with SIFT + RANSAC, UDIS, UDIS++, and NIS, and the proposed image fusion method with weighted average fusion, dynamic programming, and graph cut. The results show that, in image registration experiments, the proposed method achieves RMSE, PSNR, and SSIM values of 1.965, 25.338, and 0.8366, respectively. In image fusion experiments, the seam transition is smoother and effectively avoids moving objects, significantly improving the visual quality of the stitched videos. Moreover, the real-time stitching frame rate reaches 23 fps, meeting the real-time requirements of industrial surveillance applications. Full article
(This article belongs to the Section Sensing and Imaging)

25 pages, 12611 KB  
Article
Crop Row Line Detection for Rapeseed Seedlings in Complex Environments Based on Improved BiSeNetV2 and Dynamic Sliding Window Fitting
by Wanjing Dong, Rui Wang, Fanguo Zeng, Youming Jiang, Yang Zhang, Qingyang Shi, Zhendong Liu and Wei Xu
Agriculture 2026, 16(1), 23; https://doi.org/10.3390/agriculture16010023 - 21 Dec 2025
Abstract
Crop row line detection is essential for precision agriculture, supporting autonomous navigation, field management, and growth monitoring. To address the low detection accuracy of rapeseed seedling rows under complex field conditions, this study proposes a detection framework that integrates an improved BiSeNetV2 with a dynamic sliding-window fitting strategy. The improved BiSeNetV2 incorporates the Efficient Channel Attention (ECA) mechanism to strengthen crop-specific feature representation, an Atrous Spatial Pyramid Pooling (ASPP) decoder to improve multi-scale perception, and Depthwise Separable Convolutions (DS Conv) in the Detail Branch to reduce model complexity while preserving accuracy. After semantic segmentation, a Gaussian-filtered vertical projection method is applied to identify crop-row regions by locating density peaks. A dynamic sliding-window algorithm is then used to extract row trajectories, with the window size adaptively determined by the row width and the sliding process incorporating both a lateral inertial-drift strategy and a dynamically adjusted longitudinal step size. Finally, variable-order polynomial fitting is performed within each crop-row region to achieve precise extraction of the crop-row lines. Experimental results indicate that the improved BiSeNetV2 model achieved a Mean Pixel Accuracy (mPA) of 87.73% and a Mean Intersection over Union (MIoU) of 79.40% on the rapeseed seedling dataset, marking improvements of 9.98% and 8.56%, respectively, compared to the original BiSeNetV2. The crop row detection performance for rapeseed seedlings under different environmental conditions demonstrated that the Curve Fitting Coefficient (CFC), Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE) were 0.85, 1.57, and 1.27 pixels on sunny days; 0.86, 2.05 and 1.63 pixels on cloudy days; 0.74, 2.89, and 2.22 pixels on foggy days; and 0.76, 1.38, and 1.11 pixels during the evening, respectively. 
The results reveal that the improved BiSeNetV2 can effectively identify rapeseed seedlings, and the detection algorithm can identify crop row lines in various complex environments. This research provides methodological support for crop row line detection in precision agriculture. Full article
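The Gaussian-filtered vertical projection step above can be sketched directly: sum the binary segmentation mask down each column, smooth the profile with a Gaussian kernel, and take local maxima as candidate row centres. The threshold fraction and kernel width below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def row_peaks(mask, sigma=5, min_frac=0.5):
    """Locate crop-row centre columns in a binary mask (H, W).

    Smooths the column-wise projection with a Gaussian kernel and
    returns indices of local maxima above min_frac of the global peak.
    """
    proj = mask.sum(axis=0).astype(float)
    # Build a normalised Gaussian kernel and smooth the projection.
    xs = np.arange(-3 * sigma, 3 * sigma + 1)
    kernel = np.exp(-xs ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    smooth = np.convolve(proj, kernel, mode="same")
    thresh = min_frac * smooth.max()
    # Strict-left / non-strict-right comparison picks one peak per plateau.
    return [i for i in range(1, len(smooth) - 1)
            if smooth[i] >= thresh
            and smooth[i] > smooth[i - 1] and smooth[i] >= smooth[i + 1]]
```

Each detected peak seeds one sliding window for the subsequent row-fitting stage.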
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

26 pages, 8192 KB  
Article
Enhancing Deep Learning Models with Attention Mechanisms for Interpretable Detection of Date Palm Diseases and Pests
by Amine El Hanafy, Abdelaaziz Hessane and Yousef Farhaoui
Technologies 2025, 13(12), 596; https://doi.org/10.3390/technologies13120596 - 18 Dec 2025
Abstract
Deep learning has become a powerful tool for diagnosing pests and plant diseases, although conventional convolutional neural networks (CNNs) generally suffer from limited interpretability and suboptimal focus on important image features. This study examines the integration of attention mechanisms into two prevalent CNN architectures—ResNet50 and MobileNetV2—to improve the interpretability and classification of diseases impacting date palm trees. Four attention modules—Squeeze-and-Excitation (SE), Efficient Channel Attention (ECA), Soft Attention, and the Convolutional Block Attention Module (CBAM)—were systematically integrated into ResNet50 and MobileNetV2 and assessed on the Palm Leaves dataset. Using transfer learning, the models were trained and evaluated through accuracy, F1-score, Grad-CAM visualizations, and quantitative metrics such as entropy and Attention Focus Scores. Analysis was also performed on the model’s complexity, including parameters and FLOPs. To confirm generalization, we tested the improved models on field data that was not part of the dataset used for learning. The experimental results demonstrated that the integration of attention mechanisms substantially improved both predictive accuracy and interpretability across all evaluated architectures. For MobileNetV2, the best performance and the most compact attention maps were obtained with SE and ECA (reaching 91%), while Soft Attention improved accuracy but produced broader, less concentrated activation patterns. For ResNet50, SE achieved the most focused and symptom-specific heatmaps, whereas CBAM reached the highest classification accuracy (up to 90.4%) but generated more spatially diffuse Grad-CAM activations. Overall, these findings demonstrate that attention-enhanced CNNs can provide accurate, interpretable, and robust detection of palm tree diseases and pests under real-world agricultural conditions. Full article
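One of the interpretability metrics mentioned, entropy of the attention map, has a simple reading: normalise the non-negative heatmap into a probability distribution and compute its Shannon entropy, where lower values indicate more concentrated attention. The sketch below illustrates that reading and is not the paper's exact formulation.

```python
import math

def attention_entropy(heatmap):
    """Shannon entropy (nats) of a non-negative 2-D attention map,
    after normalising it to sum to one. 0 means all attention on one
    cell; log(H*W) means perfectly uniform attention."""
    total = sum(sum(row) for row in heatmap)
    ent = 0.0
    for row in heatmap:
        for v in row:
            p = v / total
            if p > 0:
                ent -= p * math.log(p)
    return ent
```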

33 pages, 10355 KB  
Article
S2GL-MambaResNet: A Spatial–Spectral Global–Local Mamba Residual Network for Hyperspectral Image Classification
by Tao Chen, Hongming Ye, Guojie Li, Yaohan Peng, Jianming Ding, Huayue Chen, Xiangbing Zhou and Wu Deng
Remote Sens. 2025, 17(23), 3917; https://doi.org/10.3390/rs17233917 - 3 Dec 2025
Abstract
In hyperspectral image classification (HSIC), each pixel contains information across hundreds of contiguous spectral bands; therefore, the ability to perform long-distance modeling that stably captures and propagates these long-distance dependencies is critical. A selective structured state space model (SSM) named Mamba has shown strong capabilities for capturing cross-band long-distance dependencies and exhibits advantages in long-distance modeling. However, the inherently high spectral dimensionality, information redundancy, and spatial heterogeneity of hyperspectral images (HSI) pose challenges for Mamba in fully extracting spatial–spectral features and in maintaining computational efficiency. To address these issues, we propose S2GL-MambaResNet, a lightweight HSI classification network that tightly couples Mamba with progressive residuals to enable richer global, local, and multi-scale spatial–spectral feature extraction, thereby mitigating the negative effects of high dimensionality, redundancy, and spatial heterogeneity on long-distance modeling. To avoid fragmentation of spatial–spectral information caused by serialization and to enhance local discriminability, we design a preprocessing method applied to the features before they are input to Mamba, termed the Spatial–Spectral Gated Attention Aggregator (SS-GAA). SS-GAA uses spatial–spectral adaptive gated fusion to preserve and strengthen the continuity of the central pixel’s neighborhood and its local spatial–spectral representation. To compensate for a single global sequence network’s tendency to overlook local structures, we introduce a novel Mamba variant called the Global_Local Spatial_Spectral Mamba Encoder (GLS2ME). GLS2ME comprises a pixel-level global branch and a non-overlapping sliding-window local branch for modeling long-distance dependencies and patch-level spatial–spectral relations, respectively, jointly improving generalization stability under limited sample regimes. 
To ensure that spatial details and boundary integrity are maintained while capturing spectral patterns at multiple scales, we propose a multi-scale Mamba encoding scheme, the Hierarchical Spectral Mamba Encoder (HSME). HSME first extracts spectral responses via multi-scale 1D spectral convolutions, then groups spectral bands and feeds these groups into Mamba encoders to capture spectral pattern information at different scales. Finally, we design a Progressive Residual Fusion Block (PRFB) that integrates 3D residual recalibration units with Efficient Channel Attention (ECA) to fuse multi-kernel outputs within a global context. This enables ordered fusion of local multi-scale features under a global semantic context, improving information utilization efficiency while keeping computational overhead under control. Comparative experiments on four publicly available HSI datasets demonstrate that S2GL-MambaResNet achieves superior classification accuracy compared with several state-of-the-art methods, with particularly pronounced advantages under few-shot and class-imbalanced conditions. Full article

34 pages, 11986 KB  
Article
High-Speed Die Bond Quality Detection Using Lightweight Architecture DSGβSI-SECS-Yolov7-Tiny
by Bao Rong Chang, Hsiu-Fen Tsai and Wei-Shun Chang
Sensors 2025, 25(23), 7358; https://doi.org/10.3390/s25237358 - 3 Dec 2025
Abstract
The die bonding process significantly impacts the yield and quality of IC packaging, and its quality detection is also a critical image sensing technology. With the advancement of machine automation and increased operating speeds, the misclassification rate in die bond image inspection has also risen. Therefore, this study develops a high-speed intelligent vision inspection model that slightly improves classification accuracy and adapts to the operation of new-generation machines. Furthermore, by identifying the causes of die bonding defects, key process parameters can be adjusted in real time during production, thereby improving the yield of the die bonding process and substantially reducing manufacturing cost losses. Previously, we proposed a lightweight model named DSGβSI-YOLOv7-tiny, which integrates depthwise separable convolution, Ghost convolution, and a Sigmoid activation function with a learnable β parameter. This model enables real-time and efficient detection and prediction of die bond quality through image sensing. We further enhanced the previous model by incorporating an SE layer, ECA-Net, Coordinate Attention, and a Small Object Enhancer to accommodate the faster operation of new machines. This improvement resulted in a more lightweight architecture named DSGβSI-SECS-YOLOv7-tiny. Compared with the previous model, the proposed model achieves an increased inference speed of 294.1 FPS and a Precision of 99.1%. Full article

32 pages, 1317 KB  
Article
ECA110-Pooling: A Comparative Analysis of Pooling Strategies in Convolutional Neural Networks
by Doru Constantin and Costel Bălcău
Big Data Cogn. Comput. 2025, 9(12), 306; https://doi.org/10.3390/bdcc9120306 - 2 Dec 2025
Abstract
Pooling strategies are fundamental to convolutional neural networks, shaping the trade-off between accuracy, robustness to spatial variations, and computational efficiency in modern visual recognition systems. In this paper, we present and validate ECA110-Pooling, a novel rule-based pooling operator inspired by elementary cellular automata. We conduct a systematic comparative study, benchmarking ECA110-Pooling against conventional pooling methods (MaxPooling, AveragePooling, MedianPooling, MinPooling, KernelPooling) as well as state-of-the-art (SOTA) architectures. Experiments on three benchmark datasets—ImageNet (subset), CIFAR-10, and Fashion-MNIST—across training horizons ranging from 20 to 50,000 epochs show that ECA110-Pooling consistently achieves higher Top-1 accuracy, lower error rates, and stronger F1-scores than traditional pooling operators, while maintaining computational efficiency comparable to MaxPooling. Moreover, when compared with SOTA models, ECA110-Pooling delivers competitive accuracy with substantially fewer parameters and reduced training time. These results establish ECA110-Pooling as a principled and validated approach to image classification, bridging the gap between fixed pooling schemes and complex deep architectures. Its interpretable, rule-based design highlights both theoretical significance and practical applicability in contexts that demand a balance of accuracy, efficiency, and scalability. Full article
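The abstract does not specify how the rule-110 update is wired into the pooling operator, so the following is only an illustrative sketch under that caveat: a standard rule 110 elementary-cellular-automaton step, alongside the conventional non-overlapping pooling baselines (MaxPooling, AveragePooling, MedianPooling) that the paper benchmarks against.

```python
import numpy as np

def rule110_step(cells):
    """One synchronous update of elementary cellular automaton rule 110.

    Each cell's next state is looked up from its (left, self, right)
    neighborhood; boundaries wrap around.
    """
    # Rule 110 truth table, indexed by the 3-bit neighborhood value 0..7
    # (110 in binary is 01101110, read from neighborhood 7 down to 0).
    table = np.array([0, 1, 1, 1, 0, 1, 1, 0], dtype=np.uint8)
    left = np.roll(cells, 1)
    right = np.roll(cells, -1)
    idx = 4 * left + 2 * cells + right
    return table[idx]

def pool2d(x, op, size=2):
    """Non-overlapping size x size pooling with a reduction op (max/mean/median)."""
    h, w = x.shape
    blocks = x[:h - h % size, :w - w % size].reshape(
        h // size, size, w // size, size)
    return op(blocks, axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
max_pooled = pool2d(x, np.max)       # MaxPooling
avg_pooled = pool2d(x, np.mean)      # AveragePooling
med_pooled = pool2d(x, np.median)    # MedianPooling
```

A rule-based operator like ECA110-Pooling replaces the fixed reduction `op` with states evolved by the automaton, which is what makes it interpretable: every pooled value traces back to a deterministic lookup table rather than learned weights.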

16 pages, 5435 KB  
Article
Passive Acoustic Monitoring Provides Insights into Avian Use of Energycane Cropping Systems in Southern Florida
by Leroy J. Walston, Jules F. Cacho, Ricardo A. Lesmes-Vesga, Hardev Sandhu, Colleen R. Zumpf, Bradford Kasberg, Jeremy Feinstein and Maria Cristina Negri
Birds 2025, 6(4), 60; https://doi.org/10.3390/birds6040060 - 10 Nov 2025
Abstract
Birds are important indicators of ecosystem health and provide a range of benefits to society. It is important, therefore, to understand the impacts of agricultural land use changes on bird populations. The cultivation of energycane (EC)—a sugarcane hybrid—for biofuel production represents one form of agricultural land use change in southern Florida. We used passive acoustic monitoring (PAM) to examine bird community use of experimental EC fields and other agricultural land uses at two study sites in southern Florida. We deployed 16 acoustic recorders in different study plots and used the automatic species identifier BirdNET to identify 40 focal bird species. We found seasonal differences in daily avian species diversity and richness between EC experimental plots and reference agricultural fields (corn fields, orchards, pastureland), and between time periods (pre-planting, post-planting). Daily avian species diversity and richness were lower in the EC experimental plots during Fall and Winter months when plants reached maximum height (>400 cm in some areas). Despite seasonal differences in daily measures of species diversity and richness, we found no differences in cumulative species richness, suggesting that there may be little overall (season-long) effect of EC production. These findings could provide insight into avian seasonal habitat preferences and underscore the potential limitations of PAM in areas experiencing dynamic vegetation changes. More research is needed to better understand whether utilization of EC cropping systems results in positive or negative effects on avian populations (e.g., foraging habitat quality, predator–prey dynamics, nest success). Full article
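The abstract does not state which formulas underlie its daily diversity and richness measures; a common convention is species richness as the count of distinct species detected and diversity as the Shannon index H' = -Σ pᵢ ln pᵢ over detection proportions. A minimal sketch under that assumption, with hypothetical detection records in place of real BirdNET output:

```python
import math
from collections import Counter

def richness(detections):
    """Species richness: number of distinct species detected."""
    return len(set(detections))

def shannon_diversity(detections):
    """Shannon diversity H' = -sum(p_i * ln p_i) over detection proportions."""
    counts = Counter(detections)
    total = sum(counts.values())
    return -sum((n / total) * math.log(n / total) for n in counts.values())

# Hypothetical one-day detection list for a single acoustic recorder.
day = ["Northern Cardinal", "Northern Cardinal",
       "Red-winged Blackbird", "Common Grackle"]
print(richness(day))  # 3 distinct species
print(shannon_diversity(day))
```

Note that both measures depend only on species labels and counts, which is why daily values can swing with detectability (e.g., tall vegetation muffling calls) even when cumulative season-long richness is unchanged.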
