Search Results (1,452)

Search Parameters:
Keywords = lightweight deep learning

22 pages, 6482 KiB  
Article
Surface Damage Detection in Hydraulic Structures from UAV Images Using Lightweight Neural Networks
by Feng Han and Chongshi Gu
Remote Sens. 2025, 17(15), 2668; https://doi.org/10.3390/rs17152668 (registering DOI) - 1 Aug 2025
Abstract
Timely and accurate identification of surface damage in hydraulic structures is essential for maintaining structural integrity and ensuring operational safety. Traditional manual inspections are time-consuming, labor-intensive, and prone to subjectivity, especially for large-scale or inaccessible infrastructure. Leveraging advancements in aerial imaging, unmanned aerial vehicles (UAVs) enable efficient acquisition of high-resolution visual data across expansive hydraulic environments. However, existing deep learning (DL) models often lack architectural adaptations for the visual complexities of UAV imagery, including low-texture contrast, noise interference, and irregular crack patterns. To address these challenges, this study proposes a lightweight, robust, and high-precision segmentation framework, called LFPA-EAM-Fast-SCNN, specifically designed for pixel-level damage detection in UAV-captured images of hydraulic concrete surfaces. The developed DL-based model integrates an enhanced Fast-SCNN backbone for efficient feature extraction, a Lightweight Feature Pyramid Attention (LFPA) module for multi-scale context enhancement, and an Edge Attention Module (EAM) for refined boundary localization. The experimental results on a custom UAV-based dataset show that the proposed damage detection method achieves superior performance, with a precision of 0.949, a recall of 0.892, an F1 score of 0.906, and an IoU of 87.92%, outperforming U-Net, Attention U-Net, SegNet, DeepLab v3+, I-ST-UNet, and SegFormer. Additionally, it reaches a real-time inference speed of 56.31 FPS, significantly surpassing other models. The experimental results demonstrate the proposed framework’s strong generalization capability and robustness under varying noise levels and damage scenarios, underscoring its suitability for scalable, automated surface damage assessment in UAV-based remote sensing of civil infrastructure. Full article

30 pages, 59872 KiB  
Article
Advancing 3D Seismic Fault Identification with SwiftSeis-AWNet: A Lightweight Architecture Featuring Attention-Weighted Multi-Scale Semantics and Detail Infusion
by Ang Li, Rui Li, Yuhao Zhang, Shanyi Li, Yali Guo, Liyan Zhang and Yuqing Shi
Electronics 2025, 14(15), 3078; https://doi.org/10.3390/electronics14153078 (registering DOI) - 31 Jul 2025
Abstract
The accurate identification of seismic faults, which serve as crucial fluid migration pathways in hydrocarbon reservoirs, is of paramount importance for reservoir characterization. Traditional interpretation is inefficient. It also struggles with complex geometries, failing to meet the current exploration demands. Deep learning boosts fault identification significantly but struggles with edge accuracy and noise robustness. To overcome these limitations, this research introduces SwiftSeis-AWNet, a novel lightweight and high-precision network. The network is based on an optimized MedNeXt architecture for better fault edge detection. To address the noise from simple feature fusion, a Semantics and Detail Infusion (SDI) module is integrated. Since the Hadamard product in SDI can cause information loss, we engineer an Attention-Weighted Semantics and Detail Infusion (AWSDI) module that uses dynamic multi-scale feature fusion to preserve details. Validation on field seismic datasets from the Netherlands F3 and New Zealand Kerry blocks shows that SwiftSeis-AWNet mitigates challenges like the loss of small-scale fault features and misidentification of fault intersection zones, enhancing the accuracy and geological reliability of automated fault identification. Full article
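The fusion problem the abstract describes can be sketched in a few lines: a plain Hadamard product lets a zero in either branch erase the other's signal, while an attention-weighted blend preserves both. This is a hypothetical illustration, not the actual AWSDI module; a sigmoid gate stands in for the learned attention weights.

```python
import numpy as np

def hadamard_fusion(semantic, detail):
    # Plain Hadamard (element-wise) product: a zero in either map
    # erases the other branch's signal at that position.
    return semantic * detail

def attention_weighted_fusion(semantic, detail):
    # Hypothetical attention-weighted blend: a per-pixel gate computed
    # from both inputs mixes the branches, so neither can zero the
    # other out. (A sigmoid gate stands in for learned attention.)
    gate = 1.0 / (1.0 + np.exp(-(semantic + detail)))
    return gate * semantic + (1.0 - gate) * detail

sem = np.array([[0.0, 0.8], [0.5, 1.0]])       # semantic branch activations
det = np.array([[0.9, 0.0], [0.5, 1.0]])       # detail branch activations
fused_h = hadamard_fusion(sem, det)            # [0, 0] entry collapses to 0
fused_a = attention_weighted_fusion(sem, det)  # [0, 0] entry stays positive
```

At position (0, 0) the detail branch carries a strong response that the Hadamard product discards entirely, which is the information-loss effect the AWSDI module is designed to avoid.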

18 pages, 3506 KiB  
Review
A Review of Spatial Positioning Methods Applied to Magnetic Climbing Robots
by Haolei Ru, Meiping Sheng, Jiahui Qi, Zhanghao Li, Lei Cheng, Jiahao Zhang, Jiangjian Xiao, Fei Gao, Baolei Wang and Qingwei Jia
Electronics 2025, 14(15), 3069; https://doi.org/10.3390/electronics14153069 (registering DOI) - 31 Jul 2025
Abstract
Magnetic climbing robots hold significant value for operations in complex industrial environments, particularly for the inspection and maintenance of large-scale metal structures. High-precision spatial positioning is the foundation for enabling autonomous and intelligent operations in such environments. However, the existing literature lacks a systematic and comprehensive review of spatial positioning techniques tailored to magnetic climbing robots. This paper addresses this gap by categorizing and evaluating current spatial positioning approaches. Initially, single-sensor-based methods are analyzed with a focus on external sensor approaches. Then, multi-sensor fusion methods are explored to overcome the shortcomings of single-sensor-based approaches. Multi-sensor fusion methods include simultaneous localization and mapping (SLAM), integrated positioning systems, and multi-robot cooperative positioning. To address non-uniform noise and environmental interference, both analytical and learning-based approaches are reviewed. Common analytical methods include Kalman-type filtering, particle filtering, and correlation filtering, while typical learning-based approaches involve deep reinforcement learning (DRL) and neural networks (NNs). Finally, challenges and future development trends are discussed. Multi-sensor fusion and lightweight design are the future trends in the advancement of spatial positioning technologies for magnetic climbing robots. Full article
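As a concrete illustration of the Kalman-type filtering surveyed above, a minimal one-dimensional filter can be written in a few lines. The noise variances `q` and `r` are assumed values for the sketch, not figures from any reviewed paper.

```python
def kalman_1d(measurements, q=1e-3, r=0.1):
    """Minimal 1-D Kalman filter fusing noisy position readings of a
    (nearly) stationary robot. q and r are assumed process- and
    measurement-noise variances."""
    x, p = 0.0, 1.0              # state estimate and its variance
    estimates = []
    for z in measurements:
        p += q                   # predict: uncertainty grows over time
        k = p / (p + r)          # Kalman gain: trust in the measurement
        x += k * (z - x)         # update estimate toward the measurement
        p *= 1.0 - k             # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

# Noisy readings of a robot sitting at position 1.0
track = kalman_1d([1.2, 0.8, 1.1, 0.9, 1.05, 0.95])
```

The estimate converges toward the true position 1.0 as measurements accumulate, which is the basic behavior that multi-sensor fusion schemes build on.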
(This article belongs to the Special Issue Advancements in Robotics: Perception, Manipulation, and Interaction)

34 pages, 3535 KiB  
Article
Hybrid Optimization and Explainable Deep Learning for Breast Cancer Detection
by Maral A. Mustafa, Osman Ayhan Erdem and Esra Söğüt
Appl. Sci. 2025, 15(15), 8448; https://doi.org/10.3390/app15158448 - 30 Jul 2025
Abstract
Breast cancer remains one of the leading causes of death among women worldwide, underscoring the need for novel and interpretable diagnostic models. This work presents an interpretable deep learning model that combines the lightweight MobileNet architecture with two bio-inspired optimization algorithms, the Firefly Algorithm (FLA) and the Dingo Optimization Algorithm (DOA), to improve classification accuracy and model convergence. The proposed model demonstrated excellent results: the DOA-optimized MobileNet achieved the highest accuracy of 98.96% on the fusion test set, while the FLA-optimized MobileNet reached 98.06% and 95.44% accuracy on the mammographic and ultrasound test sets, respectively. Beyond these strong quantitative results, Grad-CAM visualizations showed clinically consistent lesion localization, strengthening the model's interpretability and diagnostic reliability. These results show that lightweight, compact CNNs can deliver high-performance, multimodal breast cancer diagnosis. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

18 pages, 5013 KiB  
Article
Enhancing Document Forgery Detection with Edge-Focused Deep Learning
by Yong-Yeol Bae, Dae-Jea Cho and Ki-Hyun Jung
Symmetry 2025, 17(8), 1208; https://doi.org/10.3390/sym17081208 - 30 Jul 2025
Abstract
Detecting manipulated document images is essential for verifying the authenticity of official records and preventing document forgery. However, forgery artifacts are often subtle and localized in fine-grained regions, such as text boundaries or character outlines, where visual symmetry and structural regularity are typically expected. These manipulations can disrupt the inherent symmetry of document layouts, making the detection of such inconsistencies crucial for forgery identification. Conventional CNN-based models face limitations in capturing such edge-level asymmetric features, as edge-related information tends to weaken through repeated convolution and pooling operations. To address this issue, this study proposes an edge-focused method composed of two components: the Edge Attention (EA) layer and the Edge Concatenation (EC) layer. The EA layer dynamically identifies channels that are highly responsive to edge features in the input feature map and applies learnable weights to emphasize them, enhancing the representation of boundary-related information, thereby emphasizing structurally significant boundaries. Subsequently, the EC layer extracts edge maps from the input image using the Sobel filter and concatenates them with the original feature maps along the channel dimension, allowing the model to explicitly incorporate edge information. To evaluate the effectiveness and compatibility of the proposed method, it was initially applied to a simple CNN architecture to isolate its impact. Subsequently, it was integrated into various widely used models, including DenseNet121, ResNet50, Vision Transformer (ViT), and a CAE-SVM-based document forgery detection model. Experiments were conducted on the DocTamper, Receipt, and MIDV-2020 datasets to assess classification accuracy and F1-score using both original and forged text images. 
Across all model architectures and datasets, the proposed EA–EC method consistently improved model performance, particularly by increasing sensitivity to asymmetric manipulations around text boundaries. These results demonstrate that the proposed edge-focused approach is not only effective but also highly adaptable, serving as a lightweight and modular extension that can be easily incorporated into existing deep learning-based document forgery detection frameworks. By reinforcing attention to structural inconsistencies often missed by standard convolutional networks, the proposed method provides a practical solution for enhancing the robustness and generalizability of forgery detection systems. Full article
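The Edge Concatenation idea, extracting a Sobel edge map and stacking it onto the feature channels, can be illustrated with plain NumPy. This is a simplified sketch; the actual EC layer operates on batched tensors inside the network.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2d(img, kernel):
    # 'valid' 2-D correlation; no padding, enough for a sketch.
    h, w = img.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def edge_concat(feature_map, image):
    # EC-layer idea: Sobel gradient magnitude stacked onto the
    # feature map along a new channel dimension.
    gx, gy = conv2d(image, SOBEL_X), conv2d(image, SOBEL_Y)
    edges = np.sqrt(gx ** 2 + gy ** 2)
    return np.stack([feature_map, edges], axis=0)

img = np.zeros((5, 5))
img[:, 3:] = 1.0                      # vertical step edge at column 3
out = edge_concat(np.zeros((3, 3)), img)
```

The stacked output has a dedicated edge channel with a strong response at the step edge and zero response in flat regions, which is exactly the boundary information that repeated convolution and pooling tend to wash out.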

19 pages, 1555 KiB  
Article
MedLangViT: A Language–Vision Network for Medical Image Segmentation
by Yiyi Wang, Jia Su, Xinxiao Li and Eisei Nakahara
Electronics 2025, 14(15), 3020; https://doi.org/10.3390/electronics14153020 - 29 Jul 2025
Abstract
Precise medical image segmentation is crucial for advancing computer-aided diagnosis. Although deep learning-based medical image segmentation is now widely applied in this field, the complexity of human anatomy and the diversity of pathological manifestations often necessitate the use of image annotations to enhance segmentation accuracy. In this process, the scarcity of annotations and the lightweight design requirements of associated text encoders collectively present key challenges for improving segmentation model performance. To address these challenges, we propose MedLangViT, a novel language–vision multimodal model for medical image segmentation that incorporates medical descriptive information through lightweight text embedding rather than text encoders. MedLangViT innovatively leverages medical textual information to assist the segmentation process, thereby reducing reliance on extensive high-precision image annotations. Furthermore, we design an Enhanced Channel-Spatial Attention Module (ECSAM) to effectively fuse textual and visual features, strengthening textual guidance for segmentation decisions. Extensive experiments conducted on two publicly available text–image-paired medical datasets demonstrated that MedLangViT significantly outperforms existing state-of-the-art methods, validating the effectiveness of both the proposed model and the ECSAM. Full article

20 pages, 19642 KiB  
Article
SIRI-MOGA-UNet: A Synergistic Framework for Subsurface Latent Damage Detection in ‘Korla’ Pears via Structured-Illumination Reflectance Imaging and Multi-Order Gated Attention
by Baishao Zhan, Jiawei Liao, Hailiang Zhang, Wei Luo, Shizhao Wang, Qiangqiang Zeng and Yongxian Lai
Spectrosc. J. 2025, 3(3), 22; https://doi.org/10.3390/spectroscj3030022 - 29 Jul 2025
Abstract
Bruising in ‘Korla’ pears represents a prevalent phenomenon that leads to progressive fruit decay and substantial economic losses. The detection of early-stage bruising proves challenging due to the absence of visible external characteristics, and existing deep learning models have limitations in weak feature extraction under complex optical interference. To address the postharvest latent damage detection challenges in ‘Korla’ pears, this study proposes a collaborative detection framework integrating structured-illumination reflectance imaging (SIRI) with multi-order gated attention mechanisms. Initially, an SIRI optical system was constructed, employing spatial frequency modulation at 150 cycles·m⁻¹ and a three-phase demodulation algorithm to extract subtle interference signal variations, thereby generating RT (Relative Transmission) images with significantly enhanced contrast in subsurface damage regions. To improve the detection accuracy of latent damage areas, the MOGA-UNet model was developed with three key innovations: (1) a lightweight VGG16 encoder structure is integrated into the feature extraction network to improve computational efficiency while retaining detail; (2) a multi-order gated aggregation module is added at the end of the encoder to fuse features at different scales through a specialized convolution scheme; and (3) a channel attention mechanism is embedded in the decoding stage to dynamically enhance the weight of damage-related feature channels. Experimental results demonstrate that the proposed model achieves 94.38% mean Intersection over Union (mIoU) and 97.02% Dice coefficient on RT images, outperforming the baseline UNet model by 2.80% and offering superior segmentation accuracy and boundary localization compared with mainstream models. This approach provides an efficient and reliable technical solution for intelligent postharvest agricultural product sorting. Full article
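The three-phase demodulation step can be sketched with the standard formula for three patterns phase-shifted by 120°, which recovers the AC (pattern-modulated) component per pixel. This is an illustrative reconstruction of the common formula; the exact processing chain in the paper may differ.

```python
import numpy as np

def demodulate_ac(i1, i2, i3):
    """Standard three-phase demodulation for illumination patterns
    shifted by 120 degrees: recovers the AC amplitude at each pixel."""
    return (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)

# Synthetic pixel: DC level 0.5, modulation amplitude 0.2
phases = [0.0, 2.0 * np.pi / 3.0, 4.0 * np.pi / 3.0]
frames = [0.5 + 0.2 * np.cos(p) for p in phases]
amplitude = demodulate_ac(*frames)    # recovers the 0.2 modulation depth
```

The recovered AC amplitude is what makes subsurface damage visible: scattering changes under the surface modulate the pattern contrast even when the DC (uniform-illumination) image shows nothing.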

24 pages, 17213 KiB  
Review
Empowering Smart Soybean Farming with Deep Learning: Progress, Challenges, and Future Perspectives
by Huihui Sun, Hao-Qi Chu, Yi-Ming Qin, Pingfan Hu and Rui-Feng Wang
Agronomy 2025, 15(8), 1831; https://doi.org/10.3390/agronomy15081831 - 28 Jul 2025
Abstract
This review comprehensively examines the application of deep learning technologies across the entire soybean production chain, encompassing areas such as disease and pest identification, weed detection, crop phenotype recognition, yield prediction, and intelligent operations. By systematically analyzing mainstream deep learning models, optimization strategies (e.g., model lightweighting, transfer learning), and sensor data fusion techniques, the review identifies their roles and performances in complex agricultural environments. It also highlights key challenges including data quality limitations, difficulties in real-world deployment, and the lack of standardized evaluation benchmarks. In response, promising directions such as reinforcement learning, self-supervised learning, interpretable AI, and multi-source data fusion are proposed. Specifically for soybean automation, future advancements are expected in areas such as high-precision disease and weed localization, real-time decision-making for variable-rate spraying and harvesting, and the integration of deep learning with robotics and edge computing to enable autonomous field operations. This review provides valuable insights and future prospects for promoting intelligent, efficient, and sustainable development in soybean production through deep learning. Full article
(This article belongs to the Section Precision and Digital Agriculture)

19 pages, 2698 KiB  
Article
Orga-Dete: An Improved Lightweight Deep Learning Model for Lung Organoid Detection and Classification
by Xuan Huang, Qin Gao, Hanwen Zhang, Fuhong Min, Dong Li and Gangyin Luo
Appl. Sci. 2025, 15(15), 8377; https://doi.org/10.3390/app15158377 - 28 Jul 2025
Abstract
Lung organoids play a crucial role in modeling drug responses in pulmonary diseases. However, their morphological analysis remains hindered by manual detection inefficiencies and the high computational cost of existing algorithms. To overcome these challenges, this study proposes Orga-Dete—a lightweight, high-precision detection model based on YOLOv11n—which first employs data augmentation to mitigate the small-scale dataset and class imbalance issues, then optimizes via a triple co-optimization strategy: a bi-directional feature pyramid network for enhanced multi-scale feature fusion, MPCA for stronger micro-organoid feature response, and EMASlideLoss to address class imbalance. Validated on a lung organoid microscopy dataset, Orga-Dete achieves 81.4% mAP@0.5 with only 2.25 M parameters and 6.3 GFLOPs, surpassing the baseline model YOLOv11n by 3.5%. Ablation experiments confirm the synergistic effects of these modules in enhancing morphological feature extraction. With its balance of precision and efficiency, Orga-Dete offers a scalable solution for high-throughput organoid analysis, underscoring its potential for personalized medicine and drug screening. Full article

27 pages, 6143 KiB  
Article
Optical Character Recognition Method Based on YOLO Positioning and Intersection Ratio Filtering
by Kai Cui, Qingpo Xu, Yabin Ding, Jiangping Mei, Ying He and Haitao Liu
Symmetry 2025, 17(8), 1198; https://doi.org/10.3390/sym17081198 - 27 Jul 2025
Abstract
Driven by the rapid development of e-commerce and intelligent logistics, the volume of express delivery services has surged, making the efficient and accurate identification of shipping information a core requirement for automatic sorting systems. However, traditional Optical Character Recognition (OCR) technology struggles to meet the accuracy and real-time demands of complex logistics scenarios due to challenges such as image distortion, uneven illumination, and field overlap. This paper proposes a three-level collaborative recognition method based on deep learning that facilitates structured information extraction through regional normalization, dual-path parallel extraction, and a dynamic matching mechanism. First, geometric distortion is corrected through contour detection and a lightweight direction classification model. Second, by integrating the enhanced YOLOv5s for key area localization with the upgraded PaddleOCR for full-text character extraction, a dual-path parallel architecture for positioning and recognition has been constructed. Finally, a dynamic space–semantic joint matching module has been designed that incorporates anti-offset IoU metrics and hierarchical semantic regularization constraints, thereby enhancing matching robustness through density-adaptive weight adjustment. Experimental results indicate that the accuracy of this method on a self-constructed dataset is 89.5%, with an F1 score of 90.1%, representing a 24.2% improvement over traditional OCR methods. The dynamic matching mechanism elevates the average accuracy of YOLOv5s from 78.5% to 89.7%, surpassing the Faster R-CNN benchmark model while maintaining a real-time processing efficiency of 76 FPS. This study offers a lightweight and highly robust solution for the efficient extraction of order information in complex logistics scenarios, significantly advancing the intelligent upgrading of sorting systems. Full article
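The intersection-over-union measure at the heart of such matching modules is straightforward to compute for axis-aligned boxes. This is the generic IoU, not the paper's anti-offset variant:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)   # 0 when boxes are disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

overlap = iou((0, 0, 2, 2), (1, 1, 3, 3))   # 1 / 7: small overlap, large union
```

Thresholding this ratio is what lets the matching module decide whether a detected key area and an OCR text box refer to the same field.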
(This article belongs to the Section Physics)

23 pages, 20415 KiB  
Article
FireNet-KD: Swin Transformer-Based Wildfire Detection with Multi-Source Knowledge Distillation
by Naveed Ahmad, Mariam Akbar, Eman H. Alkhammash and Mona M. Jamjoom
Fire 2025, 8(8), 295; https://doi.org/10.3390/fire8080295 - 26 Jul 2025
Abstract
Forest fire detection is an essential application in environmental surveillance since wildfires cause devastating damage to ecosystems, human life, and property every year. The effective and accurate detection of fire is necessary to allow for timely response and efficient management of disasters. Traditional techniques for fire detection often experience false alarms and delayed responses in various environmental situations. Therefore, developing robust, intelligent, and real-time detection systems has emerged as a central challenge in remote sensing and computer vision research communities. Despite recent achievements in deep learning, current forest fire detection models still face issues with generalizability, lightweight deployment, and accuracy trade-offs. In order to overcome these limitations, we introduce a novel technique (FireNet-KD) that makes use of knowledge distillation, a method that transfers the knowledge of large, complex models (teachers) to a light and efficient model (student). We specifically utilize two complementary teacher networks: a Vision Transformer (ViT), which is popular for its global attention and contextual learning ability, and a Convolutional Neural Network (CNN), which is esteemed for its spatial locality and inductive biases. These teacher models guide the training of a Swin Transformer-based student model that provides hierarchical feature extraction and computational efficiency through shifted window self-attention, and is thus particularly well suited for scalable forest fire detection. By combining the strengths of ViT and CNN with distillation into the Swin Transformer, the FireNet-KD model outperforms state-of-the-art methods with significant improvements. Experimental results show that the FireNet-KD model obtains a precision of 95.16%, recall of 99.61%, F1-score of 97.34%, and mAP@50 of 97.31%, outperforming the existing models. 
These results prove the effectiveness of FireNet-KD in improving both detection accuracy and model efficiency for forest fire detection. Full article
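The two-teacher distillation idea can be sketched as a soft cross-entropy against an averaged, temperature-softened teacher distribution. This is a hypothetical formulation; FireNet-KD's actual loss terms and teacher weighting may differ.

```python
import numpy as np

def softened(logits, t):
    # Temperature-softened softmax distribution (higher t = softer).
    e = np.exp((logits - logits.max()) / t)
    return e / e.sum()

def two_teacher_kd_loss(student, vit_teacher, cnn_teacher, t=2.0, alpha=0.5):
    """Hypothetical two-teacher distillation loss: soft cross-entropy
    between the student and an averaged, softened teacher target."""
    target = (alpha * softened(vit_teacher, t)
              + (1.0 - alpha) * softened(cnn_teacher, t))
    return -np.sum(target * np.log(softened(student, t)))
```

The loss is smallest when the student's softened distribution matches the blended teacher target, which is how the student inherits both the ViT's global context and the CNN's local inductive biases.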

17 pages, 3178 KiB  
Article
Deep Learning-Based YOLO Applied to Rear Weld Pool Thermal Monitoring of Metallic Materials in the GTAW Process
by Vinicius Lemes Jorge, Zaid Boutaleb, Theo Boutin, Issam Bendaoud, Fabien Soulié and Cyril Bordreuil
Metals 2025, 15(8), 836; https://doi.org/10.3390/met15080836 - 26 Jul 2025
Abstract
This study investigates the use of YOLOv8 deep learning models to segment and classify thermal images acquired from the rear of the weld pool during the Gas Tungsten Arc Welding (GTAW) process. Thermal data were acquired using a two-color pyrometer under three welding current levels (160 A, 180 A, and 200 A). Models of sizes from nano to extra-large were trained on 66 annotated frames and evaluated with and without data augmentation. The results demonstrate that the YOLOv8m model achieved the best classification performance, with a precision of 83.25% and an inference time of 21.4 ms per frame by using GPU, offering the optimal balance between accuracy and speed. Segmentation accuracy also remained high across all current levels. The YOLOv8n model was the fastest (15.9 ms/frame) but less accurate (75.33%). Classification was most reliable at 160 A, where the thermal field was more stable. The arc reflection class was consistently identified with near-perfect precision, demonstrating the model’s robustness against non-relevant thermal artifacts. These findings confirm the feasibility of using lightweight, dual-task neural networks for reliable weld pool analysis, even with limited training data. Full article
(This article belongs to the Special Issue Advances in Welding Processes of Metallic Materials)

22 pages, 3082 KiB  
Article
A Lightweight Intrusion Detection System with Dynamic Feature Fusion Federated Learning for Vehicular Network Security
by Junjun Li, Yanyan Ma, Jiahui Bai, Congming Chen, Tingting Xu and Chi Ding
Sensors 2025, 25(15), 4622; https://doi.org/10.3390/s25154622 - 25 Jul 2025
Abstract
The rapid integration of complex sensors and electronic control units (ECUs) in autonomous vehicles significantly increases cybersecurity risks in vehicular networks. Although the Controller Area Network (CAN) is efficient, it lacks inherent security mechanisms and is vulnerable to various network attacks. Traditional intrusion detection systems (IDSs) struggle to deal effectively with the dynamics and complexity of emerging threats. To solve these problems, a lightweight vehicular network intrusion detection framework based on Dynamic Feature Fusion Federated Learning (DFF-FL) is proposed. The proposed framework employs a two-stream architecture, including a transformer-augmented autoencoder for abstract feature extraction and a lightweight CNN-LSTM–Attention model for preserving temporal and local patterns. Compared with the traditional federated learning framework, DFF-FL first dynamically fuses the deep feature representation of each node through the transformer attention module to realize fine-grained cross-node feature interaction in a heterogeneous data environment, thereby eliminating the performance degradation caused by differences in feature distribution. Secondly, based on each node's final autoencoder reconstruction loss L_AE(X, X̂), an adaptive weight adjustment mechanism lets the nodes with excellent performance dominate the global model update, which significantly improves robustness against complex attacks. Experimental evaluation on the CAN-Hacking dataset shows that the proposed intrusion detection system achieves more than a 99% F1 score with only 1.11 MB of memory and 81,863 trainable parameters, while maintaining low computational overhead and ensuring data privacy, making it well suited for edge-device deployment. Full article
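The loss-adaptive aggregation idea can be illustrated with inverse-loss weights, so that nodes with lower reconstruction loss pull the global model harder. This is an illustrative scheme only; the paper's exact weighting function is not reproduced here.

```python
def loss_weighted_average(node_params, node_losses):
    """Sketch of loss-adaptive federated aggregation: each node's
    parameter vector is weighted by the inverse of its final loss,
    so better-performing nodes dominate the global update."""
    inv = [1.0 / loss for loss in node_losses]
    total = sum(inv)
    weights = [w / total for w in inv]          # normalized to sum to 1
    dim = len(node_params[0])
    return [sum(w * p[i] for w, p in zip(weights, node_params))
            for i in range(dim)]

# Node 0 (loss 0.1) pulls the global model far more than node 1 (loss 1.0)
global_params = loss_weighted_average([[0.0, 0.0], [1.0, 1.0]], [0.1, 1.0])
```

With losses 0.1 and 1.0 the normalized weights are 10/11 and 1/11, so the aggregated parameters land close to the low-loss node's values.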
(This article belongs to the Section Sensor Networks)

14 pages, 1419 KiB  
Article
GhostBlock-Augmented Lightweight Gaze Tracking via Depthwise Separable Convolution
by Jing-Ming Guo, Yu-Sung Cheng, Yi-Chong Zeng and Zong-Yan Yang
Electronics 2025, 14(15), 2978; https://doi.org/10.3390/electronics14152978 - 25 Jul 2025
Abstract
This paper proposes a lightweight gaze-tracking architecture named GhostBlock-Augmented Look to Coordinate Space (L2CS), which integrates GhostNet-based modules and depthwise separable convolution to achieve a better trade-off between model accuracy and computational efficiency. Conventional lightweight gaze-tracking models often suffer from degraded accuracy due to aggressive parameter reduction. To address this issue, we introduce GhostBlocks, a custom-designed convolutional unit that combines intrinsic feature generation with ghost feature recomposition through depthwise operations. Our method enhances the original L2CS architecture by replacing each ResNet block with GhostBlocks, thereby significantly reducing the number of parameters and floating-point operations. The experimental results on the Gaze360 dataset demonstrate that the proposed model reduces FLOPs from 16.527 × 10⁸ to 8.610 × 10⁸ and the parameter count from 2.387 × 10⁵ to 1.224 × 10⁵ while maintaining comparable gaze estimation accuracy, with MAE increasing only slightly from 10.70° to 10.87°. This work highlights the potential of GhostNet-augmented designs for real-time gaze tracking on edge devices, providing a practical solution for deployment in resource-constrained environments. Full article
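The parameter savings behind depthwise separable convolution, the building block of GhostBlocks, can be illustrated with a simple count: a standard convolution needs one k × k kernel per (input channel, output channel) pair, while the separable version needs only one k × k kernel per input channel plus a 1 × 1 pointwise mixing layer. This is a generic textbook illustration, not the paper's exact GhostBlock parameterization.

```python
def conv_params(c_in, c_out, k):
    # Standard convolution: one k x k kernel per (input, output) channel pair.
    return c_in * c_out * k * k

def dw_separable_params(c_in, c_out, k):
    # Depthwise step: one k x k kernel per input channel.
    # Pointwise step: a 1 x 1 convolution mixing c_in channels into c_out.
    return c_in * k * k + c_in * c_out
```

For example, with 64 input channels, 128 output channels, and 3 × 3 kernels, the standard convolution uses 73,728 weights versus 8,768 for the separable version, a roughly 8.4× reduction, which is the kind of saving that makes architectures like L2CS viable on edge devices.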
21 pages, 4949 KiB  
Article
An Integrated Lightweight Neural Network Design and FPGA-Accelerated Edge Computing for Chili Pepper Variety and Origin Identification via an E-Nose
by Ziyu Guo, Yong Yin, Haolin Gu, Guihua Peng, Xueya Wang, Ju Chen and Jia Yan
Foods 2025, 14(15), 2612; https://doi.org/10.3390/foods14152612 - 25 Jul 2025
Abstract
A chili pepper variety and origin detection system that integrates a field-programmable gate array (FPGA) with an electronic nose (e-nose) is proposed in this paper to address the issues of variety confusion and origin ambiguity in the chili pepper market. The system uses [...] Read more.
A chili pepper variety and origin detection system that integrates a field-programmable gate array (FPGA) with an electronic nose (e-nose) is proposed in this paper to address the issues of variety confusion and origin ambiguity in the chili pepper market. The system uses the AIRSENSE PEN3 e-nose (Germany) to collect gas data from thirteen chili pepper varieties and from two specific varieties grown in seven different regions. Model training is conducted with the proposed lightweight convolutional neural network, ChiliPCNN. By combining the strengths of a convolutional neural network (CNN) and a multilayer perceptron (MLP), ChiliPCNN achieves efficient and accurate classification, requiring only 268 parameters for variety identification and 244 parameters for origin tracing, with 364 and 340 floating-point operations (FLOPs), respectively. The experimental results demonstrate that, compared with other advanced deep learning methods, ChiliPCNN offers superior classification performance and good stability. Specifically, ChiliPCNN achieves accuracy rates of 94.62% in variety identification and 93.41% in origin tracing for Jiaoyang No. 6, with accuracy reaching as high as 99.07% for Xianjiao No. 301. These results fully validate the effectiveness of the model. To further increase detection speed, an acceleration circuit for ChiliPCNN is designed on the Xilinx Zynq-7020 FPGA and optimized via fixed-point arithmetic and loop-unrolling strategies. The optimized circuit reduces latency to 5600 ns and consumes only 1.755 W of power, significantly improving the model's resource utilization and processing speed. This system not only achieves rapid and accurate chili pepper variety and origin detection but also provides an efficient and reliable intelligent agricultural management solution, which is important for advancing agricultural automation and intelligence. Full article
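The fixed-point arithmetic used in the FPGA optimization can be sketched in a few lines: reals are scaled by 2^f, stored as integers, and products are shifted right by f bits to restore the scale. The Q-format below (8 fractional bits) is an assumption for illustration; the paper does not state its exact bit widths.

```python
FRAC_BITS = 8  # assumed fractional bit width; the paper's Q-format is not specified

def to_fixed(x, frac_bits=FRAC_BITS):
    # Quantize a float to a fixed-point integer with frac_bits fractional bits.
    return int(round(x * (1 << frac_bits)))

def fixed_mul(a, b, frac_bits=FRAC_BITS):
    # Multiply two fixed-point values; shift right to restore the scale.
    return (a * b) >> frac_bits

def to_float(a, frac_bits=FRAC_BITS):
    # Convert a fixed-point integer back to a float.
    return a / (1 << frac_bits)
```

Replacing floating-point multiply-accumulates with integer shifts and adds like these is what lets the accelerator hit nanosecond-scale latency at under 2 W.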