Search Results (47)

Search Parameters:
Keywords = dual squeeze

20 pages, 3854 KiB  
Article
Accurate Classification of Multi-Cultivar Watermelons via GAF-Enhanced Feature Fusion Convolutional Neural Networks
by Changqing An, Maozhen Qu, Yiran Zhao, Zihao Wu, Xiaopeng Lv, Yida Yu, Zichao Wei, Xiuqin Rao and Huirong Xu
Foods 2025, 14(16), 2860; https://doi.org/10.3390/foods14162860 - 18 Aug 2025
Viewed by 223
Abstract
The online rapid classification of multi-cultivar watermelons, including seedless and seeded types, has far-reaching significance for enhancing quality control in the watermelon industry. However, interference in one-dimensional spectra affects the high-accuracy classification of multi-cultivar watermelons with similar appearances. This study proposed an innovative method integrating the Gramian Angular Field (GAF), feature fusion, and Squeeze-and-Excitation (SE)-guided convolutional neural networks (CNNs) based on VIS-NIR transmittance spectroscopy. First, one-dimensional spectra of 163 seedless and 160 seeded watermelons were converted into two-dimensional Gramian Angular Summation Field (GASF) and Gramian Angular Difference Field (GADF) images. Subsequently, a dual-input CNN architecture was designed to fuse discriminative features from both GASF and GADF images. Feature visualization of high-weight channels of the input images in the convolutional layers revealed distinct spectral features between seedless and seeded watermelons. With the fusion of this distinguishing feature information, the developed CNN model achieved a classification accuracy of 95.1% on the prediction set, outperforming traditional models based on one-dimensional spectra. Remarkably, wavelength optimization through competitive adaptive reweighted sampling (CARS) reduced GAF image generation time to 55.19% of full-wavelength processing while improving classification accuracy to 96.3%. The model's generalization was further demonstrated on 17 seedless and 20 seeded watermelons from other origins, with a classification accuracy of 91.9%. These findings substantiate that a GAF-enhanced feature fusion CNN can significantly improve the classification accuracy of multi-cultivar watermelons, offering a new approach to fruit quality evaluation based on VIS-NIR transmittance spectroscopy. Full article
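For readers unfamiliar with the GAF step, the conversion from a 1-D spectrum to GASF/GADF images can be sketched in a few lines of NumPy. The min-max scaling choice and the function name below are illustrative assumptions, not the authors' exact preprocessing.

```python
import numpy as np

def gramian_angular_fields(spectrum):
    """Convert a 1-D spectrum into GASF and GADF images (illustrative sketch)."""
    x = np.asarray(spectrum, dtype=float)
    # Min-max scale to [-1, 1] so the polar-angle encoding is defined.
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    phi = np.arccos(np.clip(x, -1.0, 1.0))        # angular encoding of each wavelength
    gasf = np.cos(phi[:, None] + phi[None, :])    # Gramian Angular Summation Field
    gadf = np.sin(phi[:, None] - phi[None, :])    # Gramian Angular Difference Field
    return gasf, gadf

# A synthetic 256-point transmittance spectrum becomes two 256x256 images that a
# dual-input CNN can consume through separate branches.
gasf, gadf = gramian_angular_fields(np.random.rand(256))
print(gasf.shape, gadf.shape)                     # (256, 256) (256, 256)
```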

22 pages, 3234 KiB  
Article
A Lightweight CNN for Multiclass Retinal Disease Screening with Explainable AI
by Arjun Kumar Bose Arnob, Muhammad Hasibur Rashid Chayon, Fahmid Al Farid, Mohd Nizam Husen and Firoz Ahmed
J. Imaging 2025, 11(8), 275; https://doi.org/10.3390/jimaging11080275 - 15 Aug 2025
Viewed by 495
Abstract
Timely, balanced, and transparent detection of retinal diseases is essential to avert irreversible vision loss; however, current deep learning screeners are hampered by class imbalance, large models, and opaque reasoning. This paper presents a lightweight attention-augmented convolutional neural network (CNN) that addresses all three barriers. The network combines depthwise separable convolutions, squeeze-and-excitation, and global-context attention, and it incorporates gradient-based class activation mapping (Grad-CAM) and Grad-CAM++ to ensure that every decision is accompanied by pixel-level evidence. A 5335-image ten-class color-fundus dataset from Bangladeshi clinics, which was severely skewed (17–1509 images per class), was equalized using a synthetic minority oversampling technique (SMOTE) and task-specific augmentations. Images were resized to 150×150 px and split 70:15:15. The training used the adaptive moment estimation (Adam) optimizer (initial learning rate of 1×10⁻⁴, reduce-on-plateau, early stopping), L2 regularization, and dual dropout. The 16.6 M parameter network converged in fewer than 50 epochs on a mid-range graphics processing unit (GPU) and reached 87.9% test accuracy, a macro-precision of 0.882, a macro-recall of 0.879, and a macro-F1-score of 0.880, reducing the error by 58% relative to the best ImageNet backbone (Inception-V3, 40.4% accuracy). Eight disorders recorded true-positive rates above 95%; macular scar and central serous chorioretinopathy attained F1-scores of 0.77 and 0.89, respectively. Saliency maps consistently highlighted optic disc margins, subretinal fluid, and other hallmarks. Targeted class re-balancing, lightweight attention, and integrated explainability, therefore, deliver accurate, transparent, and deployable retinal screening suitable for point-of-care ophthalmic triage on resource-limited hardware. Full article
(This article belongs to the Section Medical Imaging)
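A minimal PyTorch sketch of two of the building blocks named in the abstract: a depthwise separable convolution followed by squeeze-and-excitation channel reweighting. The layer widths, reduction ratio, and the omission of the global-context attention are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight channels using globally pooled statistics."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))        # squeeze: global average pooling
        return x * w.view(b, c, 1, 1)          # excite: per-channel rescaling

class DepthwiseSeparableSE(nn.Module):
    """Depthwise conv + pointwise conv, then SE, as one lightweight unit."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        self.act = nn.ReLU(inplace=True)
        self.se = SEBlock(out_ch)

    def forward(self, x):
        return self.se(self.act(self.pointwise(self.depthwise(x))))

x = torch.randn(2, 32, 150, 150)               # e.g. 150x150 fundus feature maps
print(DepthwiseSeparableSE(32, 64)(x).shape)   # torch.Size([2, 64, 150, 150])
```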

19 pages, 1711 KiB  
Article
TSDCA-BA: An Ultra-Lightweight Speech Enhancement Model for Real-Time Hearing Aids with Multi-Scale STFT Fusion
by Zujie Fan, Zikun Guo, Yanxing Lai and Jaesoo Kim
Appl. Sci. 2025, 15(15), 8183; https://doi.org/10.3390/app15158183 - 23 Jul 2025
Viewed by 563
Abstract
Lightweight speech denoising models have made remarkable progress in improving both speech quality and computational efficiency. However, most models rely on long temporal windows as input, limiting their applicability in low-latency, real-time scenarios on edge devices. To address this challenge, we propose a lightweight hybrid module, Temporal Statistics Enhancement, Squeeze-and-Excitation-based Dual Convolutional Attention, and Band-wise Attention (TSE, SDCA, BA) Module. The TSE module enhances single-frame spectral features by concatenating statistical descriptors—mean, standard deviation, maximum, and minimum—thereby capturing richer local information without relying on temporal context. The SDCA and BA module integrates a simplified residual structure and channel attention, while the BA component further strengthens the representation of critical frequency bands through band-wise partitioning and differentiated weighting. The proposed model requires only 0.22 million multiply–accumulate operations (MMACs) and contains a total of 112.3 K parameters, making it well suited for low-latency, real-time speech enhancement applications. Experimental results demonstrate that among lightweight models with fewer than 200K parameters, the proposed approach outperforms most existing methods in both denoising performance and computational efficiency, significantly reducing processing overhead. Furthermore, real-device deployment on an improved hearing aid confirms an inference latency as low as 2 milliseconds, validating its practical potential for real-time edge applications. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
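The TSE module's idea of enriching a single spectral frame with its own statistics, without any temporal context, can be sketched as follows. Treating the frame as a magnitude-spectrum vector and appending exactly these four descriptors are assumptions drawn from the abstract's description.

```python
import torch

def tse_features(frame):
    """Append mean/std/max/min descriptors to a single spectral frame.

    frame: (batch, freq_bins) magnitude spectrum of the current frame.
    Returns (batch, freq_bins + 4), enriched without using temporal context.
    """
    stats = torch.stack(
        [frame.mean(dim=1), frame.std(dim=1),
         frame.max(dim=1).values, frame.min(dim=1).values],
        dim=1,
    )
    return torch.cat([frame, stats], dim=1)

frame = torch.rand(4, 257)           # e.g. one frame of a 512-point STFT
print(tse_features(frame).shape)     # torch.Size([4, 261])
```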

20 pages, 2132 KiB  
Article
Deep Learning with Dual-Channel Feature Fusion for Epileptic EEG Signal Classification
by Bingbing Yu, Mingliang Zuo and Li Sui
Eng 2025, 6(7), 150; https://doi.org/10.3390/eng6070150 - 2 Jul 2025
Viewed by 495
Abstract
Background: Electroencephalography (EEG) signals play a crucial role in diagnosing epilepsy by reflecting distinct patterns associated with normal brain activity, ictal (seizure) states, and interictal (between-seizure) periods. However, the manual classification of these patterns is labor-intensive, time-consuming, and depends heavily on specialized expertise. While deep learning methods have shown promise, many current models suffer from limitations such as excessive complexity, high computational demands, and insufficient generalizability. Developing lightweight and accurate models for real-time epilepsy detection remains a key challenge. Methods: This study proposes a novel dual-channel deep learning model to classify epileptic EEG signals into three categories: normal, ictal, and interictal states. Channel 1 integrates a bidirectional long short-term memory (BiLSTM) network with a Squeeze-and-Excitation (SE) ResNet attention module to dynamically emphasize critical feature channels. Channel 2 employs a dual-branch convolutional neural network (CNN) to extract deeper and distinct features. The model’s performance was evaluated on the publicly available Bonn EEG dataset. Results: The proposed model achieved an outstanding accuracy of 98.57%. The dual-channel structure improved specificity to 99.43%, while the dual-branch CNN boosted sensitivity by 5.12%. Components such as SE-ResNet attention modules contributed 4.29% to the accuracy improvement, and BiLSTM further enhanced specificity by 1.62%. Ablation studies validated the significance of each module. Conclusions: By leveraging a lightweight design and attention-based mechanisms, the dual-channel model offers high diagnostic precision while maintaining computational efficiency. Its applicability to real-time automated diagnosis positions it as a promising tool for clinical deployment across diverse patient populations. Full article
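The dual-channel idea can be rendered schematically in PyTorch as a BiLSTM branch and a 1-D CNN branch whose embeddings are concatenated for the three-class output. Hidden sizes, kernel sizes, and the segment length are placeholders, and the SE-ResNet attention and dual-branch CNN details of the paper are omitted for brevity.

```python
import torch
import torch.nn as nn

class DualChannelEEG(nn.Module):
    """Channel 1: BiLSTM over the raw sequence. Channel 2: 1-D CNN branch.
    Embeddings are concatenated and mapped to {normal, ictal, interictal}."""
    def __init__(self, hidden=64, n_classes=3):
        super().__init__()
        self.bilstm = nn.LSTM(input_size=1, hidden_size=hidden,
                              batch_first=True, bidirectional=True)
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(2 * hidden + 32, n_classes)

    def forward(self, x):                      # x: (batch, time)
        seq = x.unsqueeze(-1)                  # (batch, time, 1) for the LSTM
        _, (h, _) = self.bilstm(seq)           # h: (2, batch, hidden)
        lstm_emb = torch.cat([h[0], h[1]], dim=1)
        cnn_emb = self.cnn(x.unsqueeze(1)).squeeze(-1)   # (batch, 32)
        return self.head(torch.cat([lstm_emb, cnn_emb], dim=1))

x = torch.randn(8, 1024)                       # batch of EEG segments (length illustrative)
print(DualChannelEEG()(x).shape)               # torch.Size([8, 3])
```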

26 pages, 3424 KiB  
Article
MFF: A Multimodal Feature Fusion Approach for Encrypted Traffic Classification
by Hong Huang, Yinghang Zhou, Feng Jiang, Xiaolin Zhou and Qingping Jiang
Electronics 2025, 14(13), 2584; https://doi.org/10.3390/electronics14132584 - 26 Jun 2025
Viewed by 446
Abstract
With the widespread adoption of encryption technologies, encrypted traffic classification has become essential for maintaining network security awareness and optimizing service quality. However, existing deep learning-based methods often rely on fixed-length truncation during preprocessing, which can lead to the loss of critical information and degraded classification performance. To address this issue, we propose a Multi-Feature Fusion (MFF) model that learns robust representations of encrypted traffic through a dual-path feature extraction architecture. The temporal modeling branch incorporates a Squeeze-and-Excitation (SE) attention mechanism into ResNet18 to dynamically emphasize salient temporal patterns. Meanwhile, the global statistical feature branch uses an autoencoder for the nonlinear dimensionality reduction and semantic reconstruction of 52-dimensional statistical features, effectively preserving high-level semantic information of traffic interactions. MFF integrates both feature types to achieve feature enhancement and construct a more robust representation, thereby improving classification accuracy and generalization. In addition, SHAP-based interpretability analysis further validates the model’s decision-making process and reliability. Experimental results show that MFF achieves classification accuracies of 99.61% and 99.99% on the ISCX VPN-nonVPN and USTC-TFC datasets, respectively, outperforming mainstream baselines. Full article
(This article belongs to the Section Networks)
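A sketch of the global statistical branch: a small autoencoder compresses the 52-dimensional flow statistics, and its bottleneck code is concatenated with the temporal embedding before classification. The layer widths, code size, 512-D temporal embedding, and class count are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class StatsAutoencoder(nn.Module):
    """Nonlinear reduction of the 52-D flow statistics to a compact code."""
    def __init__(self, in_dim=52, code_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                     nn.Linear(32, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 32), nn.ReLU(),
                                     nn.Linear(32, in_dim))

    def forward(self, stats):
        code = self.encoder(stats)
        return code, self.decoder(code)

class FusionHead(nn.Module):
    """Concatenate a temporal embedding (e.g. from an SE-augmented ResNet18 branch)
    with the autoencoder code and classify. Dimensions here are placeholders."""
    def __init__(self, temporal_dim=512, code_dim=16, n_classes=12):
        super().__init__()
        self.ae = StatsAutoencoder(code_dim=code_dim)
        self.classifier = nn.Linear(temporal_dim + code_dim, n_classes)

    def forward(self, temporal_emb, stats):
        code, recon = self.ae(stats)
        logits = self.classifier(torch.cat([temporal_emb, code], dim=1))
        return logits, recon           # recon can drive an auxiliary MSE loss

logits, recon = FusionHead()(torch.randn(4, 512), torch.randn(4, 52))
print(logits.shape, recon.shape)       # torch.Size([4, 12]) torch.Size([4, 52])
```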

35 pages, 16759 KiB  
Article
A Commodity Recognition Model Under Multi-Size Lifting and Lowering Sampling
by Mengyuan Chen, Song Chen, Kai Xie, Bisheng Wu, Ziyu Qiu, Haofei Xu and Jianbiao He
Electronics 2025, 14(11), 2274; https://doi.org/10.3390/electronics14112274 - 2 Jun 2025
Viewed by 568
Abstract
Object detection algorithms have evolved from two-stage to single-stage architectures, with foundation models achieving sustained improvements in accuracy. However, in intelligent retail scenarios, small object detection and occlusion issues still lead to significant performance degradation. To address these challenges, this paper proposes an improved model based on YOLOv11, focusing on resolving insufficient multi-scale feature coupling and occlusion sensitivity. First, a multi-scale feature extraction network (MFENet) is designed. It splits input feature maps into dual branches along the channel dimension: the upper branch performs local detail extraction and global semantic enhancement through secondary partitioning, while the lower branch integrates CARAFE (content-aware reassembly of features) upsampling and SENet (squeeze-and-excitation network) channel weight matrices to achieve adaptive feature enhancement. The three feature streams are fused to output multi-scale feature maps, significantly improving small object detail retention. Second, a convolutional block attention module (CBAM) is introduced during feature fusion, dynamically focusing on critical regions through channel–spatial dual attention mechanisms. A fuseModule is designed to aggregate multi-level features, enhancing contextual modeling for occluded objects. Additionally, the extreme-IoU (XIoU) loss function replaces the traditional complete-IoU (CIoU), combined with XIoU-NMS (extreme-IoU non-maximum suppression) to suppress redundant detections, optimizing convergence speed and localization accuracy. Experiments demonstrate that the improved model achieves a mean average precision (mAP50) of 0.997 (0.2% improvement) and mAP50-95 of 0.895 (3.5% improvement) on the RPC product dataset and the 6th Product Recognition Challenge dataset. The recall rate increases to 0.996 (0.6% improvement over baseline). Although frames per second (FPS) decreased compared to the original model, the improved model still meets real-time requirements for retail scenarios. The model exhibits stable noise resistance in challenging environments and achieves 84% mAP in cross-dataset testing, validating its generalization capability and engineering applicability. Video streams were captured using a Zhongweiaoke camera operating at 60 fps, satisfying real-time detection requirements for intelligent retail applications. Full article
(This article belongs to the Special Issue Emerging Technologies in Computational Intelligence)
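The CBAM introduced during feature fusion applies channel attention followed by spatial attention. Below is a compact PyTorch rendering under common CBAM defaults (reduction ratio 16, 7×7 spatial kernel), which may differ from the exact settings used in this paper.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention, then spatial."""
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled channel descriptors.
        ch = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))) + self.mlp(x.amax(dim=(2, 3))))
        x = x * ch.view(b, c, 1, 1)
        # Spatial attention from channel-wise average and max maps.
        sp = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)))
        return x * sp

print(CBAM(64)(torch.randn(1, 64, 40, 40)).shape)   # torch.Size([1, 64, 40, 40])
```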

22 pages, 6392 KiB  
Article
Dual-Phase Severity Grading of Strawberry Angular Leaf Spot Based on Improved YOLOv11 and OpenCV
by Yi-Xiao Xu, Xin-Hao Yu, Qing Yi, Qi-Yuan Zhang and Wen-Hao Su
Plants 2025, 14(11), 1656; https://doi.org/10.3390/plants14111656 - 29 May 2025
Viewed by 727
Abstract
Phyllosticta fragaricola-induced angular leaf spot causes substantial economic losses in global strawberry production, necessitating advanced severity assessment methods. This study proposed a dual-phase grading framework integrating deep learning and computer vision. The enhanced You Only Look Once version 11 (YOLOv11) architecture incorporated a Content-Aware ReAssembly of FEatures (CARAFE) module for improved feature upsampling and a squeeze-and-excitation (SE) attention mechanism for channel-wise feature recalibration, resulting in YOLOv11-CARAFE-SE for the severity assessment of strawberry angular leaf spot. Furthermore, an OpenCV-based threshold segmentation algorithm using H-channel thresholds in the HSV color space achieved accurate lesion segmentation. A disease severity grading standard for strawberry angular leaf spot was established based on the ratio of lesion area to leaf area. In addition, specialized software for the assessment of disease severity was developed based on the improved YOLOv11-CARAFE-SE model and the OpenCV-based algorithm. Experimental results show that, compared with the baseline YOLOv11, performance is significantly improved: the box mAP@0.5 is increased by 1.4% to 93.2%, the mask mAP@0.5 is increased by 0.9% to 93.0%, the inference time is shortened by 0.4 ms to 0.9 ms, and the computational load is reduced by 1.94% to 10.1 GFLOPS. In addition, this two-stage grading framework achieves an average accuracy of 94.2% in detecting selected strawberry angular leaf spot samples, providing real-time field diagnostics and high-throughput phenotypic analysis for resistance breeding programs. This work demonstrates the feasibility of rapidly estimating the severity of strawberry angular leaf spot, establishing a robust technical framework for strawberry disease management under field conditions. Full article
(This article belongs to the Section Crop Physiology and Crop Production)
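The OpenCV stage reduces to an H-channel threshold inside a leaf mask followed by an area-ratio computation. The hue range and the grade cut-offs below are placeholders, since the abstract does not give the paper's actual thresholds or grading standard.

```python
import cv2
import numpy as np

def lesion_severity_ratio(image_bgr, leaf_mask, h_range=(20, 35)):
    """Segment lesions by an H-channel threshold in HSV and return the
    lesion-to-leaf area ratio. The hue range here is a placeholder value."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    h = hsv[:, :, 0]
    lesion = (h >= h_range[0]) & (h <= h_range[1]) & (leaf_mask > 0)
    leaf_area = int(np.count_nonzero(leaf_mask))
    return float(np.count_nonzero(lesion)) / leaf_area if leaf_area else 0.0

def severity_grade(ratio, cutoffs=(0.05, 0.10, 0.25, 0.50)):
    """Map the area ratio to a grade 0-4 (cut-offs are illustrative only)."""
    return sum(ratio >= c for c in cutoffs)

img = np.zeros((100, 100, 3), dtype=np.uint8)
mask = np.ones((100, 100), dtype=np.uint8)
print(severity_grade(lesion_severity_ratio(img, mask)))   # 0
```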

25 pages, 6037 KiB  
Article
Extraction of Levees from Paddy Fields Based on the SE-CBAM UNet Model and Remote Sensing Images
by Hongfu Ai, Xiaomeng Zhu, Yongqi Han, Shinai Ma, Yiang Wang, Yihan Ma, Chuan Qin, Xinyi Han, Yaxin Yang and Xinle Zhang
Remote Sens. 2025, 17(11), 1871; https://doi.org/10.3390/rs17111871 - 28 May 2025
Viewed by 633
Abstract
During rice cultivation, extracting levees helps to delineate effective planting areas, thereby enhancing the precision of management zones. This approach is crucial for devising more efficient water field management strategies and has significant implications for water-saving irrigation and fertilizer optimization in rice production. The uneven distribution and lack of standardization of levees pose significant challenges for their accurate extraction. However, recent advancements in remote sensing and deep learning technologies have provided viable solutions. In this study, Youyi Farm in Shuangyashan City, Heilongjiang Province, was chosen as the experimental site. We developed the SCA-UNet model by optimizing the UNet algorithm and enhancing its network architecture through the integration of the Convolutional Block Attention Module (CBAM) and Squeeze-and-Excitation Networks (SE). The SCA-UNet model leverages the channel attention strengths of SE while incorporating CBAM to emphasize spatial information. Through a dual-attention collaborative mechanism, the model achieves a synergistic perception of the linear features and boundary information of levees, thereby significantly improving the accuracy of levee extraction. The experimental results demonstrate that the proposed SCA-UNet model and its additional modules offer substantial performance advantages. Our algorithm outperforms existing methods in both computational efficiency and precision. Significance analysis revealed that our method achieved overall accuracy (OA) and F1-score values of 88.4% and 90.6%, respectively. These results validate the efficacy of the multimodal dataset in addressing the issue of ambiguous levee boundaries. Additionally, ablation experiments using 10-fold cross-validation confirmed the effectiveness of the proposed SCA-UNet method. This approach provides a robust technical solution for levee extraction and has the potential to significantly advance precision agriculture. Full article
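For reference, the two reported metrics can be computed from a flattened binary levee mask as follows; evaluating them per pixel is an assumption about the paper's protocol.

```python
import numpy as np

def oa_and_f1(y_true, y_pred):
    """Overall accuracy (OA) and F1 for a flattened binary levee mask (1 = levee)."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    oa = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return oa, f1

y_true = np.random.randint(0, 2, 10_000)
y_pred = np.random.randint(0, 2, 10_000)
print(oa_and_f1(y_true, y_pred))      # roughly (0.5, 0.5) for random masks
```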

28 pages, 3777 KiB  
Article
Multisensor Fault Diagnosis of Rolling Bearing with Noisy Unbalanced Data via Intuitionistic Fuzzy Weighted Least Squares Twin Support Higher-Order Tensor Machine
by Shengli Dong, Yifang Zhang and Shengzheng Wang
Machines 2025, 13(6), 445; https://doi.org/10.3390/machines13060445 - 22 May 2025
Cited by 1 | Viewed by 498
Abstract
Aiming at the limitations of existing multisensor fault diagnosis methods for rolling bearings in real industrial scenarios, this paper proposes an innovative intuitionistic fuzzy weighted least squares twin support higher-order tensor machine (IFW-LSTSHTM) model, which achieves a breakthrough in noise robustness, adaptability to working conditions, and class-imbalance processing capability. First, the multimodal feature tensor is constructed: the Fourier synchrosqueezed transform is used to convert the multisensor time-domain signals into time–frequency images, and the tensor is then reconstructed to retain the three-dimensional structural information of the sensor coupling relationship and time–frequency features. A nonlinear feature mapping strategy combined with Tucker decomposition effectively maintains the high-order correlation of the feature tensor. Second, an adaptive sample-weighting mechanism is developed: an intuitionistic fuzzy membership score assignment scheme with global–local information fusion is proposed. At the global level, the class contribution is assessed based on the relative position of the samples to the classification boundary; at the local level, the topological structural features of the sample distribution are captured by K-nearest neighbor analysis; this mechanism significantly improves the recognition of noisy samples and the handling of class-imbalanced data. Finally, a dual hyperplane classifier is constructed in tensor space: a structural risk regularization term is introduced to enhance the model's generalization ability, and a dynamic penalty factor assigns adaptive weights to different categories. A linear equation system solving strategy is adopted: the nonparallel hyperplane optimization is converted into matrix operations to improve computational efficiency. Extensive experimental results on two rolling bearing datasets verify that the proposed method outperforms existing solutions in diagnostic accuracy and stability. Full article
(This article belongs to the Section Machines Testing and Maintenance)
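The first stage — turning multisensor time-domain signals into a time-frequency feature tensor — can be approximated with a plain STFT standing in for the Fourier synchrosqueezed transform used in the paper; the sampling rate and window length below are placeholders.

```python
import numpy as np
from scipy.signal import stft

def multisensor_tf_tensor(signals, fs=12_000.0):
    """Stack per-sensor time-frequency magnitudes into a 3-D feature tensor.

    signals: (n_sensors, n_samples) vibration signals.
    Returns (n_sensors, n_freq, n_frames). A plain STFT is used here as a
    stand-in for the Fourier synchrosqueezed transform.
    """
    maps = []
    for sig in signals:
        _, _, Z = stft(sig, fs=fs, nperseg=256)
        maps.append(np.abs(Z))
    return np.stack(maps, axis=0)

tensor = multisensor_tf_tensor(np.random.randn(3, 12_000))
print(tensor.shape)            # (3, 129, ~95)
```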

28 pages, 17488 KiB  
Article
Attentive Multi-Scale Features with Adaptive Context PoseResNet for Resource-Efficient Human Pose Estimation
by Ali Zakir, Sartaj Ahmed Salman, Gibran Benitez-Garcia and Hiroki Takahashi
Electronics 2025, 14(11), 2107; https://doi.org/10.3390/electronics14112107 - 22 May 2025
Viewed by 649
Abstract
Human Pose Estimation (HPE) remains challenging due to scale variation, occlusion, and high computational costs. Standard methods often struggle to capture detailed spatial information when keypoints are obscured, and they typically rely on computationally expensive deconvolution layers for upsampling, making them inefficient for real-time or resource-constrained scenarios. We propose AMFACPose (Attentive Multi-scale Features with Adaptive Context PoseResNet) to address these limitations. Specifically, our architecture incorporates Coordinate Convolution 2D (CoordConv2d) to retain explicit spatial context, alleviating the loss of coordinate information in conventional convolutions. To reduce computational overhead while maintaining accuracy, we utilize Depthwise Separable Convolutions (DSCs), separating spatial and pointwise operations. At the core of our approach is an Adaptive Feature Pyramid Network (AFPN), which replaces costly deconvolution-based upsampling by efficiently aggregating multi-scale features to handle diverse human poses and body sizes. We further introduce Dual-Gate Context Blocks (DGCBs) that refine global context to manage partial occlusions and cluttered backgrounds. The model integrates Squeeze-and-Excitation (SE) blocks and the Spatial–Channel Refinement Module (SCRM) to emphasize the most informative feature channels and spatial regions, which is particularly beneficial for occluded or overlapping keypoints. For precise keypoint localization, we replace dense heatmap predictions with coordinate classification using Multi-Layer Perceptron (MLP) heads. Experiments on the COCO and CrowdPose datasets demonstrate that AMFACPose surpasses the existing 2D HPE methods in both accuracy and computational efficiency. Moreover, our implementation on edge devices achieves real-time performance while preserving high accuracy, confirming the suitability of AMFACPose for resource-constrained pose estimation in both benchmark and real-world environments. Full article
(This article belongs to the Special Issue Image Processing Based on Convolution Neural Network: 2nd Edition)
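CoordConv2d simply appends normalized x/y coordinate channels before an ordinary convolution; a minimal version is shown below (the paper's exact variant may differ).

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Conv2d with two extra input channels holding normalized (x, y) coordinates."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + 2, out_ch, kernel_size, padding=padding)

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, xs, ys], dim=1))

print(CoordConv2d(64, 64)(torch.randn(2, 64, 64, 48)).shape)  # torch.Size([2, 64, 64, 48])
```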

22 pages, 254 KiB  
Article
The Impact of Environmental Regulations on Technological Progress of the Pesticide Manufacturing Industry in China
by Haixia Yang, Xinxin Zhu and Chao Chen
Sustainability 2025, 17(10), 4550; https://doi.org/10.3390/su17104550 - 16 May 2025
Viewed by 446
Abstract
The Chinese government has been continuously strengthening environmental regulations to promote the reduction in pesticide use. However, the issue of excessive pesticide use remains unresolved. Technological progress in the pesticide manufacturing industry plays a critical role in reducing pesticide intensity and is a key objective of environmental regulations for pesticides. This study examines the impact of China's environmental regulations on technological progress in the pesticide manufacturing industry by using panel data from 30 provinces between 2004 and 2020 and constructing command-and-control and market-incentive environmental regulation measures. Empirical results show that environmental regulations have significantly promoted technological progress in the pesticide manufacturing industry, with market-incentive environmental regulations proving more effective than command-and-control environmental regulations. Regional analysis reveals that the eastern and western regions are consistent with the national results, while the central region shows heterogeneity. In the eastern and western regions, environmental regulations have fostered technological progress, generating an "innovation compensation effect". However, the central region exhibits a dual effect. On one hand, environmental regulations have stimulated research into pesticide technologies; on the other hand, they have squeezed out investment in high-quality and innovative technologies, thereby hindering technological progress to some extent. Consequently, the government should enhance environmental supervision, revise environmental protection laws, and increase investments and subsidies for pesticide enterprises to foster technological innovation. Moreover, the formulation and implementation of environmental regulations should account for regional disparities. Full article
21 pages, 78310 KiB  
Article
Effect of Laser Power on Formation and Joining Strength of DP980-CFRP Joint Fabricated by Laser Circle Welding
by Sendong Ren, Yihao Shen, Taowei Wang, Hao Chen, Ninshu Ma and Jianguo Yang
Polymers 2025, 17(7), 997; https://doi.org/10.3390/polym17070997 - 7 Apr 2025
Viewed by 536
Abstract
In the present research, laser circle welding (LCW) was proposed to join dual-phase steel (DP980) and carbon fiber-reinforced plastic (CFRP). The welding appearance, cross-section of the welded joint and fracture surfaces were subjected to multi-scale characterizations. Joining strength was evaluated by the single-lap shear test. Moreover, a numerical model was established based on the in-house finite element (FE) code JWRIAN-Hybrid to reproduce the thermal process of LCW. The results showed that successful bonding was achieved with a laser power higher than 300 W. The largest joining strength increased to about 1353.2 N (12.2 MPa) with 450 W laser power and then decreased under higher heat input. While the welded joint always presented brittle fracture, the joining zone could be divided into a squeezed zone (SZ), molten zone (MZ) and decomposition zone (DZ). The morphology of CFRP and chemical bonding information were distinct in each subregion. The chemical reaction between the O-C=O bond on the CFRP surface and the -OH bond on the DP980 sheet provided the joining force between dissimilar materials. Additionally, the developed FE model was effective in predicting the interfacial maximum temperature distribution of LCW. The influence of laser power on the joining strength of LCW joints was dualistic in character. The joining strength variation reflected the competitive result between joining zone expansion and local bonding quality change. Full article
(This article belongs to the Special Issue Advanced Joining Technologies for Polymers and Polymer Composites)

19 pages, 7175 KiB  
Article
MFFSNet: A Lightweight Multi-Scale Shuffle CNN Network for Wheat Disease Identification in Complex Contexts
by Mingjin Xie, Jiening Wu, Jie Sun, Lei Xiao, Zhenqi Liu, Rui Yuan, Shukai Duan and Lidan Wang
Agronomy 2025, 15(4), 910; https://doi.org/10.3390/agronomy15040910 - 7 Apr 2025
Viewed by 685
Abstract
Wheat is one of the most essential food crops globally, but diseases significantly threaten its yield and quality, resulting in considerable economic losses. The identification of wheat diseases faces challenges such as interference from complex field environments, the inefficiency of traditional machine learning methods, and the difficulty of deploying existing deep learning models. To address these challenges, this study proposes a multi-scale feature fusion shuffle network model (MFFSNet) for wheat disease identification in complex field environments. MFFSNet incorporates a multi-scale feature extraction and fusion module (MFEF), which uses dilated convolutions to efficiently capture diverse features, and its main building blocks are improved ShuffleNetV2 units. A dual-branch shuffle attention mechanism (DSA) is also integrated to enhance the model's focus on critical features, reducing interference from complex backgrounds. The model is characterized by its small size and fast operation. The experimental results demonstrate that the proposed DSA attention mechanism outperforms the best-performing Squeeze-and-Excitation (SE) block by approximately 1% in accuracy, with the final model achieving 97.38% accuracy and 97.96% recall on the test set, higher than classical models such as GoogleNet, MobileNetV3, and Swin Transformer. In addition, the model has only 0.45 M parameters, one-third as many as MobileNetV3 Small, making it well suited for deployment on devices with limited memory resources and demonstrating great potential for practical applications in agricultural production. Full article
(This article belongs to the Section Pest and Disease Management)
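The ShuffleNetV2 units underlying MFFSNet depend on a channel-shuffle operation that mixes information across grouped convolutions; a minimal version is sketched below.

```python
import torch

def channel_shuffle(x, groups):
    """Interleave channels across groups, as in ShuffleNetV2 units."""
    b, c, h, w = x.shape
    x = x.view(b, groups, c // groups, h, w)   # split channels into groups
    x = x.transpose(1, 2).contiguous()         # swap group and channel axes
    return x.view(b, c, h, w)                  # flatten back: channels now mixed

x = torch.arange(8.0).view(1, 8, 1, 1)
print(channel_shuffle(x, 2).flatten().tolist())  # [0, 4, 1, 5, 2, 6, 3, 7]
```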

20 pages, 2239 KiB  
Article
A Novel Lightweight Deep Learning Approach for Drivers’ Facial Expression Detection
by Jia Uddin
Designs 2025, 9(2), 45; https://doi.org/10.3390/designs9020045 - 3 Apr 2025
Cited by 1 | Viewed by 1009
Abstract
Drivers’ facial expression recognition systems play a pivotal role in Advanced Driver Assistance Systems (ADASs) by monitoring emotional states and detecting fatigue or distractions in real time. However, deploying such systems in resource-constrained environments like vehicles requires lightweight architectures to ensure real-time performance, efficient model updates, and compatibility with embedded hardware. Smaller models significantly reduce communication overhead in distributed training. For autonomous vehicles, lightweight architectures also minimize the data transfer required for over-the-air updates. Moreover, they are crucial for their deployability on hardware with limited on-chip memory. In this work, we propose a novel Dual Attention Lightweight Deep Learning (DALDL) approach for drivers’ facial expression recognition. The proposed approach combines the SqueezeNext architecture with a Dual Attention Convolution (DAC) block. Our DAC block integrates Hybrid Channel Attention (HCA) and Coordinate Space Attention (CSA) to enhance feature extraction efficiency while maintaining minimal parameter overhead. To evaluate the effectiveness of our architecture, we compare it against two baselines: (a) Vanilla SqueezeNet and (b) AlexNet. Compared with SqueezeNet, DALDL improves accuracy by 7.96% and F1-score by 7.95% on the KMU-FED dataset. On the CK+ dataset, it achieves 8.51% higher accuracy and 8.40% higher F1-score. Against AlexNet, DALDL improves accuracy by 4.34% and F1-score by 4.17% on KMU-FED. Lastly, on CK+, it provides a 5.36% boost in accuracy and a 7.24% increase in F1-score. These results demonstrate that DALDL is a promising solution for efficient and accurate emotion recognition in real-world automotive applications. Full article
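The abstract does not detail the Coordinate Space Attention (CSA) design; the sketch below follows the general coordinate-attention pattern (pooling separately along height and width to form row and column gates), which may differ from the paper's CSA and omits the Hybrid Channel Attention branch.

```python
import torch
import torch.nn as nn

class CoordAttention(nn.Module):
    """Coordinate-attention-style block: pool along H and W separately, then
    produce per-row and per-column gates that rescale the feature map."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        mid = max(channels // reduction, 8)
        self.shared = nn.Sequential(nn.Conv2d(channels, mid, 1), nn.ReLU(inplace=True))
        self.to_h = nn.Conv2d(mid, channels, 1)
        self.to_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        pooled_h = x.mean(dim=3, keepdim=True)                      # (b, c, h, 1)
        pooled_w = x.mean(dim=2, keepdim=True)                      # (b, c, 1, w)
        gate_h = torch.sigmoid(self.to_h(self.shared(pooled_h)))    # per-row gate
        gate_w = torch.sigmoid(self.to_w(self.shared(pooled_w)))    # per-column gate
        return x * gate_h * gate_w

print(CoordAttention(32)(torch.randn(2, 32, 48, 48)).shape)  # torch.Size([2, 32, 48, 48])
```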

19 pages, 2703 KiB  
Article
DualNetIQ: Texture-Insensitive Image Quality Assessment with Dual Multi-Scale Feature Maps
by Adel Agamy, Hossam Mady, Hamada Esmaiel, Abdulrahman Al Ayidh, Abdelmageed Mohamed Aly and Mohamed Abdel-Nasser
Electronics 2025, 14(6), 1169; https://doi.org/10.3390/electronics14061169 - 17 Mar 2025
Viewed by 682
Abstract
The precise assessment of image quality that matches human perception remains a major challenge in the field of digital imaging. Digital images play a crucial role in many technological and media applications. Existing deep convolutional neural network (CNN)-based image quality assessment (IQA) methods have advanced considerably, but there remains a critical need to improve their performance while maintaining explicit tolerance to visual texture resampling and texture similarity. This paper introduces DualNetIQ, a novel full-reference IQA method that leverages the strengths of deep learning architectures to exhibit robustness against resampling effects on visual textures. DualNetIQ comprises two main stages: feature extraction from the reference and distorted images, and similarity measurement based on combining global texture and structure similarity metrics. In particular, DualNetIQ extracts features from the input images using a set of hybrid multi-scale feature maps carefully chosen from the pre-trained VGG19 and SqueezeNet CNN models to capture differences in texture and structure between the reference and distorted images. The Grey Wolf Optimizer (GWO) tunes the weighted combination of global texture and structure similarity metrics used to assess the similarity between the reference and distorted images. A unique advantage of the proposed method is that it does not require training or fine-tuning a deep CNN model. Comprehensive experiments and comparisons on five databases covering various distortion types demonstrate the superiority of the proposed method over state-of-the-art models, particularly in image quality prediction and texture similarity tasks. Full article
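One common way to realize a "global texture + structure" comparison on deep feature maps is to compare channel means (texture) and channel-wise spatial correlations (structure), in the spirit of DISTS-style measures. The formulation and the fixed weight below are illustrative only, whereas the paper tunes the combination with the Grey Wolf Optimizer.

```python
import torch

def texture_structure_similarity(fx, fy, alpha=0.5, eps=1e-6):
    """Compare two feature maps of shape (C, H, W) from the same network layer.

    Texture term: similarity of per-channel global means.
    Structure term: per-channel correlation of spatial patterns.
    Returns the alpha-weighted combination (higher = more similar).
    """
    mx, my = fx.mean(dim=(1, 2)), fy.mean(dim=(1, 2))
    vx = fx.var(dim=(1, 2), unbiased=False)
    vy = fy.var(dim=(1, 2), unbiased=False)
    cov = ((fx - mx[:, None, None]) * (fy - my[:, None, None])).mean(dim=(1, 2))
    texture = ((2 * mx * my + eps) / (mx ** 2 + my ** 2 + eps)).mean()
    structure = ((2 * cov + eps) / (vx + vy + eps)).mean()
    return alpha * texture + (1 - alpha) * structure

score = texture_structure_similarity(torch.rand(64, 32, 32), torch.rand(64, 32, 32))
print(float(score))
```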