Search Results (4)

Search Parameters:
Keywords = RefConv feature enhancement module

16 pages, 3170 KB  
Article
Improvement in Pavement Defect Scenarios Using an Improved YOLOv10 with ECA Attention, RefConv and WIoU
by Xiaolin Zhang, Lei Lu, Hanyun Luo and Lei Wang
World Electr. Veh. J. 2025, 16(6), 328; https://doi.org/10.3390/wevj16060328 - 13 Jun 2025
Cited by 3 | Viewed by 1128
Abstract
This study addresses challenges such as multi-scale defects, varying lighting, and irregular shapes by proposing an improved YOLOv10 model that integrates the ECA attention mechanism, the RefConv feature enhancement module, and the WIoU loss function for complex pavement defect detection. The RefConv dual-branch structure achieves feature complementarity between local details and global context (mAP increased by 2.1%), the ECA mechanism models channel relationships using 1D convolution (small-object recall increased by 27%), and the WIoU loss optimizes regression for difficult samples through a dynamic weighting mechanism (localization accuracy improved by 37%). Experiments on a dataset of 23,949 high-resolution images show that the improved model reaches 68.2% mAP, a 6.2% gain over the baseline YOLOv10, while maintaining a stable 83.5% recall in highly reflective and low-light scenarios at an inference speed of 158 FPS (RTX 4080), providing a high-precision, real-time solution for intelligent road inspection.
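
To make the channel-attention step concrete, below is a minimal PyTorch sketch of an ECA-style block as described in the abstract: global average pooling followed by a single 1D convolution across channels. It follows the standard ECA formulation, including its adaptive kernel-size heuristic, but it is an illustration rather than the authors' implementation.

```python
import math
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: models cross-channel interaction
    with a single 1D convolution over the pooled channel descriptor."""
    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        # Kernel size adapts to the channel count (kept odd, per the ECA paper).
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.pool(x)                                      # (N, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(-1, -2))        # 1D conv across channels
        y = self.sigmoid(y.transpose(-1, -2).unsqueeze(-1))   # (N, C, 1, 1) gates
        return x * y                                          # reweight input channels
```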

21 pages, 4057 KB  
Article
RHS-YOLOv8: A Lightweight Underwater Small Object Detection Algorithm Based on Improved YOLOv8
by Yifan Wei, Jun Tao, Wenjun Wu, Donghua Yuan and Shunzhi Hou
Appl. Sci. 2025, 15(7), 3778; https://doi.org/10.3390/app15073778 - 30 Mar 2025
Cited by 4 | Viewed by 3481
Abstract
To address the challenge posed by the abundance of small objects with weak features and little information in images from underwater biomonitoring scenarios, and the added difficulty of recognizing these objects due to light absorption and scattering in the underwater environment, this study proposes an improved RHS-YOLOv8 (Ref-Dilated-HBFPN-SOB-YOLOv8). First, a combination of hybrid dilated convolution and RefConv is used to redesign a lightweight Ref-Dilated convolution block, which reduces model computation. Second, a new feature pyramid fusion module, the Hybrid Bridge Feature Pyramid Network (HBFPN), fuses deep features with high-level features, as well as features of the current layer, to improve feature extraction for fuzzy objects. Third, Efficient Localization Attention (ELA) is added to reduce the interference of irrelevant factors in prediction. Fourth, an Involution module is introduced to effectively capture spatial long-range relationships and improve recognition accuracy. Finally, a small object detection branch is incorporated into the original architecture to enhance performance on small objects. Experiments on the DUO dataset show that RHS-YOLOv8 reduces computation by 9.95%, while mAP@0.5 and mAP@0.5:0.95 improve by 2.54% and 4.31%, respectively. Compared with other cutting-edge underwater object detection algorithms, the proposed algorithm improves detection accuracy while remaining lightweight, effectively enhancing the capability to detect small underwater objects.
(This article belongs to the Special Issue Deep Learning for Object Detection)
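
The Ref-Dilated block itself is not specified in the abstract; as a hypothetical sketch of its dilated component, the following PyTorch module fuses parallel 3x3 branches with different dilation rates through a 1x1 projection, enlarging the receptive field at fixed parameter cost. Branch count, dilation rates, and the residual connection are assumptions for illustration, not the paper's design.

```python
import torch
import torch.nn as nn

class HybridDilatedBlock(nn.Module):
    """Sketch of a hybrid dilated convolution block: parallel 3x3
    branches with different dilation rates, fused by a 1x1 projection."""
    def __init__(self, channels: int, rates=(1, 2, 3)):
        super().__init__()
        # padding = dilation keeps spatial size for a 3x3 kernel.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=r, dilation=r, bias=False)
            for r in rates
        )
        self.fuse = nn.Conv2d(channels * len(rates), channels, 1, bias=False)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.act(self.fuse(y)) + x  # residual connection (assumed)
```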

16 pages, 952 KB  
Article
CGFTNet: Content-Guided Frequency Domain Transform Network for Face Super-Resolution
by Yeerlan Yekeben, Shuli Cheng and Anyu Du
Information 2024, 15(12), 765; https://doi.org/10.3390/info15120765 - 2 Dec 2024
Cited by 3 | Viewed by 1431
Abstract
Recent advancements in face super-resolution (FSR) have been propelled by deep learning techniques using convolutional neural networks (CNNs). However, existing methods still struggle to effectively capture global facial structure information, leading to reduced fidelity in reconstructed images, and often require additional manual data annotation. To overcome these challenges, we introduce a content-guided frequency domain transform network (CGFTNet) for face super-resolution tasks. The network features a channel-attention-linked encoder-decoder architecture with two key components: the Frequency Domain and Reparameterized Focus Convolution Feature Enhancement module (FDRFEM) and the Content-Guided Channel Attention Fusion (CGCAF) module. FDRFEM enhances feature representation through transform-domain techniques and reparameterized focus convolution (RefConv), capturing detailed facial features and improving image quality. CGCAF dynamically adjusts feature fusion based on image content, enhancing detail restoration. Extensive evaluations across multiple datasets demonstrate that the proposed CGFTNet consistently outperforms other state-of-the-art methods.
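
To illustrate the frequency-domain half of FDRFEM, here is a hypothetical PyTorch sketch that moves features into the Fourier domain, reweights the amplitude spectrum with a learned per-channel scale, and transforms back. The learned-scale design is an assumption for illustration; the actual module also incorporates RefConv, which is omitted here.

```python
import torch
import torch.nn as nn

class FreqEnhance(nn.Module):
    """Sketch of frequency-domain feature enhancement: FFT, learned
    amplitude reweighting, inverse FFT. Phase is left untouched."""
    def __init__(self, channels: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = torch.fft.rfft2(x, norm="ortho")    # complex spectrum (N, C, H, W//2+1)
        amp, phase = f.abs(), f.angle()
        amp = amp * self.scale                  # learned per-channel amplitude scaling
        f = torch.polar(amp, phase)             # recombine amplitude and phase
        return torch.fft.irfft2(f, s=x.shape[-2:], norm="ortho")
```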

15 pages, 6894 KB  
Article
A Novel Approach to Pedestrian Re-Identification in Low-Light and Zero-Shot Scenarios: Exploring Transposed Convolutional Reflectance Decoders
by Zhenghao Li and Jiping Xiong
Electronics 2024, 13(20), 4069; https://doi.org/10.3390/electronics13204069 - 16 Oct 2024
Viewed by 1682
Abstract
In recent years, pedestrian re-identification technology has made significant progress, with various neural network models performing well under normal conditions such as good weather and adequate lighting. However, most research has overlooked extreme environments such as rainy weather and nighttime, and existing pedestrian re-identification datasets consist predominantly of well-lit images. Although some studies have begun to address these issues by proposing methods that enhance low-light images to restore their original features, the effectiveness of these approaches remains limited. We noted that a method based on Retinex theory designed a reflectance representation learning module aimed at restoring image features as much as possible, but that method has so far only been applied in object detection networks. In response, we improved the method and applied it to pedestrian re-identification, proposing a transposed convolution reflectance decoder (TransConvRefDecoder) to better restore details in low-light images. Extensive experiments on the Market1501, CUHK03, and MSMT17 datasets demonstrate that our approach delivers superior performance.
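
As a rough illustration of a transposed-convolution decoder in the spirit of TransConvRefDecoder, the PyTorch sketch below upsamples encoder features with stride-2 transposed convolutions and predicts a reflectance map in [0, 1] (under Retinex theory, an image factors into reflectance and illumination). Channel widths, depth, and the input feature size are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ReflectanceDecoder(nn.Module):
    """Sketch of a transposed-convolution reflectance decoder:
    8x spatial upsampling via three stride-2 transposed convs,
    then a 3-channel reflectance head bounded by a sigmoid."""
    def __init__(self, in_ch: int = 256):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(3):  # each stage doubles spatial resolution
            layers += [
                nn.ConvTranspose2d(ch, ch // 2, 4, stride=2, padding=1, bias=False),
                nn.BatchNorm2d(ch // 2),
                nn.ReLU(inplace=True),
            ]
            ch //= 2
        layers += [nn.Conv2d(ch, 3, 3, padding=1), nn.Sigmoid()]  # reflectance in [0, 1]
        self.decode = nn.Sequential(*layers)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.decode(feats)
```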
