Search Results (6)

Search Parameters:
Keywords = AODNet

11 pages, 1981 KiB  
Article
Image Dehazing Technique Based on DenseNet and the Denoising Self-Encoder
by Kunxiang Liu, Yue Yang, Yan Tian and Haixia Mao
Processes 2024, 12(11), 2568; https://doi.org/10.3390/pr12112568 - 16 Nov 2024
Viewed by 1618
Abstract
The application value of low-quality photos taken in foggy conditions is significantly lower than that of clear images. Restoring the original image information and enhancing the quality of images degraded on hazy days is therefore crucial. Commonly used deep learning techniques such as DehazeNet, AOD-Net, and Li's method have shown encouraging progress in image dehazing. However, these methods suffer from shallow network structures that limit estimation capability, from reliance on the atmospheric scattering model to generate the final result, which is prone to error accumulation, and from unstable training and slow convergence. To address these problems, this paper proposes an improved end-to-end convolutional neural network based on a denoising autoencoder with DenseNet (DAE-DenseNet). The denoising autoencoder forms the main body of the network: the encoder extracts features from hazy images, the decoder reconstructs those features to recover the image, and a boosting module further fuses features locally and globally before the dehazed image is output. On a public test dataset, DAE-DenseNet achieves a PSNR of 22.60, considerably higher than the other methods compared. Experiments show that the proposed method outperforms the other algorithms to a certain extent, with no color oversaturation or over-dehazing in the output. The dehazed results are the closest to the real images, and the viewing experience is natural and comfortable, making the dehazing effect very competitive.
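The 22.60 figure quoted above is a PSNR score. For readers unfamiliar with the metric, a minimal sketch of how PSNR is computed between a restored image and its clean reference (toy data here, not the paper's test set):

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    diff = reference.astype(np.float64) - restored.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a constant offset of 16 gray levels on an 8-bit image.
clean = np.full((32, 32), 128, dtype=np.uint8)
hazy = clean + 16
print(round(psnr(clean, hazy), 2))  # 24.05
```

Higher is better; a gain of a few dB, as reported above, corresponds to a substantial drop in mean squared error.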

15 pages, 3848 KiB  
Article
AODs-CLYOLO: An Object Detection Method Integrating Fog Removal and Detection in Haze Environments
by Xinyu Liang, Zhengyou Liang, Linke Li and Jiahong Chen
Appl. Sci. 2024, 14(16), 7357; https://doi.org/10.3390/app14167357 - 20 Aug 2024
Viewed by 1469
Abstract
Foggy and hazy weather can significantly reduce the clarity of images captured by cameras, making it difficult for object detection algorithms to recognize targets accurately. This degradation can cause failures in autonomous or assisted driving systems, posing severe safety threats to drivers and passengers alike. To address the drop in detection accuracy in foggy weather, we propose an object detection algorithm designed specifically for such environments, named AODs-CLYOLO. To handle fog-affected images effectively, we introduce an image dehazing model, AODs, that is better suited to detection tasks; it incorporates a Channel–Pixel (CP) attention mechanism and a new Contrastive Regularization (CR), enhancing the dehazing effect while preserving the integrity of image information. For the detection network, we propose a learnable Cross-Stage Partial Connection Module (CSPCM++), placed before the detection head, and integrate the LSKNet selective attention mechanism to improve the extraction of effective features from large objects. Additionally, we apply the FocalGIoU loss function to improve performance under sample imbalance or a high proportion of difficult samples. Experimental results demonstrate that AODs-CLYOLO achieves up to a 10.1% improvement in the mAP (0.5:0.95) metric over the baseline model YOLOv5s.
(This article belongs to the Section Ecology Science and Engineering)
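The FocalGIoU loss mentioned above builds on the Generalized IoU term. A minimal sketch of plain GIoU for axis-aligned boxes (the focal weighting is the paper's addition and is not reproduced here):

```python
def giou(box_a, box_b):
    """Generalized IoU for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection area
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # Smallest enclosing box C; GIoU subtracts the wasted fraction of C.
    c_area = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return iou - (c_area - union) / c_area

print(giou((0, 0, 2, 2), (0, 0, 2, 2)))  # 1.0 (perfect overlap)
print(giou((0, 0, 1, 1), (2, 2, 3, 3)))  # negative: disjoint boxes are penalized
```

Unlike plain IoU, GIoU stays informative (and negative) for non-overlapping boxes, which is why it is a common regression target for hard detection cases.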

22 pages, 9246 KiB  
Article
DST-DETR: Image Dehazing RT-DETR for Safety Helmet Detection in Foggy Weather
by Ziyuan Liu, Chunxia Sun and Xiaopeng Wang
Sensors 2024, 24(14), 4628; https://doi.org/10.3390/s24144628 - 17 Jul 2024
Cited by 7 | Viewed by 2465
Abstract
In foggy weather, outdoor safety helmet detection often suffers from low visibility and unclear objects, hindering detector performance. Moreover, safety helmets typically appear as small objects at construction sites, prone to occlusion and difficult to distinguish from complex backgrounds, which further exacerbates the detection challenge. The real-time, precise detection of safety helmet usage among construction personnel, particularly in adverse weather such as fog, therefore poses a significant challenge. To address this, this paper proposes DST-DETR, a framework for foggy weather safety helmet detection. DST-DETR comprises a dehazing module, PAOD-Net, and an object detection module, ST-DETR, for joint dehazing and detection. First, foggy images are restored by PAOD-Net, which enhances the AOD-Net model with a novel convolutional module, PfConv, guided by a parameter-free average attention module (PfAAM); this directs attention to crucial features in lightweight models, improving performance. The MS-SSIM + ℓ2 loss function is then employed to bolster the model's robustness, making it adaptable to scenes with intricate backgrounds and variable fog densities. Within the object detection module, the ST-DETR model is designed for small objects: the RT-DETR model is refined to improve detection of small objects in low-quality images. The core of this approach is a ResNet-18 variant backbone that keeps the network lightweight without sacrificing accuracy, combined with a small-object layer integrated into an improved BiFPN neck structure, yielding CCFF-BiFPN-P2. Qualitative and quantitative comparisons with several state-of-the-art approaches demonstrate the method's superiority, and the results validate that DST-DETR is well suited to foggy safety helmet detection in construction scenarios.
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
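PAOD-Net extends AOD-Net, whose central idea is to fold the transmission map and atmospheric light of the scattering model into a single learned map K(x), so the clean image is recovered as J(x) = K(x)·I(x) − K(x) + b. A sketch of just that output stage, with a dummy K map standing in for the trained estimate:

```python
import numpy as np

def aod_reconstruct(hazy, k, b=1.0):
    """AOD-Net output stage: J(x) = K(x) * I(x) - K(x) + b, clipped to [0, 1]."""
    return np.clip(k * hazy - k + b, 0.0, 1.0)

# Sanity check: with K = 1 and b = 1 the module reduces to the identity, J = I.
hazy = np.random.default_rng(0).random((4, 4, 3))
identity = aod_reconstruct(hazy, np.ones_like(hazy), b=1.0)
print(np.allclose(identity, hazy))  # True
```

In the real network, K(x) is produced by a small CNN from the hazy input; everything else about training (and PfConv/PfAAM above) sits upstream of this elementwise formula.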

24 pages, 14041 KiB  
Article
Analysis of the Generalization Ability of Defogging Algorithms on RICE Remote Sensing Images
by Guisheng Miao, Zhongpeng Zhang and Zhanbei Wang
Sensors 2024, 24(14), 4566; https://doi.org/10.3390/s24144566 - 14 Jul 2024
Viewed by 1266
Abstract
This paper explores the generalization ability of defogging algorithms on RICE (A Remote Sensing Image Dataset for Cloud Removal). RICE is a remote sensing dataset for cloud removal that lets researchers better evaluate how well defogging algorithms remove clouds from remote sensing images. Four classical defogging algorithms, AOD-Net, FFA-Net, dark channel prior, and DehazeFormer, are applied to the de-clouding task on RICE images. Their performance on the dataset is analyzed by comparing experimental results, exploring their differences, advantages, and disadvantages in handling cloud-covered remote sensing images. The experiments show that all four algorithms perform well on uniform thin clouds but exhibit color distortion and weak performance on inhomogeneous and thick clouds, so their generalization to the cloud removal problem is limited. Finally, the paper proposes improvement ideas for cloud removal on RICE images and looks ahead to possible future research directions.
(This article belongs to the Special Issue AI-Driven Sensing for Image Processing and Recognition)
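Of the four baselines, the dark channel prior is the only non-learned one. A minimal sketch of the dark channel itself, using a naive window minimum for clarity (He et al.'s full method also estimates atmospheric light and transmission, which is omitted here):

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel prior: per-pixel minimum over RGB, then a local window minimum."""
    min_rgb = img.min(axis=2)
    h, w = min_rgb.shape
    r = patch // 2
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = min_rgb[max(0, i - r):i + r + 1,
                                max(0, j - r):j + r + 1].min()
    return out

# Saturated colors (some channel near zero) give a dark channel near zero...
clear = np.stack([np.ones((8, 8)), np.zeros((8, 8)), np.ones((8, 8))], axis=2)
# ...while a uniform haze/cloud veil lifts every channel, raising the dark channel.
hazed = clear * 0.6 + 0.4
print(dark_channel(clear).max(), dark_channel(hazed).min())  # 0.0 0.4
```

The prior assumes most haze-free patches contain some dark pixels; thick, bright clouds violate it, which is consistent with the weak generalization reported above.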

18 pages, 14758 KiB  
Article
Object Detection in Hazy Environments, Based on an All-in-One Dehazing Network and the YOLOv5 Algorithm
by Aijuan Li, Guangpeng Xu, Wenpeng Yue, Chuanyan Xu, Chunpeng Gong and Jiaping Cao
Electronics 2024, 13(10), 1862; https://doi.org/10.3390/electronics13101862 - 10 May 2024
Cited by 6 | Viewed by 1862
Abstract
This study introduces an advanced algorithm for intelligent vehicle target detection in hazy conditions, aiming to bolster the environmental perception capabilities of autonomous vehicles. The approach integrates a hybrid convolutional module (HDC) into the all-in-one dehazing network AOD-Net to enlarge the receptive field for image feature extraction and refine the clarity of dehazed images, and the loss function is optimized to accelerate convergence and improve generalization. For practical deployment in intelligent vehicle systems, the ShuffleNetv2 lightweight module is incorporated into the YOLOv5s backbone and the feature pyramid network (FPN) in the neck is refined; a global shuffle convolution (GSconv) balances accuracy against parameter count. A convolutional block attention module (CBAM) is further introduced to focus on targets, reducing the network's parameter count without compromising accuracy. In a comparative experiment, the algorithm achieved an impressive mean average precision (mAP) of 76.8% at an intersection-over-union (IoU) threshold of 0.5 in hazy conditions, outperforming YOLOv5 by 7.4 percentage points.
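The mAP figures above average per-class average precision at a fixed IoU threshold. A simplified sketch of AP over a confidence-ranked detection list (COCO-style 101-point interpolation and the IoU matching step are omitted):

```python
def average_precision(ranked_hits, num_gt):
    """AP over a score-ranked detection list: mean precision at each recall step.

    ranked_hits: True where the i-th highest-confidence detection matched a
    ground-truth object, False for a false positive. num_gt: total ground truths.
    """
    tp = 0
    precisions = []
    for i, hit in enumerate(ranked_hits, start=1):
        if hit:
            tp += 1
            precisions.append(tp / i)  # precision at this new recall point
    return sum(precisions) / num_gt if num_gt else 0.0

# 3 ground-truth objects; ranked detections: hit, miss, hit, hit.
print(round(average_precision([True, False, True, True], 3), 3))  # 0.806
```

mAP then averages this value over classes (and, for mAP 0.5:0.95, over IoU thresholds from 0.5 to 0.95 in steps of 0.05).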

12 pages, 6418 KiB  
Article
Tracking and Localization based on Multi-angle Vision for Underwater Target
by Jun Liu, Shenghua Gong, Wenxue Guan, Benyuan Li, Haobo Li and Jiaxin Liu
Electronics 2020, 9(11), 1871; https://doi.org/10.3390/electronics9111871 - 7 Nov 2020
Cited by 11 | Viewed by 3391
Abstract
As the cost of underwater sensor network nodes falls and demand for underwater detection and monitoring grows, sensor nodes are being deployed ever more densely in near-shore areas, shallow waters, lakes, and rivers. To achieve real-time monitoring, most nodes now carry visual rather than acoustic sensors to collect and analyze optical images, mainly because cameras can be more advantageous in dense underwater sensor networks. In this article, image enhancement, saliency detection, calibration, and refraction-model calculation are performed on the video streams collected by multiple optical cameras to obtain the track of a dynamic target. The study not only innovatively applies the AOD-Net (all-in-one network) image defogging algorithm to underwater image enhancement, but also builds on the BASNet (Boundary-Aware Salient Network) architecture, adding frame-difference results to the input to reduce interference from static targets. Based on these technologies, the paper designs a dynamic target tracking system centered on video-stream processing in dense underwater networks, in which most nodes carry underwater cameras. When a dynamic target is captured by at least two nodes in the network simultaneously, its position can be calculated and tracked.
(This article belongs to the Special Issue Underwater Communication and Networking Systems)
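The frame-difference input mentioned above can be sketched generically as a threshold on per-pixel change between consecutive frames (a plain illustration, not the paper's exact preprocessing):

```python
import numpy as np

def frame_difference_mask(prev_frame, curr_frame, threshold=0.1):
    """Binary mask of pixels that changed between frames; static scenery drops out."""
    delta = np.abs(curr_frame.astype(np.float64) - prev_frame.astype(np.float64))
    return delta > threshold

# A moving 'target' brightens a 2x2 corner; the rest of the scene is static.
prev = np.zeros((6, 6))
curr = prev.copy()
curr[0:2, 0:2] = 1.0
mask = frame_difference_mask(prev, curr)
print(int(mask.sum()))  # 4
```

Feeding such a mask alongside the raw frame gives the saliency network a cheap motion cue, which is how static distractors are suppressed in the pipeline described above.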
