Smart Image Recognition and Detection Sensors

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (20 April 2025) | Viewed by 19305

Special Issue Editors


Dr. Zhijie Zhang
Guest Editor
Department of Geography, University of Connecticut, U-4148, Storrs, CT, USA
Interests: potential landslide detection; landslide susceptibility mapping; deep learning

Dr. Jiakui Tang
Guest Editor
College of Resources and Environment (CRE), University of Chinese Academy of Sciences, Beijing 100049, China
Interests: environmental remote sensing; ecological remote sensing; remote sensing assessment of aerosol effects

Special Issue Information

Dear Colleagues,

The rapid advance of smart image recognition has created new opportunities for remote sensing and Earth observation technology. Advances in sensor technology have likewise provided ever more ways to obtain remote sensing data, especially very-high-resolution images. With the overwhelming growth in data volume, more automatic and accurate algorithms are needed to interpret these images. In the last decade, the development of deep learning has provided a new avenue for smart image processing. The application of deep learning-based smart image processing in remote sensing therefore has very broad prospects.

This Special Issue aims to share the latest advances in smart image recognition related to remote sensing. Research employing innovative methods is very welcome. We especially encourage the use of freely available remote sensing data and open-source processing software, as this enables analyses to be conducted anywhere in the world and promotes data equality. Authors are encouraged to upload their data and code as supplementary material and share them publicly. Review papers will also be considered.

Potential topics for this Special Issue may include, but are not limited to, the following:

  • Classification and segmentation of remote sensing images
  • Object recognition and detection using optical or SAR sensors
  • Remote sensing scene classification
  • Remote sensing image change detection
  • Remote sensing applications of unmanned aerial vehicles (UAVs)

Dr. Zhijie Zhang
Dr. Jiakui Tang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • land use and land cover
  • object detection
  • remote sensing scene classification
  • UAV remote sensing
  • change detection
  • machine learning
  • deep learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (11 papers)


Research

27 pages, 41478 KiB  
Article
LO-MLPRNN: A Classification Algorithm for Multispectral Remote Sensing Images by Fusing Selective Convolution
by Xiangsuo Fan, Yan Zhang, Yong Peng, Qi Li, Xianqiang Wei, Jiabin Wang and Fadong Zou
Sensors 2025, 25(8), 2472; https://doi.org/10.3390/s25082472 - 14 Apr 2025
Viewed by 177
Abstract
To address the limitation of traditional deep learning algorithms in fully utilizing contextual information in multispectral remote sensing (RS) images, this paper proposes an improved vegetation cover classification algorithm called LO-MLPRNN, which integrates a Large Selective Kernel Network (LSK) and Omni-Dimensional Dynamic Convolution (ODC) with a Multi-Layer Perceptron Recurrent Neural Network (MLPRNN). The algorithm employs parallel-connected ODC and LSK modules to adaptively adjust convolution kernel parameters across multiple dimensions and dynamically optimize spatial receptive fields, enabling multi-perspective feature fusion for efficient processing of multispectral band information. The extracted features are mapped to a high-dimensional space through a Gated Recurrent Unit (GRU) and fully connected layers, with nonlinear characteristics enhanced by activation functions, ultimately achieving pixel-level land cover classification. Experiments conducted on GF-2 (0.75 m) and Sentinel-2 (10 m) multispectral RS images from Liucheng County, Liuzhou City, Guangxi Province, demonstrate that LO-MLPRNN achieves overall accuracies of 99.11% and 99.43%, outperforming the Vision Transformer (ViT) by 2.61% and 3.98%, respectively. Notably, the classification accuracy for sugarcane reaches 99.70% and 99.67%, showcasing its superior performance.
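
As a concrete illustration of the parallel-branch design the abstract describes — a sketch under assumptions, not the authors' implementation — the following PyTorch snippet fuses two convolutional branches (stand-ins for the ODC and LSK modules) and passes the result through a GRU and a fully connected head for per-pixel classification. All layer sizes and the class name `ParallelConvGRUClassifier` are hypothetical.

```python
import torch
import torch.nn as nn

class ParallelConvGRUClassifier(nn.Module):
    """Hypothetical sketch of an LO-MLPRNN-style pipeline: parallel conv
    branches (placeholders for ODC and LSK), feature fusion, then a GRU and
    a fully connected head for per-pixel land-cover classification."""

    def __init__(self, in_bands=4, feat=32, n_classes=6):
        super().__init__()
        self.branch_a = nn.Sequential(  # small receptive field
            nn.Conv2d(in_bands, feat, kernel_size=3, padding=1), nn.ReLU())
        self.branch_b = nn.Sequential(  # large receptive field
            nn.Conv2d(in_bands, feat, kernel_size=7, padding=3), nn.ReLU())
        self.gru = nn.GRU(input_size=2 * feat, hidden_size=feat, batch_first=True)
        self.head = nn.Linear(feat, n_classes)

    def forward(self, x):                          # x: (B, bands, H, W)
        fused = torch.cat([self.branch_a(x), self.branch_b(x)], dim=1)
        b, c, h, w = fused.shape
        seq = fused.permute(0, 2, 3, 1).reshape(b, h * w, c)  # pixels as a sequence
        out, _ = self.gru(seq)                     # (B, H*W, feat)
        logits = self.head(out)                    # (B, H*W, n_classes)
        return logits.reshape(b, h, w, -1).permute(0, 3, 1, 2)

model = ParallelConvGRUClassifier()
print(model(torch.randn(1, 4, 64, 64)).shape)      # torch.Size([1, 6, 64, 64])
```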

15 pages, 29925 KiB  
Article
Enhanced Color Nighttime Light Remote Sensing Imagery Using Dual-Sampling Adjustment
by Yaqi Huang, Yanling Lu, Li Zhang and Min Yin
Sensors 2025, 25(7), 2002; https://doi.org/10.3390/s25072002 - 22 Mar 2025
Viewed by 314
Abstract
Nighttime light remote sensing imagery is limited by its single band and low spatial resolution, hindering its ability to accurately capture ground information. To address this, a dual-sampling adjustment method is proposed to enhance nighttime light remote sensing imagery by fusing daytime optical images with nighttime light remote sensing imagery, generating high-quality color nighttime light remote sensing imagery. The results are as follows: (1) Compared to traditional nighttime light remote sensing imagery, the spatial resolution of the fusion images is improved from 500 m to 15 m while better retaining the ground features of daytime optical images and the distribution of nighttime light. (2) Quality evaluations confirm that color nighttime light remote sensing imagery enhanced by dual-sampling adjustment can effectively balance optical fidelity and spatial texture features. (3) In Beijing’s central business district, color nighttime light brightness exhibits the strongest correlation with business, especially in Dongcheng District, with r = 0.7221, providing a visual tool for assessing urban economic vitality at night. This study overcomes the limitations of fusing day–night remote sensing imagery, expanding the application field of color nighttime light remote sensing imagery and providing critical decision support for refined urban management.
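
The published dual-sampling adjustment is not reproduced in the abstract, but the core fusion idea — resampling the coarse nighttime-light band onto the fine daytime grid and combining the two — can be sketched as below. This HSV-based stand-in, the function name, and the tile resolutions are assumptions for illustration only.

```python
import cv2
import numpy as np

def fuse_day_night(day_rgb, night_light):
    """Illustrative fusion: keep daytime chroma, use upsampled nighttime-light
    brightness as luminance. A stand-in, not the paper's dual-sampling method."""
    h, w = day_rgb.shape[:2]
    # Resample the single nighttime-light band (e.g., ~500 m) to the finer
    # daytime grid (e.g., ~15 m); bicubic is one reasonable choice.
    ntl = cv2.resize(night_light, (w, h), interpolation=cv2.INTER_CUBIC)
    ntl = cv2.normalize(ntl, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    hsv = cv2.cvtColor(day_rgb, cv2.COLOR_RGB2HSV)
    hsv[..., 2] = ntl                  # nighttime brightness as the value channel
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)

day = (np.random.rand(512, 512, 3) * 255).astype(np.uint8)   # daytime RGB tile
night = np.random.rand(16, 16).astype(np.float32)            # coarse NTL tile
print(fuse_day_night(day, night).shape)                      # (512, 512, 3)
```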

20 pages, 16733 KiB  
Article
CTHNet: A CNN–Transformer Hybrid Network for Landslide Identification in Loess Plateau Regions Using High-Resolution Remote Sensing Images
by Juan Li, Jin Zhang and Yongyong Fu
Sensors 2025, 25(1), 273; https://doi.org/10.3390/s25010273 - 6 Jan 2025
Cited by 1 | Viewed by 866
Abstract
The Loess Plateau in northwest China features fragmented terrain and is prone to landslides. However, the complex environment of the Loess Plateau, combined with the inherent limitations of convolutional neural networks (CNNs), often results in false positives and missed detections when CNN-based deep learning models identify landslides from high-resolution remote sensing images. To deal with this challenge, our research introduced a CNN–transformer hybrid network. Specifically, we first constructed a database consisting of 1500 loess landslide and non-landslide samples. Subsequently, we proposed a neural network architecture that employs a CNN–transformer hybrid as an encoder, with the ability to extract high-dimensional, local-scale features using CNNs and global-scale features using a multi-scale lightweight transformer module, thereby enabling the automatic identification of landslides. The results demonstrate that this model can effectively detect loess landslides in such complex environments. Compared to approaches based on CNNs or transformers, such as U-Net, HCNet and TransUNet, our proposed model achieved greater accuracy, with an improvement of at least 3.81% in the F1-score. This study contributes to the automatic and intelligent identification of landslide locations and ranges on the Loess Plateau, which has significant practicality in terms of landslide investigation, risk assessment, disaster management, and related fields.
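
The general encoder pattern the abstract describes — a CNN for local features followed by a transformer for global context — can be sketched as follows. Dimensions and the use of a stock `nn.TransformerEncoderLayer` are assumptions, not CTHNet's actual architecture.

```python
import torch
import torch.nn as nn

class HybridEncoder(nn.Module):
    """Minimal sketch of a CNN–transformer hybrid encoder: a CNN stem
    extracts local features, a transformer block adds global context.
    Layer sizes are illustrative."""

    def __init__(self, in_ch=3, dim=64, heads=4):
        super().__init__()
        self.cnn = nn.Sequential(                      # local features, 1/4 resolution
            nn.Conv2d(in_ch, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU())
        self.transformer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=2 * dim, batch_first=True)

    def forward(self, x):
        f = self.cnn(x)                                # (B, dim, H/4, W/4)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)          # (B, H*W/16, dim)
        tokens = self.transformer(tokens)              # global self-attention
        return tokens.transpose(1, 2).reshape(b, c, h, w)

enc = HybridEncoder()
print(enc(torch.randn(1, 3, 128, 128)).shape)          # torch.Size([1, 64, 32, 32])
```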

16 pages, 6322 KiB  
Article
A Novel Dataset and Detection Method for Unmanned Aerial Vehicles Using an Improved YOLOv9 Algorithm
by Depeng Gao, Jianlin Tang, Hongqi Li, Bingshu Wang, Jianlin Qiu, Shuxi Chen and Xiangxiang Mei
Sensors 2024, 24(23), 7512; https://doi.org/10.3390/s24237512 - 25 Nov 2024
Cited by 1 | Viewed by 1261
Abstract
With the growing popularity of unmanned aerial vehicles (UAVs), their improper use is significantly disrupting society. Individuals and organizations have been continuously researching methods for detecting UAVs. However, most existing detection methods fail to account for the impact of similar flying objects, leading to weak anti-interference capabilities. In other words, when such objects appear in the image, the detector may mistakenly identify them as UAVs. Therefore, this study aims to enhance the anti-interference ability of UAV detectors by constructing an anti-interference dataset comprising 5062 images. In addition to UAVs, this dataset also contains three other types of flying objects that are visually similar to the UAV targets: planes, helicopters, and birds. This dataset can be used in model training to help detectors distinguish UAVs from these nontarget objects and thereby improve their anti-interference capabilities. Furthermore, we propose an anti-interference UAV detection method based on YOLOv9-C in which the dot distance is used as an evaluation index to assign positive and negative samples. This results in an increased number of positive samples, improving detector performance in the case of small targets. The comparison of experimental results shows that the developed method has better anti-interference performance than other algorithms. The detection method and dataset used to test the anti-interference capabilities in this study are expected to assist in the development and validation of related research methods.
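
The abstract's dot-distance assignment can be pictured as matching anchors to ground truths by center distance rather than IoU, which yields more positives for small targets. A hedged NumPy sketch follows; the box format, threshold, and function name are assumptions, and the paper's exact metric may differ.

```python
import numpy as np

def assign_by_center_distance(anchors, gts, thresh=8.0):
    """Illustrative distance-based label assignment: anchors whose centers
    lie close to a ground-truth center become positives. Boxes are
    (cx, cy, w, h); the threshold is arbitrary."""
    pos = []
    for i, (ax, ay, _, _) in enumerate(anchors):
        d = np.hypot(gts[:, 0] - ax, gts[:, 1] - ay)  # center distances
        if d.min() < thresh:                          # near some ground truth
            pos.append((i, int(d.argmin())))
    return pos  # list of (anchor index, matched gt index)

anchors = np.array([[10, 10, 16, 16], [100, 100, 16, 16]], dtype=float)
gts = np.array([[12, 9, 20, 18]], dtype=float)
print(assign_by_center_distance(anchors, gts))        # [(0, 0)]
```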

22 pages, 13099 KiB  
Article
Efficient Small Object Detection You Only Look Once: A Small Object Detection Algorithm for Aerial Images
by Jie Luo, Zhicheng Liu, Yibo Wang, Ao Tang, Huahong Zuo and Ping Han
Sensors 2024, 24(21), 7067; https://doi.org/10.3390/s24217067 - 2 Nov 2024
Cited by 3 | Viewed by 2852
Abstract
Aerial images have distinct characteristics, such as varying target scales, complex backgrounds, severe occlusion, small targets, and dense distribution. As a result, object detection in aerial images faces challenges like difficulty in extracting small target information and poor integration of spatial and semantic data. Moreover, existing object detection algorithms have a large number of parameters, posing a challenge for deployment on drones with limited hardware resources. We propose an efficient small-object YOLO detection model (ESOD-YOLO) based on YOLOv8n for Unmanned Aerial Vehicle (UAV) object detection. Firstly, a Reparameterized Multi-scale Inverted Blocks (RepNIBMS) module is proposed to replace the C2f module of the YOLOv8n backbone network, enhancing the information extraction capability for small objects. Secondly, a cross-level multi-scale feature fusion structure, the wave feature pyramid network (WFPN), is designed to enhance the model’s capacity to integrate spatial and semantic information. Meanwhile, a small-object detection head is incorporated to augment the model’s ability to identify small objects. Finally, a tri-focal loss function is proposed to address the issue of imbalanced samples in aerial images in a straightforward and effective manner. On the VisDrone2019 test set, with a uniform input size of 640 × 640 pixels, ESOD-YOLO has 4.46 M parameters and reaches a mean detection accuracy of 29.3%, which is 3.6% higher than the baseline method YOLOv8n. Compared with other detection methods, it also achieves higher detection accuracy with fewer parameters.
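
The abstract names a "tri-focal loss" without giving its form. As background only, the sketch below implements the standard binary focal loss that such sample-imbalance losses typically build on; it is not the paper's loss function.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Standard binary focal loss: down-weights easy examples so that rare
    (e.g., small-object) positives dominate training."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)         # prob of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

logits = torch.randn(4, 10)
targets = (torch.rand(4, 10) > 0.9).float()             # sparse positives
print(focal_loss(logits, targets).item())
```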

19 pages, 12162 KiB  
Article
DRA-UNet for Coal Mining Ground Surface Crack Delineation with UAV High-Resolution Images
by Wei Wang, Weibing Du, Xiangyang Song, Sushe Chen, Haifeng Zhou, Hebing Zhang, Youfeng Zou, Junlin Zhu and Chaoying Cheng
Sensors 2024, 24(17), 5760; https://doi.org/10.3390/s24175760 - 4 Sep 2024
Cited by 1 | Viewed by 1196
Abstract
Coal mining in the Loess Plateau can very easily generate ground cracks, and these cracks can immediately result in ventilation trouble under the mine shaft, runoff disturbance, and vegetation destruction. Advanced UAV (Unmanned Aerial Vehicle) high-resolution mapping and DL (Deep Learning) are introduced as the key methods to quickly delineate coal mining ground surface cracks for disaster prevention. Firstly, a dataset named Ground Cracks of Coal Mining Area Unmanned Aerial Vehicle (GCCMA-UAV) is built, with a ground resolution of 3 cm, which is suitable for making a 1:500 thematic map of ground cracks. The GCCMA-UAV dataset includes 6280 images of ground cracks, with an image size of 256 × 256 pixels. Secondly, the DRA-UNet model is built for coal mining ground surface crack delineation. DRA-UNet is an improved UNet DL model, which mainly comprises the DAM (Dual Attention Mechanism) module, the RN (Residual Network) module, and the ASPP (Atrous Spatial Pyramid Pooling) module. The DRA-UNet model shows the highest recall rate, 77.29%, when compared with similar DL models such as DeepLabV3+, SegNet, and PSPNet. DRA-UNet also has other relatively reliable indicators: the precision rate is 84.92% and the F1 score is 78.87%. Finally, DRA-UNet is applied to delineate cracks on a DOM (Digital Orthophoto Map) covering 3 km² of the mining workface area, with a ground resolution of 3 cm; 4903 cracks were delineated from the DOM of the Huojitu Coal Mine Shaft. This DRA-UNet model effectively improves the efficiency of crack delineation.
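
Of the three named modules, ASPP has a well-known generic form: parallel dilated convolutions capture context at several scales. A compact PyTorch sketch follows; the dilation rates are common defaults, not necessarily those used in DRA-UNet.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Compact Atrous Spatial Pyramid Pooling block: parallel dilated
    convolutions at several rates, concatenated and projected back."""

    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        # Each branch sees a different receptive field; fusing them gives
        # multi-scale context at a fixed spatial resolution.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

aspp = ASPP(64, 64)
print(aspp(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```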

21 pages, 6112 KiB  
Article
CM-YOLOv8: Lightweight YOLO for Coal Mine Fully Mechanized Mining Face
by Yingbo Fan, Shanjun Mao, Mei Li, Zheng Wu and Jitong Kang
Sensors 2024, 24(6), 1866; https://doi.org/10.3390/s24061866 - 14 Mar 2024
Cited by 15 | Viewed by 3590
Abstract
With the continuous development of deep learning, the application of object detection based on deep neural networks in coal mines has been expanding. Simultaneously, as production applications demand higher recognition accuracy, most research chooses to enlarge the depth and parameters of the network to improve accuracy. However, the limited computing resources at the coal mining face make it challenging to meet the computational demands of such large networks. Therefore, this paper proposes a lightweight object detection algorithm designed specifically for the coal mining face, referred to as CM-YOLOv8. The algorithm introduces adaptive predefined anchor boxes tailored to the coal mining face dataset to enhance the detection performance of various targets. Simultaneously, a pruning method based on the L1 norm is designed, significantly compressing the model’s computation and parameter volume without compromising accuracy. The proposed algorithm is validated on the coal mining dataset DsLMF+, achieving a compression rate of 40% in model volume with less than a 1% drop in accuracy. Comparative analysis with other existing algorithms demonstrates its efficiency and practicality in coal mining scenarios. The experiments confirm that CM-YOLOv8 significantly reduces the model’s computational requirements and volume while maintaining high accuracy.
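
The L1-norm pruning criterion mentioned above scores each convolutional filter by the L1 norm of its weights and discards the weakest. A minimal sketch of the ranking step follows; the keep ratio is illustrative (the paper reports roughly 40% volume compression), and actual pruning would then rebuild the layers.

```python
import torch
import torch.nn as nn

def l1_prune_mask(conv: nn.Conv2d, keep_ratio=0.6):
    """Rank output filters by the L1 norm of their weights and return the
    indices of the strongest fraction to retain."""
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # one score per filter
    n_keep = max(1, int(keep_ratio * scores.numel()))
    keep = torch.topk(scores, n_keep).indices.sort().values
    return keep  # indices of filters to retain

conv = nn.Conv2d(3, 16, 3)
print(l1_prune_mask(conv, keep_ratio=0.6))  # indices of the strongest filters
```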

22 pages, 91538 KiB  
Article
Study on the Influence of Label Image Accuracy on the Performance of Concrete Crack Segmentation Network Models
by Kaifeng Ma, Mengshu Hao, Wenlong Shang, Jinping Liu, Junzhen Meng, Qingfeng Hu, Peipei He and Shiming Li
Sensors 2024, 24(4), 1068; https://doi.org/10.3390/s24041068 - 6 Feb 2024
Cited by 2 | Viewed by 1390
Abstract
A high-quality dataset is a basic requirement for ensuring the training quality and prediction accuracy of a deep learning network model (DLNM). To explore the influence of label image accuracy on the performance of a concrete crack segmentation network model, this study uses three labelling strategies, namely pixel-level fine labelling, outer contour widening labelling, and topological structure widening labelling, to generate crack label images and construct three crack semantic segmentation datasets of different accuracies. Four semantic segmentation network models (SSNMs), U-Net, High-Resolution Net (HRNet)V2, Pyramid Scene Parsing Network (PSPNet), and DeepLabV3+, were used for learning and training. The results show that the datasets constructed from crack label images with pixel-level fine labelling are more conducive to improving the accuracy of the network model for crack image segmentation. U-Net had the best performance among the four SSNMs: the Mean Intersection over Union (MIoU), Mean Pixel Accuracy (MPA), and Accuracy reached 85.47%, 90.86%, and 98.66%, respectively. The average difference between the quantized crack width obtained by U-Net from image segmentation and the real crack width was 0.734 pixels, the maximum difference was 1.997 pixels, and the minimum difference was 0.141 pixels. Therefore, to improve the segmentation accuracy of crack images, the pixel-level fine labelling strategy and U-Net are the best choices.
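
The MIoU and MPA figures quoted above follow standard definitions. A minimal NumPy sketch of how both are computed from a confusion matrix (not the authors' evaluation code):

```python
import numpy as np

def miou_mpa(pred, label, n_classes=2):
    """Mean Intersection over Union and Mean Pixel Accuracy from a confusion
    matrix; pred and label are integer class maps of the same shape."""
    cm = np.bincount(n_classes * label.ravel() + pred.ravel(),
                     minlength=n_classes ** 2).reshape(n_classes, n_classes)
    tp = np.diag(cm)
    iou = tp / (cm.sum(0) + cm.sum(1) - tp + 1e-9)   # per-class IoU
    pa = tp / (cm.sum(1) + 1e-9)                     # per-class pixel accuracy
    return iou.mean(), pa.mean()

pred = np.random.randint(0, 2, (64, 64))             # e.g., crack vs. background
label = np.random.randint(0, 2, (64, 64))
print(miou_mpa(pred, label))
```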

21 pages, 5860 KiB  
Article
Road-MobileSeg: Lightweight and Accurate Road Extraction Model from Remote Sensing Images for Mobile Devices
by Guangjun Qu, Yue Wu, Zhihong Lv, Dequan Zhao, Yingpeng Lu, Kefa Zhou, Jiakui Tang, Qing Zhang and Aijun Zhang
Sensors 2024, 24(2), 531; https://doi.org/10.3390/s24020531 - 15 Jan 2024
Cited by 5 | Viewed by 1700
Abstract
Current road extraction models from remote sensing images based on deep learning are computationally demanding and memory-intensive because of their high model complexity, making them impractical for mobile devices. This study aimed to develop a lightweight and accurate road extraction model, called Road-MobileSeg, to address the problem of automatically extracting roads from remote sensing images on mobile devices. The Road-MobileFormer was designed as the backbone structure of Road-MobileSeg. In the Road-MobileFormer, the Coordinate Attention Module was incorporated to encode both channel relationships and long-range dependencies with precise position information for the purpose of enhancing the accuracy of road extraction. Additionally, the Micro Token Pyramid Module was introduced to decrease the number of parameters and computations required by the model, rendering it more lightweight. Moreover, three model structures, namely Road-MobileSeg-Tiny, Road-MobileSeg-Small, and Road-MobileSeg-Base, which share a common foundational structure but differ in the quantity of parameters and computations, were developed. These models vary in complexity and are available for use on mobile devices with different memory capacities and computing power. The experimental results demonstrate that the proposed models outperform the compared typical models in terms of accuracy, lightweight structure, and latency, achieving high accuracy and low latency on mobile devices. This indicates that models integrating the Coordinate Attention Module and the Micro Token Pyramid Module surpass the limitations of current research and are suitable for road extraction from remote sensing images on mobile devices.
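
The Coordinate Attention Module named above follows a published design in which features are pooled along each spatial axis separately so the attention weights retain positional information. The PyTorch sketch below is a compact rendering of that idea; the reduction ratio and exact layout are assumptions, not necessarily those of Road-MobileFormer.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Sketch of coordinate attention: pool along H and W separately, encode
    jointly, then apply per-axis sigmoid gates so attention keeps position."""

    def __init__(self, ch, reduction=8):
        super().__init__()
        mid = max(8, ch // reduction)
        self.conv1 = nn.Sequential(nn.Conv2d(ch, mid, 1), nn.ReLU())
        self.conv_h = nn.Conv2d(mid, ch, 1)
        self.conv_w = nn.Conv2d(mid, ch, 1)

    def forward(self, x):                                         # (B, C, H, W)
        b, c, h, w = x.shape
        pool_h = x.mean(dim=3, keepdim=True)                      # (B, C, H, 1)
        pool_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)  # (B, C, W, 1)
        y = self.conv1(torch.cat([pool_h, pool_w], dim=2))        # shared encoding
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                     # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2))) # (B, C, 1, W)
        return x * a_h * a_w

ca = CoordinateAttention(64)
print(ca(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```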

17 pages, 44970 KiB  
Article
Research on the Efficiency of Bridge Crack Detection by Coupling Deep Learning Frameworks with Convolutional Neural Networks
by Kaifeng Ma, Xiang Meng, Mengshu Hao, Guiping Huang, Qingfeng Hu and Peipei He
Sensors 2023, 23(16), 7272; https://doi.org/10.3390/s23167272 - 19 Aug 2023
Cited by 4 | Viewed by 2383
Abstract
Bridge crack detection based on deep learning is a research area of great interest and difficulty in the field of bridge health detection. This study aimed to investigate the effectiveness of coupling a deep learning framework (DLF) with a convolutional neural network (CNN) for bridge crack detection. A dataset consisting of 2068 bridge crack images was randomly split into training, verification, and testing sets in a ratio of 8:1:1. Several CNN models, including Faster R-CNN, Single Shot MultiBox Detector (SSD), You Only Look Once (YOLO)-v5(x), U-Net, and Pyramid Scene Parsing Network (PSPNet), were used to conduct experiments using the PyTorch, TensorFlow2, and Keras frameworks. The experimental results show that the Harmonic Mean (F1) values of the Faster R-CNN and SSD models under the Keras framework are relatively large among the object detection models (0.76 and 0.67, respectively). Under the TensorFlow2 framework, the YOLO-v5(x) model achieved the highest F1 value, 0.67. Among the semantic segmentation models, the U-Net model achieved the highest detection accuracy (AC) value, 98.37%, under the PyTorch framework, and the PSPNet model achieved the highest AC value, 97.86%, under the TensorFlow2 framework. These experimental results provide optimal coupling parameters of a DLF and CNN for bridge crack detection, yielding a more accurate and efficient DLF–CNN combination with significant practical application value.
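
For reproducibility, the random 8:1:1 train/verification/test split mentioned above is straightforward to implement; a minimal sketch follows (the seed and helper name are arbitrary).

```python
import random

def split_dataset(items, ratios=(0.8, 0.1, 0.1), seed=42):
    """Shuffle once with a fixed seed, then cut into train/val/test
    proportions — one simple way to realize an 8:1:1 split."""
    items = items[:]
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]

train, val, test = split_dataset(list(range(2068)))
print(len(train), len(val), len(test))  # 1654 206 208
```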

20 pages, 4324 KiB  
Article
Sand-Dust Image Enhancement Using Chromatic Variance Consistency and Gamma Correction-Based Dehazing
by Jong-Ju Jeon, Tae-Hee Park and Il-Kyu Eom
Sensors 2022, 22(23), 9048; https://doi.org/10.3390/s22239048 - 22 Nov 2022
Cited by 14 | Viewed by 2656
Abstract
In sand-dust environments, the low quality of images captured outdoors adversely affects many remote image processing and computer vision systems because of the severe color casts, low contrast, and poor visibility of sand-dust images. In such cases, conventional color correction methods do not guarantee appropriate performance in outdoor computer vision applications. In this paper, we present a novel color correction and dehazing algorithm for sand-dust image enhancement. First, we propose an effective color correction method that preserves the consistency of the chromatic variances and maintains the coincidence of the chromatic means. Next, a transmission map for image dehazing is estimated using gamma correction for the enhancement of color-corrected sand-dust images. Finally, a cross-correlation-based chromatic histogram shift algorithm is proposed to reduce the reddish artifacts in the enhanced images. We performed extensive experiments on various sand-dust images and compared the performance of the proposed method with that of several existing state-of-the-art enhancement methods. The simulation results indicate that the proposed enhancement scheme outperforms the existing approaches in terms of both subjective and objective quality.
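
The two central ideas — aligning chromatic statistics to remove the color cast, then gamma-based enhancement — can be sketched in NumPy as below. This gray-world-style stand-in and its constants are assumptions, not the paper's algorithm.

```python
import numpy as np

def color_correct_and_gamma(img, gamma=0.8):
    """Illustrative pipeline: match each channel's mean and variance to the
    global image statistics (suppressing the sand-dust color cast), then
    apply gamma correction as a simple contrast enhancement step."""
    img = img.astype(np.float64) / 255.0
    g_mean, g_std = img.mean(), img.std() + 1e-9
    out = np.empty_like(img)
    for c in range(3):
        ch = img[..., c]
        # Standardize the channel, then rescale to the global statistics.
        out[..., c] = (ch - ch.mean()) / (ch.std() + 1e-9) * g_std + g_mean
    out = np.clip(out, 0, 1) ** gamma              # gamma-based enhancement
    return (out * 255).astype(np.uint8)

dusty = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
print(color_correct_and_gamma(dusty).shape)        # (64, 64, 3)
```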