Search Results (52)

Search Parameters:
Keywords = hazy atmospheres

22 pages, 11262 KiB  
Article
Toward Aerosol-Aware Thermal Infrared Radiance Data Assimilation
by Shih-Wei Wei, Cheng-Hsuan (Sarah) Lu, Emily Liu, Andrew Collard, Benjamin Johnson, Cheng Dang and Patrick Stegmann
Atmosphere 2025, 16(7), 766; https://doi.org/10.3390/atmos16070766 - 22 Jun 2025
Viewed by 361
Abstract
Aerosols considerably reduce the upwelling radiance in the thermal infrared (IR) window; thus, it is worthwhile to understand the effects and challenges of assimilating aerosol-affected (i.e., hazy-sky) IR observations for all-sky data assimilation (DA). This study introduces an aerosol-aware DA framework for the Infrared Atmospheric Sounding Interferometer (IASI) to exploit hazy-sky IR observations and to investigate the impact of assimilating them on analyses and subsequent forecasts. The framework consists of the detection of hazy-sky pixels and an observation error model expressed as a function of the aerosol effect. Compared to the baseline experiment, the experiment using the aerosol-aware framework reduces sea surface temperature biases in the tropics, particularly over areas affected by heavy dust plumes. There are no significant differences between the experiments in the evaluation of the analyses and the 7-day forecasts. To improve the framework further, enhancements to quality control (e.g., aerosol detection) and bias correction need to be addressed in future work.
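As a rough illustration of what an observation error model expressed as a function of the aerosol effect can look like, the sketch below inflates a clear-sky brightness-temperature error linearly with an aerosol impact parameter and caps it. The function names, coefficients, and the with/without-aerosol definition of the aerosol effect are hypothetical placeholders, not the formulation used in the paper.

```python
import numpy as np

def aerosol_effect(bt_with_aerosol, bt_without_aerosol):
    """Hypothetical aerosol impact proxy: brightness-temperature (K) difference
    between radiative-transfer simulations with and without aerosols."""
    return np.abs(bt_with_aerosol - bt_without_aerosol)

def inflated_obs_error(effect, err_clear=0.6, slope=0.5, err_max=3.0):
    """Piecewise-linear error inflation (illustrative coefficients only):
    clear-sky error for negligible aerosol effect, growing linearly and
    capped so hazy-sky observations are down-weighted rather than rejected."""
    return np.minimum(err_clear + slope * effect, err_max)

# Example: pixels with a 0, 1, and 4 K aerosol effect
print(inflated_obs_error(np.array([0.0, 1.0, 4.0])))  # [0.6, 1.1, 2.6]
```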

20 pages, 21844 KiB  
Article
DWTMA-Net: Discrete Wavelet Transform and Multi-Dimensional Attention Network for Remote Sensing Image Dehazing
by Xin Guan, Runxu He, Le Wang, Hao Zhou, Yun Liu and Hailing Xiong
Remote Sens. 2025, 17(12), 2033; https://doi.org/10.3390/rs17122033 - 12 Jun 2025
Viewed by 1197
Abstract
Haze caused by atmospheric scattering often leads to color distortion, reduced contrast, and diminished clarity, which significantly degrade the quality of remote sensing images. To address these issues, we propose a novel network called DWTMA-Net that integrates discrete wavelet transform with multi-dimensional attention, aiming to restore image information in both the frequency and spatial domains to enhance overall image quality. Specifically, we design a wavelet transform-based downsampling module that effectively fuses frequency and spatial features. The input first passes through a discrete wavelet block to extract frequency-domain information. These features are then fed into a multi-dimensional attention block, which incorporates pixel attention, Fourier frequency-domain attention, and channel attention. This combination allows the network to capture both global and local characteristics while enhancing deep feature representations through dimensional expansion, thereby improving spatial-domain feature extraction. Experimental results on the SateHaze1k, HRSD, and HazyDet datasets demonstrate the effectiveness of the proposed method in handling remote sensing images with varying haze levels and drone-view scenarios. By recovering both frequency and spatial details, our model achieves significant improvements in dehazing performance compared to existing state-of-the-art approaches.
(This article belongs to the Special Issue Artificial Intelligence Remote Sensing for Earth Observation)
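A minimal sketch of the discrete-wavelet downsampling step described in the abstract above, using PyWavelets to split an image into half-resolution sub-bands; the Haar wavelet and the stacking of the four sub-bands into one feature map are illustrative choices, not the actual DWTMA-Net design.

```python
import numpy as np
import pywt

def dwt_downsample(gray_image: np.ndarray) -> np.ndarray:
    """Single-level 2-D DWT: returns a (H/2, W/2, 4) stack of the
    approximation (LL) and detail (LH, HL, HH) sub-bands."""
    ll, (lh, hl, hh) = pywt.dwt2(gray_image, "haar")
    return np.stack([ll, lh, hl, hh], axis=-1)

img = np.random.rand(256, 256)      # stand-in for one band of a hazy image
feats = dwt_downsample(img)
print(feats.shape)                   # (128, 128, 4)
```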

21 pages, 4536 KiB  
Article
Feature Attention Cycle Generative Adversarial Network: A Multi-Scene Image Dehazing Method Based on Feature Attention
by Na Li, Na Liu, Yanan Duan and Yuyang Chai
Appl. Sci. 2025, 15(10), 5374; https://doi.org/10.3390/app15105374 - 12 May 2025
Viewed by 382
Abstract
For image dehazing, it is difficult to obtain datasets of paired hazy and haze-free images. Currently, most algorithms are trained on synthetic datasets of insufficient complexity, which leads to model overfitting. At the same time, most current algorithms ignore the physical characteristics of real-world fog; that is, the degree of fog depends on scene depth and the scattering coefficient. Moreover, most current dehazing algorithms only consider land scenes and ignore maritime scenes. To address these problems, we propose a multi-scene image dehazing algorithm based on an improved cycle generative adversarial network (CycleGAN). The generator structure is improved over the CycleGAN model, and a feature fusion attention module is proposed. This module obtains relevant contextual information by extracting features at different levels, which are then fused using residual connections. An attention mechanism is introduced in this module to retain more feature information by assigning different weights. During training, the atmospheric scattering model is used to guide the learning of the neural network with its prior information. The experimental results show that, compared with the baseline model, the peak signal-to-noise ratio (PSNR) increases by 32.10%, the structural similarity index (SSIM) increases by 31.07%, the information entropy (IE) increases by 4.79%, and the NIQE index is reduced by 20.1% in quantitative comparison. The method also shows better visual results than other advanced algorithms in qualitative comparisons on synthetic and real datasets.
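For reference, the PSNR and SSIM figures quoted in such comparisons are typically computed as below (scikit-image implementations; the image pair here is synthetic and only illustrates the call signatures, not the paper's evaluation protocol).

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

clear = np.random.rand(256, 256, 3).astype(np.float32)   # ground-truth stand-in
dehazed = np.clip(clear + 0.05 * np.random.randn(*clear.shape), 0, 1).astype(np.float32)

psnr = peak_signal_noise_ratio(clear, dehazed, data_range=1.0)
ssim = structural_similarity(clear, dehazed, data_range=1.0, channel_axis=-1)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```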

20 pages, 61520 KiB  
Article
CCD-Net: Color-Correction Network Based on Dual-Branch Fusion of Different Color Spaces for Image Dehazing
by Dongyu Chen and Haitao Zhao
Appl. Sci. 2025, 15(6), 3191; https://doi.org/10.3390/app15063191 - 14 Mar 2025
Viewed by 766
Abstract
Image dehazing is a crucial task in computer vision, aimed at restoring the clarity of images impacted by atmospheric conditions like fog, haze, or smog, which degrade image quality by reducing contrast, color fidelity, and detail. Recent advancements in deep learning, particularly convolutional neural networks (CNNs), have shown significant improvements by directly learning features from hazy images to produce clear outputs. However, color distortion remains an issue, as many methods focus on contrast and clarity without adequately addressing color restoration. To overcome this, we propose a Color-Correction Network (CCD-Net) based on dual-branch fusion of different color spaces for image dehazing that simultaneously handles image dehazing and color correction. The dehazing branch utilizes an encoder–decoder structure aimed at restoring haze-affected images. Unlike conventional methods that primarily focus on haze removal, our approach explicitly incorporates a dedicated color-correction branch in the Lab color space, ensuring both clarity enhancement and accurate color restoration. Additionally, we integrate attention mechanisms to enhance feature extraction and introduce a novel fusion loss function that combines loss in both RGB and Lab spaces, achieving a balance between structural preservation and color fidelity. The experimental results demonstrate that CCD-Net outperforms existing methods in both dehazing performance and color accuracy, with CIEDE reduced by 40.81% on RESIDE-indoor and 45.57% on RESIDE-6K compared to the second-best-performing model, showcasing its superior color-restoration capability.
(This article belongs to the Section Computing and Artificial Intelligence)
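A minimal sketch of a fusion loss that mixes RGB-space and Lab-space reconstruction errors, in the spirit of the loss described above; the L1 form, the 0.5 weight, and the scikit-image Lab conversion are assumptions for illustration rather than CCD-Net's exact loss.

```python
import numpy as np
from skimage.color import rgb2lab

def fusion_loss(pred_rgb: np.ndarray, target_rgb: np.ndarray, lab_weight: float = 0.5) -> float:
    """Combine a per-pixel L1 loss in RGB with an L1 loss in Lab,
    so structure (RGB) and perceived color (Lab) are both penalized."""
    loss_rgb = np.mean(np.abs(pred_rgb - target_rgb))
    loss_lab = np.mean(np.abs(rgb2lab(pred_rgb) - rgb2lab(target_rgb)))
    return loss_rgb + lab_weight * loss_lab

pred = np.random.rand(64, 64, 3)     # dehazed output stand-in, float RGB in [0, 1]
target = np.random.rand(64, 64, 3)   # ground-truth stand-in
print(fusion_loss(pred, target))
```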

14 pages, 7661 KiB  
Article
Single Scattering Dynamics of Vector Bessel–Gaussian Beams in Winter Haze Conditions
by Yixiang Yang, Yuancong Cao, Wenjie Jiang, Lixin Guo and Mingjian Cheng
Photonics 2025, 12(3), 182; https://doi.org/10.3390/photonics12030182 - 22 Feb 2025
Viewed by 821
Abstract
This study investigates the scattering dynamics of vector Bessel–Gaussian (BG) beams in winter haze environments, with a particular emphasis on the influence of ice-coated haze particles on light propagation. Employing the Generalized Lorenz–Mie Theory (GLMT), we analyze the scattering coefficients of particles transitioning from water to ice coatings under varying atmospheric conditions. Our results demonstrate that the presence of ice coatings significantly alters the scattering and extinction efficiencies of BG beams, revealing distinct differences compared to particles coated with water. Furthermore, the study examines the role of Orbital Angular Momentum (OAM) modes in shaping scattering behavior. We show that higher OAM modes, characterized by broader energy distributions and larger beam spot sizes, induce weaker localized interactions with individual particles, leading to diminished scattering and attenuation. In contrast, lower OAM modes, with energy concentrated in smaller regions, exhibit stronger interactions with particles, thereby enhancing scattering and attenuation. These findings align with the Beer–Lambert law in the single scattering regime, where beam intensity attenuation is influenced by the spatial distribution of radiation, while overall power attenuation follows the standard exponential decay with respect to propagation distance. The transmission attenuation of BG beams through haze-laden atmospheres is further explored, emphasizing the critical roles of particle concentration and humidity. This study provides valuable insights into the interactions between vector BG beams and atmospheric haze, advancing the understanding of optical communication and environmental monitoring in hazy conditions.
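The single-scattering attenuation invoked above follows the Beer–Lambert law; the sketch below applies it with an extinction coefficient built from a particle number concentration and an extinction cross-section, both placeholder values rather than the haze parameters studied in the paper.

```python
import numpy as np

def beer_lambert_power(p0: float, number_conc: float, ext_cross_section: float,
                       path_length: float) -> float:
    """Transmitted power after a haze path: P(L) = P0 * exp(-N * C_ext * L),
    with N in m^-3, C_ext in m^2, and L in m."""
    alpha = number_conc * ext_cross_section          # extinction coefficient (1/m)
    return p0 * np.exp(-alpha * path_length)

# Placeholder haze: 1e8 particles/m^3 with a 1e-11 m^2 extinction cross-section
print(beer_lambert_power(p0=1.0, number_conc=1e8, ext_cross_section=1e-11, path_length=1000.0))
```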

17 pages, 3996 KiB  
Article
The Influence of Relative Humidity and Pollution on the Meteorological Optical Range During Rainy and Dry Months in Mexico City
by Blanca Adilen Miranda-Claudes and Guillermo Montero-Martínez
Atmosphere 2024, 15(11), 1382; https://doi.org/10.3390/atmos15111382 - 16 Nov 2024
Viewed by 1018
Abstract
The Meteorological Optical Range (MOR) is a measurement of atmospheric visibility. Visibility impairment has been linked to increased aerosol levels in the air. This study conducted statistical analyses using meteorological, air pollutant concentration, and MOR data collected in Mexico City from August 2014 to December 2015 to determine the factors contributing to haze occurrence (periods when MOR < 10,000 m), defined using a light scatter sensor (PWS100). The outcomes revealed seasonal patterns in PM2.5 and relative humidity (RH) in haze occurrence throughout the year. PM2.5 levels during hazy periods in the dry season were higher than in the wet season, aligning with periods of poor air quality (PM2.5 > 45 μg/m3). Pollutant-to-CO ratios suggested that secondary aerosol production, led by SO2 conversion to sulfate particles, mainly drives haze occurrence during the dry season. Meanwhile, during the rainy season, the PWS100 registered haze events even with PM2.5 values close to 15 μg/m3 (considered good air quality). The broadened distribution of extinction efficiency during the wet period and its correlation with RH suggest that aerosol water vapor uptake significantly impacts visibility during this season. Therefore, attributing poor visibility strictly to poor air quality may not be appropriate for all times and locations.
(This article belongs to the Section Meteorology)
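For context, MOR is tied to the atmospheric extinction coefficient through the standard Koschmieder relation (a 5% contrast threshold), which is effectively what a forward-scatter sensor such as the PWS100 inverts; the sketch below is that textbook relation, not the instrument's algorithm.

```python
import numpy as np

def mor_from_extinction(sigma_ext: float, contrast_threshold: float = 0.05) -> float:
    """Koschmieder relation: MOR = -ln(threshold) / sigma_ext, i.e. about
    3.0 / sigma_ext, with sigma_ext in 1/m and MOR in m."""
    return -np.log(contrast_threshold) / sigma_ext

print(mor_from_extinction(3e-4))   # ~9986 m: just below the 10,000 m haze criterion
```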

17 pages, 26224 KiB  
Article
Parametric Analytical Modulation Transfer Function Model in Turbid Atmosphere with Application to Image Restoration
by Mengxing Guo, Pengfei Wu, Zizhao Fan, Hao Lu and Ruizhong Rao
Remote Sens. 2024, 16(21), 3998; https://doi.org/10.3390/rs16213998 - 28 Oct 2024
Cited by 1 | Viewed by 1009
Abstract
To address the issues of image blurring and color distortion in hazy conditions, an image restoration method based on a parametric analytical modulation transfer function model is proposed for turbid atmospheric conditions. A source database is established using a numerical radiative transfer method based on the discrete ordinates method. Through multivariate nonlinear fitting and linear interpolation, the quantitative relationships among critical spatial frequency, the turbid-atmosphere MTF, and key atmospheric optical parameters—such as optical thickness, single scattering albedo, and asymmetry factor—are examined. A fast and efficient parametric analytical MTF model for turbid atmospheres is developed and applied to restore images affected by fog. The results demonstrate that, within the applicable range of the model, the model's maximum mean relative error and root mean square error are 7.16% and 0.0454, respectively. The computational speed is nearly a thousand times faster than that of the numerical radiative transfer method, achieving high accuracy and ease of application. Images restored using this model exhibit enhanced clarity and quality, effectively compensating for the degradation in image quality caused by turbid atmospheres. This approach represents a novel solution to the challenges of image processing in complex atmospheric environments.
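As a rough illustration of how an atmospheric MTF can be used for restoration, the sketch below divides the blurred image spectrum by a model MTF with simple Wiener-style regularization; the Gaussian-shaped MTF is a placeholder standing in for the paper's parametric analytical model.

```python
import numpy as np

def restore_with_mtf(blurred: np.ndarray, mtf: np.ndarray, eps: float = 1e-2) -> np.ndarray:
    """Frequency-domain restoration: divide the image spectrum by the MTF,
    with regularization to avoid amplifying noise where the MTF is near zero."""
    spectrum = np.fft.fft2(blurred)
    restored = spectrum * mtf / (mtf ** 2 + eps)
    return np.real(np.fft.ifft2(restored))

h, w = 256, 256
fy, fx = np.meshgrid(np.fft.fftfreq(h), np.fft.fftfreq(w), indexing="ij")
mtf = np.exp(-((fx ** 2 + fy ** 2) / (2 * 0.05 ** 2)))   # placeholder turbid-atmosphere MTF
blurred = np.random.rand(h, w)                            # stand-in for a hazy frame
print(restore_with_mtf(blurred, mtf).shape)               # (256, 256)
```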

15 pages, 6308 KiB  
Article
Physics-Driven Image Dehazing from the Perspective of Unmanned Aerial Vehicles
by Tong Cui, Qingyue Dai, Meng Zhang, Kairu Li, Xiaofei Ji, Jiawei Hao and Jie Yang
Electronics 2024, 13(21), 4186; https://doi.org/10.3390/electronics13214186 - 25 Oct 2024
Viewed by 1236
Abstract
Drone vision is widely used in change detection, disaster response, and military reconnaissance due to its wide field of view and flexibility. However, under haze and thin cloud conditions, image quality is usually degraded due to atmospheric scattering. This results in issues like color distortion, reduced contrast, and lower clarity, which negatively impact the performance of subsequent advanced visual tasks. To improve the quality of unmanned aerial vehicle (UAV) images, we propose a dehazing method based on calibration of the atmospheric scattering model. We designed two specialized neural network structures to estimate the two unknown parameters in the atmospheric scattering model: the atmospheric light intensity A and the medium transmission t. However, estimation errors inevitably occur in both processes, and their accumulation causes deviations in color fidelity and brightness. Therefore, we designed an encoder-decoder structure for irradiance guidance, which not only eliminates error accumulation but also enhances the detail in the restored image, achieving higher-quality dehazing results. Quantitative and qualitative evaluations indicate that our dehazing method outperforms existing techniques, effectively eliminating haze from drone images and significantly enhancing image clarity and quality in hazy conditions. Specifically, comparative experiments on the R100 dataset demonstrate that the proposed method improved the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) metrics by 6.9 dB and 0.08 over the second-best method, respectively. On the N100 dataset, the method improved the PSNR and SSIM metrics by 8.7 dB and 0.05 over the second-best method, respectively.
(This article belongs to the Special Issue Deep Learning-Based Image Restoration and Object Identification)
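The two quantities estimated by the networks plug into the standard atmospheric scattering model, I = J·t + A·(1 − t); given estimates of A and t, the haze-free radiance J is recovered by inverting that model, as in the sketch below (the clamping floor on t is a common safeguard, not a detail from the paper).

```python
import numpy as np

def invert_scattering_model(hazy: np.ndarray, A: np.ndarray, t: np.ndarray,
                            t_min: float = 0.1) -> np.ndarray:
    """Atmospheric scattering model I = J*t + A*(1 - t), inverted for the
    scene radiance J; t is clamped to avoid division blow-up in dense haze."""
    t = np.clip(t, t_min, 1.0)[..., None]          # broadcast over color channels
    return np.clip((hazy - A * (1.0 - t)) / t, 0.0, 1.0)

hazy = np.random.rand(128, 128, 3)                 # stand-in UAV frame
A = np.array([0.85, 0.85, 0.85])                   # estimated atmospheric light
t = np.full((128, 128), 0.6)                       # estimated transmission map
print(invert_scattering_model(hazy, A, t).shape)   # (128, 128, 3)
```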

13 pages, 27539 KiB  
Article
Enhancing Image Dehazing with a Multi-DCP Approach with Adaptive Airlight and Gamma Correction
by Jungyun Kim, Tiong-Sik Ng and Andrew Beng Jin Teoh
Appl. Sci. 2024, 14(17), 7978; https://doi.org/10.3390/app14177978 - 6 Sep 2024
Cited by 2 | Viewed by 1356
Abstract
Haze imagery suffers from reduced clarity, which can be attributed to atmospheric conditions such as dust or water vapor, resulting in blurred visuals and heightened brightness due to light scattering. Conventional methods employing the dark channel prior (DCP) for transmission map estimation often excessively amplify fogged sky regions, causing image distortion. This paper presents a novel approach to improve transmission map granularity by utilizing multiple 1×1 DCPs derived from multiscale hazy, inverted, and Euclidean difference images. An adaptive airlight estimation technique is proposed to handle low-light, hazy images. Furthermore, an adaptive gamma correction method is introduced to refine the transmission map further. Evaluation of dehazed images using the Dehazing Quality Index showcases superior performance compared to existing techniques, highlighting the efficacy of the enhanced transmission map.
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
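For orientation, a baseline DCP transmission estimate with a gamma-corrected refinement looks roughly like the sketch below; the 15-pixel patch, ω = 0.95, and the fixed gamma value are conventional defaults, not the adaptive multi-DCP and adaptive-gamma scheme proposed in the paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dcp_transmission(hazy: np.ndarray, airlight: np.ndarray,
                     patch: int = 15, omega: float = 0.95) -> np.ndarray:
    """Dark channel prior: t = 1 - omega * min-filter(min over channels of I/A)."""
    dark = minimum_filter(np.min(hazy / airlight, axis=2), size=patch)
    return 1.0 - omega * dark

def gamma_correct(t: np.ndarray, gamma: float = 0.8) -> np.ndarray:
    """Gamma-correct the transmission map (fixed gamma here; adaptive in the paper)."""
    return np.clip(t, 0.0, 1.0) ** gamma

hazy = np.random.rand(128, 128, 3)       # stand-in hazy image, float RGB in [0, 1]
airlight = np.array([0.9, 0.9, 0.9])     # estimated airlight
t = gamma_correct(dcp_transmission(hazy, airlight))
print(t.min(), t.max())
```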

12 pages, 5283 KiB  
Article
Polarization-Based Two-Stage Image Dehazing in a Low-Light Environment
by Xin Zhang, Xia Wang, Changda Yan, Gangcheng Jiao and Huiyang He
Electronics 2024, 13(12), 2269; https://doi.org/10.3390/electronics13122269 - 10 Jun 2024
Cited by 3 | Viewed by 1624
Abstract
Fog, as a common weather condition, severely affects the visual quality of images. Polarization-based dehazing techniques can effectively produce clear results by utilizing the atmospheric polarization transmission model. However, current polarization-based dehazing methods are only suitable for scenes with strong illumination, such as daytime scenes, and cannot be applied to low-light scenes. Due to insufficient illumination at night and the differences in polarization characteristics between night-time light sources and sunlight, polarization images captured in a low-light environment can suffer from loss of polarization and intensity information. Therefore, this paper proposes a two-stage low-light image dehazing method based on polarization. We first construct a polarization-based low-light enhancement module to remove noise interference in polarization images and improve image brightness. Then, we design a low-light polarization dehazing module, which combines the polarization characteristics of the scene and objects to remove fog, thereby restoring the intensity and polarization information of the scene and improving image contrast. For network training, we generate a simulation dataset for low-light polarization dehazing. We also collect a low-light polarization hazy dataset to test the performance of our method. Experimental results indicate that our proposed method achieves the best dehazing performance.
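As background, the classic polarization-difference formulation underlying such methods (Schechner-style) is sketched below; the degree of polarization of the airlight and the airlight at infinity are placeholders, and the two-stage low-light network described above is not reproduced here.

```python
import numpy as np

def polarization_dehaze(i_par: np.ndarray, i_perp: np.ndarray,
                        p_airlight: float = 0.3, a_inf: float = 1.0,
                        t_min: float = 0.1) -> np.ndarray:
    """Estimate the airlight from the polarized difference, infer transmission,
    and invert the scattering model (single-channel intensity images)."""
    total = i_par + i_perp                       # total intensity
    airlight = (i_par - i_perp) / p_airlight     # airlight is partially polarized
    t = np.clip(1.0 - airlight / a_inf, t_min, 1.0)
    return np.clip((total - airlight) / t, 0.0, 1.0)

i_par = np.random.rand(64, 64) * 0.6    # stand-in parallel-polarized frame
i_perp = np.random.rand(64, 64) * 0.4   # stand-in perpendicular-polarized frame
print(polarization_dehaze(i_par, i_perp).shape)   # (64, 64)
```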

26 pages, 59985 KiB  
Article
Depth-Guided Dehazing Network for Long-Range Aerial Scenes
by Yihu Wang, Jilin Zhao, Liangliang Yao and Changhong Fu
Remote Sens. 2024, 16(12), 2081; https://doi.org/10.3390/rs16122081 - 8 Jun 2024
Cited by 1 | Viewed by 1305
Abstract
Over the past few years, the applications of unmanned aerial vehicles (UAVs) have greatly increased. However, the decrease in clarity in hazy environments is an important constraint on their further development. Current research on image dehazing mainly focuses on normal scenes at close range or mid-range, while ignoring long-range scenes such as aerial perspectives. Furthermore, based on the atmospheric scattering model, the inclusion of depth information is essential for image dehazing, especially when dealing with images that exhibit substantial variations in depth. However, most existing models neglect this important information. Consequently, these state-of-the-art (SOTA) methods perform inadequately in dehazing when applied to long-range images. To address these challenges, we propose a depth-guided dehazing network designed specifically for long-range aerial scenes. Initially, we introduce the depth prediction subnetwork to accurately extract depth information from long-range aerial images, taking into account the substantial variance in haze density. Subsequently, we propose the depth-guided attention module, which integrates a depth map with dehazing features through the attention mechanism, guiding the dehazing process and enabling the effective removal of haze in long-range areas. Furthermore, considering the unique characteristics of long-range aerial scenes, we introduce the UAV-HAZE dataset, specifically designed for training and evaluating dehazing methods in such scenarios. Finally, we conduct extensive experiments to test our method against several SOTA dehazing methods and demonstrate its superiority.
(This article belongs to the Topic Computer Vision and Image Processing, 2nd Edition)
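The role of depth in the scattering model reduces to t(x) = exp(−β·d(x)); the sketch below shows how a predicted depth map translates into a transmission map before guiding dehazing, with β as a placeholder scattering coefficient.

```python
import numpy as np

def transmission_from_depth(depth_m: np.ndarray, beta: float = 1e-3) -> np.ndarray:
    """Atmospheric scattering: transmission decays exponentially with scene depth,
    t(x) = exp(-beta * d(x)); long-range aerial pixels get very small t."""
    return np.exp(-beta * depth_m)

depth = np.linspace(100.0, 5000.0, 5)    # near-field to long-range depths (m)
print(transmission_from_depth(depth))     # from ~0.90 down to ~0.007
```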

12 pages, 8913 KiB  
Communication
Study on the Robustness of an Atmospheric Scattering Model under Single Transmittance
by Xiaotian Shi, Yue Ming, Lin Ju and Shouqian Chen
Photonics 2024, 11(6), 515; https://doi.org/10.3390/photonics11060515 - 28 May 2024
Viewed by 1305
Abstract
When light propagates in a scattering medium such as haze, it is partially scattered and absorbed, which decreases the intensity of the light emitted by the imaging target and increases the intensity of the scattered light. This phenomenon leads to a significant reduction in the quality of images taken in hazy environments. The atmospheric scattering model was proposed to describe this physical process of image degradation in haze. However, the accuracy of the model in typical fog-image restoration is affected by many factors: in general fog images, the atmospheric light and haze transmittance vary spatially, which makes it difficult to quantify how the accuracy of the model parameters influences the recovery accuracy. In this paper, the atmospheric scattering model was applied to the restoration of hazed images with a single transmittance. We acquired hazed images with a single transmittance from 0.05 to 1 in indoor experiments and investigated the dehazing stability of the atmospheric scattering model by adjusting the atmospheric light and transmittance parameters. For each transmittance, the relative recovery accuracies were calculated when the atmospheric light and the transmittance each deviated from its optimal value by 0.1, and the maximum parameter-estimation deviations that still yield recovery accuracies of 90%, 80%, and 70% were determined.
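A minimal numerical version of this kind of robustness check: synthesize a hazy image with known atmospheric light and transmittance, perturb the assumed parameters around their true values, and record how the recovery error grows; the error metric and deviation grid below are illustrative, not the paper's protocol.

```python
import numpy as np

def haze(clear, A, t):                 # forward atmospheric scattering model
    return clear * t + A * (1.0 - t)

def dehaze(hazy, A, t):                # inverse model
    return (hazy - A * (1.0 - t)) / t

clear = np.random.rand(64, 64)
A_true, t_true = 0.8, 0.3              # single transmittance, as in the experiments
hazy = haze(clear, A_true, t_true)

for dA in (-0.1, 0.0, 0.1):            # sweep deviations of the assumed airlight
    rec = dehaze(hazy, A_true + dA, t_true)
    rel_err = np.mean(np.abs(rec - clear)) / np.mean(clear)
    print(f"A deviation {dA:+.1f}: mean relative error {rel_err:.3f}")
```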

25 pages, 6335 KiB  
Article
The Uncertainty Assessment by the Monte Carlo Analysis of NDVI Measurements Based on Multispectral UAV Imagery
by Fatemeh Khalesi, Imran Ahmed, Pasquale Daponte, Francesco Picariello, Luca De Vito and Ioan Tudosa
Sensors 2024, 24(9), 2696; https://doi.org/10.3390/s24092696 - 24 Apr 2024
Cited by 6 | Viewed by 2580
Abstract
This paper proposes a workflow to assess the uncertainty of the Normalized Difference Vegetation Index (NDVI), a critical index used in precision agriculture to determine plant health. From a metrological perspective, it is crucial to evaluate the quality of vegetation indices, which are usually obtained by processing multispectral images for measuring vegetation, soil, and environmental parameters. For this reason, it is important to assess how the NDVI measurement is affected by the camera characteristics, ambient light conditions, as well as atmospheric and seasonal/weather conditions. The proposed study investigates the impact of atmospheric conditions on solar irradiation and vegetation reflection captured by a multispectral UAV camera in the red and near-infrared bands and the variation of the nominal wavelengths of the camera in these bands. Specifically, the study examines the influence of atmospheric conditions in three scenarios: dry–clear, humid–hazy, and a combination of both. Furthermore, this investigation takes into account solar irradiance variability and the signal-to-noise ratio (SNR) of the camera. Through Monte Carlo simulations, a sensitivity analysis is carried out against each of the above-mentioned uncertainty sources and their combination. The obtained results demonstrate that the main contributors to the NDVI uncertainty are the atmospheric conditions, the nominal wavelength tolerance of the camera, and the variability of the NDVI values within the considered leaf conditions (dry and fresh).
(This article belongs to the Special Issue Advanced UAV-Based Sensor Technologies)
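A compact Monte Carlo sketch of the idea: perturb the red and NIR reflectances with stand-ins for the uncertainty sources named above (a multiplicative irradiance/atmosphere term and additive sensor noise) and read the NDVI spread off the ensemble; all distributions and magnitudes are placeholders, not the paper's uncertainty budget.

```python
import numpy as np

rng = np.random.default_rng(0)

def ndvi(nir, red):
    return (nir - red) / (nir + red)

red_true, nir_true = 0.08, 0.45         # healthy-vegetation reflectances (placeholder)
n = 100_000

# Placeholder uncertainty sources: multiplicative atmospheric/irradiance error + additive SNR noise
red = red_true * rng.normal(1.0, 0.03, n) + rng.normal(0.0, 0.005, n)
nir = nir_true * rng.normal(1.0, 0.03, n) + rng.normal(0.0, 0.005, n)

samples = ndvi(nir, red)
print(f"NDVI = {samples.mean():.3f} ± {samples.std():.3f} (1-sigma)")
```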

17 pages, 6001 KiB  
Communication
Restoration of Binocular Images Degraded by Optical Scattering through Estimation of Atmospheric Coefficients
by Victor H. Diaz-Ramirez, Rigoberto Juarez-Salazar, Martin Gonzalez-Ruiz and Vincent Ademola Adeyemi
Sensors 2023, 23(21), 8918; https://doi.org/10.3390/s23218918 - 2 Nov 2023
Viewed by 1940
Abstract
A binocular vision-based approach for the restoration of images captured in a scattering medium is presented. The scene depth is computed by triangulation using stereo matching. Next, the atmospheric parameters of the medium are determined with an introduced estimator based on the Monte Carlo method. Finally, image restoration is performed using an atmospheric optics model. The proposed approach effectively suppresses optical scattering effects without introducing noticeable artifacts in processed images. The accuracy of the proposed approach in the estimation of atmospheric parameters and image restoration is evaluated using synthetic hazy images constructed from a well-known database. The practical viability of our approach is also confirmed through a real experiment for depth estimation, atmospheric parameter estimation, and image restoration in a scattering medium. The results highlight the applicability of our approach in computer vision applications in challenging atmospheric conditions.
(This article belongs to the Section Sensing and Imaging)
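The first steps of such a pipeline reduce to standard relations: depth from stereo disparity (d = f·B/disparity) and transmission from depth, followed by a scattering-model inversion; the sketch below chains them together, with focal length, baseline, atmospheric light, and β as placeholder values rather than estimated ones.

```python
import numpy as np

def depth_from_disparity(disparity_px: np.ndarray, focal_px: float, baseline_m: float) -> np.ndarray:
    """Stereo triangulation: depth = f * B / disparity (disparity in pixels)."""
    return focal_px * baseline_m / np.maximum(disparity_px, 1e-6)

def restore(hazy: np.ndarray, depth_m: np.ndarray, A: float, beta: float) -> np.ndarray:
    """Scattering-model restoration with transmission derived from depth."""
    t = np.clip(np.exp(-beta * depth_m), 0.1, 1.0)
    return np.clip((hazy - A * (1.0 - t)) / t, 0.0, 1.0)

disparity = np.full((64, 64), 8.0)                         # matched-pixel disparity (px)
depth = depth_from_disparity(disparity, focal_px=1200.0, baseline_m=0.2)  # = 30 m
hazy = np.random.rand(64, 64)                              # stand-in intensity image
print(restore(hazy, depth, A=0.9, beta=0.02).shape)        # (64, 64)
```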

13 pages, 3516 KiB  
Article
Analysis of Hazy Ga- and Zr-Co-Doped Zinc Oxide Films Prepared with Atmospheric Pressure Plasma Jet Systems
by Yu-Tang Luo, Zhehan Zhou, Cheng-Yang Wu, Li-Ching Chiu and Jia-Yang Juang
Nanomaterials 2023, 13(19), 2691; https://doi.org/10.3390/nano13192691 - 1 Oct 2023
Cited by 3 | Viewed by 1742
Abstract
Co-doped ZnO thin films have attracted much attention in the field of transparent conductive oxides (TCOs) in solar cells, displays, and other transparent electronics. Unlike conventional single-doped ZnO, co-doped ZnO utilizes two different dopant elements, offering enhanced electrical properties and more controllable optical properties, including transmittance and haze; however, most previous studies focused on the electrical properties, with less attention paid to obtaining high haze using co-doping. Here, we prepare high-haze Ga- and Zr-co-doped ZnO (GZO:Zr or ZGZO) using atmospheric pressure plasma jet (APPJ) systems. We conduct a detailed analysis to examine the interplay between Zr concentrations and film properties. UV-Vis spectroscopy shows a remarkable haze factor increase from 7.19% to 34.8% (+384%) for the films prepared with 2 at% Zr and 8 at% Ga precursor concentrations. EDS analysis reveals Zr accumulation on larger and smaller particles, while SIMS links particle abundance to impurity uptake and altered electrical properties. XPS identifies Zr mainly as ZrO2 because of lattice stress from Zr doping, forming clusters at lattice boundaries and corroborating the SEM findings. Our work presents a new way to fabricate Ga- and Zr-co-doped ZnO for applications that require low electrical resistivity, high visible transparency, and high haze.
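The haze factor reported from UV-Vis spectroscopy is conventionally the ratio of diffuse to total transmittance; a minimal calculation is shown below, with synthetic placeholder spectra rather than measured ones.

```python
import numpy as np

def haze_factor(total_transmittance: np.ndarray, diffuse_transmittance: np.ndarray) -> float:
    """Optical haze (%) = diffuse transmittance / total transmittance x 100,
    averaged over the measured visible wavelengths."""
    return float(np.mean(diffuse_transmittance / total_transmittance) * 100.0)

wavelengths = np.linspace(400, 800, 81)                  # nm (placeholder grid)
T_total = np.full_like(wavelengths, 0.82)                # synthetic film transmittance
T_diffuse = np.full_like(wavelengths, 0.28)              # synthetic scattered part
print(f"haze factor ≈ {haze_factor(T_total, T_diffuse):.1f}%")   # ≈ 34.1%
```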
