Article

Visibility Enhancement in Fire and Rescue Operations: ARMS Extension with Gaussian Estimation

1
Graduate School of Computer Science and Systems Engineering, Kyushu Institute of Technology, 680-4 Kawazu, Iizuka 820-8502, Fukuoka, Japan
2
School of ICT, Robotics, and Mechanical Engineering, IITC, Hankyong National University, Anseong 17579, Gyeonggi-do, Republic of Korea
*
Authors to whom correspondence should be addressed.
Electronics 2026, 15(3), 667; https://doi.org/10.3390/electronics15030667
Submission received: 5 January 2026 / Revised: 29 January 2026 / Accepted: 2 February 2026 / Published: 3 February 2026
(This article belongs to the Special Issue Advanced Techniques in Real-Time Image Processing)

Abstract

In fire and emergency rescue operations, visibility is often severely degraded by smoke, airborne debris, or atmospheric pollutants including smog and yellow dust. Several image restoration techniques, including Dark Channel Prior (DCP), Color Attenuation Prior (CAP), Peplography, and Adaptive Removal via Mask for Scatter (ARMS), have been proposed to recover clear images under such conditions. However, these methods exhibit significant limitations in heavy scattering environments. This paper proposes a novel visibility restoration method for disaster situations, building upon the state-of-the-art ARMS method. To maximize the suppression of scattering effects, the Scattering Media Model is refined through Gaussian estimation. Additionally, an overlapping matrix is introduced to effectively handle non-uniformly distributed scattering conditions. The proposed method is evaluated using a real rescue operation image dataset provided by the Fire and Disaster Management Agency of Japan. Qualitative visual assessments and quantitative performance metrics demonstrate that the proposed approach significantly outperforms conventional methods under severe scattering conditions.

1. Introduction

Vision is one of the most essential elements of human perception, serving as a foundation for cognition and situational awareness. However, environments such as construction areas and disaster rescue sites, particularly those affected by fire or earthquakes, often experience severe visibility degradation caused by scattering media, including smoke, airborne debris, smog, and dust. To overcome these challenges, a variety of visibility restoration methods have been explored over the past few decades. Broadly, these methods can be classified into three categories: physical approaches [1,2,3], image signal processing (ISP) methods [4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19], and deep learning-based models [20,21,22,23,24,25]. In this paper, we focus on ISP-based approaches due to their compact computational structures and physical interpretability. Within this domain, several representative methods have been proposed, including Dark Channel Prior (DCP) [4,5,6,7,8], Color Attenuation Prior (CAP) [9,10,11,12,13,14], Peplography [15,16,17], and Adaptive Removal via Mask for Scatter (ARMS) [18]. While these algorithms have demonstrated notable success in light or uniformly distributed scattering environments, their performance deteriorates under heavy or spatially non-uniform scattering conditions. To address these limitations, we propose a new visibility restoration method that extends the state-of-the-art (SOTA) ARMS. Specifically, the proposed method adopts a Gaussian-estimated scattering model and introduces an overlapping matrix operation to enhance performance under complex scattering distributions. The method is validated using a realistic rescue operation image dataset provided by the Fire and Disaster Management Agency (FDMA) of Japan, which includes practical scenarios involving fire suppression and human rescue under dense smoke conditions.
Comparative evaluations with conventional methods, such as DCP [4], CAP [9], Peplography [15,17], and ARMS [18], are conducted through both qualitative visualization and quantitative assessment. Although existing works have primarily aimed to aid human visual perception in low-visibility environments, the ultimate objective of this research is to recover visibility for both human operators and computer vision systems. As a preliminary evaluation, object detection using the You Only Look Once (YOLO) [26] object detection model is applied, as illustrated in Figure 1. The same assessment methodology is further employed in Section 4 to demonstrate the effectiveness of the proposed approach. Moreover, because the rescue dataset lacks reference images, owing to its focus on real-world rescue training involving human-like dummies, quantitative evaluation is conducted using No-Reference Image Quality Assessment (NR-IQA) [27,28,29,30,31,32].
However, such studies must be demonstrated in real-world scenarios rather than simulated or controlled experimental environments. We therefore utilized the FDMA dataset, which records fire disaster and rescue training exercises for firefighters under harsh visibility conditions, and employed the YOLOv12 object detection model for the preliminary detection evaluation.
This paper demonstrates the effectiveness of the proposed method through a dual-centric evaluation: NR-IQA for “human-centric” and object detection performance for “machine-centric”. The significance of this dual-centric evaluation lies in its practical utility for emergency situations. In real-world disaster operations, rescue personnel or operators often suffer from reduced situational awareness due to extreme visibility degradations. The proposed method addresses this by not only improving visual quality for human perception but also ensuring that machine learning models can accurately identify hazards that might be overlooked by human recognition. This provides a comprehensive safety layer for practical applications such as autonomous rescue robots and firefighter assistance systems.
The remainder of this paper is organized as follows. Section 2 reviews related dehazing algorithms and discusses their limitations. Section 3 presents the proposed method in detail. Section 4 reports the experimental results and comparative evaluations. Finally, Section 5 provides conclusions and future perspectives.

2. Literature Review

2.1. Dark Channel Prior

As discussed in the Introduction, numerous methods have been proposed to restore visibility in scattering environments. Among them, Dark Channel Prior (DCP) [4] is one of the most widely recognized algorithms in the fields of image dehazing and deblurring.
The DCP is based on the Atmosphere Scattering Model (ASM) [33], which estimates the scattering effects caused by airlight. The atmosphere scattering principle used in DCP is illustrated in Figure 2. The captured image can be represented as [33]:
I(x) = J(x) t(x) + A [1 − t(x)],
where x denotes an image pixel index, and I(x), J(x), t(x), and A represent the captured image, the haze-free image, the transmission map, and the airlight, respectively. The DCP estimates the approximate transmission map t(x) by analyzing the dark channel of the image. This approach has served as the foundation for subsequent methods such as the Color Attenuation Prior (CAP) [9] and various deep learning-based models. However, DCP faces a key challenge in accurately estimating scene depth, and the subtraction of the ASM using the estimated transmission map often introduces distortion in the restored image. The haze-free image is computed as [4]:
J(x) = ( I(x) − A [1 − t(x)] ) / t(x).
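As a compact illustration of the DCP pipeline described above, the following NumPy sketch estimates the dark channel, the airlight A, and the transmission map t(x), and then inverts the ASM. This is a simplified sketch, not the authors' implementation: the patch size and the ω and t0 constants are conventional values from the DCP literature, and the naive minimum filter is written as a plain loop for clarity.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over color channels, then a local minimum filter."""
    dc = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(dc, pad, mode="edge")
    out = np.empty_like(dc)
    h, w = dc.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dehaze_dcp(img, omega=0.95, t0=0.1, patch=15):
    """Recover J(x) = (I(x) - A[1 - t(x)]) / t(x) from a hazy image in [0, 1].

    omega and t0 are conventional DCP constants (illustrative assumptions).
    """
    dc = dark_channel(img, patch)
    # Airlight A: mean color of the brightest dark-channel pixels (top 0.1%).
    n = max(1, int(dc.size * 0.001))
    idx = np.argsort(dc.ravel())[-n:]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Transmission t(x) = 1 - omega * dark_channel(I / A), floored at t0.
    t = np.clip(1.0 - omega * dark_channel(img / A, patch), t0, 1.0)
    return np.clip((img - A) / t[..., None] + A, 0.0, 1.0)
```

With a synthetic hazy image generated from the ASM with constant transmission, the recovery stays within the valid intensity range and approximately inverts the haze.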

2.2. Peplography

Following the introduction of the DCP, a novel image restoration technique termed Peplography [15,16,17] was developed for imaging through scattering media. In this approach, the scattering medium is statistically modeled by a Gaussian distribution, whose parameters are estimated using maximum likelihood estimation (MLE), and the estimated scattering component is subsequently subtracted from the recorded hazy image. As a result, Peplography yields a dark image that is effectively free from the influence of the scattering medium, and the corresponding scattering-free image is expressed as [15]:
I′(x) = I(x) − μ_img,
where I(x) and I′(x) denote the captured image and the image without scattering effects, respectively. The parameter μ_img represents the mean of the Gaussian distribution obtained by MLE. Using this estimate, Peplography employs a photon-counting algorithm to statistically extract ballistic photons that traverse the scattering medium, thereby restoring the intensity of the dark image. The photon-counting algorithm is given as [15]:
Î(x) | Ĩ(x) ~ Poisson[ γ_c N_p Ĩ(x) ],
where Î(x) and Ĩ(x) denote the reconstructed image without scattering effects and the normalized irradiance of I′(x), respectively. The parameters N_p, c, and γ_c represent the expected number of ballistic photons derived from Ĩ(x), the color channel index, and the ballistic photon coefficient for each color channel, respectively. Photon shot noise is modeled under the estimated ballistic photon distribution using a Poisson process, and Peplography assumes that a scattering image is captured as illustrated in Figure 3.
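A minimal per-channel sketch of these two steps, mean subtraction followed by Poisson photon counting, is shown below. It makes simplifying assumptions for brevity: a single global mean is used instead of local windows, and the photon budget `n_photons` (N_p) is an illustrative value, not a constant from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

def peplography(img, n_photons=1000):
    """Peplography sketch: subtract the MLE scattering mean, then photon-count.

    n_photons (N_p) is an assumed ballistic-photon budget per channel.
    """
    out = np.empty_like(img, dtype=float)
    for c in range(img.shape[2]):
        chan = img[..., c].astype(float)
        # I'(x) = I(x) - mu_img, clipped to the non-negative "dark image".
        dark = np.clip(chan - chan.mean(), 0.0, None)
        total = dark.sum()
        irr = dark / total if total > 0 else dark      # normalized irradiance
        # Photon shot noise: I_hat(x) ~ Poisson[N_p * irr(x)]
        out[..., c] = rng.poisson(n_photons * irr)
    m = out.max()
    return out / m if m > 0 else out                   # rescale for display
```

On an image consisting of a uniform haze level plus one bright pixel, the photon counts concentrate on the bright pixel, mirroring how the dark image isolates ballistic light.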

2.3. Adaptive Removal via Mask for Scatter

Among the statistical approaches for removing scattering media, one of the SOTA methods is ARMS. As described in the Introduction, the present work is developed on the basis of ARMS. ARMS exploits an analogy between imaging systems and wireless communication systems: in imaging, reflected light from an object propagates to the camera as illustrated in Figure 4a, whereas in wireless communication, a transmitted signal propagates from the transmitter to the receiver through the channel, as depicted in Figure 4b.
In both cases, the propagating signals are affected by scattering media along the path, yet wireless systems routinely recover degraded signals at the receiver by modeling channel fading. Motivated by this analogy, ARMS transfers wireless fading theory to the imaging domain and incorporates two fading models into the image-processing pipeline, namely, the Rician [34] and Rayleigh [35,36] models, as illustrated in Figure 5.

2.3.1. Scattered Image Model

The ARMS comprises two models, analogous to the fading models used in wireless communication systems: the Scattered Image Model (SIM) [18] and the Scattering Media Model (SMM) [18]. Even when scattering media are distributed in front of the camera, the captured images retain faint object edges, indicating that a residual Line-of-Sight (LOS) component remains. In wireless communications, the Rician fading model [34,37,38] is employed when a dominant direct signal is superimposed with multiple scattered components to describe and compensate for signal power degradation induced by the scattering channel. Motivated by this similarity, ARMS assumes that the SIM exhibits Rician fading behavior, as illustrated in Figure 6, and models it using the Rice distribution, which is given by the following equation [18]:
f_Rice(r | ν, σ) = (r / σ²) exp( −(r² + ν²) / (2σ²) ) I₀( rν / σ² ),
Here, r and ν denote the instantaneous power and the average magnitude of the received signal, respectively. The parameter σ represents the scale of signal fluctuations, and I₀ is the 0th-order modified Bessel function of the first kind. However, these parameters cannot be directly mapped to image quantities. In imaging, scattering causes additional diffuse light to be recorded by the camera as noise, which increases the mean image intensity while reducing the intensity standard deviation as the scattering strength grows. By contrast, in wireless communication systems, scattering leads to signal fading, so the received signal power decreases as the channel becomes more severe and the receiver fails to detect part of the signal. To address this discrepancy, ARMS introduces modified parameter definitions for the imaging context, which are given as follows [18]:
μ̃ = L / μ_img,
σ̃ = L / σ_img,
where μ_img and σ_img denote the mean and standard deviation of the image, and μ̃ and σ̃ are their corresponding modified parameters. The parameter L represents the maximum intensity in the dynamic range and is set to 255 for 8-bit images. To quantify the modified parameters, ARMS employs the k-factor, a metric used in wireless communication systems to characterize the severity of scattering; a k-factor close to 0 indicates the presence of strongly scattering media between the transmitter and receiver. The k-factor is defined as follows [18]:
k = ν² / (2σ²).
The modified k̃-factor, expressed in terms of μ̃ and σ̃, is given by the following equations [18]:
k = μ̃² / (2σ̃²) = (L / μ_img)² / ( 2 (L / σ_img)² ),
k̃ = σ_img² / (2 μ_img²).
To evaluate the modified parameters and the k̃-factor, two images were acquired, as shown in Figure 7: a clear image and a scattered image. The clear image has a mean intensity of μ_c = 35.096 and a standard deviation of σ_c = 43.374, whereas the scattered image has μ_s = 166.740 and σ_s = 28.068. From these statistics, the corresponding modified k̃-factors are computed as k̃_c = 0.764 for the clear image and k̃_s = 0.014 for the scattered image, indicating that the clear image is much less affected by scattering media than the scattered image. A small k̃-factor thus characterizes a highly scattering environment, consistent with the interpretation of the Rician k-factor in wireless channels. This experiment also confirms that the modified parameters effectively bridge imaging and wireless communication models, as they can be embedded into the Rice distribution to formulate the SIM. The SIM is expressed by the following equation [18]:
f_SIM(I | μ̃, σ̃) = (I / σ̃²) exp( −(I² + μ̃²) / (2σ̃²) ) I₀( I μ̃ / σ̃² ),
where I denotes the image intensity, and μ̃ and σ̃ are the modified mean and standard deviation of the image, respectively. Using this formulation, ARMS defines the SIM, establishing an explicit correspondence with the Rician fading model. Figure 6 illustrates this parallel between the Rician fading model and SIM.
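The k̃-factor computation above reduces to two image statistics, so it can be sketched in a few lines; the function below simply applies k̃ = σ_img² / (2 μ_img²), and the inline constants reproduce the Figure 7 statistics reported in the text.

```python
import numpy as np

def k_tilde(img):
    """Modified k-factor of an image: k~ = sigma_img^2 / (2 * mu_img^2).

    Values near 0 indicate heavy scattering; larger values indicate a
    clearer scene, mirroring the Rician k-factor in wireless channels.
    """
    mu = float(np.mean(img))
    sigma = float(np.std(img))
    return sigma ** 2 / (2.0 * mu ** 2)

# The statistics reported for Figure 7 reproduce the paper's k~ values:
k_clear = 43.374 ** 2 / (2 * 35.096 ** 2)      # ~ 0.764 (clear image)
k_scatter = 28.068 ** 2 / (2 * 166.740 ** 2)   # ~ 0.014 (scattered image)
```

Plugging the reported means and standard deviations into the formula recovers k̃_c ≈ 0.764 and k̃_s ≈ 0.014, matching the values quoted above.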

2.3.2. Scattering Media Model

To mitigate the scattering effect, ARMS introduces the second model, the Scattering Media Model (SMM), which is derived from the Rayleigh fading model in wireless communication theory. The Rayleigh fading model characterizes channels where no direct LOS path exists between the transmitter and receiver, and the received signal arises solely from multiple scattered components produced by diffraction and reflection. Exploiting this principle, ARMS formulates the SMM to describe and estimate complex scattering environments in imaging, as illustrated in Figure 8, which highlights the correspondence between the SMM and the Rayleigh fading model. In this method, the Rayleigh fading behavior is modeled using the Rayleigh distribution, given by the following equation [18]:
f_Rayleigh(r, σ) = (r / σ²) exp( −r² / (2σ²) ),
where r denotes the received signal power and σ is the scale parameter characterizing the magnitude of signal fluctuations. The Rician fading model, discussed in Section 2.3.1, describes channels with both direct and scattered propagation components, whereas the Rayleigh fading model accounts only for purely scattered propagation without an LOS path. Accordingly, the Rayleigh distribution can be interpreted as a special case of the Rice distribution in which the direct propagation parameter ν is zero.
ARMS also introduces modified parameters to translate wireless communication fading theory into imaging systems. The SMM is then formulated using these parameters within the Rayleigh distribution, yielding the following equation [18]:
f_SMM(I, σ̃) = (I / σ̃²) exp( −I² / (2σ̃²) ),
where I denotes the intensity of the captured image and σ̃ is the modified standard deviation of the image. Using the SIM and SMM, ARMS retrieves the direct propagation component by subtracting the two models. The resulting direct propagation component (DPC) is given by the following equation [18]:
f_DPC(I | μ̃, σ̃) = (I / σ̃²) exp( −I² / (2σ̃²) ) [ I₀( I μ̃ / σ̃² ) exp( −μ̃² / (2σ̃²) ) − 1 ].
Conventional fading models are defined as one-dimensional functions. ARMS generalizes these models to two dimensions by introducing additional parameters and then uses the resulting distribution to construct Fourier-domain masks that suppress the influence of scattering media. The clear image is subsequently obtained using the following equations [18]:
I_clear = I_scatter ∗ f_DPC = F⁻¹{ F{I_scatter} × F_DPC },
where I_clear and I_scatter denote the restored clear image and the captured scattered image, respectively, and F_DPC is the Fourier-domain mask obtained from F{f_DPC}. Figure 9 presents the clear image, the scattered image, and the ARMS-restored image, whose modified k̃-factors are k̃_c = 0.764, k̃_s = 0.014, and k̃_ARMS = 0.079, respectively.
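To make the masking step concrete, the sketch below evaluates the DPC expression, extends it to two dimensions as a radial profile, and applies it as a Fourier-domain mask. The radial 2-D generalization and the final display normalization are illustrative assumptions, since the paper does not specify its 2-D parameterization here; only the f_DPC formula itself follows the equations above.

```python
import numpy as np
from scipy.special import i0e  # exponentially scaled I0 avoids overflow

def f_dpc(I, mu_t, sigma_t):
    """Direct propagation component f_DPC(I) = SIM - SMM."""
    base = (I / sigma_t ** 2) * np.exp(-I ** 2 / (2 * sigma_t ** 2))
    # I0(x) * exp(-mu~^2 / (2 sigma~^2)), computed stably via i0e(x) = I0(x) e^{-x}
    x = I * mu_t / sigma_t ** 2
    bessel_term = i0e(x) * np.exp(x - mu_t ** 2 / (2 * sigma_t ** 2))
    return base * (bessel_term - 1.0)

def arms_restore(img):
    """ARMS-style sketch: apply a 2-D f_DPC profile as a Fourier-domain mask."""
    L = 255.0
    mu_t = L / img.mean()                           # modified mean  (mu~ = L / mu_img)
    sigma_t = L / img.std()                         # modified std   (sigma~ = L / sigma_img)
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)            # radial coordinate (assumed 2-D form)
    mask_spatial = f_dpc(r + 1e-6, mu_t, sigma_t)
    F_img = np.fft.fft2(img)
    F_mask = np.fft.fft2(np.fft.ifftshift(mask_spatial))
    out = np.real(np.fft.ifft2(F_img * F_mask))     # I_clear = I_scatter * f_DPC
    out -= out.min()                                # display normalization (assumption)
    return out / out.max() * L if out.max() > 0 else out
```

Using the exponentially scaled Bessel function `i0e` keeps the product I₀(x)·exp(−μ̃²/(2σ̃²)) numerically stable even when x is large.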
As illustrated in Figure 1, ARMS exhibits strong performance by exploiting a Fourier-domain mask, effectively removing globally distributed scattering media. However, the inherent global nature of Fourier-domain filtering makes it difficult to suppress locally or partially distributed scattering. In contrast, spatial-domain approaches such as DCP, CAP, and Peplography perform pixel- or patch-wise processing, making them better suited for heterogeneous scattering media. To address these limitations, in this paper, we propose a method that refines the SMM and partitions the image into small local regions, thereby enhancing the removal of scattering effects.

3. Proposed Algorithm

As discussed at the end of Section 2.3, conventional ARMS has two main limitations arising from the SMM and the Fourier-domain filtering strategy. To overcome these issues, in this paper, we introduce two enhancements: a modified ARMS (mARMS) based on a modified SMM (mSMM), and an overlapping-matrix scheme for local processing. In the proposed approach, the scattering medium in the imaging system is modeled with a Gaussian distribution within the SMM, and the mARMS is applied to small, overlapping image patches to alleviate the shortcomings of global Fourier-domain filtering.

3.1. Modified Scattering Media Model

To describe the scattering media, this work adopts the Rayleigh fading model from wireless communication theory. As discussed in Section 2.3.2, Rayleigh fading assumes the absence of an LOS component between the transmitter and receiver, so the receiver records only randomly scattered multi-path signals. Analogously, in imaging systems operating under scattering conditions, the camera sensor measures randomly scattered light rather than a single, well-defined propagation path. Thus, to estimate the scattering effects, we utilize the Rayleigh fading model to define the scattering media, which relates to Rayleigh scattering and is readily described by the Rayleigh distribution [35,36,39,40].
However, Rayleigh scattering is typically modeled for scattering particles smaller than the wavelength of light. The dataset, provided by the FDMA of Japan, is captured under non-uniform, heavy scattering conditions, which are difficult to characterize with such a specific physical distribution.
Specifically, the captured images are composed of a superposition of numerous light waves with random phases. While passing through the scattering media, the propagation of the light exhibits randomness, which can be described by a Gaussian distribution. The sensor integrates the energy of a large number of scattered light rays impinging on each pixel over the exposure time; according to the Central Limit Theorem (CLT), the sum of these independent random contributions tends toward a Gaussian distribution. Consequently, the scattering medium in the imaging system is modeled using a Gaussian distribution, expressed by the following equation [41]:
f(x) = ( 1 / (√(2π) s) ) exp( −(x − x̄)² / (2s²) ),
where x̄ and s denote the sample mean and standard deviation of the scattering medium, respectively, yielding the mSMM for imaging systems under a Gaussian model. Even when a scattered image appears globally uniform, the local density of the scattering medium generally varies across the scene. To estimate these local scattering characteristics, the image is partitioned into small regions, and area extraction is performed as follows [15]:
I_scatter^(x,y)(m, n) = I_scatter(x + m, y + n),
x = 0, 1, 2, …, h − k_x,  y = 0, 1, 2, …, v − k_y,
m = 0, 1, 2, …, k_x − 1,  n = 0, 1, 2, …, k_y − 1,
where I_scatter^(x,y) denotes the local patch at column x and row y extracted from the captured scattered image I_scatter, h and v are the width and height of the image, and k_x and k_y are the width and height of the local patch, respectively. The mSMM parameters are estimated by treating the local scattering intensities as samples from a Gaussian distribution and computing the unknown mean μ via MLE as follows [15]:
L( I_scatter^(x,y)(m, n) | μ_(x,y), σ_(x,y) ) = ∏_{m=0}^{k_x−1} ∏_{n=0}^{k_y−1} ( 1 / (σ_(x,y) √(2π)) ) exp( −( I_scatter^(x,y)(m, n) − μ_(x,y) )² / (2σ_(x,y)²) )
= ( 1 / (σ_(x,y) √(2π)) )^{k_x k_y} exp( −Σ_{m=0}^{k_x−1} Σ_{n=0}^{k_y−1} ( I_scatter^(x,y)(m, n) − μ_(x,y) )² / (2σ_(x,y)²) ).
To simplify the computation, the log-likelihood function is used, as given by the following expression [15]:
log L( I_scatter^(x,y)(m, n) | μ_(x,y), σ_(x,y) ) = k_x k_y log( 1 / (σ_(x,y) √(2π)) ) − Σ_{m=0}^{k_x−1} Σ_{n=0}^{k_y−1} ( I_scatter^(x,y)(m, n) − μ_(x,y) )² / (2σ_(x,y)²).
The estimated mSMM is obtained as follows [15]:
μ̂_(x,y) = argmax_{μ_(x,y)} log L( I_scatter^(x,y)(m, n) | μ_(x,y), σ_(x,y) ) = ( 1 / (k_x k_y) ) Σ_{m=0}^{k_x−1} Σ_{n=0}^{k_y−1} I_scatter^(x,y)(m, n).
Through MLE, the mSMM reduces to the sample mean of each local region (k_x × k_y). Using this statistical estimate from the captured image, the mSMM and the modified f_DPC are obtained as follows:
f̂_DPC(I | μ̃, σ̃) = SIM − mSMM = (I / σ̃²) exp( −(I² + μ̃²) / (2σ̃²) ) I₀( I μ̃ / σ̃² ) − μ̂(I).
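Since the MLE reduces to a per-patch sample mean, the mSMM estimate can be computed directly from the image. The sketch below uses a plain double loop for clarity (a real implementation would use a box filter or cumulative sums); the patch size is an illustrative default, not a value fixed by this section.

```python
import numpy as np

def local_means(img, kx=8, ky=8):
    """MLE of the mSMM Gaussian mean per local patch.

    The maximum-likelihood estimate reduces to the sample mean over each
    kx-by-ky region, evaluated here at every valid patch position.
    """
    h, w = img.shape
    means = np.empty((h - ky + 1, w - kx + 1))
    for y in range(h - ky + 1):
        for x in range(w - kx + 1):
            means[y, x] = img[y:y + ky, x:x + kx].mean()
    return means
```

On a constant image every patch mean equals the constant, which is a quick sanity check that the estimator behaves as the closed-form MLE predicts.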

3.2. Overlapping Matrix

Conventional ARMS struggles to suppress partially distributed scattering media because of the inherently global nature of Fourier-domain filtering, as shown in Figure 10. To overcome this limitation, the proposed method introduces an overlapping-matrix scheme that applies mARMS within small local kernels, as shown in Figure 11. Even when scattering appears non-uniform at the image level, each small local kernel can be reasonably approximated as having uniformly distributed scattering. The local kernels, defined in Equation (17), are extracted and then aggregated onto an initially empty image via overlapping placement. To avoid overflow of pixel intensities during aggregation, the number of overlaps per pixel is tracked using an overlapping-kernel counting matrix, defined as [42]
Kernel_OL(x, y) = Σ_{m=0}^{k_x−1} Σ_{n=0}^{k_y−1} 1(x + m, y + n),
where K e r n e l O L ( x , y ) denotes the overlapping counting matrix, and 1 ( · ) is an indicator function that returns 1 when the pixel ( x + m , y + n ) is covered by the overlapping kernel and 0 otherwise. The captured image is likewise decomposed into patches of the same size as the overlapping kernel and each patch is processed by mARMS as follows:
mARMS_OL(x, y) = Σ_{m=0}^{k_x−1} Σ_{n=0}^{k_y−1} [ I(x + m, y + n) ∗ f̂_DPC(x + m, y + n) ],
where m A R M S O L ( x , y ) denotes the kernel processed by mARMS at location ( x , y ) .
The final restored clear image is obtained by aggregating the overlapped kernel outputs and normalizing each pixel value by the corresponding entry in the overlapping counting matrix as
I_clear = Σ_{x=0}^{N_x−k_x} Σ_{y=0}^{N_y−k_y} Σ_{m=0}^{k_x−1} Σ_{n=0}^{k_y−1} mARMS_OL^(x,y)(m, n) / Kernel_OL^(x,y)(m, n).
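The aggregation and normalization in the equations above can be sketched as an overlap-add loop. The stride and kernel size below are illustrative, and `patch_fn` stands in for the per-kernel mARMS operation, which is not reimplemented here.

```python
import numpy as np

def overlap_add(patch_fn, img, kx=8, ky=8, stride=4):
    """Overlapping-matrix scheme: process each kernel-sized patch,
    accumulate the outputs, and normalize by the per-pixel overlap
    count Kernel_OL to prevent intensity overflow."""
    h, w = img.shape
    acc = np.zeros((h, w))
    count = np.zeros((h, w))            # overlapping-kernel counting matrix
    for y in range(0, h - ky + 1, stride):
        for x in range(0, w - kx + 1, stride):
            acc[y:y + ky, x:x + kx] += patch_fn(img[y:y + ky, x:x + kx])
            count[y:y + ky, x:x + kx] += 1.0
    count[count == 0] = 1.0             # uncovered pixels stay unscaled
    return acc / count
```

With `patch_fn` as the identity, the scheme reproduces the input wherever kernels cover the image, confirming that the counting-matrix normalization cancels the multiple overlapping contributions exactly.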

4. Experimental Results

To validate the effectiveness of the proposed method, qualitative and quantitative comparisons were conducted against conventional approaches, including DCP [4], CAP [9], IDE [14], Peplography [15], the method proposed by Song et al. [17], and ARMS [18]. All experiments used a rescue operation image dataset acquired under heavy-smoke conditions by the FDMA of Japan, which poses severe challenges for standard dehazing algorithms. The dataset consists of 600 consecutive frames recorded at 20 frames per second (FPS) with a resolution of 720 × 420 pixels during fire disaster and rescue training exercises for firefighters under heavy scattering conditions. All experiments were implemented in Python 3.11 with OpenCV 4.10, NumPy 2.2.6, and SciPy 1.16.1 on a PC with an Intel Core i7-10700K CPU @ 3.80 GHz and 16 GB RAM. Furthermore, we adopted a three-way comparison to ensure a comprehensive evaluation: qualitative comparison, quantitative comparison, and object detection-based validation. First, the qualitative comparison assesses the visual performance of each processed image in Section 4.1. Second, the quantitative comparison evaluates each processed image with NR-IQA metrics, including NIQE, PIQE, BRISQUE, and BLIINDS-II, in Section 4.2. Finally, we utilize the YOLOv12 object detection model to evaluate each processed image through both visual detection results and numerical scores, including mAP, Recall, Precision, and F-1 score, in Section 4.3.
As illustrated in Figure 12, the kernel size significantly impacts the result of the proposed method, revealing a trade-off between scattering media removal performance and color distortion. Specifically, smaller kernels, as shown in Figure 12a,f, remove scattering media effectively but introduce color distortion and box-shaped artifacts. Conversely, a larger kernel, as seen in Figure 12e,j, retains the original color information better but shows a reduced ability to remove heavy scattering media.
Furthermore, Figure 12f–j report the average precision (AP) scores: (f) 0.605, (g) 0.715, (h) 0.715, (i) 0.75, and (j) 0.72. These results demonstrate that selecting the optimal kernel size is important for achieving maximum performance. Thus, we utilized an 80 × 80 kernel for the FDMA dataset.

4.1. Qualitative Comparison

Figure 13 presents a qualitative comparison of the restoration results under non-uniformly distributed scattering media. The methods illustrated in Figure 13b,d noticeably reduce haze but substantially darken the image, causing loss of detail in shadowed regions, and also exhibit color distortions around bright light sources. The Peplography-based approaches in Figure 13e,f introduce pronounced noise because the photon-counting algorithm is most effective under extremely low-illumination conditions; in heavily scattered scenes, the expected photon count N_p becomes concentrated in bright regions associated with the scattering medium, amplifying noise near these areas.
ARMS, shown in Figure 13g, stably removes broadly distributed haze but struggles with partially distributed scattering due to the global nature of Fourier-domain filtering. In contrast, the proposed method in Figure 13h effectively suppresses both extensive and localized scattering, and the combination of mSMM with the overlapping-matrix strategy preserves fine structural details, such as textures, while maintaining natural color fidelity even under heavy and non-uniform smoke.

4.2. Quantitative Comparison

Qualitative comparisons rely on subjective visual inspection, so performance must also be evaluated using objective and impartial measures. For this purpose, image quality assessment (IQA) metrics are widely adopted, typically categorized into Full-Reference (FR), Reduced-Reference (RR), and Non-Reference (NR) methods. Because the rescue operation dataset is captured under real fire disaster conditions, the corresponding ground-truth clear images are unavailable, making the FR and RR approaches impractical. Consequently, this paper employs NR-IQA metrics, including the Naturalness Image Quality Evaluator (NIQE) [27], the Perception-based Image Quality Evaluator (PIQE) [28], the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) [29], and the Blind Image Integrity Notation using DCT Statistics (BLIINDS-II) [30,31], to evaluate the restored images, where a lower score indicates higher perceptual quality and less distortion.
Figure 14 shows the frame-by-frame NR-IQA scores over 600 consecutive frames from the rescue operation video, enabling assessment of both static quality and temporal stability of each dehazing method. The proposed method consistently achieves lower NIQE scores than the competing approaches across the entire sequence, with a particularly clear advantage in Figure 14b,c. Traditional methods such as DCP and CAP exhibit large score fluctuations and higher values in the presence of dense, spatially varying smoke, whereas the proposed method maintains uniformly low scores over time.
Table 1 summarizes the average NR-IQA scores across the entire dataset, revealing that the proposed method attains the best performance on most metrics, with especially notable improvements in PIQE and BRISQUE relative to SOTA baselines. These quantitative results confirm that the proposed algorithm effectively suppresses scattering while preserving essential image information.

4.3. Vision Application with Deep Learning Models

This paper aims to support both human operators and computer vision systems by enhancing visibility in rescue operations. To assess the practical effectiveness of the proposed method in such scenarios, the latest YOLOv12 object detection model was applied to the restored images, and performance was quantified using standard metrics: mean Average Precision (mAP), Recall, Precision, and F-1 score. To ensure the reproducibility and fairness of the evaluation, all experiments were conducted using the YOLOv12 model pre-trained on the MSCOCO 2017 dataset [43]. The input images were resized to 640 × 640 resolution and their pixel values normalized to the [0, 1] range. Furthermore, the detection task was configured as binary classification, person or other, to specifically assess the performance of the model in identifying human targets within the FDMA dataset.
Figure 15 illustrates the qualitative detection results. In the original hazy frames shown in Figure 15a, heavy scattering severely obscures object features (e.g., the person on the right), leading to missed detections and low-precision predictions. Conventional methods, particularly those based on Peplography, frequently fail to detect targets because of residual haze and information distortion. By contrast, the proposed method effectively suppresses scattering, yielding detections with higher confidence and tighter bounding boxes than either the hazy input or the conventionally restored images.
For quantitative evaluation, Table 2 reports the object detection metrics, mAP, Recall, Precision, and F-1 score, computed on the rescue operation dataset. The proposed method achieves the highest values across all metrics, with especially notable gains in Precision and F-1 score (0.048 and 0.5514, respectively, relative to the baseline), indicating a substantial reduction in both missed detections and false alarms compared to existing approaches. These results demonstrate that the proposed method markedly improves feature predictability for modern detection networks under realistic disaster conditions.

5. Conclusions

In this paper, we presented an image processing method for fire disaster and emergency rescue operations, proposing a novel visibility enhancement method based on Adaptive Removal via Mask for Scatter (ARMS). We identified the limitations of ARMS and of Fourier-domain filtering, and overcame them through the modified Scattering Media Model (mSMM) and the overlapping-matrix method. The mSMM, formulated from the perspective of the image sensor, models scattered light as random variables following a Gaussian distribution.
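The Gaussian modeling underlying the mSMM can be illustrated on synthetic data. This is a toy sketch, not the authors' implementation: scattered light is treated as an additive Gaussian random variable whose mean and standard deviation are recovered by sample statistics, and the clear scene is assumed known here purely so the estimate can be checked (in practice the scatter statistics must be estimated from the observed image alone).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: clear radiance J plus an additive Gaussian scatter component
# S ~ N(mu, sigma^2), giving the observed hazy image I = J + S.
mu, sigma = 0.4, 0.05                         # assumed scatter parameters
J = rng.uniform(0.0, 0.5, size=(64, 64))      # synthetic clear scene
I = J + rng.normal(mu, sigma, size=J.shape)   # observed hazy image

# Maximum-likelihood estimates of the Gaussian parameters from the residual.
residual = I - J
mu_hat, sigma_hat = residual.mean(), residual.std()
print(round(float(mu_hat), 2), round(float(sigma_hat), 2))  # close to 0.4 and 0.05
```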
The experimental evaluation focused on real-world scenarios, using a dataset provided by the Fire and Disaster Management Agency of Japan. Among the conventional methods, Dark Channel Prior (DCP) and Peplography introduce color distortion, while Color Attenuation Prior (CAP) struggles to remove heavy scattering media. Likewise, among the state-of-the-art methods, image dehazing based on EASM (IDE) and the method of Song et al. [17] exhibit color distortion, and ARMS is weak at removing heavy scattering media.
Despite these weaknesses of CAP and ARMS, verification with the YOLOv12 object detection model shows that their processed images still allow both people in the scene to be detected correctly.
Overall, the proposed method delivers the most effective performance across all comparisons—qualitative assessment, quantitative metrics, and object detection scores—over 600 frames of video data. These environments demand not only “human-centric” visibility but also “object-centric” visibility, and the proposed method was therefore evaluated in terms of both.
However, the proposed method suffers from slow processing due to the overlapping matrix. To address this limitation, we will explore parallel processing of the overlapping matrix in future work, focusing on dedicated logic circuits on FPGAs to reduce processing time, or on GPUs for synchronized parallel processing of each kernel. Furthermore, the dataset consists of recordings of firefighter training scenarios for fire suppression and rescue operations in heavy scattering media, and it lacks extreme boundary cases such as bright flames or intense backlighting. Under such conditions, the oversaturation caused by flames must be suppressed in addition to removing the scattering media. Therefore, our future research will focus on enhancing algorithmic robustness, optimizing for these challenges, and evaluating performance in a broader range of disaster and rescue environments.
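The kernel-wise structure of the overlapping-matrix step lends itself to the parallelization discussed above. The following is a minimal CPU-thread sketch of the idea, with a hypothetical placeholder (an identity operation) standing in for the actual per-kernel mARMS filtering; it only demonstrates tiling an image into kernels, processing them concurrently, and reassembling the result.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def process_kernel(tile: np.ndarray) -> np.ndarray:
    # Placeholder per-kernel restoration (identity); the real mARMS step
    # would filter each local kernel independently.
    return tile.copy()

def restore_parallel(img: np.ndarray, k: int = 40, workers: int = 4) -> np.ndarray:
    """Split img into k x k kernels, process them in parallel, reassemble."""
    h, w = img.shape[:2]
    coords = [(r, c) for r in range(0, h, k) for c in range(0, w, k)]
    tiles = [img[r:r + k, c:c + k] for r, c in coords]
    out = np.empty_like(img)
    with ThreadPoolExecutor(max_workers=workers) as ex:
        for (r, c), done in zip(coords, ex.map(process_kernel, tiles)):
            out[r:r + done.shape[0], c:c + done.shape[1]] = done
    return out

img = np.random.rand(120, 160)
print(np.allclose(restore_parallel(img), img))  # True: identity kernel preserves the image
```

Because each kernel is processed independently, the same decomposition maps directly onto GPU thread blocks or replicated FPGA logic units.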

Author Contributions

Conceptualization, J.J.; data curation, J.J.; writing—original draft preparation, J.J.; writing—review and editing, M.C. and M.-C.L.; supervision, M.-C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Fire and Disaster Management Agency Promotion Program for Scientific Fire and Disaster Prevention Technologies Program Grant Number JPJ000255.

Data Availability Statement

The data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhang, W.; Liang, J.; Wang, G.; Zhang, H.; Fu, S. Review of passive polarimetric dehazing methods. Opt. Eng. 2021, 60, 030901. [Google Scholar] [CrossRef]
  2. Shen, L.; Zhao, Y.; Peng, Q.; Chan, J.C.W.; Kong, S.G. An Iterative Image Dehazing Method with Polarization. IEEE Trans. Multimed. 2019, 21, 1093–1107. [Google Scholar] [CrossRef]
  3. Zhang, Z.; Tang, L.; Wang, H.; Zhang, L.; He, Y.; Wang, Y. A Polarization Image Dehazing Method Based on the Principle of Physical Diffusion. arXiv 2024, arXiv:2411.09924. [Google Scholar] [CrossRef]
  4. He, K.; Sun, J.; Tang, X. Single Image Haze Removal Using Dark Channel Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353. [Google Scholar] [CrossRef] [PubMed]
  5. Han, L.; Lv, H.; Han, C.; Zhao, Y.; Han, Q.; Liu, H. Atmospheric scattering model and dark channel prior constraint network for environmental monitoring under hazy conditions. J. Environ. Sci. 2025, 152, 203–218. [Google Scholar] [CrossRef]
  6. Lee, S.; Yun, S.; Nam, J.H.; Won, C.S.; Jung, S.W. A review on dark channel prior based image dehazing algorithms. EURASIP J. Image Video Process. 2016, 2016, 4. [Google Scholar] [CrossRef]
  7. Xie, B.; Guo, F.; Cai, Z. Improved Single Image Dehazing Using Dark Channel Prior and Multi-scale Retinex. In Proceedings of the 2010 International Conference on Intelligent System Design and Engineering Application, Changsha, China, 13–14 October 2010; Volume 1, pp. 848–851. [Google Scholar] [CrossRef]
  8. Wang, J.B.; He, N.; Zhang, L.L.; Lu, K. Single image dehazing with a physical model and dark channel prior. Neurocomputing 2015, 149, 718–728. [Google Scholar] [CrossRef]
  9. Zhu, Q.; Mai, J.; Shao, L. A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior. IEEE Trans. Image Process. 2015, 24, 3522–3533. [Google Scholar] [CrossRef]
  10. Bui, T.M.; Kim, W. Single Image Dehazing Using Color Ellipsoid Prior. IEEE Trans. Image Process. 2018, 27, 999–1009. [Google Scholar] [CrossRef]
  11. Wang, H.; Xu, K.; Lau, R.W. Local color distributions prior for image enhancement. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 343–359. [Google Scholar] [CrossRef]
  12. Gao, C.; Panetta, K.; Agaian, S. Color image attribute and quality measurements. In Proceedings of the Mobile Multimedia/Image Processing, Security, and Applications 2014, Baltimore, MD, USA, 5–9 May 2014; Volume 9120, pp. 238–251. [Google Scholar] [CrossRef]
  13. Zhu, Q.; Mai, J.; Shao, L. Single image dehazing using color attenuation prior. In Proceedings of the BMVC, Los Angeles, CA, USA, 9–12 December 2014; Volume 4, pp. 1674–1682. [Google Scholar]
  14. Ju, M.; Ding, C.; Ren, W.; Yang, Y.; Zhang, D.; Guo, Y.J. IDE: Image Dehazing and Exposure Using an Enhanced Atmospheric Scattering Model. IEEE Trans. Image Process. 2021, 30, 2180–2192. [Google Scholar] [CrossRef]
  15. Cho, M.; Javidi, B. Peplography—A passive 3D photon counting imaging through scattering media. Opt. Lett. 2016, 41, 5401–5404. [Google Scholar] [CrossRef] [PubMed]
  16. Lee, J.; Cho, M.; Lee, M.C. 3D Visualization of Objects in Heavy Scattering Media by Using Wavelet Peplography. IEEE Access 2022, 10, 134052–134060. [Google Scholar] [CrossRef]
  17. Song, S.; Kim, H.W.; Cho, M.; Lee, M.C. Automated Scattering Media Estimation in Peplography Using SVD and DCT. Electronics 2025, 14, 545. [Google Scholar] [CrossRef]
  18. Jeong, J.; Lee, M.C. Scattering Medium Removal Using Adaptive Masks for Scatter in the Spatial Frequency Domain. IEEE Access 2025, 13, 72769–72777. [Google Scholar] [CrossRef]
  19. Wang, C.; Zhang, Q.; Wang, X.; Zhou, L.; Li, Q.; Xia, Z.; Ma, B.; Shi, Y.Q. Light-Field Image Multiple Reversible Robust Watermarking Against Geometric Attacks. IEEE Trans. Dependable Secur. Comput. 2025, 22, 5861–5875. [Google Scholar] [CrossRef]
  20. Suárez, P.L.; Sappa, A.D.; Vintimilla, B.X.; Hammoud, R.I. Deep learning based single image dehazing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1169–1176. [Google Scholar]
  21. Ren, W.; Liu, S.; Zhang, H.; Pan, J.; Cao, X.; Yang, M.H. Single image dehazing via multi-scale convolutional neural networks. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Berlin/Heidelberg, Germany; pp. 154–169. [Google Scholar] [CrossRef]
  22. Dong, Y.; Liu, Y.; Zhang, H.; Chen, S.; Qiao, Y. FD-GAN: Generative adversarial networks with fusion-discriminator for single image dehazing. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 10729–10736. [Google Scholar] [CrossRef]
  23. Huang, L.Y.; Yin, J.L.; Chen, B.H.; Ye, S.Z. Towards Unsupervised Single Image Dehazing with Deep Learning. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 2741–2745. [Google Scholar] [CrossRef]
  24. Song, Y.; Li, J.; Wang, X.; Chen, X. Single Image Dehazing Using Ranking Convolutional Neural Network. IEEE Trans. Multimed. 2018, 20, 1548–1560. [Google Scholar] [CrossRef]
  25. Liu, Y.; Wang, C.; Lu, M.; Yang, J.; Gui, J.; Zhang, S. From Simple to Complex Scenes: Learning Robust Feature Representations for Accurate Human Parsing. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 5449–5462. [Google Scholar] [CrossRef]
  26. Tian, Y.; Ye, Q.; Doermann, D. YOLOv12: Attention-Centric Real-Time Object Detectors. arXiv 2025, arXiv:2502.12524. [Google Scholar] [CrossRef]
  27. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “Completely Blind” Image Quality Analyzer. IEEE Signal Process. Lett. 2013, 20, 209–212. [Google Scholar] [CrossRef]
  28. Venkatanath, N.; Praneeth, D.; Maruthi Chandrasekhar, B.; Channappayya, S.S.; Medasani, S.S. Blind image quality evaluation using perception based features. In Proceedings of the 2015 Twenty First National Conference on Communications (NCC), Mumbai, India, 27 February–1 March 2015; pp. 1–6. [Google Scholar] [CrossRef]
  29. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-Reference Image Quality Assessment in the Spatial Domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef] [PubMed]
  30. Saad, M.A.; Bovik, A.C.; Charrier, C. Blind Image Quality Assessment: A Natural Scene Statistics Approach in the DCT Domain. IEEE Trans. Image Process. 2012, 21, 3339–3352. [Google Scholar] [CrossRef]
  31. Saad, M.A.; Bovik, A.C.; Charrier, C. DCT statistics model-based blind image quality assessment. In Proceedings of the 2011 18th IEEE International Conference on Image Processing, Brussels, Belgium, 11–14 September 2011; pp. 3093–3096. [Google Scholar] [CrossRef]
  32. Sheikh, H. LIVE Image Quality Assessment Database Release 2. 2005. Available online: http://live.ece.utexas.edu/research/quality (accessed on 20 February 2025).
  33. McCartney, E.J. Optics of the atmosphere: Scattering by molecules and particles. Phys. Bull. 1976, 28, 521. [Google Scholar] [CrossRef]
  34. Abdi, A.; Tepedelenlioglu, C.; Kaveh, M.; Giannakis, G. On the estimation of the K parameter for the Rice fading distribution. IEEE Commun. Lett. 2001, 5, 92–94. [Google Scholar] [CrossRef]
  35. Zheng, Y.R.; Xiao, C. Simulation models with correct statistical properties for Rayleigh fading channels. IEEE Trans. Commun. 2003, 51, 920–928. [Google Scholar] [CrossRef]
  36. Lindsey, W. Error probabilities for Rician fading multichannel reception of binary and n-ary signals. IEEE Trans. Inf. Theory 1964, 10, 339–350. [Google Scholar] [CrossRef]
  37. Rice, S.O. Mathematical analysis of random noise. Bell Syst. Tech. J. 1944, 23, 282–332. [Google Scholar] [CrossRef]
  38. Sklar, B. Rayleigh fading channels in mobile digital communication systems. I. Characterization. IEEE Commun. Mag. 1997, 35, 90–100. [Google Scholar] [CrossRef]
  39. Rayleigh, L. XII. On the resultant of a large number of vibrations of the same pitch and of arbitrary phase. London Edinburgh Dublin Philos. Mag. J. Sci. 1880, 10, 73–78. [Google Scholar] [CrossRef]
  40. Rayleigh, L. XXXI. On the problem of random vibrations, and of random flights in one, two, or three dimensions. London Edinburgh Dublin Philos. Mag. J. Sci. 1919, 37, 321–347. [Google Scholar] [CrossRef]
  41. Blumenson, L.; Miller, K. Properties of generalized Rayleigh distributions. Ann. Math. Stat. 1963, 34, 903–910. [Google Scholar] [CrossRef]
  42. Ha, J.U.; Kim, H.W.; Cho, M.; Lee, M.C. Three-Dimensional Visualization Using Proportional Photon Estimation Under Photon-Starved Conditions. Sensors 2025, 25, 893. [Google Scholar] [CrossRef] [PubMed]
  43. Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft coco: Common objects in context. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; Springer: Berlin/Heidelberg, Germany, 2014; pp. 740–755. [Google Scholar] [CrossRef]
Figure 1. Comparative evaluation of dehazing methods on object detection performance. Top row: (a) Original hazy image, (b) DCP, (c) Peplography, (d) ARMS. Bottom row: (e–h) Corresponding object detection results obtained using YOLOv12 for each dehazing method. Detection labels are color-coded: yellow for trucks, white for cars, and pink for buses.
Figure 2. Illustration of the Atmosphere Scattering Model for DCP.
Figure 3. Illustration of the light propagation proposed by Peplography.
Figure 4. Illustration of similarity between wireless communication systems and imaging systems. (a) Imaging systems and (b) wireless communication systems.
Figure 5. Illustration of fading models of wireless communication theory. (a) Rician fading model and (b) Rayleigh fading model.
Figure 6. Illustration of fading models at wireless communication theory. (a) Rician fading model and (b) Scattered Image Model.
Figure 7. k̃ parameter verification. (a) Clear image and (b) hazy image.
Figure 8. Illustration of fading models at wireless communication theory. (a) Rayleigh fading model and (b) Scattering Media Model.
Figure 9. Comparison of the clear, hazy, and processed images. (a) Clear image, (b) hazy image, and (c) ARMS result image.
Figure 10. Limitations of Fourier-domain filtering. (a) Uniformly distributed scattering media, (b) ARMS result for (a), (c) non-uniformly distributed scattering media, and (d) ARMS result for (c).
Figure 11. Proposed method sequence. (a) Captured image, (b) extracted local kernel, (c) local kernel processed by mARMS, (d) overlapping matrix, (e) overlapped image, and (f) final restored clear image.
Figure 12. Comparison of the processed images depending on the kernel size. Top row: results of the proposed method with kernel sizes of (a) 10 × 10, (b) 20 × 20, (c) 40 × 40, (d) 80 × 80, and (e) 120 × 120. Bottom row: (f–j) Corresponding object detection results obtained using YOLOv12 for each kernel size.
Figure 13. Comparison of the hazy image and processed image. (a) Hazy image, (b) DCP, (c) CAP, (d) IDE, (e) Peplography, (f) Song et al. [17], (g) ARMS, and (h) proposed method.
Figure 14. NR-IQA score of each frame: (a) Naturalness Image Quality Evaluator, (b) Perception-based Image Quality Evaluator, (c) Blind/Referenceless Image Spatial Quality Evaluator, and (d) Blind Image Integrity Notation using DCT Statistics.
Figure 15. Object detection results on hazy and processed images. (a) Hazy image, (b) DCP, (c) CAP, (d) IDE, (e) Peplography, (f) Song et al. [17], (g) ARMS, and (h) proposed method.
Table 1. Summary of the average score of the NR-IQAs.
Metric     | Hazy Image | DCP [4] | CAP [9] | IDE [14] | Peplography [15] | Song et al. [17] | ARMS [18] | Proposed Method
NIQE       | 4.2224     | 4.0742  | 4.0287  | 3.9815   | 14.4816          | 7.1369           | 4.1326    | 3.7328
PIQE       | 37.7029    | 48.6126 | 45.5779 | 41.7495  | 83.0314          | 59.0712          | 42.4728   | 24.0190
BRISQUE    | 48.4832    | 49.5877 | 44.5871 | 32.4519  | 48.8193          | 44.7110          | 46.9138   | 14.7779
BLIINDS-II | 23.6708    | 12.5833 | 23.3333 | 10.6375  | 91.9792          | 51.4042          | 23.0500   | 10.6000
Table 2. Object detection performance comparison.
Metric    | Hazy Image | DCP [4] | CAP [9] | IDE [14] | Peplography [15] | Song et al. [17] | ARMS [18] | Proposed Method
mAP@50    | 0.4508     | 0.4422  | 0.4639  | 0.4155   | 0.0522           | 0.2315           | 0.4508    | 0.4829
mAP@50–95 | 0.1899     | 0.1521  | 0.2038  | 0.1977   | 0.0205           | 0.0819           | 0.1899    | 0.2078
Recall    | 0.3625     | 0.4211  | 0.3871  | 0.3871   | 0.0968           | 0.1767           | 0.3625    | 0.4194
Precision | 0.7373     | 0.5707  | 0.7010  | 0.8722   | 0.0857           | 0.4055           | 0.7373    | 0.8048
F-1       | 0.4861     | 0.4846  | 0.4988  | 0.4867   | 0.0909           | 0.2461           | 0.4861    | 0.5514
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
