Restoration of Binocular Images Degraded by Optical Scattering through Estimation of Atmospheric Coefficients

A binocular vision-based approach for the restoration of images captured in a scattering medium is presented. The scene depth is computed by triangulation using stereo matching. Next, the atmospheric parameters of the medium are determined with an introduced estimator based on the Monte Carlo method. Finally, image restoration is performed using an atmospheric optics model. The proposed approach effectively suppresses optical scattering effects without introducing noticeable artifacts in processed images. The accuracy of the proposed approach in the estimation of atmospheric parameters and image restoration is evaluated using synthetic hazy images constructed from a well-known database. The practical viability of our approach is also confirmed through a real experiment for depth estimation, atmospheric parameter estimation, and image restoration in a scattering medium. The results highlight the applicability of our approach in computer vision applications in challenging atmospheric conditions.


Introduction
Digital image processing allows the extraction of useful information from the real world by processing captured images of an observed scene [1]. In practice, image capturing can be affected by multiple perturbations, including additive noise, blurring, nonuniform illumination, and the effects of bad weather, among others [2]. In these conditions, the reliability of information extraction by image processing can be compromised [3].
Image restoration in the presence of optical scattering induced by haze is crucial for many real-world computer vision applications, such as autonomous driving [4], surveillance [5], and remote sensing [6], where accurate visual data are essential for decision-making and analysis. The development of effective techniques to mitigate the impact of scattering effects is critical for increasing the reliability of computer vision systems in real-world scenarios affected by adverse atmospheric conditions [7]. These techniques can have a direct impact on saving lives and increasing safety. In a scattering medium, image capturing is carried out in the presence of particles suspended in the medium, which produce a twofold undesired effect [8]. First, light from scene objects attenuates as the particle density in the medium and the distance of the scene points to the observer increase. Second, attenuated light is replaced by scattered light due to the interaction between the particles and the airlight. The problem of image restoration in these conditions is still open, as it requires estimating several unknown components of the image formation process from one or more captured images. Furthermore, the inherent space-variant scattering degradation makes conventional image restoration methods ineffective [9].
Currently, several approaches aim to improve the visibility of images degraded by optical scattering [10]. Table 1 presents widely investigated approaches for image restoration in these conditions. One approach utilizes different sensors to characterize the scene and the scattering degradation [11-14]. This approach aims to simplify the image restoration problem. However, the time required to characterize the scattering degradation can be long because one usually needs to wait for the atmospheric conditions to change. Another well-known approach to mitigate optical scattering effects consists of processing a single scene image [15-19]. This approach estimates the airlight and medium transmission function from a single hazy image. Next, a restored image is obtained using an atmospheric optics restoration model. This approach is very suitable for real-time applications. However, the image restoration problem becomes ill-posed due to the need to estimate several unknown image components from a single captured image. Consequently, the use of this approach often produces restored images with overprocessing effects and artificial artifacts that distort the original appearance and colors of the scene [7,15].

Table 1. Widely investigated approaches for the restoration of images degraded by optical scattering, compared by advantages (e.g., accurate results; simple solution model), disadvantages (e.g., introduction of noticeable artifacts; poor performance due to unmet assumptions; the need to operate with two or more images), and references [11-19,22-27].

Because optical scattering degradation is space-variant, several stereo-vision-based methods have been proposed for image dehazing [22-27]. One main advantage of these methods is their suitability for distinguishing between nearby, slightly degraded scene objects and faraway, highly degraded objects, enabling a reduction in the overprocessing effects commonly produced by single-image dehazing methods. In this scenario, solving the image dehazing problem requires estimating the atmospheric parameters of the medium and the scene depth involved in the image formation process. The successful single-image dehazing approach is unable to correctly estimate the unknown image formation components, providing only a partial solution for visibility improvement. Moreover, several stereo-vision-based image dehazing methods rely on estimating the medium's transmission function rather than the scene depth using complex machine learning models that require intensive training. The performance of these methods depends on the availability of a substantial dataset of hazy training images. Additionally, these methods are unsuitable for computer vision tasks involving metric distance calculations and three-dimensional reconstruction.
This work proposes a stereo vision approach for the accurate restoration of images degraded by optical scattering. This approach is based on estimating the scene depth and the atmospheric parameters of the medium from a pair of binocular images degraded by optical scattering. The scene depth is obtained by triangulation using a disparity map computed through stereo matching [28]. Next, the atmospheric parameters of the medium are determined using an introduced robust estimator based on the Monte Carlo method. Finally, image restoration is performed using the estimated depth and atmospheric parameters in an optics-based restoration model. The proposed approach allows estimating the unknown components of an image formation model based on atmospheric optics for scattering media. As a result, our approach accurately restores hazy images without intensive offline training. It is also well suited for computer vision tasks involving metric distance computation and three-dimensional reconstruction.

This paper is organized as follows. Section 2 briefly describes different successful stereo-vision-based methods for image dehazing. Section 3 presents the proposed method for the accurate restoration of images degraded by optical scattering using binocular vision. The theoretical principles for estimating the scene's depth in a scattering medium using binocular vision are presented. Additionally, we explain the proposed method for the estimation of the atmospheric parameters, namely, the airlight and attenuation coefficients. Section 4 presents performance evaluation results obtained with the proposed method for restoring images degraded by optical scattering using test images from a well-known stereo image dataset. These results are also compared and discussed with those of two similar existing methods based on stereo vision. Moreover, the practical viability of the proposed approach is validated in a real laboratory experiment involving scene depth estimation, atmospheric parameter estimation, and image restoration in a scattering medium. Finally, Section 5 presents the conclusions of this research.

Related Works
In this section, we provide a brief overview of successful existing methods that utilize stereo vision for image dehazing. Murez et al. [22] proposed a photometric stereo-vision method for three-dimensional object reconstruction in scattering media. This approach models the scattered light as an unscattered point light source affected by a blurring degradation. A drawback of this method is that it is unsuitable for dynamic scene applications. Li et al. [23] proposed an iterative algorithm that performs both scene depth estimation and image dehazing using stereo vision. This method estimates the atmospheric parameters of the scene using a two-step procedure. First, the airlight coefficient is determined from the intensity distribution of the captured images. Then, the attenuation coefficient is estimated statistically. Fujimura et al. [24] proposed a deep-learning-based dehazing cost-volume method for multi-view stereo in scattering media. This method simultaneously estimates the scene depth, airlight, and attenuation coefficients from a set of captured stereo images. However, this method requires an intensive offline training process, and its performance depends upon the availability of a vast dataset of training images. Furthermore, this method requires a considerable number of images of the scene captured from different perspectives, which limits its applicability in dynamic scenarios. Recent stereo-vision-based methods for image dehazing rely on estimating the medium transmission function rather than the scene depth through machine learning [25,26]. This approach is preferred over scene-depth-based methods due to the compact dynamic range of the transmission function and the possibility of reducing the number of image components to be estimated, as both the depth and the attenuation coefficient are embedded in the transmission [29].

Image Restoration in a Scattering Medium Using Binocular Vision
Consider the binocular camera array that captures a pair of images of a scene in a homogeneous scattering medium, as shown in Figure 1. The scattering medium contains a density of suspended particles that attenuate the light reflected by objects in the scene as the distance from the observer increases. In addition, the particles scatter the light from natural illumination (known as airlight), leading to a loss of visibility in the captured scene images. In this scenario, the i-th captured image of the scene can be given by [30]

f_i(x, y) = s_i(x, y) e^{−β d_i(x, y)} + A (1 − e^{−β d_i(x, y)}),   (1)

where s_i(x, y) is the undegraded image captured by the left (i = 0) or right (i = 1) camera, d_i(x, y) is the depth distribution of s_i(x, y), β is an attenuation coefficient specifying the particle density, and A is an airlight coefficient [31]. From Equation (1), the image restoration can be performed as

ŝ_i(x, y) = [f_i(x, y) − Â (1 − e^{−β̂ d̂_i(x, y)})] e^{β̂ d̂_i(x, y)},   (2)

where β̂, Â, and d̂_i(x, y) are estimates of the unknown components of the image formation model given in Equation (1). In general terms, the estimation of these unknown components is hard because only one image equation per camera is available. This work presents the development of a robust and accurate method for estimating these unknown components from binocular images captured in a scattering medium for image restoration using Equation (2).
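The image formation model of Equation (1) and its inversion in Equation (2) can be sketched directly in NumPy; this is a minimal illustration of the model, not the full proposed pipeline, and the parameter values in the usage below are arbitrary.

```python
import numpy as np

def degrade(s, d, beta, A):
    """Simulate the scattering model of Equation (1):
    f = s * exp(-beta * d) + A * (1 - exp(-beta * d))."""
    t = np.exp(-beta * d)          # medium transmission
    return s * t + A * (1.0 - t)

def restore(f, d_est, beta_est, A_est):
    """Invert the model as in Equation (2) using estimated components."""
    t = np.exp(-beta_est * d_est)
    return (f - A_est * (1.0 - t)) / t
```

With exact parameters and depth, `restore(degrade(s, d, beta, A), d, beta, A)` recovers `s` up to floating-point error, which is why the accuracy of the estimates of β, A, and depth dominates restoration quality.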
A suggested procedure for the restoration of images captured in a scattering medium is depicted in Figure 2. Initially, a pair of images of the scene is captured with a binocular camera array. Next, the captured images are rectified to meet the horizontal epipolar geometry [32]. The resultant rectified images are preprocessed to improve their visibility by applying a locally-adaptive contrast enhancement method [33]. Afterwards, the improved images are processed by stereo matching to obtain estimates of the depth functions d_i(x, y) of the scene by triangulation using the computed disparity map θ_i(x, y) [28]. Next, the hazy images f_i(x, y), estimated depth d̂_i(x, y), and disparity map θ̂_i(x, y) are used to estimate the atmospheric parameters β and A with a proposed algorithm based on the Monte Carlo method [34]. Finally, haze-free images of the scene are obtained using the restoration model given in Equation (2). In the next subsection, we explain in detail the proposed method for accurate estimation of the required components of the restoration model given in Equation (2).
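The stereo-matching step of this procedure can be illustrated with a naive block-matching sketch; this sum-of-absolute-differences (SAD) search is only a stand-in for the morphological-correlation matcher of Ref. [28], and the window size and disparity range are illustrative.

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, win=3):
    """Naive block matching for a rectified pair: for each left-image pixel,
    find the horizontal shift d minimizing the sum of absolute differences
    between a left-image window and a right-image window displaced d pixels
    to the left (x1 = x0 - d)."""
    h, w = left.shape
    pad = win // 2
    L = np.pad(left, pad, mode="edge")
    R = np.pad(right, pad, mode="edge")
    disp = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            patch = L[y:y + win, x:x + win]
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                cand = R[y:y + win, x - d:x - d + win]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Practical matchers add cost aggregation, subpixel refinement, and consistency checks, but the principle of searching along the horizontal epipolar line is the same.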

Depth Estimation in a Scattering Medium
Let P = [X, Y, Z]^T be a point of a scene under the influence of a scattering medium that is imaged by a binocular camera array, as depicted in Figure 1. Let p_0 = [x_0, y_0]^T and p_1 = [x_1, y_1]^T be the pixel points of P in the image planes of the left and right cameras, respectively, given as [32]

λ_i ⟨p_i⟩_1 = K_i L_i ⟨P⟩_1,   (3)

where λ_i are arbitrary scalar values, K_i and L_i are the intrinsic and extrinsic camera parameters, respectively, and, for any vector x,

⟨x⟩_w = [x^T, w]^T   (4)

is the homogeneous coordinate operator with base w [35]. For simplicity, we employ the pinhole camera model with the intrinsic parameter matrix given as [35]

K_i = [ f_i/σ_{x_i}   ζ_i           τ_{x_i} ;
        0             f_i/σ_{y_i}   τ_{y_i} ;
        0             0             1 ],   (5)

where f_i is the camera lens focal length, ζ_i is the skewness, σ_{x_i} × σ_{y_i} specify the pixel size, and (τ_{x_i}, τ_{y_i}) is the principal point of the i-th camera. Without loss of generality, we consider that the world coordinate frame coincides with the local frame of the left camera (i = 0). Therefore, the extrinsic parameters of the left and right cameras are

L_0 = [I_3  0_3],   L_1 = [R  t],   (6)

where I_3 is the 3 × 3 identity matrix, 0_3 is the 3 × 1 zero vector, and R and t are a rotation matrix and a translation vector, respectively, which define the pose of the right camera with respect to the left camera. It is worth noting that rotation matrices will be handled using the Rodrigues formula

R = I_3 + sin(γ_1) [u]_× + (1 − cos(γ_1)) [u]_×^2,   (7)

where γ_1 is the rotation angle and u is a unit vector defining the rotation axis as

u = [sin γ_2 cos γ_3, sin γ_2 sin γ_3, cos γ_2]^T,   (8)

where γ_2 and γ_3 are the polar and azimuth angles of vector u, and the superscript [·]_× denotes the cross-product operator as

[u]_× = [ 0  −u_3  u_2 ;  u_3  0  −u_1 ;  −u_2  u_1  0 ].   (9)

Note that the angles γ_1, γ_2, and γ_3 are sufficient to describe a rotation matrix using the Rodrigues formula given by Equation (7). Now, let θ(p_0) be the horizontal disparity value of the pixel point p_1 with respect to p_0. The point p_1 can be specified in terms of p_0 and θ(p_0) as

p_1 = [x_0 − θ(p_0), y_0]^T.   (10)

The spatial coordinates of the observed point P can be retrieved from Equations (3) and (10) by triangulation, solving the matrix equation

M ⟨P⟩_1 = 0,   (11)

where M stacks, for each camera, the linear constraints obtained by eliminating λ_i from Equation (3), and the singular value decomposition method can be used to efficiently compute the unknown point P.

Important Remarks

• The solution of Equation (11) requires prior calibration of the binocular system to determine the intrinsic and extrinsic camera parameters.
• The rotation matrix R and translation vector t in Equation (6) can be extracted from the fundamental matrix estimated during the image rectification process [32]. After the rectification, the extrinsic parameters of the right camera can be considered as

L_1 = [I_3  t],   t = [−B, 0, 0]^T,   (12)

where B is the stereo baseline.
• Although the scattering medium and scene depth are independent, the disparity estimation can be affected by the visibility reduction caused by the scattering degradation. To overcome this issue, we apply a locally adaptive contrast enhancement method to the captured hazy images [33].
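The triangulation of Equation (11) can be sketched as a standard linear (DLT) solve with the SVD; the intrinsic parameters and baseline in the usage below are illustrative values, not those of any calibrated setup in this work.

```python
import numpy as np

def triangulate(K0, K1, R, t, p0, p1):
    """Linear (DLT) triangulation: stack the constraints from both cameras
    into a homogeneous system M * [P; 1] = 0 and solve it with the SVD."""
    P0 = K0 @ np.hstack([np.eye(3), np.zeros((3, 1))])   # left:  L_0 = [I | 0]
    P1 = K1 @ np.hstack([R, t.reshape(3, 1)])            # right: L_1 = [R | t]
    x0, y0 = p0
    x1, y1 = p1
    M = np.array([
        x0 * P0[2] - P0[0],
        y0 * P0[2] - P0[1],
        x1 * P1[2] - P1[0],
        y1 * P1[2] - P1[1],
    ])
    _, _, Vt = np.linalg.svd(M)          # null vector = smallest singular vector
    X = Vt[-1]
    return X[:3] / X[3]                   # dehomogenize
```

The SVD solve returns the least-squares point even when pixel noise makes the four constraints inconsistent, which is why it is the usual choice for this step.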

Estimation of Atmospheric Parameters β and A
For simplicity, consider that f_i, s_i, and d_i are column vectors containing the N_d total pixel points of f_i(x, y), s_i(x, y), and d_i(x, y), respectively, placed in lexicographic order. Thus, considering the unknowns A and β as parameters, the j-th pixel of the i-th undegraded image can be expressed from Equation (2) as

s_i[j] = (f_i[j] − A) e^{β d_i[j]} + A.   (13)

Notice that, by applying conventional algebraic manipulations, Equation (13) can be rewritten as

log(A − s_i[j]) = log(A − f_i[j]) + β d_i[j].   (14)

Furthermore, assuming that the stereo images s_i(x, y) are rectified and the scene depth d_i(x, y) has been estimated as described in Section 3.1, the stated assumptions on Equations (13) and (14) are valid. To estimate β, let b be a random variable defined within the range r_β = [β_min, β_max] of feasible values of the coefficient β. Thus, from Equation (13) and assumption 2, the coefficient β can be estimated by minimizing an error function over b, given in Equation (15). It can be shown that the random variable b† that minimizes Equation (15) can be obtained by solving Equation (16). A reliable estimate of the unknown coefficient β can be obtained by computing the expected value of Equation (16) using the robust estimator for location given by [37]

β̂ = mean{ b†[j] : |b†[j] − M| ≤ a · MAD },   (17)

where M is the median of the samples b†[j],

MAD = median{ |b†[j] − M| }   (18)

is the median of absolute deviations from the median, and a is an outlier tolerance coefficient usually set to a = 1.5. Now, to estimate A, let α be a random variable within the range of feasible values of A defined as r_A = [A_min, A_max]. By taking into account Equation (13) and assumption 1, the coefficient A can be estimated by minimizing the error function given in Equation (19). The random variable α†, which minimizes Equation (19), can be obtained by solving Equation (20). An estimate of the airlight coefficient A can also be obtained by employing the robust location estimator given in Equations (17) and (18).
It is worth mentioning that, in order to compute reliable estimates of β and A, it is required to solve the nonlinear system composed of Equations (16) and (20). There are different numerical approaches to solving this kind of system [38]. In this work, we propose a simple two-step approach based on the Monte Carlo method, whose steps are detailed in Algorithm 1.
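The two ingredients of this procedure can be sketched as follows: the robust location estimator of Equations (17) and (18), and a Monte Carlo search over a parameter range. The scalar error function used below is a generic placeholder for the per-pixel solves of Equations (16) and (20), not the paper's actual error functions.

```python
import numpy as np

def robust_location(data, a=1.5):
    """Robust location estimate (Equations (17)-(18)): mean of the samples
    lying within a * MAD of the median; outliers beyond that band are ignored."""
    m = np.median(data)
    mad = np.median(np.abs(data - m))         # median absolute deviation
    inliers = data[np.abs(data - m) <= a * mad]
    return inliers.mean()

def monte_carlo_min(err, lo, hi, n_draws=2000, rng=None):
    """Draw candidate parameter values uniformly in [lo, hi] and keep the one
    with the smallest error -- a generic Monte Carlo stand-in for solving the
    per-pixel minimization."""
    rng = np.random.default_rng(rng)
    cand = rng.uniform(lo, hi, n_draws)
    return cand[np.argmin([err(c) for c in cand])]
```

The MAD-based trimming is what makes the final estimate tolerant to per-pixel solutions corrupted by disparity errors, which the plain mean is not.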

Results
In this section, the results obtained with the proposed approach for atmospheric parameter estimation and restoration of images degraded by optical scattering are presented and discussed. Initially, we briefly describe the dataset preparation for constructing a set of binocular images degraded by optical scattering. Afterwards, we present the performance evaluation results of the proposed method in estimating the atmospheric parameters β and A. For this, we analyze two cases: one assumes prior knowledge of the scene's depth, while the other utilizes an estimate of the scene depth obtained from a disparity map computed by stereo matching from the input degraded images, as detailed in Section 3.1. Furthermore, the performance of the proposed method in the restoration of images degraded by optical scattering with different atmospheric parameter values is analyzed and discussed. Additionally, the performance of the proposed approach is compared with that of two existing stereo vision methods, namely, the method based on depth estimation proposed by Li et al. [23] and the method based on medium transmission estimation proposed by Ding et al. [25]. Finally, to validate the practical usefulness of the proposed approach, an experimental laboratory evaluation of scene depth estimation, atmospheric parameter estimation, and image restoration of a scene is carried out in a real scattering medium.

Image Dataset Preparation
We construct a test set of binocular images degraded by optical scattering using images from the well-known Middlebury stereo dataset [39-41]. This dataset contains several rectified stereo images and provides the corresponding ground-truth disparities. Figure 3a,b show examples of the dataset images and ground-truth disparities. Note that the disparity maps shown in Figure 3b contain dark regions representing unknown disparity values.

The main challenge in the construction of synthetic images degraded by optical scattering lies in applying proper refinement techniques to remove these unknown values. This is because the unknown disparity values can lead to undesirable artifacts in the resulting synthetic degraded images and introduce errors in the estimation of atmospheric parameters as well as in image restoration. To remove these unknown values, the ground-truth disparities are preprocessed with the hole-filling method presented in [28], obtaining refined disparities as shown in Figure 3c, where the unknown disparities are removed. Next, the undegraded binocular images of the dataset and their corresponding refined disparities are utilized to compute the scene depth by solving Equation (11) as detailed in Section 3.1, considering the camera parameters specified by the Middlebury dataset. The computed depths for the images shown in Figure 3a are depicted in Figure 3d. Finally, test images degraded by optical scattering for prespecified values of the atmospheric parameters A and β are constructed using Equation (1) from the undegraded images and the computed depth. Figure 3e presents examples of the constructed test images degraded by optical scattering from the undegraded images shown in Figure 3a.
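This dataset construction can be sketched in two steps: converting a refined disparity map to depth using the rectified-geometry relation Z = f·B/θ (a special case of the triangulation of Equation (11)), and then applying the degradation model of Equation (1). The focal length and baseline in the usage below are illustrative, not the Middlebury calibration values.

```python
import numpy as np

def depth_from_disparity(disp, focal_px, baseline):
    """Rectified-pair depth: Z = f * B / theta. Zero-disparity pixels
    (unknown values) are mapped to NaN rather than infinite depth."""
    disp = np.where(disp > 0, disp, np.nan)
    return focal_px * baseline / disp

def make_hazy(s, depth, beta, A):
    """Synthesize a degraded image from Equation (1)."""
    t = np.exp(-beta * depth)
    return s * t + A * (1.0 - t)
```

The NaN guard mirrors the need for hole filling described above: without it, unknown disparities would produce arbitrary haze values in the synthetic images.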

Performance Evaluation in Estimation of the Atmospheric Parameters A and β
We evaluate the performance of Algorithm 1 in the estimation of the coefficients A and β from input binocular images degraded by optical scattering. First, we evaluate the performance of Algorithm 1 by assuming that the disparity map of each input binocular image pair is known. This test aims to quantify the performance of Algorithm 1 when the assumptions given in Section 3.2 are fully met. We also evaluate the performance of Algorithm 1 when the disparities and scene depth of each input image are estimated as described in Section 3.1. The disparities are computed using the stereo-matching method based on morphological correlation presented in Ref. [28]. Additionally, the performance of the existing stereo-vision methods proposed by Li et al. [23] and Ding et al. [25] is evaluated. For the method proposed by Li et al. [23], we consider the same estimated disparities and scene depth utilized to evaluate the proposed method. For the evaluation of the method by Ding et al. [25], the value of β is obtained from the logarithm of the estimated transmission considering the ground-truth depth.
Twenty different binocular image pairs from the Middlebury dataset [39-41] are considered. For each image pair, we construct twenty-five degraded image pairs by varying the atmospheric parameters within the ranges r_A = [150, 210] for A and r_β = [1, 5] for β. For each degraded image pair, the atmospheric parameters β and A are estimated with the two variants of Algorithm 1 and the existing stereo vision methods proposed by Li et al. [23] and Ding et al. [25]. The performance of parameter estimation is measured in terms of the mean absolute error (MAE), given as

MAE = (1/N_T) Σ_{j=1}^{N_T} |v_j − v̂_j|,

where v_j is the real value, v̂_j is the estimated value, and N_T = 500 is the total number of trials. Additionally, we compute the percentage of estimation accuracy (%Acc) as

%Acc = (1 − MAE/r) × 100,

where r is the parameter range.
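The two evaluation metrics can be sketched directly; the %Acc formula follows the description in the text (MAE normalized by the parameter range r).

```python
import numpy as np

def mae(true_vals, est_vals):
    """Mean absolute error over the N_T trials."""
    return np.mean(np.abs(np.asarray(true_vals) - np.asarray(est_vals)))

def pct_acc(true_vals, est_vals, param_range):
    """Percentage of estimation accuracy: 100 * (1 - MAE / r)."""
    return 100.0 * (1.0 - mae(true_vals, est_vals) / param_range)
```

Normalizing by the range makes %Acc comparable between A (range 60) and β (range 4), which differ by an order of magnitude in scale.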
The results for the MAE and %Acc metrics with 95% confidence are presented in Figure 4a,b,d,e. The proposed algorithm yields very low MAE values in estimating both the β and A coefficients when ground-truth disparities are utilized. This version of the proposed algorithm is referred to as prop-Dgt. It is worth mentioning that the estimation accuracy of prop-Dgt is 95.31% for A and 96.5% for β. On the other hand, when the disparities are estimated by stereo matching, the proposed algorithm obtains an estimation accuracy of 86% for A and 80.25% for β. This version of the proposed algorithm is referred to as prop-Dest. The accuracy reduction in prop-Dest is due to errors in disparity estimation. However, the incorporation of an advanced disparity refinement method [42] can help to increase the accuracy of the prop-Dest method. In contrast, the accuracy of parameter estimation obtained with the method proposed by Li et al. [23] was 44.68% for A and 54.5% for β, while the accuracy of the method by Ding et al. [25] was 24.4% for A and 47.08% for β. These results are considerably worse than those obtained with the proposed method.

Performance Evaluation in Restoration of Images Degraded by Optical Scattering
The atmospheric parameters β̂ and Â estimated using the proposed and existing methods for each pair of test images are used to perform image restoration with the help of Equation (2). The accuracy of image restoration is measured in terms of the peak signal-to-noise ratio (PSNR), given as

PSNR = 10 log_10(MAX_s^2 / MSE),

where MAX_s is the maximum intensity value of the reference image s(x, y) and MSE is the mean squared error, given as

MSE = (1/N_d) Σ_{x,y} [s(x, y) − ŝ(x, y)]^2,

where ŝ(x, y) is the restored image. Additionally, we compute the image enhancement factor (IEF), given as

IEF = Σ_{x,y} [f(x, y) − s(x, y)]^2 / Σ_{x,y} [ŝ(x, y) − s(x, y)]^2.

The obtained results are presented in Figures 4c,f and 5. In Figure 4c,f, we can observe that when using the atmospheric parameters estimated with the prop-Dgt method, high PSNR and IEF values of 44.5 ± 2.72 dB and 150.6 ± 15.3, respectively, are obtained with 95% confidence. Figure 5c shows several restored images obtained with the prop-Dgt method. Note that these restored images closely match the reference undegraded images shown in Figure 5b. The prop-Dest method produces a PSNR value of 28.48 ± 1.06 dB and an IEF value of 14.65 ± 5.4 with 95% confidence. Examples of the restored images obtained with the prop-Dest method are shown in Figure 5d. It can be seen that these restored images are very similar to the reference undegraded images shown in Figure 5b. In contrast, the stereo vision method proposed by Li et al. [23] yields a PSNR value of 23.1 ± 1.16 dB and an IEF value of 4.17 ± 1.05, while the method proposed by Ding et al. [25] produces a PSNR value of 16.70 ± 1.42 dB and an IEF value of 0.6 ± 0.23 with 95% confidence. The restored images obtained with the existing tested methods are shown in Figure 5e,f, respectively. Note that some of these restored images contain very noticeable undesired effects. For instance, the Aloe image shown in Figure 5e contains overprocessing effects that distort the original colors of the undegraded image. Furthermore, in the Moebius and Recycle images, scattering effects still persist, reducing visibility in the restored image. These undesirable effects are caused by atmospheric parameters wrongly estimated by the Li et al. [23] method. Additionally, note that the method by Ding et al. [25] effectively removes the scattering degradation but introduces undesirable overprocessing effects, as shown in Figure 5f.
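The restoration metrics can be sketched as follows; IEF is implemented here in its common form, as the ratio of degraded-to-reference squared error over restored-to-reference squared error, so values above 1 indicate an improvement over the degraded input.

```python
import numpy as np

def psnr(ref, restored, max_val=255.0):
    """Peak signal-to-noise ratio: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((ref.astype(float) - restored.astype(float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def ief(ref, degraded, restored):
    """Image enhancement factor: squared error of the degraded image over
    squared error of the restored image, both against the reference."""
    num = np.sum((degraded.astype(float) - ref.astype(float)) ** 2)
    den = np.sum((restored.astype(float) - ref.astype(float)) ** 2)
    return num / den
```

Unlike PSNR, IEF needs the degraded input as well as the reference, which is why it can fall below 1 when a method makes the image worse than the haze it started from.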

Performance Evaluation of the Proposed Method in a Real Scattering Medium
The practical feasibility of the proposed method is validated in a real scattering medium. We constructed an experimental platform composed of an acrylic chamber with dimensions of 85 × 54 × 49 cm, containing a created scene and illuminated by an external light-emitting diode lamp, as depicted in Figure 6a. This platform permits capturing undegraded images of the scene, as shown in Figure 6a, and degraded scene images by introducing scattering particles into the chamber using a fog machine, as illustrated in Figure 6b. This experiment aims to evaluate the performance of the proposed image restoration method in real scattering conditions.
First, undegraded images of the scene are captured using a binocular camera array when the chamber is free of scattering particles. The binocular camera array comprises two UI-3880CP-C-HQ R2 digital cameras (Imaging Development Systems, Obersulm, Germany) with 3088 × 2076 pixels and an 8 mm focal length imaging lens, mounted on a horizontal fixture with a 36.42 mm baseline. The ground-truth depth of the scene is computed using three-dimensional spatial point computation by fringe projection profilometry [36]. The implemented fringe projection system consists of a binocular camera array and a pair of PowerLite W30 LCD projectors (Epson, Suwa, Nagano, Japan) with a resolution of 1280 × 800 pixels, as depicted in Figure 6a. The intrinsic and extrinsic parameters of the cameras and projectors of the fringe projection system are determined using the calibration method proposed in Ref. [43]. The resultant estimated parameters are summarized in Table 2. Next, the captured images of the scene, denoted as s_0(x, y) and s_1(x, y), are rectified through the projective transformation method [32]. The rectified undegraded image s_0(x, y) and ground-truth depth d_0(x, y) of the scene computed through fringe projection are presented in Figure 7a,b, respectively. Afterwards, binocular images of the scene are captured with the camera array by varying the concentration of scattering particles in the chamber. Figure 8a-c shows three captured images of the scene in severe, moderate, and mild scattering conditions, respectively. These captured images are assumed to be formed according to Equation (1). The proposed method depicted in Figure 2 is utilized for restoring the degraded images shown in Figure 8a-c. The degraded images are first preprocessed using the locally-adaptive filtering method suggested in Ref. [33]. Afterwards, the disparity map is computed utilizing the stereo-matching algorithm based on morphological correlation proposed in Ref. [28]. Finally, the disparity map is further refined using the algorithm suggested in Ref. [42]. The resultant disparity map of the scene computed under the influence of a scattering medium is shown in Figure 7c. Next, the scene's depth is computed by solving Equation (11), considering the estimated disparity map and the camera parameters obtained by the calibration process given in Table 2. The computed depth of the scene is shown in Figure 7d. It can be observed that, despite the presence of scattering particles in the medium, the estimated depth closely approximates the ground-truth depth shown in Figure 7b obtained by fringe projection profilometry. The atmospheric parameters estimated for each scattering condition are presented in Table 3. It is worth noting that the estimated value of β increases with the concentration of scattering particles in the chamber, while the estimated value of A decreases. The images of the scene captured in the presence of scattering particles depicted in Figure 8a-c are restored using the estimated atmospheric parameters and scene depth in the restoration model given in Equation (2). The resultant restored images are presented in Figure 8d-f. Note that all the restored images effectively suppress the effects of optical scattering and successfully restore the visibility of the scene without introducing noticeable artifacts or overprocessing effects. Furthermore, to assess the accuracy of the proposed method for image restoration in real optical scattering conditions, we calculate the PSNR and IEF values for each restored image shown in Figure 8d-f, considering the undegraded captured image shown in Figure 7a as the reference. The resultant PSNR and IEF values for each restored image are presented in the fourth and fifth columns of Table 3. Note that the restored images produce PSNR values of 22.0 dB and 21.15 dB and IEF values of 1.42 and 1.21 for the mild and moderate scattering conditions, respectively. This result can also be confirmed by observing that the restored images shown in Figure 8e,f closely match the reference image shown in Figure 7a. Moreover, for the case of severe scattering degradation, the restored image yields a PSNR of 18.58 dB and an IEF of 1.13, despite the fact that the light reflected by farther objects in the scene is severely attenuated. These results confirm that the proposed method is highly effective in mitigating the effects of optical scattering and exhibits significant potential for computer vision applications, including vehicle navigation and surveillance.

Conclusions
This research introduced a binocular vision-based method for restoring images captured in scattering media. This method performs scene depth estimation through stereo matching, estimation of atmospheric parameters, and image restoration based on atmospheric optics modeling. As a result, it effectively suppresses optical scattering effects in captured scene images without introducing noticeable artifacts in the restored images. The performance of the proposed approach was evaluated in terms of the accuracy of atmospheric parameter estimation and image restoration, using synthetic hazy images constructed from a well-known dataset. The proposed method outperformed two existing similar methods in all performed tests. To validate the practical viability of the proposed method, we performed a laboratory experiment comprising depth estimation, atmospheric parameter estimation, and image restoration, using binocular images captured in a real scattering medium. The results confirmed the effectiveness and robustness of the proposed method, highlighting its potential applicability for computer vision applications under challenging atmospheric conditions. A limitation of the proposed method is its exclusive design for homogeneous scattering media, potentially limiting its effectiveness in nonhomogeneous scattering conditions. Additionally, while the proposed method yields strong performance in scene depth estimation, errors in disparity estimation can affect the accuracy of atmospheric parameter estimation. These limitations can be addressed by considering an adaptive parameter estimation approach and advanced disparity refinement algorithms. For future work, we will explore the integration of machine learning techniques to adapt to different scattering conditions, implement the proposed approach in specialized hardware to enable massive parallelism, and assess its performance in real-world outdoor applications.

Figure 1. Geometry of a stereo vision system in a scattering medium. Reflected light (purple arrows) from scene objects is attenuated by scattering particles. The attenuated light (cyan arrows) is replaced by scattered light (orange arrows) due to airlight (red arrows).
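The degradation process sketched in Figure 1 is commonly written as the single-scattering (Koschmieder) atmospheric model; assuming the paper's image formation equation takes this standard form, with symbols matching those of Algorithm 1 and Figure 3:

```latex
f(x, y) \;=\; \underbrace{r(x, y)\, e^{-\beta d(x, y)}}_{\text{attenuated reflected light}}
\;+\; \underbrace{A \left( 1 - e^{-\beta d(x, y)} \right)}_{\text{scattered airlight}}
```

Here, $f$ is the captured hazy image, $r$ the scene radiance, $d(x, y)$ the scene depth, $\beta$ the attenuation coefficient, and $A$ the airlight. Restoration inverts this relation as $\hat{r} = \left( f - A(1 - t) \right) / t$ with transmission $t = e^{-\beta d}$, which is why the method requires both the depth map and the atmospheric parameters.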

Figure 2. Block diagram of the proposed method for the restoration of images degraded by optical scattering using binocular vision.

Algorithm 1. Proposed algorithm for robust estimation of the airlight A and the attenuation coefficients β. Input: hazy images f_i, estimated depths d̂_i, and estimated disparities θ̂_i. Output: estimates of the airlight Â and the attenuation coefficients β̂. The listing defines a helper, robustEST(data), that computes N ← Card{data} (the cardinality of data), M ← median{data}, and MAD ← 1.4826 · median{|data − M|}, and returns the mean of the sample as a location estimate (for heavy-tailed data, the median is recommended instead). The data vectors are formed from the elements of f_i(x, y), d̂_i(x, y), and θ̂_i(x, y), and candidate attenuation values b are drawn as uniformly distributed random numbers within the interval [b_min, b_max].
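Only fragments of the listing survive in the extracted text, so the sketch below is a loose illustration rather than the paper's algorithm: it keeps the recoverable pieces (the robustEST summary with the 1.4826-scaled MAD, and uniform Monte Carlo sampling of candidate parameters) but substitutes a range-penalized contrast criterion for the paper's actual scoring rule, which could not be recovered. The function names and ranges are assumptions.

```python
import numpy as np

def robust_est(data):
    """Robust summary of a sample: cardinality, median, Gaussian-consistent
    MAD (1.4826 * median absolute deviation), and the mean as a location
    estimate (the median is preferable for heavy-tailed data)."""
    data = np.asarray(data, dtype=float).ravel()
    n = data.size
    m = np.median(data)
    mad = 1.4826 * np.median(np.abs(data - m))
    return n, m, mad, data.mean()

def estimate_parameters(f, d, n_draws=3000, beta_range=(0.1, 10.0),
                        a_range=(100.0, 255.0), seed=0):
    """Monte Carlo estimate of airlight A and attenuation coefficient beta.

    Candidate (A, beta) pairs are drawn uniformly; for each pair the haze
    model f = r * t + A * (1 - t), t = exp(-beta * d), is inverted for the
    radiance r, and the pair maximizing the contrast (MAD) of r while
    penalizing out-of-range radiance values is kept. This scoring rule is
    a stand-in for the robust criterion of the paper's Algorithm 1.
    """
    rng = np.random.default_rng(seed)
    f = np.asarray(f, dtype=float).ravel()
    d = np.asarray(d, dtype=float).ravel()
    best_a, best_beta, best_score = None, None, -np.inf
    for _ in range(n_draws):
        beta = rng.uniform(*beta_range)
        a = rng.uniform(*a_range)
        t = np.exp(-beta * d)
        r = (f - a * (1.0 - t)) / np.maximum(t, 1e-9)   # invert the haze model
        _, _, contrast, _ = robust_est(np.clip(r, 0.0, 255.0))
        penalty = np.mean(np.abs(r - np.clip(r, 0.0, 255.0)))  # out-of-range radiance
        score = contrast - penalty
        if score > best_score:
            best_a, best_beta, best_score = a, beta, score
    return best_a, best_beta
```

The robust summary guards the search against outlier pixels (e.g., specular highlights or stereo-matching errors in the depth map), which is the motivation the surviving comments in the listing suggest.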

Figure 3. Examples of synthetic test images degraded by optical scattering. (a) Original images from the Middlebury stereo dataset. (b) Original ground-truth disparity map. (c) Refined ground-truth disparity map. (d) Computed scene depths. (e) Constructed test images degraded by optical scattering with A = 187 and β = 5.
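The construction of panel (e) can be sketched as follows, assuming the synthetic degradation applies the exponential scattering model to the computed depths; the function name and the clipping to 8-bit range are assumptions, while the parameter values A = 187 and β = 5 come from the caption:

```python
import numpy as np

def add_synthetic_haze(radiance, depth, airlight=187.0, beta=5.0):
    """Degrade a clean image with the exponential scattering model:
    f = r * exp(-beta * d) + A * (1 - exp(-beta * d))."""
    t = np.exp(-beta * np.asarray(depth, dtype=float))        # transmission map
    hazy = np.asarray(radiance, dtype=float) * t + airlight * (1.0 - t)
    return np.clip(hazy, 0.0, 255.0)                          # keep 8-bit range
```

Near points (small depth) pass through almost unchanged, while distant points converge toward the airlight value, reproducing the space-variant degradation visible in the test images.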

Figure 4. Results with 95% confidence of atmospheric parameter estimation and image restoration using the proposed method with ground-truth disparities (prop-Dgt), the proposed method with estimated disparities (prop-Dest), the method by Li et al. [23], and the method by Ding et al. [25]. (a) MAE in the estimation of A. (b) MAE in the estimation of β. (c) PSNR of image restoration. (d) %Acc in the estimation of A. (e) %Acc in the estimation of β. (f) IEF of image restoration.

Figure 6. Constructed platform for experimental evaluation. (a) Setup for capturing the reference undegraded image of the scene and computing the ground-truth depth using fringe projection profilometry. (b) Setup for capturing images of the scene in a real scattering medium.

Figure 7. (a) Undegraded captured image of a real scene. (b) Ground-truth depth of the scene obtained by fringe projection profilometry. (c) Estimated disparity map of the scene in a scattering medium. (d) Scene depth computed by triangulation using the disparity map shown in (c).
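The depth map in panel (d) follows from the standard triangulation relation for a rectified stereo pair, depth = focal length × baseline / disparity; a minimal sketch, where the calibration values in the usage note are hypothetical (the actual values come from the calibration summarized in Table 2):

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline):
    """Depth by triangulation for a rectified stereo pair:
    d = focal_px * baseline / disparity.
    Zero or negative disparities (no valid match) are mapped to NaN."""
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full_like(disparity, np.nan)
    valid = disparity > 0
    depth[valid] = focal_px * baseline / disparity[valid]
    return depth

# Hypothetical calibration: 1000 px focal length, 0.1 m baseline.
# A 100 px disparity then corresponds to a depth of 1 m.
```

Because depth is inversely proportional to disparity, small disparity errors on distant objects translate into large depth errors, which is the sensitivity noted in the conclusions.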

Figure 8. Captured images of the scene in the constructed platform in (a) severe scattering conditions, (b) moderate scattering conditions, and (c) mild scattering conditions. (d-f) Restored images corresponding to (a-c), respectively.

Table 1. Summary of principal approaches for image restoration in scattering media.

Table 2. Estimated parameters from binocular and fringe projection system calibration.

Table 3. Estimated atmospheric parameters and computed PSNR and IEF values for restoring the real images captured in a scattering medium shown in Figure 8a-c.