Reconstruction of the Instantaneous Images Distorted by Surface Waves via Helmholtz–Hodge Decomposition

Abstract: Imaging through water waves causes complex geometric distortions and motion blur, which seriously affect the correct identification of an airborne scene. Current methods mainly rely on high-resolution video streams or a template image, which limits their applicability in real-time observation scenarios. In this paper, a novel recovery method for instantaneous images distorted by surface waves is proposed. The method first actively projects an adaptive and adjustable structured light pattern onto the water surface, whose random fluctuation causes the image to degrade. Then, the displacement field of the feature points in the structured light image is used to estimate the motion vector field of the corresponding sampling points in the scene image. Finally, from the perspective of fluid mechanics, the distortion-free scene image is reconstructed based on the Helmholtz-Hodge Decomposition (HHD) theory. Experimental results show that our method not only effectively reduces image distortion but also significantly outperforms state-of-the-art methods in terms of computational efficiency. Moreover, we tested real-scene sequences of a certain length to verify the stability of the algorithm.


Introduction
Viewing an airborne scene through the water surface with a submerged camera constitutes a virtual periscope. Unlike a true periscope, the system is entirely submerged and does not draw attention. Images acquired in this way suffer from serious refractive distortions due to the random fluctuations of the water-air interface (WAI). Such effects seriously hinder the acquisition of real-scene information and limit the development of underwater vehicles in practical applications, such as path planning, air-target monitoring, and marine biological research [1-6]. However, removing such distortions from a single image is an inherently ill-posed problem. It is similar to blind deconvolution, except that the kernel changes randomly in space, which is much more complicated than ordinary image deblurring. As such, most previous studies [7-14] have mainly relied on high-frame-rate video or templates, which limits their applicability in practice.
Studies on this challenging problem have been carried out for decades. One approach is to create a spatial distortion model from a long video sequence and then fit it to each frame to recover the undistorted image. Tian et al. [15] creatively proposed a model-tracking method to recover real underwater scene images. In later work [16], they proposed a data-driven iterative method to reduce image distortions, which introduced the notion of a "pull-back" operation. K. Seemakurthy et al. [17] proposed corresponding methods to remove distortions caused by unidirectional cyclic waves and circular ripples. However, it is difficult for such simple spatial models [15-17] to represent the various complex fluid-motion phenomena in natural scenes. More recently, Li et al. [18] first attempted to remove dynamic refractive effects from a single image via deep learning. Later, James et al. [19] put forward a novel method combining compressive sensing (CS) and a local polynomial image representation. Simron Thapa et al. [20] proposed a distortion-guided network (DG-Net) to remove dynamic refractive effects: they first introduced a physically constrained convolutional network to predict the distortion map from the warped image and then used it to guide the generation of a distortion-free image. For the single-image undistortion problem, a deep-learning framework can indeed achieve end-to-end processing without a prior image or active excitation. However, a large amount of training data and time is often required in the network-training phase, and it is very difficult to construct a widely applicable dataset for complex and changeable fluid motion.
Another typical approach is to estimate the instantaneous shape of the surface waves: first estimate the slope distribution of the instantaneous water surface by studying the optical properties of light rays at the interface between the media, and then restore the distortion-free image. Milder et al. [21,22] first presented a scheme for recovering the real scene by estimating the shape of the water surface. Assuming a sky of uniform brightness without sunlight and clouds, the total-reflection extinction area outside the Snell window is completely dark. On this basis, they first estimated the shape of the wavy water surface based on a mapping among sky brightness, the angle of incidence, and the slope of the water surface; the distortion-free image was then recovered through reverse ray tracing. Levin et al. [23] designed an experimental setup to remove the geometric distortion of underwater target images. First, a color camera was used to simultaneously acquire an image of the object and a glitter pattern on the water surface using multiple illumination sources; the glitter pattern was then processed to obtain the surface slopes at a limited number of points, and these slopes were used for the retrieval of image fragments [24]. Finally, a complete distortion-free image was generated through multiple accumulations of short-exposure images [25-28]. Alterman et al. [29] estimated the shape of the wavy water interface by adding an additional refractive imaging sensor. The system uses the sun as a guide star. The wavefront sensor includes a pinhole array, a diffuser plate, and a submerged camera. The position of each spot that passes through a pinhole is used to compute the sampled normal vector of the water surface, which is then used to estimate the water interface. Y. Schechner et al. [30] utilized multiple viewpoints along a wide baseline to observe the scene under uncorrelated distortion conditions and recover sparse point clouds. R.H. Gardashov et al. [31] proposed a method for recovering instantaneous images distorted by surface waves: they first used the signature of the sun's flickering to determine the instantaneous shape of the wavy water surface and then corrected the warped image by back-projection. However, this is only suitable for scenarios such as monitoring from an airplane, because there are no sun-glint features when viewing an airborne scene from a submerged camera [21,29,30]. The above methods either rely on special natural illumination conditions [21,23,29,31] or require a multi-viewpoint device [30], and their imaging accuracy is not satisfactory, so they are difficult to apply in practical scenarios.
Three-dimensional measurement technology based on structured light has the advantages of high precision, high speed, and strong adaptability, and in recent years it has been widely used in various fields [32-34]. This paper proposes a novel method for recovering instantaneous images distorted by surface waves. In contrast to previous work, our method needs neither a long video sequence [7-17], a large amount of training data [18-20], special illumination conditions [21,23,29,31], multiple viewpoints [30], nor a special illumination source [23]. We only require a simple projection setup and a single distorted scene image. The main contributions of this paper are as follows: (1) we propose an active-excitation-based water-air imaging model for a virtual periscope; (2) from the perspective of fluid mechanics, we propose an underwater image-correction algorithm based on the Helmholtz-Hodge Decomposition; (3) we demonstrate the approach and analyze its effectiveness.

Snell's Window in Flat WAI
Imaging through a water surface, light-ray transmission follows Snell's Law [35,36]. When the water surface is flat, a submerged camera needs a field of view of only 97.2° to observe the whole sky scene, regardless of the observer's depth. As shown in Figure 1, a submerged camera observes from an underwater location O. OM and ON are the total reflection boundaries, where ∠MON = 97.2°. The conical area formed by the rotation of OM and ON around the optical axis is called the Snell cone, and the circular area on the water surface between M and N is the Snell window (SW), whose boundary is the extinction boundary. The SW is surrounded by a dark region that represents light totally internally reflected from the sea back to the observer from the water below.
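The 97.2° figure follows directly from the critical angle of the water-air interface. A minimal sketch (plain Python, with the refractive index n_w = 4/3 used later in the paper; the function name is illustrative) that reproduces the Snell-cone apex angle and the window radius at a given depth:

```python
import math

N_WATER = 4 / 3  # refractive index of water, as in the text

def snell_window(depth_m: float):
    """Critical angle and Snell-window size for a flat WAI.

    The full apex angle of the Snell cone is 2 * theta_c, which is the
    angle MON (~97.2 deg) in Figure 1; the window radius on the surface
    grows linearly with the observer's depth.
    """
    theta_c = math.asin(1.0 / N_WATER)    # critical angle, ~48.6 deg
    apex_deg = math.degrees(2 * theta_c)  # Snell-cone apex, ~97.2 deg
    radius = depth_m * math.tan(theta_c)  # window radius at this depth
    return apex_deg, radius

apex, r = snell_window(8.0)  # camera depth of 8 m, as in the simulation
```

Note that while the window radius depends on depth, the apex angle does not, which is why the whole sky remains visible regardless of the observer's depth.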


Snell's Window in Wavy WAI
Assuming that the brightness distribution of the airborne scene is uniform, only the effect of the water-air interface on the illumination distribution of the image plane is considered, ignoring the absorption and scattering of the water body and the receiving loss of the camera. Letting the initial value of the sky brightness be 1, according to the Fresnel formula [37], the illuminance on the image plane can be expressed as follows, where E_air(x, y) represents the radiant emittance of the objective scene located at (x, y), with E_air(x, y) = 1; E(u_x, v_y) is the brightness of the corresponding pixel; and θ_i and θ_t are the angles of incidence and refraction, respectively. Figure 2 shows the contour distribution of irradiance on the image plane. Here, the WAI is numerically simulated based on wave-spectrum theory [38,39] with a wind speed of 5.0 m/s and a camera water depth of 8 m.
The research results show that there is always a Snell window on the WAI when observing airborne scenes through the water surface. When the water is flat, the Snell window is a regular, smooth circular area. However, in the presence of surface waves, the edge of the Snell window changes irregularly and extends outwards. In addition, there might be a dark extinction region within the window due to the total reflection of light (Figure 2a), which can cause a loss of detail in the edge field of view.
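The pixel brightness in this model is governed by the unpolarized Fresnel transmittance at the interface. The sketch below is a textbook Fresnel computation, not the paper's specific equation: beyond the critical angle it returns zero, which corresponds to the dark extinction region discussed above.

```python
import math

N_WATER = 4 / 3  # refractive index of water

def fresnel_transmittance(theta_i: float) -> float:
    """Unpolarized Fresnel transmittance for a ray leaving water at
    incidence angle theta_i (radians, measured from the surface normal).

    Returns 0.0 beyond the critical angle (total internal reflection),
    i.e., inside the dark extinction region outside the Snell window.
    """
    if theta_i == 0.0:
        # Normal incidence: R = ((n - 1) / (n + 1))^2
        return 1.0 - ((N_WATER - 1) / (N_WATER + 1)) ** 2
    sin_t = N_WATER * math.sin(theta_i)  # Snell's law, water -> air
    if sin_t >= 1.0:
        return 0.0                       # total internal reflection
    theta_t = math.asin(sin_t)
    # Fresnel reflectances for s- and p-polarized light
    rs = (math.sin(theta_i - theta_t) / math.sin(theta_i + theta_t)) ** 2
    rp = (math.tan(theta_i - theta_t) / math.tan(theta_i + theta_t)) ** 2
    return 1.0 - 0.5 * (rs + rp)
```

For n_w = 4/3 the transmittance is about 0.98 at normal incidence and drops to zero at the ~48.6° critical angle.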


Imaging through Wavy WAI
Imaging through the wavy water-air interface with a submerged camera, the light rays from the airborne scene are bent by the dynamic refractive medium, resulting in complex geometric distortions. Therefore, the image degradation model of random distortions can be expressed as I(x) = I_g(x + w(x)), where x ∈ R² represents the pixel coordinates on the image plane, I_g is the distortion-free image, and I is the distorted image. w(x) is a 2-D warping field that represents the pixel displacement between I and I_g caused by the refraction of the water-air interface.
As Figure 3 shows, the pixel coordinate x_i images the scene point A when the water surface is calm. However, in the presence of surface waves, the slope at the intersection q_i, the crossover point between the WAI and the ray back-projected from x_i through o_lab (the optical center), changes with the fluctuation of the water. At this moment, the camera actually observes scene point B rather than A. Thus, the translation of pixel x_i on the image plane can be given by w(x_i) = P d_s(x_i), where w(x_i) and d_s(x_i) are both expressed in homogeneous coordinates. d_s(x_i) = [d_x, d_y, 0]^T represents the displacement that the back-projected ray of the pixel x_i experiences above water, where d_x, d_y, and 0 are the axial components of the vector d_s(x_i) in the camera coordinate frame. M and N are the pixel dimensions of the image. P is the projection matrix of the camera [34,35].
Here, f represents the focal length of the camera, and z_c is the distance from the optical center o_lab to the planar scene above water (ignoring the 3-D properties of the object). d_x and d_y represent the physical sizes of each pixel on the image plane along the x-axis and y-axis, respectively. It can be seen from Equation (3) that, since the fluctuations of the WAI are spatially variable, different pixel points may have different pixel displacements, resulting in complex geometric distortions of the image.
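To make the degradation model concrete, here is a minimal numeric sketch (assuming NumPy and SciPy are available; the sinusoidal warp field is synthetic and only loosely mimics surface waves) of how a known warping field w relates I and I_g:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(I_g: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Apply the degradation model I(x) = I_g(x + w(x)).

    I_g : (H, W) distortion-free image
    w   : (2, H, W) per-pixel displacement field (rows, cols)
    """
    H, W = I_g.shape
    rows, cols = np.mgrid[0:H, 0:W].astype(float)
    coords = np.stack([rows + w[0], cols + w[1]])  # sample I_g at x + w(x)
    return map_coordinates(I_g, coords, order=1, mode='nearest')

# Synthetic example: a smooth sinusoidal warp applied to a stripe pattern
H, W = 64, 64
I_g = np.zeros((H, W))
I_g[::8, :] = 1.0                          # horizontal stripe "scene"
xx = np.linspace(0, 2 * np.pi, W)
w = np.zeros((2, H, W))
w[0] = 2.0 * np.sin(xx)[None, :]           # vertical displacement only
I = warp_image(I_g, w)                     # distorted observation
```

Because the displacement varies per pixel, the stripes bend, which is exactly the spatially variable distortion described above; inverting this mapping is what the recovery algorithm must do.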

Overview
This paper proposes an image-recovery scheme based on the Helmholtz-Hodge Decomposition to recover instantaneous images distorted by surface waves. The key idea is to leverage the feature information of a preset structured light pattern to realize water-air interface (WAI) sampling while simultaneously acquiring the scene image in the air and the structured light image on the diffuser plate. Then, the deformation vector field of the control points in the scene image is estimated using the displacement relationship of the corresponding points between the structured light image and the scene image. Finally, the distortion-free scene image is recovered based on the Helmholtz-Hodge Decomposition theory.
As Figure 4 shows, similar to our previous work [6], we first introduce a water-air imaging model with structured light projection to simultaneously acquire the distorted structured light image and the warped scene image, in which camera s obtains the warped structured light image from the diffuser plane, while camera v acquires the scene image through the same sampled region. Next, we describe in detail how the algorithm works.


Sampling of WAI
First, an adaptive and adjustable structured light pattern is projected onto the water surface to accomplish the sampling of the instantaneous wave surface, as shown in Figure 4. Once the system parameters are given, the distribution of the WAI sampling points and the corresponding control-point distribution on the scene image are easy to acquire through perspective projection transformation [40,41]. A simulation example of the WAI sampling is shown in Figure 5. Figure 5a is the preset structured pattern, and Figure 5b is the virtual image formed on a calm water surface (note: whether in daytime or on a moonlit night, no projected image can actually be formed on the water surface; the virtual image is introduced for ease of analysis).



Feature Extraction and Matching
A distorted structured light image is formed on the diffuser plate plane after reflection at the water surface, as shown in Figure 6. Then, feature-point extraction and matching are performed on the reference and the twisted structured light images, respectively, using a new corner detection algorithm (described in Section 3.3.1). Thereby, the displacement field of the feature points in the structured light image can be expressed as w_k^str = x_k − x'_k, k = 1, 2, ..., m × n, where m × n represents the number of feature points, and x_k and x'_k are the pixel coordinates of the corresponding feature points on the distorted structured light image J(x) and the reference one J_g(x), respectively.


A Slope Search-Based Corner Detection Algorithm
Corner detection is a method used to extract features in computer-vision systems and is widely used in motion detection, image registration, 3-D reconstruction, and object recognition. The edge profile of the structured light image is no longer smooth owing to the random fluctuations of the water surface and the absorption and scattering of the water body, which makes it difficult for current corner detection methods [42-44] to extract feature information accurately. Hence, a slope search-based corner detection method is proposed in this paper.
The framework of the feature-extraction method is shown in Figure 7. Firstly, the distorted structured light images are converted from the RGB color model to the HSI color model, where H is the hue, representing the frequency of the color; S is the saturation, representing the depth of the color; and I represents the intensity or lightness. The HSI model is given by the standard RGB-to-HSI conversion. The H component is related to the wavelength of the light: random fluctuations of the water surface and absorption and scattering by the water body affect the brightness of the image but not the wavelengths of the colors. As such, we choose the H channel as the processing frame J_H(x) from which to extract the feature information of the distorted structured light images. Moreover, we further enhance the edge information of J_H(x) using guided filtering [45] and transform it into the binary image J_H^b(x) using Otsu's method [46].
Then, the boundary pixel set Θ_l of an arbitrary connected domain Ω_l is obtained using the connected-domain operator, and the geometric center of Ω_l, which represents the centroid ε_l, is further calculated from the spatial moment function, given as m_pq = Σ_{(φ_x, φ_y) ∈ Θ_l} φ_x^p φ_y^q, where φ_x and φ_y represent the column and row coordinates of an arbitrary pixel in the contour set Θ_l, respectively. From Equation (7), the centroid ε_l of Ω_l can be calculated as ε_l = (m_10/m_00, m_01/m_00). Subsequently, the contour set Θ_l can be divided into k_s different subsets according to the shape of the structured light and the centroid ε_l. As our structured light pattern is rectangular, k_s is set to 4.
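The centroid computation can be sketched in a few lines (NumPy assumed; the contour array and the small square boundary are illustrative stand-ins for a real boundary set Θ_l):

```python
import numpy as np

def centroid_from_moments(contour: np.ndarray):
    """Centroid of a boundary pixel set via spatial moments.

    contour : (K, 2) array of (col, row) pixel coordinates of one
              connected-domain boundary (the set Theta_l in the text).
    Returns (col, row) of the centroid epsilon_l = (m10/m00, m01/m00).
    """
    phi_x = contour[:, 0].astype(float)
    phi_y = contour[:, 1].astype(float)
    m = lambda p, q: np.sum(phi_x ** p * phi_y ** q)  # spatial moment m_pq
    m00 = m(0, 0)                                     # pixel count
    return m(1, 0) / m00, m(0, 1) / m00

# Boundary of a 3x3 square centred at (5, 5): 8 boundary pixels
square = np.array([(c, r) for c in range(4, 7) for r in range(4, 7)
                   if c in (4, 6) or r in (4, 6)])
cx, cy = centroid_from_moments(square)
```

By symmetry of the boundary, the centroid lands exactly at the square's center.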
The specific classification criteria are as follows. The corner points of an image are generally points where the local curvature changes suddenly. As Figure 7 shows, the grid exhibits varying degrees of compression or expansion as a result of water-surface changes. Hence, we define a corner point of the structured light image as the intersection of the sub-contour K_τ, τ = 1, 2, 3, 4, and the diagonal of the smallest external rectangle, where the external rectangle is defined by the sub-contour K_τ and the centroid ε_l, and the diagonal passes through the centroid ε_l. The diagonal slope λ_τ of each subset K_τ of the connected domain Θ_l can be calculated using Equation (10), where K_τ,x and K_τ,y, τ = 1, 2, 3, 4, denote all column and row coordinates of K_τ, respectively. The corner coordinates of Θ_l can then be given by these intersections. Finally, all connected domains are traversed to obtain all corner points of the input structured light image.
After generating all corner points, the corner points in the reference image and the twisted image are respectively sorted using a bubble-sorting strategy. The key idea is to update the corner matrix column by column. The specific process is as follows: assuming the corner-point matrix is C with size m × n, first construct the candidate set Cr from the corner points located in the current column and the next column. Then, sort them in ascending order by row value to obtain a vector Cm of size 2m × 1. Next, divide the vector Cm into m subsets Cs_i, i = 1, ..., m, from smallest to largest. Finally, select the corner point with the smallest row value from each subset as the sorting result of the current column and update the matrix C. The entire corner point detection process is described in Algorithm 1.
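The column-wise ordering step can be sketched as follows. This is a simplified interpretation of the described procedure: the matrix C here stores only row coordinates, and the names Cr, Cm, and Cs mirror the text.

```python
import numpy as np

def sort_corner_columns(C: np.ndarray) -> np.ndarray:
    """Column-wise ordering of detected corner rows.

    C : (m, n) matrix whose entry C[i, j] is the row coordinate of the
        i-th corner candidate in column j (a simplified stand-in for the
        corner matrix in the text).
    For each column, candidates of the current and next column are
    pooled (Cr), sorted by row (Cm), split into m consecutive subsets
    (Cs), and the smallest row of each subset is kept for that column.
    """
    m, n = C.shape
    out = C.copy()
    for j in range(n - 1):
        Cr = np.concatenate([out[:, j], out[:, j + 1]])  # candidate set
        Cm = np.sort(Cr)                                 # ascending rows
        Cs = Cm.reshape(m, 2)                            # m subsets of 2
        out[:, j] = Cs[:, 0]                             # min of each subset
    return out

C = np.array([[3, 1],
              [1, 2],
              [2, 5]])
sorted_C = sort_corner_columns(C)
```

Pooling neighboring columns makes the ordering robust to corners whose rows were shuffled by the distortion, which is the point of the update-by-column scheme.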

Distortion Estimation of the Control Points
Water fluctuations cause the radiated rays passing through the wave surface to be angularly offset, which means that the offsets of any two light rays passing through the same point on the water surface are correlated. Inspired by this, we attempt to establish the displacement relationship of the corresponding points between the structured light image and the scene image from the perspective of geometric optics.
As Figure 8 shows, the global coordinate system is established with the projection center o_pro as the origin, and the z-axis is vertically upward. Let p_k represent the 3-D position of an arbitrary feature point of the structured light pattern to be projected; the corresponding projected ray is R_p^k: p_k + l_p v_p^k, where v_p^k is the projected-ray direction vector and l_p represents the propagation length along the ray. The projected ray intersects the WAI at q_k, namely the WAI sampling point, and the corresponding reflected ray is q_k + l_k v_k, where v_k denotes the reflected-ray direction vector and l_k is the propagation length along the ray v_k. The reflected ray irradiates a spot on the diffuser plane Π_diffuser at 3-D location s_k. The WAI normal is N_k, and s'_k is the corresponding spot location when the water is flat. Thus, the displacement vector δ(p_k) of the feature points in the diffuser plane Π_diffuser is equal to δ(p_k) = s_k − s'_k = P'_s w_k^str, where w_k^str denotes the displacement of the corresponding feature points in the structured light image and P'_s is the inverse projection matrix of camera s, in which f_s is the focal length of camera s, z_s is the distance from the diffuser plane Π_diffuser to camera s, and dx and dy represent the physical size of a single pixel. Relative to the global coordinate system, the pose of camera s can be defined by a rotation matrix R_s and a translation vector t_s, and the 3-D position of the origin of camera s is o_lab^s = −R_s^T t_s. Similarly, the origin of camera v is o_lab^v = −R_v^T t_v. Camera v observes the airborne scene through the wavy WAI, and the ray back-projected from pixel x_k^i through o_lab^v in water is R_k^w(x_k^i): o_lab^v + l_w v_k^w, where v_k^w is the back-projected-ray direction vector of the pixel x_k^i and l_w represents the propagation length along the ray v_k^w. The back-projected ray intersects the WAI at q_k.
The corresponding back-projected ray above water is q_k + l_a v_k^a, where v_k^a is the airborne observing-ray direction vector of pixel x_k^i and l_a is the propagation length along the ray v_k^a. The back-projected ray above water points to the distortion-free objective scene and intersects the planar scene Π_object at s_b^k; s_a^k represents the corresponding scene point when the water is flat. Therefore, when the back-projected ray of pixel x_k^i views the airborne scene through the same point q_k on the WAI, the displacement vector in the scene plane Π_object is d_s(x_k^i) = s_b^k − s_a^k. Then, the displacement of the pixel x_k^i can be calculated as w(x_k^i) = P_v d_s(x_k^i), where P_v is the projection matrix of camera v, in which f_v is the focal length of camera v and z_v denotes the distance from the scene plane Π_object to camera v; h(q_k) represents the altitude of the intersection point q_k, and z_h indicates the water depth of the diffuser plate Π_diffuser. The projected ray R_p^k and the back-projected ray R_k^w(x_k^i) of the pixel point x_k^i intersect the same point q_k on the WAI; hence, in the presence of surface waves, the displacements δ(p_k) and d_s(x_k^i) are related only to the slope and height of the intersection point q_k and the height of the diffuser plane z_h.
As Figure 5c shows, the set {q_k} (denoted as {q'_k}) is periodic when the water is flat. However, in the presence of surface waves, {q_k} is quasi-periodic, i.e., periodic with a perturbation. Considering that the variations in the height of the water surface are much smaller than the working depth of the system (∆h ≪ h_0), where h_0 is the system height (the time-averaged underwater depth of the projector, which can be determined in the field using a pressure-based depth gauge) and c_k is the z-axis component of the vector v_k, the displacement vectors δ(p_k) and d_s(x_k^i) can be expressed as in Equations (25) and (26), where c_k, c'_k, c_a^k, and c'^k_a are the z-axis components of the vectors v_k, v'_k, v_a^k, and v'^k_a, respectively; v'_k and v'^k_a represent the reflected-ray direction vector of feature point q_k and the airborne viewing-ray direction vector of the corresponding pixel x_k^i when the water is flat, respectively. According to Snell's law, the vectors v_k and v_a^k can be further expressed in terms of the WAI normal, where N_0 is the WAI normal when the water surface is flat, N_0 = [0, 0, 1]^T; N_k is the normal at the WAI sampling point q_k when the water fluctuates; and n_w is the refractive index of water, n_w = 4/3.
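The vector form of Snell's law used in these expressions can be sketched as follows (a standard refraction formula, with the flat-WAI normal N_0 = [0, 0, 1]^T and n_w = 4/3 from the text; the function name and the air-to-water example are illustrative):

```python
import numpy as np

N_W = 4.0 / 3.0  # refractive index of water, as in the text

def refract(v: np.ndarray, n: np.ndarray, eta: float) -> np.ndarray:
    """Vector form of Snell's law.

    v   : unit direction of the incident ray
    n   : unit surface normal, oriented against the incident ray
    eta : ratio n_incident / n_transmitted
    Returns the unit direction of the refracted ray; raises on total
    internal reflection.
    """
    cos_i = -float(np.dot(n, v))
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        raise ValueError("total internal reflection")
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * v + (eta * cos_i - cos_t) * n

# A ray travelling straight down from air into water is undeviated
v_down = np.array([0.0, 0.0, -1.0])
n_up = np.array([0.0, 0.0, 1.0])  # flat-WAI normal N0 = [0, 0, 1]^T
assert np.allclose(refract(v_down, n_up, 1.0 / N_W), v_down)
```

Replacing n_up with a perturbed normal N_k immediately shows how a small change in surface slope deviates the transmitted ray, which is the mechanism behind both displacement vectors.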
From Equations (25) and (26), it can be seen that the displacement vectors δ(p_k) and d_s(x_i^k) are related only to the WAI normal N̂_k (specifically, only to the instantaneous slope of q_k) once the system parameters are determined.
Given the normal of the WAI sampling point N̂_k, the mapping relationship between δ(p_k) and d_s(x_i^k) is easily obtained by ray tracing, where F(·) is the transformation function between δ(p_k) and d_s(x_i^k). According to the displacement vector model derived from Equations (25) and (26), given the normal vector N̂_k of the WAI, this paper estimates the mapping relationship F(·) using a polynomial representation.
Moreover, the WAI normal vectors fluctuate randomly around the z-axis; hence, this perturbation can be divided into two components. One is the xoz-plane component, for which the normal vector is N̂_k = [sin θ_k, 0, cos θ_k]^T, where θ_k is the inclination angle of the WAI; the other is the yoz-plane component, with the corresponding normal N̂_k = [0, sin θ_k, cos θ_k]^T. According to Equations (15), (25), (26), (28), and (29), using a least-squares polynomial representation, Equation (27) can be further expressed as a pair of polynomials of orders η_1 and η_2, with coefficients a_i and b_j, respectively. In summary, based on Equations (5), (15), (21), and (30), this paper estimates the distortion vector field w(x_i^k) of the pixel points in the scene image from the deformation vector field w_k^str of the corresponding feature points in the structured light image.
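The least-squares polynomial representation of Equation (30) can be sketched as follows. The displacement samples here are synthetic stand-ins for the ray-traced pairs (δ(p_k), d_s(x_i^k)) of Equations (25) and (26), so only the fitting step mirrors the paper:

```python
import numpy as np

# Synthetic one-axis samples of the structured-light displacement delta
# and the scene displacement d_s over a range of WAI inclinations theta_k
# (made-up stand-ins for the ray-traced model of Eqs. (25)-(26)).
theta = np.linspace(-0.2, 0.2, 41)          # inclination angles (rad)
delta = 1.5 * theta + 0.3 * theta**3        # delta(p_k), one axis
d_s = 2.4 * theta + 0.9 * theta**3          # d_s(x_i^k), one axis

# Least-squares polynomial fit of the mapping F: delta -> d_s, with
# eta_1 the polynomial order, as in Eq. (30).
eta_1 = 3
coeffs = np.polyfit(delta, d_s, eta_1)
F = np.poly1d(coeffs)

# Maximum fitting residual over the sampled range.
residual = np.max(np.abs(F(delta) - d_s))
```

With the fitted coefficients, F predicts the scene-point displacement directly from the observed structured-light displacement.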

Image Restoration Algorithm Based on Helmholtz-Hodge Decomposition
The Helmholtz-Hodge Decomposition of vector fields is one of the fundamental theorems in fluid dynamics and has been widely used in various fields [47][48][49][50]. When viewing the scene above water with a submerged camera, ignoring the absorption and scattering of water, the essential causes of image distortion can be attributed to three aspects: (1) the height variation of the water surface; (2) the whirlpool of water; (3) the relative position of the camera and the WAI. In 1858, Hermann von Helmholtz [51] first explained the connection between fluid motion and vector field decomposition. From the perspective of fluid mechanics, this paper proposes an image restoration algorithm based on the Helmholtz-Hodge Decomposition. The algorithm flow is shown in Figure 9.

Image Recovery Algorithm Based on HHD
In a bounded spatial domain, given the divergence (∇·ξ), curl (∇×ξ), and boundary conditions of a smooth vector field ξ, the vector field ξ has the unique Helmholtz-Hodge decomposition ξ = d + r + h, where d is the divergence (curl-free) component, which is normal to the boundary; r represents the curl (divergence-free) component, which is parallel to the boundary; and h denotes the harmonic component.
In IR^2, the components d and r can be calculated as the gradient of the scalar potential D and the curl of the scalar potential R, respectively, i.e., d = ∇D and r = J(∇R), where J(·) is an operator [47] that rotates a 2D vector counterclockwise by π/2.

Let (u_k, v_k) and (û_k, v̂_k) be the corresponding control point coordinates in the ground-truth scene image I_g(x) and the distorted scene image I(x), respectively; then the 2D distortion vector field ξ is defined as ξ = (û_k − u_k, v̂_k − v_k), whose homogeneous coordinate form is w(x). Taking the divergence and curl of the decomposition yields the Poisson equations ∆D = ∇·ξ and ∆R = ∇×ξ, where ∇· and ∇× are the divergence operator and the curl operator of a vector field, respectively, and ∆(·) represents the Laplace operator. The divergence potential D and curl potential R can be calculated by solving the Poisson Equation (33). Then, the harmonic component can be acquired from h = ξ − ∇D − J(∇R), where the harmonic component is regarded as a residual.
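A minimal numerical sketch of this decomposition on a regular grid is given below. For brevity it solves the two Poisson equations with an FFT-based solver, which assumes periodic boundaries, whereas the paper uses a finite difference method; the harmonic part is taken as the residual, as described above:

```python
import numpy as np

def hhd_2d(xi_x, xi_y, h=1.0):
    """Discrete Helmholtz-Hodge decomposition of a 2D vector field.

    Solves Lap(D) = div(xi) and Lap(R) = curl(xi) with an FFT-based
    Poisson solver (periodic boundary assumed, an assumption made for
    brevity). Returns the curl-free part grad(D), the divergence-free
    part J(grad(R)), and the harmonic residual xi - grad(D) - J(grad(R))."""
    div = np.gradient(xi_x, h, axis=1) + np.gradient(xi_y, h, axis=0)
    curl = np.gradient(xi_y, h, axis=1) - np.gradient(xi_x, h, axis=0)

    def poisson_fft(f):
        ny, nx = f.shape
        kx = 2 * np.pi * np.fft.fftfreq(nx, d=h)
        ky = 2 * np.pi * np.fft.fftfreq(ny, d=h)
        k2 = kx[None, :]**2 + ky[:, None]**2
        k2[0, 0] = 1.0                      # avoid divide-by-zero
        Fhat = np.fft.fft2(f) / (-k2)
        Fhat[0, 0] = 0.0                    # fix the free mean level
        return np.real(np.fft.ifft2(Fhat))

    D, R = poisson_fft(div), poisson_fft(curl)
    dx, dy = np.gradient(D, h, axis=1), np.gradient(D, h, axis=0)
    # J rotates a 2D vector counterclockwise by pi/2: (x, y) -> (-y, x)
    rx, ry = -np.gradient(R, h, axis=0), np.gradient(R, h, axis=1)
    hx, hy = xi_x - dx - rx, xi_y - dy - ry
    return (dx, dy), (rx, ry), (hx, hy)
```

Applied to a pure gradient field, the recovered curl component is small and the three parts sum back to the input field exactly.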
Because the WAI sampling {q_k} is sparse, the corresponding 2D vector field ξ is also discrete and sparse. Therefore, the divergence component D, the curl component R, and the harmonic component H of the 2D distortion vector field w of the scene image are further densified using the bicubic interpolation algorithm [52]. Then, the 2D warped vector field w of the scene image can be obtained by the inverse HHD, specifically w = ∇D + J(∇R) + H. Finally, from Equations (2) and (34), the distortion-free scene image can be acquired via bilinear interpolation.
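The densification and inverse-warping steps can be sketched as follows, assuming SciPy is available. The sparse field values are hypothetical; densification uses order-3 spline (bicubic-like) interpolation and the final resampling is bilinear (order 1):

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

# Sparse warp field w on an m x n control-point grid (hypothetical values
# standing in for the field recovered by the inverse HHD).
m, n = 8, 8
H, W = 64, 64
rng = np.random.default_rng(0)
wx_sparse = 0.5 * rng.standard_normal((m, n))
wy_sparse = 0.5 * rng.standard_normal((m, n))

# Densify to full image resolution (order-3 spline, i.e. bicubic-like).
wx = zoom(wx_sparse, (H / m, W / n), order=3)
wy = zoom(wy_sparse, (H / m, W / n), order=3)

# Undo the distortion: sample the distorted image at the displaced
# coordinates with bilinear interpolation (order=1).
distorted = rng.random((H, W))
yy, xx = np.mgrid[0:H, 0:W].astype(float)
recovered = map_coordinates(distorted, [yy + wy, xx + wx],
                            order=1, mode='nearest')
```

With a zero displacement field the resampling reduces to the identity, which is a convenient sanity check.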

Finite Difference Method for HHD
According to the analysis in Section 3.5.1, the key to the decomposition of a two-dimensional vector field lies in the solution of the Poisson equations ∆D = ∇·ξ and ∆R = ∇×ξ, where the divergence (∇·ξ) and the curl (∇×ξ) are known and ∆(·) represents the Laplace operator. Considering that the wave surface is sampled periodically when the water is calm, we solve the Poisson equations using the finite difference method [53].
Firstly, the problem of solving the Poisson equations is transformed into a system of linear equations over the grid nodes. On a two-dimensional regular grid, the finite difference method uses difference quotients to approximate derivatives; that is, the second derivatives in the Laplace operator are approximated using the central difference formula. For each sampling point (x, y), the Poisson equations can then be expressed as

[D(x+1, y) + D(x−1, y) − 2D(x, y)]/∆x² + [D(x, y+1) + D(x, y−1) − 2D(x, y)]/∆y² = F(x, y),
[R(x+1, y) + R(x−1, y) − 2R(x, y)]/∆x² + [R(x, y+1) + R(x, y−1) − 2R(x, y)]/∆y² = G(x, y),

where ∆x and ∆y represent the horizontal and vertical spacing of the control points, respectively, and F(x, y) and G(x, y) are the divergence and curl at the sampling point (x, y), respectively. Assuming that the number of feature points of the structured light pattern is m × n, mn linear equations can be constructed using Equation (33). The Poisson equations can then be written in matrix form as ΦD = F and ΦR = G, where Φ is a sparse matrix of size mn × mn, and D, R, F, and G are the vector forms of the divergence potential, curl potential, divergence, and curl of the two-dimensional vector field ξ, respectively, each of size mn × 1. Finally, D and R can be calculated using the generalized inverse of the matrix, i.e., D = (Φ^T Φ)^−1 Φ^T F and R = (Φ^T Φ)^−1 Φ^T G, where (Φ^T Φ)^−1 Φ^T is the generalized inverse of Φ.
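The assembly of the matrix Φ and the generalized-inverse solve can be sketched as below. The boundary handling is an assumption (potential values outside the grid are taken as zero), so the exact Φ used in the paper may differ:

```python
import numpy as np

def laplacian_matrix(m, n, dx=1.0, dy=1.0):
    """Assemble the mn x mn five-point Laplacian Phi so that Phi @ D
    approximates Lap(D) for a potential D flattened row-major.
    Zero values outside the grid are an assumption of this sketch."""
    N = m * n
    Phi = np.zeros((N, N))
    for i in range(m):
        for j in range(n):
            k = i * n + j
            Phi[k, k] = -2.0 / dx**2 - 2.0 / dy**2
            if j > 0:
                Phi[k, k - 1] = 1.0 / dx**2
            if j < n - 1:
                Phi[k, k + 1] = 1.0 / dx**2
            if i > 0:
                Phi[k, k - n] = 1.0 / dy**2
            if i < m - 1:
                Phi[k, k + n] = 1.0 / dy**2
    return Phi

# Solve Phi D = F with the generalized (pseudo-)inverse, as in the text.
m, n = 16, 16
Phi = laplacian_matrix(m, n)
F = np.random.default_rng(0).standard_normal(m * n)  # divergence samples
D = np.linalg.pinv(Phi) @ F                          # divergence potential
```

In practice a sparse solver (e.g., conjugate gradients) is preferable to forming the dense pseudoinverse when m and n grow.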

Results and Discussion
In the experiment, we first set up a simplified water-air imaging platform with structured light projection and then tested the proposed method in MATLAB (The MathWorks, Inc., Natick, MA, USA) with real through-water scene images. The datasets can be acquired from [54]. Moreover, we tested real-scene sequences of a certain length to verify the stability of the algorithm.

Experiment Setup
As Figure 10 shows, we first built an experimental platform for imaging through water in the laboratory. The experimental setup included: an acrylic water tank with a size of 100 × 70 × 60 cm, in which the water level was 42 cm; a projector (model: LOGO 35w) placed at the left side of the water tank, with its vertical height from the bottom of the water tank being zero; a diffuser plate; and a camera (model: MV-EM200C). Components S and V need to image the diffuser plane and the airborne scene simultaneously, as Figure 4 shows; for ease of operation, we used a single camera to image both regions simultaneously, as Figure 10 shows. System calibration utilized Zhang's calibration method [55]. Other system parameters included: θ_pro = 40°, h_0 = 42 cm, and z_h = 30 cm; a target scene was placed at z_a = 160 cm above water. In the experiment, the water surface ripples of a real environment were simulated using an artificial wave-making method, in which agitation causes the water surface to generate natural oscillations close to surface waves. A sample image acquired by the camera in the presence of surface waves is shown in Figure 11, which is composed of two parts: one is the distorted structured light image, and the other is the corresponding scene image above water. Both pass through the same area on the water surface.


Image Restoration
The processing procedure of the proposed method is shown in Figure 12. In order to simplify the system operation, the camera first collected the structured light projection on the WAI (a diffusion plate was placed on the WAI) and the structured light image on the diffusion plane, to obtain the WAI sampling information and the reference structured light image when the water surface was calm. Alternatively, both images can be obtained by a full computer simulation of the system once the system parameters are known. The results are shown in Figure 12a,b. Then, feature extraction and matching were performed on the reference structured light image and the distorted structured light image (Figure 12c), and the deformation vector field of the control points in the scene image was estimated. Finally, a distortion-free image was generated using the method described in Section 3.5 (Figure 12f). Intuitively, it can be seen that the distortion of the details of the target scene has been corrected, which indicates that our method can significantly reduce the distortion.
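The matching step that yields the deformation vector field of the control points can be sketched as a nearest-neighbour assignment between the reference and distorted corner grids. This is a simplification of the paper's matching scheme, and the grid size and perturbations below are made up; it works when the wave-induced displacement stays below half the grid spacing:

```python
import numpy as np

def match_corners(ref_pts, dist_pts, max_disp=20.0):
    """Match each reference corner to its nearest detected corner in the
    distorted frame and return the per-point displacement vectors.
    Corners without a match within max_disp are left as NaN."""
    disp = np.full(ref_pts.shape, np.nan)
    for k, p in enumerate(ref_pts):
        d2 = np.sum((dist_pts - p) ** 2, axis=1)
        j = np.argmin(d2)
        if d2[j] <= max_disp ** 2:
            disp[k] = dist_pts[j] - p
    return disp

# Hypothetical 4 x 4 grid of reference corners and a perturbed copy:
gx, gy = np.meshgrid(np.arange(4) * 32.0, np.arange(4) * 32.0)
ref = np.stack([gx.ravel(), gy.ravel()], axis=1)
rng = np.random.default_rng(0)
dist_pts = ref + rng.uniform(-5, 5, ref.shape)
w_str = match_corners(ref, dist_pts)   # deformation field of control points
```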


Image Quality Evaluation
To verify the performance of the proposed method, we first made a comparison with a method of the same type, namely the LWM method [56,57], which also requires only a single distorted image. Both methods used the same datasets from the real through-water scenes [54].
Moreover, we utilized three standard image quality metrics, namely GMSD [10], PSNR [11], and SSIM [14], to evaluate the recovery results of the different methods. We randomly selected sampled images from the real scene sequences [54] for testing, in which the scene image size of interest was 256 × 256. Figure 13 shows the restoration results of the two algorithms, and Table 1 presents the numerical comparison for Figure 13. As Figure 13 shows, both methods can significantly reduce the distortions of the target scene images; however, the proposed method significantly outperforms the LWM method [57], as shown in Table 1.
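Of the three metrics, PSNR has the simplest closed form and can be sketched directly; GMSD and SSIM follow their standard definitions. The images below are synthetic:

```python
import numpy as np

def psnr(ref, img, data_range=255.0):
    """Peak signal-to-noise ratio (dB) between a ground-truth image and a
    restored image; higher is better."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
    if mse == 0:
        return np.inf
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(0)
truth = rng.integers(0, 256, (256, 256)).astype(float)
noisy = np.clip(truth + rng.normal(0.0, 5.0, truth.shape), 0.0, 255.0)
score = psnr(truth, noisy)
```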

Figure 13.
Recovery results of different methods. Left to right: the ground-truth image, the distorted structured light image, the sampled image at the corresponding moment, the results of the LWM method [51], and the results of our method. Up to down: recovery results of sampled images at different times using the above two methods.

In order to more accurately evaluate the performance of our method and its scope of application, we further compared our experimental results with other state-of-the-art methods, such as Oreifej's method [7], Tao Sun's method [11], and James's method [19]. It should be noted that these methods [7,11,19] require not a single image but an image sequence as input, while our method recovers a single instantaneous scene image. To compare the recovery of the same sampled frame, we took the sampled frame at a specific moment as the center, selected an image sequence of a certain length forward or backward as the input sequence of the previous methods [7,11,19], and then compared the restoration result of the corresponding frame with our method. Moreover, we also measured the image processing time to evaluate computational efficiency, with all of the above methods run in the same operating environment, which contains a CPU (i7-6700HQ) and 8 GB of RAM.
All of the above methods were tested using the same source data from [54], for which the size of the target scene image was 256 × 256. The maximum number of iterations for Oreifej's [7] and T. Sun's method [11] was set to 3 (after 3 iterations on our dataset, the results had stabilized), and each patch in the related sequence was set to 74 × 74 in [11].
The recovery results of the different methods are shown in Figure 14. It can be seen from the figure that all methods are able to effectively reduce the distortions in the image; however, due to the respective limitations of the algorithms, the restoration results still retain more or fewer residual distortions. The numerical comparison results of the different methods are displayed in Table 2. Here, we analyze the experimental results from the two aspects of restoration accuracy and computational efficiency.
First, we discuss the restoration accuracy of the methods. Compared with the other methods, the results of Oreifej's method and T. Sun's method seem to perform better in terms of visual effects; as shown in Figure 14, the edge details of the target scene are smoother. Surprisingly, however, their restoration accuracy is not the best, as shown in Table 2. The main reasons for this result can be attributed to two aspects: (1) the image sequence is not long enough, resulting in fluctuations in the restoration accuracy; (2) the difference between frames is large, seriously degrading the mean or fusion image and reducing the restoration accuracy. In recent years, deep learning has been applied as a brand-new technology to solve such problems. As a model based on deep learning, James's method also requires a long video sequence of a static target scene for accurate reconstruction. Obviously, as shown in Figure 14, it is difficult for James's method to generate a high-quality result using a shorter sequence, such as 10 or 20 frames. In contrast, we attempt to solve the single-image distortion-removal problem through the theory of vector field decomposition and a distorted structured light image. Our method exploits the characteristics of structured-light images to achieve periodic sampling of the WAI, which allows the displacement field of the corresponding target scene image to be estimated effectively. It can be seen from Table 2 that, compared with other state-of-the-art methods, our method performs better in terms of restoration accuracy.
Figure 14. Recovery results of different methods. Left to right: the ground-truth image, the sampled image, the results of Oreifej's method using 10 frames, the results of Oreifej's method using 20 frames, the results of T. Sun's method using 10 frames, the results of T. Sun's method using 20 frames, the results of James's method using 10 frames, the results of James's method using 20 frames, and the results of our method using a single image.
As we all know, the accuracy of feature point extraction affects the quality of the restored image. In [58], a corner detection algorithm based on Euclidean distance was used to solve the feature extraction problem of distorted structured light images; compared with other classic algorithms, such as SURF, Harris, and SUSAN, it was shown to perform better. However, as shown in Figure 15, there are still misdetection problems (purple marks) in its detection results, which directly reduce the restoration accuracy and visualization effect of the corresponding scene image. In contrast, the corner detection scheme proposed in this paper effectively corrects this problem (red mark), and compared with the previous work, the image restoration results are significantly better. Moreover, compared with the ground-truth image, there are still several residual distortions in our results, which means that we can further improve our corner-detection scheme to optimize the restoration result.

Analysis of the Limitations of the Algorithm
Figure 16 demonstrates an example of the recovery result of our method. It is obvious that our method can effectively remove the geometric distortion of the image, as shown in Figure 16; however, the removal of motion blur is less satisfactory (red marks). In future work, we need to study additional image deblurring methods to compensate for this shortcoming of our algorithm.

We conducted extensive tests on our method and other state-of-the-art methods [7,11,19,57] using the real-scene dataset from [54]. Experimental results show that our method not only effectively reduces the distortion in the image, but also significantly outperforms state-of-the-art methods in terms of computational efficiency. This means that our method can be applied to a wider range of practical scenarios, especially dynamic target detection.
Moreover, we found that the accuracy of the Helmholtz-Hodge Decomposition of the vector field is directly related to the sampling interval of the WAI (described in Section 3.5). Once the complexity of the image distortion increases, the fixed-interval structured light projection method obviously has difficulty guaranteeing the accuracy of image restoration. It is necessary to further study adaptive structured light projection methods in future work.

Figure 1 .
Figure 1. Example of Snell's window as the WAI is calm.


Figure 2 .
Figure 2. Contour distribution of irradiance on the image plane. (a) Contour distribution of normalized illuminance for downwelling radiation from the sky on the image plane. (b) Contour distribution of normalized illuminance (the red dashed line is the boundary of Snell's window in flat water; the blue solid line is the boundary of Snell's window in wavy water).


Figure 4 .
Figure 4. Geometry of the image-restoration model via structured light projection, comprising a structured light projection system S and a viewing camera V. Component S consists of a projector, diffuser plane, and a camera s.

Figure 5 .
Figure 5. Examples of WAI sampling via structured light projection. (a) A preset structured light pattern in the projector. (b) Distribution of WAI sampling points. (c) Control point distribution on the image plane, where the points correspond to WAI sampled points.

J. Mar. Sci. Eng. 2022, 10, x FOR PEER REVIEW

Figure 6 .
Figure 6. Processing of structured light images in the experiment. (a) Reference structured light image for a flat water surface. (b) Distorted structured light image in the presence of surface waves. (c) Feature extraction for the reference frame. (d) Feature extraction for the distorted frame.


Figure 7 .
Figure 7. Framework of the feature extraction method in this paper.


Algorithm 1 :
A Slope Search-based Corner Detection Algorithm. Input: distorted structured light image J ∈ IR^(M×N). Output: corner points matrix C ∈ IR^(m×n). [H, S, I] ⇐ HSI(rgb(J)).

Figure 8 .
Figure 8. Illustration of the distortion estimation of the scene image. δ(p_k) is the amount of displacement of the reflected ray corresponding to the incident light v̂_p^k on the diffuser plane; d_s(x_i^k) is the amount of displacement of the airborne viewing ray v̂_a^k corresponding to the back-projected ray v̂_w^k; δ(p_k) and d_s(x_i^k) are a pair of corresponding displacement vectors. w_k^str and w(x_i^k) are a pair of corresponding pixel displacement vectors.

Figure 9 .
Figure 9. Flow chart of image recovery algorithm based on HHD.


Figure 10 .
Figure 10. Experiment setup. (a) Our real laboratory scene. (b) Scheme of the experiment, comprising a projector, diffuser plane, a viewing camera, and a water tank.

Figure 11 .
Figure 11. A real scene image captured by the camera.


Figure 12 .
Figure 12. Processing image. (a) WAI sampling. (b) Reference structured light image as the WAI was flat. (c) Distorted structured light image. (d) Ground-truth image. (e) Distorted image. (f) Recovered image using our method.



Figure 15 .
Figure 15. An example of recovery results using the different corner-detection algorithms [58].


Figure 16 .
Figure 16. Recovery result of a sampled frame using the proposed method.


Table 1 .
Numerical comparison among sampled images, the LWM method, and our method.


Table 3 .
Comparison results of robustness of LWM method and our method.