Article

High-Performance Polarization Imaging Reconstruction in Scattering System under Natural Light Conditions with an Improved U-Net

Bing Lin, Xueqiang Fan, Dekui Li and Zhongyi Guo *
School of Computer and Information, Hefei University of Technology, Hefei 230009, China
* Author to whom correspondence should be addressed.
Photonics 2023, 10(2), 204; https://doi.org/10.3390/photonics10020204
Submission received: 18 January 2023 / Revised: 5 February 2023 / Accepted: 6 February 2023 / Published: 13 February 2023
(This article belongs to the Special Issue Advanced Polarimetry and Polarimetric Imaging)

Abstract

Imaging through scattering media faces great challenges: object information is seriously degraded by the scattering medium, and the final imaging quality is poor. In order to improve imaging quality, we propose exploiting the transmission characteristics of an object's polarization information to achieve imaging through scattering media under natural light with an improved U-Net. In this paper, we choose ground glass as the scattering medium and capture polarization images of targets through the scattering medium with a polarization camera. Experimental results show that the proposed model can reconstruct target information from highly damaged images and that, for objects of the same material, the trained network model generalizes well regardless of their structural shapes. Meanwhile, we have also investigated the effect of the distance between the target and the ground glass on the reconstruction performance: even when the mismatch distance between the training set and the testing sample expands to 1 cm, the modified U-Net can still efficaciously reconstruct the targets.

1. Introduction

Scattering media, such as the atmosphere [1,2,3], underwater environments [4,5], and biological tissues [6,7], are among the most important factors degrading imaging quality in practice. When light passes through a scattering medium, the ballistic light decays rapidly and target information is severely corrupted. In order to obtain imaging results that are as clear as possible, many typical imaging techniques have been proposed, including transmission matrices [8,9], wavefront shaping [10], optical memory effects [11,12], and ghost imaging [13,14,15,16]. However, these methods have certain limitations and do not work well in complex scattering situations. Moreover, they are costly in both time and equipment.
Following developments in polarization transmission theory in recent years [17,18], polarization technology now plays an important role in imaging targets through scattering media [19,20,21]. Physical models and image processing methods based on polarization information have been proposed to improve imaging clarity in scattering media [22,23]. In 1996, J.S. Tyo et al. proposed the polarization-difference (PD) method for imaging through scattering media [24]. In 2001, Y.Y. Schechner et al. added polarization effects to the atmospheric defogging model [25]. Liang et al. proposed that the estimated angle of polarization (AoP) can be used in defogging [26], which not only significantly improves the clarity of blurry images but can also be applied to dense fog environments [27]; they also fused visible and infrared polarized images to defog scenes and improve target recognition efficiency [28]. Hu et al. proposed a recovery algorithm based on polarization-difference imaging estimation that takes into account the previously overlooked polarized light radiated by the target itself and showed that it can improve the quality of the recovered image [29]. They also proposed a method based on corrected transmittance to improve the quality of underwater images [30]. Shao et al. developed an active polarization imaging technique based on wavelength selection [31], which exploits the wavelength dependence of light scattering in turbid underwater environments. In addition, Guo et al. obtained the Mueller matrix (MM) of a scattering medium based on the Monte Carlo (MC) algorithm [3] and proposed a polarization inversion method to study polarization transmission characteristics in layered dispersion systems [15,17,32,33,34], a layered atmosphere [19], and underwater environments [5].
At the same time, deep-learning (DL) techniques have been shown to be very effective for recovering damaged images: researchers have used DL to find a mapping between speckle images caused by scattering and the original targets [35]. A "one to all" convolutional neural network (CNN) can learn the characteristic information in speckle patterns obtained from the same scattering medium [36]. Li et al. established the "IDiffNet" network, built on a densely connected CNN architecture, to learn the characteristics of the scattering medium and demonstrated its generalization capability: the network still works even when the input data come from other scattering media [37]. Lyu et al. proposed a hybrid neural network for computational imaging in thick scattering media to reconstruct target information hidden behind the scattering medium [38]. Sun et al. reconstructed scattered speckle images with a DL algorithm in a low-light environment, where traditional imaging methods fail because the resulting speckle contains limited information and is affected by Poisson noise [39]. Zhu et al. used the autocorrelation of scattered speckles to learn the generalized statistical invariants of the scattering medium with DL networks, which improves the applicability of the network model [40]. The combination of polarization information and DL methods has also become an important direction for imaging reconstruction. Li et al. used Q information to train the network and showed that the resulting Model-Q has superior generalization and robustness in different respects [41]. Li et al. proposed the PDRDN to remove the underwater fog effect using polarization images at four angles (0°, 45°, 90°, 135°) [42]. In addition, polarization-based DL has been applied to target detection [43,44,45], underwater imaging [46], image denoising [47], and image fusion [48], achieving higher detection accuracy, significant noise suppression, effective removal of scattered light, and more detailed target information. However, data-driven network models depend heavily on the data, resulting in limited generalization capability, which is also a major difficulty in applying deep learning in practice. Training the network with stable target features improves its stability: even if the external environment changes within a certain range, the reconstruction results of the trained model are not affected. Therefore, in order to improve the stability of the model, we use the polarization information of the target, which carries stable target features during transmission, as the training set. These stable features allow the model to adapt to many environmental changes, thereby improving the generalization ability of the network.
Effective physical priors can prompt networks to find an optimal solution for different situations. The degree of polarization (DoP) is the ratio of the polarized component to the total light intensity and can be regarded as describing the strength of the polarization state. Thus, in this paper, we use the DoP to capture the polarization characteristics of the scattering system and then utilize the powerful fitting ability of DL to learn these characteristics, which alleviates the generalization problem for single-material objects in scattering scenes and reduces the dependence of deep learning on data. Experimental results demonstrate that the network model trained with DoP has better recovery performance, and targets that are not in the training set can still be recovered with high accuracy. What is more, the model still works when there is a mismatch in distance between the training set and the testing sample. Moreover, the influence of polarization characteristics also provides a basis for applying deep learning to polarization-based remote sensing. Finally, we present quantitative evaluation results with multiple indicators, which show the accuracy and robustness of the scheme and reflect the great potential of combining physical knowledge with deep-learning technology.

2. Materials and Methods

2.1. Physical Foundation

Light, whether polarized or unpolarized, can be represented by the Stokes vector S = (I, Q, U, V)^T [49]. The elements of the Stokes vector can be obtained from the intensities measured at four angles (0°, 45°, 90°, 135°):
$$ S = \begin{pmatrix} I \\ Q \\ U \\ V \end{pmatrix} = \begin{pmatrix} E_{0x}E_{0x}^{*} + E_{0y}E_{0y}^{*} \\ E_{0x}E_{0x}^{*} - E_{0y}E_{0y}^{*} \\ E_{0x}E_{0y}^{*} + E_{0y}E_{0x}^{*} \\ i\left(E_{0x}E_{0y}^{*} - E_{0y}E_{0x}^{*}\right) \end{pmatrix} = \begin{pmatrix} I_{0^{\circ}} + I_{90^{\circ}} \\ I_{0^{\circ}} - I_{90^{\circ}} \\ I_{45^{\circ}} - I_{135^{\circ}} \\ I_{R} - I_{L} \end{pmatrix} \tag{1} $$
where I is the total light intensity, Q is the difference between the horizontal and vertical components, U is the difference between the 45° and 135° components, and V represents the difference between the right- and left-handed circular components. The components of the Stokes vector satisfy:
$$ I^{2} \geq Q^{2} + U^{2} + V^{2} \tag{2} $$
The Stokes vector is expressed in terms of light intensity. An existing focal-plane polarization camera can directly acquire polarization images at four angles (0°, 45°, 90°, 135°). Therefore, we can easily obtain three of the elements (I, Q, U), but not the V component.
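As a concrete illustration of Equation (1), the sketch below computes the measurable Stokes elements from the four DoFP intensity images. It is a minimal example assuming the four angle images are already available as NumPy arrays; it is not the authors' processing code.

```python
import numpy as np

def stokes_from_dofp(i0, i45, i90, i135):
    """Measurable Stokes elements from the four DoFP intensity images
    (Equation (1)); the circular component V is not captured by a
    linear micro-polarizer array, so it is omitted."""
    I = i0 + i90        # total intensity
    Q = i0 - i90        # horizontal minus vertical component
    U = i45 - i135      # 45 deg minus 135 deg component
    return I, Q, U
```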
The polarization information of light can be degraded by scattering media during transmission, and this process can be expressed as:
$$ S_{out} = M\, S_{obj} \tag{3} $$
where M is the Mueller matrix (MM) of the scattering medium, S_out is the Stokes vector of the output light, and S_obj is the object's Stokes vector under the incident illumination. The aim is to reconstruct the target from S_obj; therefore, Equation (3) can be rewritten as follows:
$$ S_{obj} = M^{-1} S_{out} \tag{4} $$
where M^{-1} is the inverse of M, which contains the polarization characteristics of the scattering medium. For the scattering medium, the larger the optical thickness (OT), the more severely the target's polarization information is damaged; in that case the detector can only capture speckles containing limited target information. For the targets, when the difference in polarization characteristics between target and background is slight, the receiver cannot completely distinguish them.
Reconstructing the target can be regarded as the inverse problem of imaging through scattering media, and DL is an effective tool for solving such inverse problems. Inspired by this, we utilize the powerful fitting capacity of DL to obtain the mapping between speckles and the original images. In order to solve the inverse problem better, it is necessary to make full use of polarization physical priors. Specifically, the learning framework consists of a pre-physical step and a post-neural-network step based on the physical prior, as shown in Figure 1. Firstly, the pre-physical step is used to acquire the linear-polarization images, and the DoLP can be expressed as:
$$ \mathrm{DoLP} = \frac{\sqrt{Q^{2} + U^{2}}}{I} \tag{5} $$
As the ratio of the linear-polarization component to the total light intensity, the DoLP is a common polarization parameter and can be used to describe the polarization characteristics of scattering systems. Therefore, when we use DoLP images as the training set, redundant information is filtered out and more effective characteristics remain for training the network.
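A short companion sketch of Equation (5) is given below, again only as an illustration: the small epsilon term is our own guard against division by zero and is not part of the original formula.

```python
import numpy as np

def dolp(I, Q, U, eps=1e-8):
    """Degree of linear polarization (Equation (5)).  eps avoids division
    by zero in dark pixels; this guard is an assumption, not part of the paper."""
    return np.sqrt(Q ** 2 + U ** 2) / (I + eps)
```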
In addition, polarization information is very sensitive to the material and structure of the targets. Therefore, the generalization performance of the model is most closely related to the scattering medium and the polarization properties of the targets, and a model trained on targets of a given material should generalize broadly across targets of that material.

2.2. Measurement System

To build the dataset, we set up a polarization scattering imaging scene; the schematic of the experimental setup is shown in Figure 2. In order to capture more target information, we placed a polarizer in front of the LED light source to provide polarized illumination. The polarizer modulates the light to S = (1, 1, 0, 0)^T, which facilitates the implementation of the polarization algorithm [50]. The polarized light then illuminates the target and is reflected from it. Finally, the reflected light passes through the ground glass and is captured by the polarization camera (DoFP). In our experiments, the targets are a series of handwritten digits written in ink on white paper. We place the target at a certain distance behind the 5 mm-thick ground glass and define the distance between the target and the ground glass as d.
The polarization camera in our experiment is a commercial DoFP (division-of-focal-plane) polarization camera (LUCID, PHX055S-PC) with 2048 × 2448 pixels, whose pixel array is covered with a micro-polarizer array with four different polarization orientations of 0°, 45°, 90°, and 135°. The polarization images at the four angles can be used to calculate the DoLP image. Here, we captured 200 images with the DoFP camera and expanded the dataset to 1000 images by data augmentation such as rotation and cropping; of these, 900 are used for training and 100 for validation.
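The paper expands 200 captured images to 1000 by rotation, cropping, and similar operations but does not specify the parameters; the sketch below, using torchvision-style transforms with assumed values, only illustrates how such an expansion could look (the same transform would also have to be applied to the corresponding label image).

```python
from torchvision import transforms

# Assumed augmentation parameters; the paper only mentions rotation and cropping.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=10),
    transforms.RandomResizedCrop(size=256, scale=(0.8, 1.0)),
])

def expand_dataset(images, factor=5):
    """Expand each captured image into `factor` augmented variants
    (e.g., 200 captured DoP images -> 1000 training images)."""
    return [augment(img) for img in images for _ in range(factor)]
```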

2.3. Neural Network Design

With the development of DL technology, many excellent network structures have been built for imaging reconstruction. U-Net, a fully convolutional neural network structure, was originally proposed for semantic segmentation of medical images and has since shown superior performance in image reconstruction. Its principle is similar to that of an autoencoder: our goal is to extract and reconstruct the target information from the polarization speckles, and this process can be regarded as encoding and decoding. Moreover, the skip-connection structure in U-Net mitigates gradient explosion and vanishing gradients when training deeper networks, which is one of the reasons for its excellent performance. DenseNet, a network structure proposed in 2017 [51], is composed of multiple dense blocks in which each layer is connected to the subsequent layers by a concatenation operation. This makes the transmission of features and gradients more efficient and the training of the network easier [52].
In our scheme, we change the number of convolutional layers and channels of the original U-Net to form an improved U-Net-based DL network, as shown in Figure 1. We replace each single convolutional layer with a dense block for feature extraction, which improves the network performance. In the dense block, we use a 3 × 3 convolutional kernel with one pixel of padding so that the input and output feature maps have the same size. Each dense block is followed by batch normalization and an activation function. As the number of network layers and filters increases, a max-pooling layer with a 2 × 2 stride is used to halve the image width and height. The decoder acts as the inverse of the encoder, and the last layer of each decoder stage is an up-sampling layer. Throughout the network, the activation function is the rectified linear unit (ReLU), which enables fast and efficient training. Meanwhile, in order to reduce overfitting, we add a dropout layer. After that, images of 256 × 256 pixels are reconstructed by convolutional layers. In addition, we calculate the number of parameters and floating-point operations (FLOPs) to assess the complexity of the network, which are 53.86 M and 68,136.58 M, respectively. During training, the loss function reflects the model's ability to fit the data. Here, we use the MAE as the loss function:
$$ \mathrm{MAE} = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \left| X(i,j) - Y(i,j) \right| \tag{6} $$
where X(i, j) and Y(i, j) represent the values of pixel (i, j) in the reconstructed image and the ground truth, respectively, and M and N are the image dimensions.
We trained the model on a graphics processing unit (NVIDIA RTX 3080) using the PyTorch framework with Python 3.6 for 200 epochs. The optimizer is Adam (adaptive moment estimation) with a learning rate of 0.001.
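Since the exact layer and channel counts of the improved U-Net are given only in Figure 1, the following PyTorch sketch is not the authors' implementation; it only illustrates, with assumed sizes, the dense-block building block and the training configuration stated above (MAE/L1 loss, Adam, learning rate 0.001).

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Minimal dense block in the spirit of DenseNet [51]: each layer's output
    is concatenated with its input.  Growth rate and depth are assumptions."""
    def __init__(self, in_ch, growth=32, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=3, padding=1),  # 3x3 conv, size preserved
                nn.BatchNorm2d(growth),
                nn.ReLU(inplace=True),
            ))
            ch += growth

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)  # dense (concatenation) connection
        return x

# Stand-in for the full improved U-Net: one dense block plus a 1x1 output conv.
model = nn.Sequential(DenseBlock(in_ch=1), nn.Conv2d(1 + 3 * 32, 1, kernel_size=1))
criterion = nn.L1Loss()                                    # MAE loss of Equation (6)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

def train_one_epoch(loader):
    """One pass over (scattering DoP image, ground-truth) pairs."""
    for dop, label in loader:
        optimizer.zero_grad()
        loss = criterion(model(dop), label)
        loss.backward()
        optimizer.step()
```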

3. Results

The polarization characteristics of the target are not easily affected by the scattering medium during transmission, so a model trained with the target's polarization information is more stable. In this section, we therefore design different test experiments to verify the stability of the model trained with the target's polarization information.

3.1. Imaging Under Natural Light and Dataset Preparation

Unlike the speckle images obtained with laser illumination, the images obtained under natural light do not show obvious bright and dark speckle patterns; the whole image appears hazy. Moreover, the greater the distance between the ground glass and the target, the more blurred the outline of the target becomes. At the same time, the spectral width of the light reduces the correlation length of the scattered light and the field of view (FoV) of the imaging system in a real-world experiment [53,54]. The experiment is set up without ambient light, and images are acquired only by illuminating the targets with a white-light LED. The results for d = 4.0 cm are shown in Figure 3.
With increasing d, the light energy reaching the ground glass decreases; therefore, the target information passing through the scattering medium also decreases. We collected data at a distance of d = 4 cm, where the target profile was completely obscured by the noise from the ground glass and even the calculated DoP image could not distinguish the target from the background. Under this condition, we chose the DoP images of the targets as the training set and used images of the targets without the scattering medium as the corresponding labels. The scattering images and labels have the same size of 256 × 256. After collecting and classifying the data, the proposed method can be used for training and testing.
In the case of d = 4 cm, we prepared 200 scattering images as the training data, which are the DoP imaging results of structurally different targets (10 handwritten digits: 0–9) transmitted through the ground glass. The original images without scattering served as the respective labels. We then expanded the 200 scattering DoP images to 1000 DoP images, of which 900 served as the training set and 100 as the validation set; the trained network is referred to as Model-DoP.
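As an illustration of how these image/label pairs could be organized for training, a minimal PyTorch Dataset sketch is given below; the class name and loading conventions are ours, not the authors'.

```python
import torch
from torch.utils.data import Dataset, random_split

class DoPScatteringDataset(Dataset):
    """Pairs each 256 x 256 scattering DoP image with its unscattered
    ground-truth label; arrays are assumed to be pre-loaded."""
    def __init__(self, dop_images, labels):
        assert len(dop_images) == len(labels)
        self.dop_images = dop_images
        self.labels = labels

    def __len__(self):
        return len(self.dop_images)

    def __getitem__(self, idx):
        x = torch.as_tensor(self.dop_images[idx]).unsqueeze(0).float()
        y = torch.as_tensor(self.labels[idx]).unsqueeze(0).float()
        return x, y

# 900 images for training and 100 for validation, as in the text:
# train_set, val_set = random_split(DoPScatteringDataset(dops, labels), [900, 100])
```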

3.2. The Results of Reconstructing Untrained Structural Targets with DoP

In this section, we test targets with structures that were not included in the training set. If the trained Model-DoP can reconstruct these untrained targets, it demonstrates that the proposed method is stable with respect to target structure. As shown in Figure 4a, the targets are not samples used to train the Model-DoP, and after transmission through the ground glass the corresponding scattering DoP images cannot be distinguished, as depicted in Figure 4b. However, they can be reconstructed well by the trained Model-DoP (as shown in Figure 4c), in which the edges of the targets can be identified accurately. These results show that scattering DoP images used as the training set can effectively drive the network to learn the polarization characteristics of different targets, which helps achieve target reconstruction.
In order to further verify the generalization of the Model-DoP, we changed the structure of the targets to test the model trained on digit targets. First, we replaced the digit targets with English-letter targets while keeping the background unchanged. The reconstructed results are shown in Figure 5: Figure 5b shows the scattering DoP images, and Figure 5c the corresponding reconstructions. Moreover, we also used graphic targets to further test the generalization of the Model-DoP on more complex and diverse shapes. The ground truth, the scattering DoP images, and the reconstructed images are shown in Figure 6. Despite the limited amount of training data, the Model-DoP can reconstruct the untrained targets, including both English letters and graphics, which indicates that the Model-DoP learns not only the mapping relationship between pixels but also the polarization characteristics of different materials. Therefore, targets with different shapes can also be reconstructed as long as they are made of the same material.
The Structural Similarity Index (SSIM) is a common indicator for evaluating image quality and measuring the similarity of images [55]. Here, we also use the SSIM to evaluate the quality of the reconstructed targets and to quantitatively describe the reconstruction results and the performance of our network. The SSIM consists of three parts: brightness, contrast, and structure. Given the original image X and the predicted image Y, their SSIM can be calculated as follows:
$$ \mathrm{SSIM}(X, Y) = \frac{(2\mu_X \mu_Y + C_1)(2\sigma_{XY} + C_2)}{(\mu_X^2 + \mu_Y^2 + C_1)(\sigma_X^2 + \sigma_Y^2 + C_2)} \tag{7} $$
where μ_X is the mean of X, μ_Y is the mean of Y, σ_X^2 is the variance of X, σ_Y^2 is the variance of Y, σ_XY is the covariance of X and Y, and C1 and C2 are small positive constants used to avoid a zero denominator. The SSIM ranges from 0 to 1, and the higher the SSIM value, the more similar the images.
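For reference, a simplified global implementation of Equation (7) is sketched below; published SSIM implementations normally use a sliding Gaussian window, and the constant values chosen here (for images normalized to [0, 1]) are assumptions.

```python
import numpy as np

def ssim_global(X, Y, C1=1e-4, C2=9e-4):
    """Global (single-window) SSIM of Equation (7) for two images
    normalized to [0, 1]; C1 and C2 are assumed constants."""
    mu_x, mu_y = X.mean(), Y.mean()
    var_x, var_y = X.var(), Y.var()                # sigma_X^2, sigma_Y^2
    cov_xy = ((X - mu_x) * (Y - mu_y)).mean()      # sigma_XY
    return ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))
```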
As Table 1 shows, the SSIM values for the three target groups of different complexity and diversity do not differ greatly. Although the SSIM decreases slightly as complexity and diversity increase, the overall fluctuation is small. Even the graphic targets, whose relevance to the targets in the training set is weakest, still reach more than 70% similarity.

3.3. The Performance of Model-DoP on the Different Polarization Characteristics

To further investigate the sensitivity of the Model-DoP to the polarization properties of the target, we tested targets made of materials that had not been trained. First, the target material was set to steel, with other conditions unchanged and the background still being paper; these targets are therefore called "Steel–Paper" targets, as depicted in Figure 7a. Then, the scattering DoP images under natural light were obtained, as shown in Figure 7b, and fed into the model trained on the original "Ink–Paper" targets. The reconstruction results are shown in Figure 7c. Owing to the high reflectivity and weak depolarization of steel, the images obtained through the scattering medium retain a large amount of target information. As seen from Table 2, the polarization characteristics of steel and paper are also quite different [19,56]. Therefore, the Model-DoP can still identify the outline of the target.
Figure 7. The test results of the Model-DoP for untrained target materials. (a) Ground truth with target–background as Steel–Paper; (b) Scattering DoP images; (c) Images reconstructed by the Model-DoP.
In addition, the targets can also be set as "Ink–Wood" targets, as shown in Figure 8a, in which the background material is wood. The reconstruction results are shown in Figure 8c. As Table 2 indicates, because the corresponding Mueller matrix elements of wood and paper are similar, the Model-DoP trained on "Ink–Paper" can still distinguish the target from the background. The difference between wood and paper affects the reconstruction of the "Ink–Wood" targets, but it does not globally prevent the identification and recovery of the target. Although the model does not recover letter patterns well, this problem could be alleviated by enriching the target structures and materials in the training sets.
Finally, the targets were set as "Steel–Wood" targets, as shown in Figure 9a, in which the target and background materials are steel and wood, respectively. The reconstruction results are shown in Figure 9c. The wood background can be distinguished, but its texture cannot be restored. The steel target cannot be recovered with complete structural information, but the difference in polarization characteristics at its edges can still be captured. From Table 2, it can be seen that the difference between the corresponding elements of ink and steel is very large, which is why the target cannot be recovered well. When a material has not been seen by the DL network during training, the reconstruction performance decreases, and the degree of degradation is related to the difference between the polarization properties of the test material and the training material. Therefore, based on the sensitivity of the DoP to the polarization characteristics of the target, a model trained on targets of one material has a certain cross-material generalization for targets with similar polarization characteristics. It should be noted that if more materials were included in training, the reconstruction performance would be enhanced for targets and backgrounds of different materials.

3.4. The Performance of Model-DoP on the Generalization of the Imaging Distance

Different materials have different polarization characteristics, which can be described by a 4 × 4 matrix called the MM. Once the targets and the scattering medium in a system are fixed, their MMs do not change. Therefore, the trained Model-DoP should still be able to reconstruct targets at different imaging distances (i.e., when the target moves within a certain range). We therefore explored the influence of target location by changing the imaging distance between the ground glass and the targets. We captured scattering DoP images at distances of d = 3.5 cm, 4.0 cm, 4.25 cm, 4.5 cm, 5.0 cm, and 5.5 cm, and reconstructed the target images with the Model-DoP trained at an imaging distance of d = 4 cm. The results are shown in Figure 10.
It can be seen that the Model-DoP can reconstruct targets at different imaging distances. When d = 3.5 cm, the images provide enough features for the Model-DoP, and the good retention of the target's polarization information strongly improves the imaging quality. When d is longer than 4.0 cm, the Model-DoP still retains a certain generalization ability because it can still obtain part of the targets' polarization characteristics, so targets hidden behind the noise can still be reconstructed up to d = 5.0 cm. However, at an imaging distance of d = 5.5 cm, the model can no longer reconstruct the target details, though it can still distinguish the target from the background.
The Model-DoP trained with polarization information is less affected by the scattering medium because the DoP carries stable target features. Therefore, when the target moves within a range, the Model-DoP can still reconstruct it, showing that the proposed method can be adapted to imaging over varying distances. Table 3 lists the SSIM of the recovered images as the imaging distance increases; the SSIM gradually decreases, but the magnitude of the decrease is relatively small, which verifies that the DoP retains the polarization information transmitted through the scattering medium and improves the stability of the network.

3.5. Comparison with Model-I, Model-IX and Model-Q

The DoP can filter out redundant information to a certain extent and focus on the polarization characteristics of the targets, so a model with both accuracy and stability can be obtained from a small dataset. In order to show that DoP images are better training data than I, IX, and Q images, we trained the network separately to obtain Model-I, Model-IX, Model-Q, and Model-DoP. Unlike the previous experiments, we needed to exclude the compensatory effect of the emitted polarized light on the polarization images so that the differences among the models would be more obvious; we therefore removed the polarizer and acquired a new set of data directly under natural light. The comparison among Model-I, Model-IX, Model-Q, and Model-DoP is shown in Figure 11.
From Figure 11b, it can be seen that the targets and backgrounds obtained from Model-IX can be distinguished; however, the contrast of the recovered images is low and the target structure is distorted, especially for letter and graphic targets. The IX component carries the polarization information of the targets, but it also contains too much redundant information, making the useful polarization information less prominent. Therefore, with the same amount of data, the network cannot efficiently capture the target polarization information for model building.
As shown in Figure 11c, the contrast of the Model-I results is better than that of Model-IX, because the intensity I is obtained by adding IX and IY and therefore carries more information than IX alone. However, the background of the Model-I results contains some noise, especially at the edges, precisely because a network trained on intensity cannot accurately distinguish different polarization characteristics.
Because the Q component is the difference between IX and IY, it eliminates some of the scattering effect. Therefore, in Figure 11d, the background of the Model-Q results has less noise than that of Model-I. However, part of the target may not be recovered completely when the gap between the test target and the training target is large, which may be because the Q component cancels out some target information when the polarization characteristics of the target are weak. Therefore, Model-Q has certain restrictions on the material. In Figure 11e, the Model-DoP results not only recover the target but also accurately distinguish the regions with different polarization characteristics, even though the reflection is not complete. Meanwhile, there is no need to consider offsets of the target's polarization information in the DoP. The quantitative comparison of the four models is shown in Table 4, which further confirms the above observations.

4. Conclusions

In this article, we combine polarization theory and DL technology to propose a novel method for reconstructing targets. The neural network trained with the DoP can effectively learn the polarization characteristics of different targets and demonstrates a certain generalization capability. Moreover, using the DoP as the polarization information stream yields better reconstruction results, providing more target details. Our explorations demonstrate that polarization information combined with the improved U-Net shows promise in solving the problems of information extraction and target identification in strongly scattering environments. What is more, targets at different imaging distances can still be reconstructed. It should be further noted that if a coherent laser were used as the illumination source, even higher performance could be expected, because the speckle effect and the characteristics of the scattered fields would be enhanced. In future work to improve the performance of polarization scattering imaging, we will focus on the following points: (i) extracting usable polarization information from multi-material targets for target reconstruction in more scenarios; (ii) since more than one physical quantity can express the polarization characteristics of a target, and different quantities reflect different aspects of the target, using multi-dimensional polarization information to improve the expression of target features and thereby the performance of target reconstruction.

Author Contributions

Conceptualization, B.L. and Z.G.; methodology, B.L.; software, B.L.; validation, B.L., X.F., D.L. and Z.G.; formal analysis, B.L., X.F. and D.L.; investigation, B.L., X.F. and D.L.; resources, B.L.; writing—original draft preparation, B.L.; writing—review and editing, Z.G.; supervision, Z.G.; funding acquisition, Z.G. All authors have read and agreed to the published version of the manuscript.

Funding

National Natural Science Foundation of China (61775050).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. El-Wakeel, A.S.; Mohammed, N.A.; Aly, M.H. Free space optical communications system performance under atmospheric scattering and turbulence for 850 and 1550 nm operation. Appl. Opt. 2016, 55, 7276–7286. [Google Scholar] [CrossRef] [PubMed]
  2. Xu, Q.; Guo, Z.; Tao, Q.; Jiao, W.; Qu, S.; Gao, J. Multi-spectral characteristics of polarization retrieve in various atmospheric conditions. Opt. Commun. 2015, 339, 167–170. [Google Scholar] [CrossRef]
  3. Hu, T.W.; Shen, F.; Wang, K.P.; Guo, K.; Liu, X.; Wang, F.; Peng, Z.Y.; Cui, Y.M.; Sun, R.; Ding, Z.Z.; et al. Broad-band transmission characteristics of Polarizations in foggy environments. Atmosphere 2019, 10, 342. [Google Scholar] [CrossRef]
  4. Purohit, K.; Mandal, S.; Rajagoplan, A.N. Multilevel weighted enhancement for underwater image dehazing. J. Opt. Soc. Am. A 2019, 36, 1098–1108. [Google Scholar] [CrossRef] [PubMed]
  5. Xu, Q.; Guo, Z.; Tao, Q.; Jiao, W.; Wang, X.; Qu, S.; Gao, J. Transmitting characteristics of the polarization information under seawater. Appl. Opt. 2015, 54, 6584–6588. [Google Scholar] [CrossRef] [PubMed]
  6. Shen, F.; Zhang, B.; Guo, K.; Guo, Z. The Depolarization Performances of the Polarized Light in Different Scattering Media Systems. IEEE Photonics J. 2018, 10, 3900212. [Google Scholar] [CrossRef]
  7. Horstmeyer, R.; Ruan, H.; Yang, C. Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue. Nat. Photonics 2015, 9, 563–571. [Google Scholar] [CrossRef] [PubMed]
  8. Yoon, J.; Lee, K.; Park, J.C.; Park, Y.K. Measuring optical transmission matrices by wavefront shaping. Opt. Express 2015, 23, 10158–10167. [Google Scholar] [CrossRef] [PubMed]
  9. Shen, F.; Wang, K.P.; Tao, Q.Q.; Xu, X.; Wu, R.W.; Guo, K.; Zhou, H.P.; Yin, Z.P.; Guo, Z.Y. Polarization imaging performances based on different retrieving Mueller matrixes. Optik 2018, 153, 50–57. [Google Scholar] [CrossRef]
  10. Osnabrugge, G.; Amitonova, L.V.; Vellekoop, I.M. Blind focusing through strongly scattering media using wavefront shaping with nonlinear feedback. Opt. Express 2019, 27, 11673–11688. [Google Scholar] [CrossRef] [PubMed]
  11. Osnabrugge, G.; Horstmeyer, R.; Papadopoulos, I.N.; Judkewitz, B.; Vellekoop, I.M. Generalized optical memory effect. Optica 2017, 4, 886–892. [Google Scholar] [CrossRef]
  12. Haskel, M.; Stern, A. Modeling optical memory effects with phase screens. Opt. Express 2018, 26, 29231–29243. [Google Scholar] [CrossRef]
  13. Bromberg, Y.; Katz, O.; Silberberg, Y. Ghost imaging with a single detector. Phys. Rev. A 2009, 79, 1050–2947. [Google Scholar] [CrossRef]
  14. Ferri, F.; Magatti, D.; Lugiato, L.A.; Gatti, A. Differential Ghost Imaging. Phys. Rev. Lett. 2010, 104, 253603. [Google Scholar] [CrossRef] [PubMed]
  15. Li, D.; Xu, C.; Yan, L.; Guo, Z. High-Performance Scanning-mode Polarization based Computational Ghost Imaging (SPCGI). Opt. Express 2022, 30, 17909–17921. [Google Scholar] [CrossRef] [PubMed]
  16. Xu, C.; Li, D.; Guo, K.; Yin, Z.; Guo, Z. Computation ghost imaging with key-patterns for image encryption. Opt. Commun. 2023, in press. [Google Scholar]
  17. Li, D.; Xu, C.; Zhang, M.; Wang, X.; Guo, K.; Sun, Y.; Gao, J.; Guo, Z. Measuring glucose concentration in a solution based on the indices of polarimetric purity. Biomed. Opt. Express 2021, 12, 2447–2459. [Google Scholar] [CrossRef] [PubMed]
  18. Shen, F.; Zhang, M.; Guo, K.; Zhou, H.P.; Peng, Z.Y.; Cui, Y.M.; Wang, F.; Gao, J.; Guo, Z.Y. The Depolarization Performances of Scattering Systems Based on Indices of Polarimetric Purity. Opt. Express 2019, 27, 28337–28349. [Google Scholar] [CrossRef]
  19. Wang, X.Y.; Hu, T.W.; Li, D.K.; Guo, K.; Gao, J.; Guo, Z.Y. Performances of polarization-retrieve imaging stratified dispersion media. Remote Sens. 2020, 12, 2895. [Google Scholar] [CrossRef]
  20. Tao, Q.Q.; Sun, Y.X.; Shen, F.; Xu, Q.; Gao, J.; Guo, Z.Y. Active imaging with the aids of polarization retrieve in turbid media system. Opt. Commun. 2016, 359, 405–410. [Google Scholar] [CrossRef]
  21. Wang, P.; Li, D.; Wang, X.; Guo, K.; Sun, Y.; Gao, J.; Guo, Z. Analyzing polarization transmission characteristics in foggy environments based on the indices of polarimetric purity (Is). IEEE Access 2020, 8, 227703–227709. [Google Scholar] [CrossRef]
  22. Liu, S.; Chen, J.; Xun, Y.; Zhao, X.; Chang, C.H. A New Polarization Image Demosaicking Algorithm by Exploiting Inter-Channel Correlations with Guided Filtering. IEEE Trans. Image Process. 2020, 29, 7076–7089. [Google Scholar] [CrossRef]
  23. Li, N.; Teurnier, B.L.; Boffety, M.; Goudail, F.; Zhao, Y.; Pan, Q. No-Reference Physics-Based Quality Assessment of Polarization Images and Its Application to Demosaicking. IEEE Trans. Image Process. 2021, 30, 8983–8998. [Google Scholar] [CrossRef] [PubMed]
  24. Tyo, J.S.; Rowe, M.P.; Pugh, E.N. Target detection in optically scattering media by polarization-difference imaging. Appl. Opt. 1996, 35, 1855–1870. [Google Scholar] [CrossRef]
  25. Schechner, Y.Y.; Narasimhan, S.G.; Nayar, S.K. Instant dehazing of images using polarization. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), Kauai, HI, USA, 8–14 December 2001. [Google Scholar] [CrossRef]
  26. Liang, J.; Ren, L.Y.; Ju, H.J. Visibility enhancement of hazy images based on a universal polarimetric imaging method. J. Appl. Phys. 2014, 116, 173107. [Google Scholar] [CrossRef]
  27. Liang, J.; Ren, L.; Ju, H. Polarimetric dehazing method for dense haze removal based on distribution analysis of angle of polarization. Opt. Express 2015, 23, 26146–26157. [Google Scholar] [CrossRef]
  28. Liang, J.; Zhang, W.; Ren, L. Polarimetric dehazing method for visibility improvement based on visible and infrared image fusion. Appl. Opt. 2016, 55, 8221–8226. [Google Scholar] [CrossRef]
  29. Huang, B.; Liu, T.; Hu, H. Underwater image recovery considering polarization effects of objects. Opt. Express 2016, 24, 9826–9838. [Google Scholar] [CrossRef] [PubMed]
  30. Hu, H.; Zhao, L.; Huang, B.; Li, X.; Wang, H.; Liu, T. Enhancing Visibility of Polarimetric Underwater Image by Transmittance Correction. IEEE Photonics J. 2017, 9, 6802310. [Google Scholar] [CrossRef]
  31. Liu, F.; Han, P.L.; Wei, Y.; Yang, K.; Huang, S.Z.; Li, X.; Zhang, G.; Bai, L.; Shao, X.P. Deeply seeing through highly turbid water by active polarization imaging. Opt. Lett. 2018, 43, 4903–4906. [Google Scholar] [CrossRef]
  32. Xu, Q.; Guo, Z.; Tao, Q.; Jiao, W.; Qu, S.; Gao, J. A novel method of retrieving the polarization qubits after being transmitted in turbid media. J. Opt. 2015, 17, 035606. [Google Scholar] [CrossRef]
  33. Wang, C.; Gao, J.; Yao, T.; Wang, L.; Sun, Y.; Xie, Z.; Guo, Z. Acquiring reflective polarization from arbitrary multi-layer surface based on Monte Carlo simulation. Opt. Express 2016, 24, 9397–9411. [Google Scholar] [CrossRef] [PubMed]
  34. Tao, Q.Q.; Guo, Z.Y.; Xu, Q.; Jiao, W.Y.; Wang, X.S.; Qu, S.L.; Gao, J. Retrieving the polarization information for satellite-to-ground light communication. J. Opt. 2015, 17, 085701. [Google Scholar] [CrossRef]
  35. Jin, K.H.; McCann, M.T.; Froustey, E.; Unser, M. Deep Convolutional Neural Network for Inverse Problems in Imaging. IEEE Trans. Image Process. 2017, 26, 4509–4522. [Google Scholar] [CrossRef]
  36. Li, Y.; Xue, Y.; Tian, L. Deep speckle correlation: A deep learning approach toward scalable imaging through scattering media. Optica 2018, 5, 1181–1190. [Google Scholar] [CrossRef]
  37. Li, S.; Deng, M.; Lee, J.; Sinha, A.; Barbastathis, G. Imaging through glass diffusers using densely connected convolutional networks. Optica 2018, 5, 803–813. [Google Scholar] [CrossRef]
  38. Lyu, M.; Wang, H.; Li, G.W.; Zheng, S.S.; Situ, G.H. Learning-based lensless imaging through optically thick scattering media. Adv. Photon. 2019, 1, 036002. [Google Scholar] [CrossRef]
  39. Sun, L.; Shi, J.H.; Wu, X.Y.; Sun, Y.W.; Zeng, G.H. Photon-limited imaging through scattering medium based on deep learning. Opt. Express 2019, 27, 33120–33134. [Google Scholar] [CrossRef]
  40. Zhu, S.; Guo, E.L.; Gu, J.; Bai, L.F.; Han, J. Imaging through unknown scattering media based on physics-informed learning. Photon. Res. 2021, 9, B210–B219. [Google Scholar] [CrossRef]
  41. Li, D.; Lin, B.; Wang, X.; Guo, Z. High-Performance Polarization Remote Sensing with the Modified U-Net Based Deep-Learning Network. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5621110. [Google Scholar] [CrossRef]
  42. Li, X.; Li, H.; Lin, Y.; Guo, J.; Yang, J.; Yue, H.; Li, K.; Li, C.; Cheng, Z.; Hu, H.; et al. Learning-based denoising for polarimetric images. Opt. Express 2020, 28, 16309–16321. [Google Scholar] [CrossRef] [PubMed]
  43. Sun, R.; Sun, X.; Chen, F.; Song, Q.; Pan, H. Polarimetric imaging detection using a convolutional neural network with three-dimensional and two-dimensional convolutional layers. Appl. Opt. 2020, 59, 151–155. [Google Scholar] [CrossRef]
  44. Wang, Y.; Liu, Q.; Zu, H.; Liu, X.; Xie, R.C.; Wang, F. An end-to-end CNN framework for polarimetric vision tasks based on polarization-parameter-constructing network. arXiv 2020, arXiv:2004.08740. [Google Scholar]
  45. Lin, B.; Fan, X.; Guo, Z. Self-attention module in multi-scale improved U-net (SAM-MIU-net) motivating high-performance polarization scattering imaging. Optics Express 2023, 31, 3046–3058. [Google Scholar] [CrossRef]
  46. Hu, H.F.; Zhang, Y.B.; Li, X.B.; Lin, Y.; Cheng, Z.Z.; Liu, T. Polarimetric underwater image recovery via deep learning. Opt. Lasers Eng. 2020, 133, 106152. [Google Scholar] [CrossRef]
  47. Wen, S.J.; Zheng, Y.Q.; Lu, F.; Zhao, Q.P. Convolutional demosaicing network for joint chromatic and polarimetric imagery. Opt. Lett. 2019, 44, 5646–5649. [Google Scholar] [CrossRef] [PubMed]
  48. Zhang, J.; Shao, J.; Chen, J.; Yang, D.; Liang, B.; Liang, R. PFNet: An unsupervised deep network for polarization image fusion. Opt. Lett. 2020, 45, 1507–1510. [Google Scholar] [CrossRef]
  49. Stokes, S.G.G. Mathematical and Physical Papers; Cambridge University Press: Cambridge, UK, 1901. [Google Scholar]
  50. Treibitz, T.; Schechner, Y.Y. Active Polarization Descattering. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 31, 385–399. [Google Scholar] [CrossRef]
  51. Huang, G.; Liu, Z.; Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 2261–2269. [Google Scholar] [CrossRef]
  52. Orhan, A.E.; Pitkow, X. Skip connections eliminate singularities. arXiv 2017, arXiv:1701.09175. [Google Scholar] [CrossRef]
  53. Katz, O.; Heidmann, P.; Fink, M.; Gigan, S. Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations. Nat. Photon. 2014, 8, 784–790. [Google Scholar] [CrossRef] [Green Version]
  54. Zheng, S.S.; Wang, H.; Dong, S.; Wang, F.; Situ, G.H. Incoherent imaging through highly nonstatic and optically thick turbid media based on neural network. Photon. Res. 2021, 9, B220–B228. [Google Scholar] [CrossRef]
  55. Wang, Z.; Simoncelli, E.P.; Bovik, A.C. Multiscale structural similarity for image quality assessment. In Proceedings of the Thirty-Seventh Asilomar Conference on Signals, Systems & Computers 2003, Pacific Grove, CA, USA, 9–12 November 2003; pp. 1398–1402. [Google Scholar] [CrossRef]
  56. Zhao, Y.Z.; Li, Y.H.; He, W.J.; Liu, Y.; Fu, Y.G. Polarization scattering imaging experiment based on Mueller matrix. Opt. Commun. 2021, 490, 126892. [Google Scholar] [CrossRef]
Figure 1. Learning framework.
Figure 2. Schematic of the experimental setup.
Figure 3. (a) Original target; (b) Imaging by IX; (c) Imaging by I; (d) Imaging by DoP.
Figure 4. The test results of untrained targets: (a) Ground truth; (b) Scattering DoP images; (c) Images reconstructed by the Model-DoP.
Figure 5. The reconstructed results for untrained alphabetical targets: (a) Ground truth; (b) Scattering DoP images; (c) Images reconstructed by the Model-DoP.
Figure 6. The reconstructed results for untrained graphic targets: (a) Ground truth; (b) Scattering DoP images; (c) Images reconstructed by the Model-DoP.
Figure 8. The test results of model-DoP for the untrained background materials. (a) Ground truth with target-background as Ink–Wood; (b) Scattering DoP images; (c) Images reconstructed by the Model-DoP.
Figure 9. The test results of model-DoP for the untrained target and background materials. (a) Ground truth with target-background as Steel–Wood; (b) Scattering DoP images; (c) Reconstructed images by the Model-DoP.
Figure 10. The reconstructed results at different imaging distances by the Model-DoP trained in the imaging distance of d = 4 cm. (a) Ground truth; (b) d = 3.5 cm; (c) d = 4.0 cm; (d) d = 4.25 cm; (e) d = 4.5 cm; (f) d = 5.0 cm; (g) d = 5.5 cm.
Figure 11. The comparison of Model-I, Model-IX, Model-Q and Model-DoP. (a) Ground truth; (b) Model-IX; (c) Model-I; (d) Model-Q; (e) Model-DoP.
Table 1. The average SSIM of the target reconstructions for Figures 4–6.
Targets    Digital    Alphabetical    Graphic
SSIM       0.7850     0.7701          0.7688
Table 2. Mueller matrix elements of different materials [19,56].
Material    m22      m33
Paper       0.265    0.247
Wood        0.215    0.16
Ink         0.892    0.921
Steel       0.980    0.977
Table 3. The average SSIM for the reconstructed targets in Figure 10.
d (cm)    3.5       4.0       4.25      4.5       5.0       5.5
SSIM      0.7557    0.7782    0.7443    0.7339    0.7205    0.6321
Table 4. The average SSIM for the reconstructed targets in Figure 11.
Model    Model-IX    Model-I    Model-Q    Model-DoP
SSIM     0.7498      0.7501     0.7541     0.7746