Polarized Intensity Ratio Constraint Demosaicing for the Division of a Focal-Plane Polarimetric Image

Abstract: Polarization is an independent dimension of light-wave information with broad application prospects in machine vision and remote sensing tasks. Polarization imaging using a division-of-focal-plane (DoFP) polarimetric sensor can meet lightweight and real-time application requirements. Similar to Bayer filter-based color imaging, demosaicing is a basic and important processing step in DoFP polarization imaging. Due to the differences in the physical properties of polarization and the color of light waves, widely studied color demosaicing methods cannot be directly applied to polarization demosaicing. In this work, we propose a polarized intensity ratio constraint demosaicing model that efficiently accounts for the characteristics of polarization detection. First, we discuss the special constraint relationship between the polarization channels. It can be simply described as follows: for a beam of light, the sum of the intensities detected by any two perpendicular ideal analyzers should be equal to the total light intensity. Then, based on this constraint relationship and drawing on the concept of guided filtering, a new polarization demosaicing method is developed. A method to directly use raw images captured by the DoFP detector as the ground truth for comparison experiments is then constructed, which aids in the convenient collection of experimental data across extensive image scenarios. Results of both qualitative and quantitative experiments illustrate that our method is an effective and practical way to faithfully recover the full polarization information of each pixel from a single mosaic input image.


Background
Polarization, together with intensity, frequency, and phase, constitutes the basic properties of light from the perspective of waves. Both intensity, which corresponds to brightness, and frequency, which corresponds to color or spectrum, have been widely researched and applied in the fields of vision and remote sensing. The research and application of polarization imaging and processing have developed gradually in recent years. As a new information dimension, polarization has a significant role in computer vision and remote sensing tasks, providing an essential function in some respects. It has been widely used in tasks such as object detection [1,2], image haze removal [3,4], underwater image enhancement [5], and 3D reconstruction [6,7].
The polarization imaging methods mainly include division of time (DoT), division of amplitude (DoAM), division of aperture (DoAP), and division of focal plane (DoFP) [8]. However, DoT is not capable of real-time imaging, whereas DoAM and DoAP suffer from complex and heavy structures. The DoFP polarization imaging sensor is composed of a micropolarization array (MPA) oriented at 0°, 45°, 90°, and 135° (shown in Figure 1, left). Thus, it can capture linear polarization information in one shot, and its structure is simple. However, the DoFP sensor gains polarization information at the expense of spatial resolution, which is similar to color detectors based on the Bayer filter. The raw image obtained by a DoFP or Bayer filter-based sensor is called a mosaic image (see Figure 1, right). The goal of demosaicing is to restore a full-size multi-channel image from a raw mosaic image. Bayer filter-based color image demosaicing has been widely studied and applied in recent decades, and it is an important component of color image processing. However, research on polarized image demosaicing is still relatively scarce. Although polarized image demosaicing can borrow from color image demosaicing, the two are not identical because color and polarization information obey different constraints between adjacent pixels. Comprehensive color science research provides a priori information for color image demosaicing, such as the color difference model [9,10]. Polarized image demosaicing should likewise fully utilize the inherent physical prior knowledge of polarization detection, which is not fully considered in traditional interpolation methods and explains why they do not perform well on polarized mosaic images.

Related Work
Demosaicing originated from color image processing. Single-sensor color imaging technology with a color filter array (CFA) is widely used in the digital camera industry. The most popular and widely used CFA is the Bayer CFA, which was released in 1976 [11]. Polarization filter array (PFA) technology was patented in 1995 [12], but most of the practical implementations and technological advances have been made since 2009 [13]. In recent years, some PFA cameras have appeared on the market, such as 4D Technology's PolarCam device [14] and Sony's IMX250MZR polarization-dedicated sensor (Tokyo, Japan) [15]. Although research on color image demosaicing has a long history, research into polarization demosaicing has only recently begun. In this section, we provide an overview of both color image and polarized image demosaicing.

Color Image Demosaicing
Color image demosaicing methods can be divided into spatial interpolation, frequency-based, and data-driven methods. Data-driven methods include sparse representation and neural networks. Spatial interpolation-based demosaicing interpolates along multiple directions to efficiently utilize both interchannel and intrachannel correlations [10,16-18]. Frequency selection-based demosaicing takes advantage of the spectral characteristics of raw images [19-21]. Sparse representation-based demosaicing treats demosaicing as an inverse problem and exploits a sparsity prior by decomposing each image patch into a sparse representation over a dictionary [22,23]. Neural network methods use large amounts of data to train a network to estimate the missing pixels [24]. Past experiments and studies have shown that spatial domain methods are more advantageous than frequency domain methods [9]. Although data-driven methods perform well, their recovery quality depends heavily on the relevance of the training and validation data, making it difficult for them to adapt to complex and changeable real scenes.

Polarized Image Demosaicing
Polarized image demosaicing is different from color image demosaicing in two main ways. (1) CFA has three channels, so there are two G pixels in every 2 × 2 pixel block. That is, the sampling rate of G is twice that of R and B. In contrast, PFA has four channels, thus the sampling rate of each channel is the same. (2) The signal between CFA channels is constrained by spectral information, and the signal between PFA channels is constrained by polarization information. The first difference means that the common method of first restoring the double-sampled G channel in color image demosaicing is not applicable to polarized images. The second difference means that the color difference model [9,10] of color images does not apply to polarized images.
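The 2 × 2 sampling structure described above can be made concrete with a short sketch. The following Python snippet is an illustration, not the paper's code; the (90, 45 / 135, 0) block layout is an assumption and varies between sensors. It builds the four binary sampling masks of a PFA mosaic and scatters the raw image into four sparse channels:

```python
import numpy as np

def pfa_masks(shape, layout=((90, 45), (135, 0))):
    """Binary sampling masks for a 2 x 2 PFA mosaic.

    `layout` gives the analyzer angle at each position of the repeating
    2 x 2 block; the (90, 45 / 135, 0) pattern here is an assumption --
    match it to the actual sensor's datasheet.
    """
    masks = {}
    for di in range(2):
        for dj in range(2):
            m = np.zeros(shape, dtype=bool)
            m[di::2, dj::2] = True           # every second row/column
            masks[layout[di][dj]] = m
    return masks

def split_channels(raw, masks):
    """Scatter the mosaic into four sparse channels (zero where unsampled)."""
    return {angle: np.where(m, raw, 0.0) for angle, m in masks.items()}
```

Because all four channels have the same sampling rate (unlike the double-sampled G channel of the Bayer CFA), each mask covers exactly one quarter of the pixels.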
In recent years, some polarization demosaicing algorithms have been proposed with reference to the superior algorithms in color image demosaicing. Traditional bilinear and bicubic interpolation methods were first used in [25], a gradient-based interpolation method was proposed in [26], an intensity correlation interpolation method was proposed in [27], a polarization channel difference prior method was proposed in [28], and an edge-aware residual interpolation was proposed in [29]. The guided filter [30] works well for color image demosaicing [10], and it has also been used in polarization demosaicing [31-34]. In this paper, a derived guided filtering method is proposed, which differs from the original guided filter [30] and is more suitable for the constraint relationship between polarization channels. Learning-based image processing methods, including neural networks [35,36] and sparse representation-based methods [37,38], have also been used to solve this problem. However, such methods usually require datasets to support them; thus, the quality of the processing result depends on the similarity between the image to be processed and the training set.

Contribution
In this paper, we propose a polarized intensity ratio constraint (PIRC) demosaicing method to restore high-quality four-channel polarized images from one-channel mosaic observations captured by a single-chip DoFP polarized sensor. The physical constraint of the PIRC is simple: for a beam of light, the sum of the intensities detected by any two perpendicular ideal analyzers should be equal to the total light intensity. Based on this constraint, this paper draws on the guided filtering method and proposes a specific cost function for polarization demosaicing. This technique considers not only the texture relationship between pixels but also the relationship of polarization information between channels.
The main contributions of this paper are summarized as follows: (1) The actual physical constraints between polarization imaging channels are identified. (2) A PIRC polarization demosaicing method is proposed that considers both the constraints between channels and the relationship between pixels. (3) A method is designed that directly employs raw images as ground truth for comparison experiments, instead of using additional methods to obtain the ground truth. This process facilitates the convenient collection of experimental data with more extensive data scenarios. (4) Extensive experiments are carried out to demonstrate that our proposed method achieves state-of-the-art results.
The remainder of this paper is organized as follows. Section 2 presents the polarized intensity ratio constraint of the polarized image and details the proposed method, Section 3 shows the experiment results, Section 4 discusses the results, and Section 5 concludes the paper.

The Constraint of Detected Polarized Intensity
The special constraint relationship between the polarization channels is discussed before introducing our polarization demosaicing method. This relationship is typically overlooked by researchers. Natural light can be regarded as a superposition of a large number of single polarization-state monochromatic lights. A single polarization-state monochromatic plane light wave propagating along the z-axis can be expressed as:

E = A cos(kz − ωt), (1)

where E is the wave function, A = A·a is the amplitude vector of the electric field, a is the unit vector describing the vibration direction, A is the scalar describing the amplitude value, k = 2π/λ is the wavenumber, ω = 2π/T is the angular frequency, and T is the vibration period of the light wave. The light intensity I is equal to the square of the electric field amplitude:

I = A². (2)

E can be decomposed into any two orthogonal components in the O-xy plane. If the decomposition is performed on the x- and y-axes, Equation (1) can be expressed in scalar form:

E_x = A_x cos(kz − ωt), E_y = A_y cos(kz − ωt), (3)

where A_x = A·cos θ, A_y = A·cos(π/2 − θ), and θ represents the angle between A and the x-axis. If the light intensity of each polarization direction is detected under ideal conditions, the following relationships hold:

I_x = A_x² = I·cos²θ, I_y = A_y² = I·sin²θ, I_x + I_y = I. (4)

According to the above principle, when a beam of linearly polarized light passes through analyzers with optical-axis directions of 0°, 45°, 90°, and 135° (the 0° direction coincides with the x-axis), the transmitted light intensities should be:

I^0 = aI, I^45 = cI, I^90 = (1 − a)I, I^135 = (1 − c)I, (5)

where a = cos²θ and c = cos²(θ − π/4). From Equation (5), we can obtain:

I^0 + I^90 = I^45 + I^135 = I. (6)

If we consider the natural light formed by the noninterference superposition of light of multiple polarization states, Equation (5) can be expressed as:

I^0 = Σ_i a_i I_i, I^45 = Σ_i c_i I_i, I^90 = Σ_i (1 − a_i) I_i, I^135 = Σ_i (1 − c_i) I_i, I = Σ_i I_i. (7)

According to Equation (7), when natural light is incident, Equation (6) still holds:

I^0 + I^90 = Σ_i [a_i + (1 − a_i)] I_i = Σ_i I_i = I, (8)

I^0 + I^90 = I^45 + I^135 = I. (9)

In the above discussion, the polarimetric extinction ratio and transmittance of the analyzer are assumed to be ideal. In reality, the analyzer is not an ideal one. In order to describe this error, a small offset term Δ should be added to Equation (9):

I^0 + I^90 ≈ I + Δ, I^45 + I^135 ≈ I + Δ, |Δ| ≪ I. (10)

Equation (10) describes the constraint relationship between the polarized light intensity observed in the 0°, 45°, 90°, and 135° directions and the total light intensity, which is the starting point of the DoFP polarimetric demosaicing method proposed in this paper.
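The constraint of Equations (6)-(10) is easy to verify numerically. The sketch below is an illustration only: ideal analyzers are modeled by Malus's law, matching Equation (5), and the check confirms that for every polarization angle the 0°/90° and 45°/135° analyzer pairs each sum to the total intensity:

```python
import numpy as np

def analyzer_intensity(total, phi, theta):
    """Intensity of fully linearly polarized light (total intensity `total`,
    polarization angle `phi`) behind an ideal analyzer at angle `theta`,
    per Malus's law (cf. Equation (5))."""
    return total * np.cos(phi - theta) ** 2

# For every polarization angle phi, perpendicular analyzer pairs sum to I.
total = 1.7
for phi in np.linspace(0.0, np.pi, 37):
    i0   = analyzer_intensity(total, phi, 0.0)
    i45  = analyzer_intensity(total, phi, np.pi / 4)
    i90  = analyzer_intensity(total, phi, np.pi / 2)
    i135 = analyzer_intensity(total, phi, 3 * np.pi / 4)
    assert np.isclose(i0 + i90, total) and np.isclose(i45 + i135, total)
```

By linearity, the same identity carries over to natural light (Equation (7)), since each incoherent component satisfies it separately.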

Polarized Intensity Ratio Constraint Demosaicing Method
Based on the constraint relationship between polarization channels discussed above, we propose a new polarized intensity ratio constraint (PIRC) demosaicing method. Our fundamental goal is that the image texture and the polarization state of each pixel are maintained. Thus, the proposed PIRC demosaicing method is divided into two main steps. First, the intensity image is obtained. The recovered intensity image is expected to retain the truthful image texture; thus, the relationship between neighboring pixels must be fully considered. To achieve this, a directional gradient-based method is applied to interpolate each polarization channel, a tentative estimate of each polarization channel is obtained, and the full intensity is half of the sum of the four channels. Second, based on the full intensity image, each polarization channel is recovered by a mutated guided filter method. The full process is shown in Figure 2. In brief, we apply gradient filtering to the raw image in each channel to obtain tentative values of the intensity at each polarization azimuth for any given pixel, and then apply the derived guided filtering to calculate the final estimated value. In the subsequent description, I^θ_{i,j} without an accent indicates the actual detected polarized light intensity in the θ direction at pixel (i, j); Î^θ_{i,j} with the hat "ˆ" is the tentative estimate; and Ĩ^θ_{i,j} with the tilde "˜" is the final estimate.

Recover the Intensity Image by Image Gradient
Only one of the polarization directions is detected at each pixel (i, j) of the raw image. Based on the relationship in Equation (10), if the perpendicular direction is estimated, then the full intensity I_{i,j} of location (i, j) can be estimated. For example, when I^45_{i,j} is detected at location (i, j), the full intensity Î_{i,j} can be obtained by adding the detected I^45_{i,j} to the tentatively estimated Î^135_{i,j}. As illustrated in Figure 1, for each pixel (i, j), the four diagonal adjacent pixels are in the perpendicular polarization direction, and the vertical and horizontal adjacent pixels are in the two other polarization directions. In order to make full use of the relationship between adjacent pixels, the three nondetected polarization directions should be tentatively estimated at each pixel. Thus, the full intensity Î_{i,j} can be estimated by:

Î_{i,j} = (I^45_{i,j} + Î^0_{i,j} + Î^90_{i,j} + Î^135_{i,j}) / 2, (11)

where Î^0_{i,j}, Î^90_{i,j}, and Î^135_{i,j} are the tentatively estimated polarization channel values. Since the recovered intensity image should convey the actual image texture, the image edges and gradients should be fully considered during interpolation. The fundamental idea is that the interpolation should be performed along an edge and not across it. We first evaluate the gradient of the raw image in four different directions: east (the horizontal direction), northeast (the diagonal direction with positive tangent), north (the vertical direction), and northwest (the diagonal direction with negative tangent). For each pixel (i, j) of the raw image, the four gradient values D_E, D_NE, D_N, and D_NW are calculated on a 7 × 7 window using Equation (12).
The process of tentatively estimating the nondetected polarization intensity Î^θ_{i,j} has two steps: diagonal interpolation, followed by vertical and horizontal interpolation.
Diagonal interpolation. At each pixel, diagonal interpolation recovers the dual value (i.e., the value of the perpendicular direction) of the detected one. If the gradient in the NE direction is larger than the gradient in the NW direction, i.e., D_NE > D_NW, bicubic interpolation is applied to the target pixel along the NW direction. If the gradient in the NW direction is larger, i.e., D_NW > D_NE, bicubic interpolation is applied along the NE direction. If the two gradients are equal, i.e., D_NE = D_NW, the average of the two bicubic interpolation values is taken. Diagonal interpolation of all four polarization channels should be completed before the next step.
Vertical and horizontal interpolation. As shown in Figure 1, each polarization channel is detected every second row and every second column. Thus, when we perform bicubic interpolation in the N and E directions for any target pixel (i, j), the required adjacent pixels have detected values in only one of the two directions; in the other direction, only the estimates produced by the above diagonal interpolation can be used. For example, when I^45_{i,j} is detected at pixel (i, j), the adjacent detected I^135 values can be used in both the NE and NW directions (the diagonal directions); thus, Î^135_{i,j} can be estimated by diagonal interpolation. However, for Î^0_{i,j} and Î^90_{i,j}, there is no detected I^0 in the E direction (the horizontal direction) or I^90 in the N direction (the vertical direction). Thus, in those directions, the Î^0 and Î^90 values from their own diagonal interpolations are used. If the gradient in the E direction is larger than the gradient in the N direction, i.e., D_E > D_N, bicubic interpolation is applied to the target pixel along the N direction. If the gradient in the N direction is larger, i.e., D_N > D_E, bicubic interpolation is applied along the E direction. If the two gradients are equal, i.e., D_N = D_E, the average of the two is taken.
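The decision logic of the diagonal step can be sketched as follows. This is a simplified illustration, not the paper's implementation: 2-tap averages along the chosen diagonal stand in for the bicubic kernels, and the gradient values d_ne and d_nw are assumed to be precomputed (Equation (12)):

```python
import numpy as np

def diagonal_estimate(raw, i, j, d_ne, d_nw):
    """Tentative estimate of the channel perpendicular to the one detected
    at (i, j), from the four diagonal neighbours of the mosaic.

    Simplified sketch: 2-tap averages replace the paper's bicubic kernels;
    d_ne / d_nw are precomputed NE / NW gradient magnitudes.
    """
    ne = 0.5 * (raw[i - 1, j + 1] + raw[i + 1, j - 1])  # along the NE diagonal
    nw = 0.5 * (raw[i - 1, j - 1] + raw[i + 1, j + 1])  # along the NW diagonal
    if d_ne > d_nw:          # strong NE gradient -> interpolate along NW
        return nw
    if d_nw > d_ne:          # strong NW gradient -> interpolate along NE
        return ne
    return 0.5 * (ne + nw)   # equal gradients -> average both directions
```

The vertical and horizontal step follows the same pattern with D_E and D_N, except that one of the two directions must fall back on the values produced by this diagonal pass.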

Interpolate Each Polarization Channel by Intensity Ratio Constraint
After obtaining the tentatively estimated intensity image Î, each polarization channel I^0, I^45, I^90, and I^135 can be calculated. Considering the constraint of Equation (10), a method derived from the guided filter technique [30] is proposed. This method allows each polarization channel I^θ to adhere to the texture of the intensity image Î. At the same time, the relationship between the channels is fully retained, which ensures the correct recovery of the polarization information.

In the proposed method, the intensity image Î is employed as the guidance image, which serves as a reference to exploit the image structures. The input sparse polarization image is accurately upsampled by the derived guided filter.
For each polarization channel image I^θ, θ = 0°, 45°, 90°, 135°, we define the filter as:

Ĩ^θ_{i,j} = a^θ_{p,q} Î_{i,j} + b^θ_{p,q}, ∀(i, j) ∈ ω_{p,q}, (13)

where Ĩ^θ is the filtering output and Î is the guidance image. Equation (13) assumes that Ĩ^θ is a linear transform of Î in a window ω_{p,q} centered at the pixel (p, q), whereas (a^θ_{p,q}, b^θ_{p,q}) are the linear coefficients assumed to be constant in ω_{p,q}. A square window of radius r is used, i.e., the side length is 2r + 1. This local linear model ensures that Ĩ^θ has an edge only if Î has an edge, because ∇Ĩ^θ = a∇Î. At the same time, Equation (13) is consistent with Equation (10).
To determine the linear coefficients (a^θ_{p,q}, b^θ_{p,q}), we minimize the following cost function in the window ω_{p,q}:

E(a^θ_{p,q}, b^θ_{p,q}) = Σ_{(i,j)∈ω_{p,q}} M^θ_{i,j} (a^θ_{p,q} Î_{i,j} + b^θ_{p,q} − I^θ_{i,j})² + (ε b^θ_{p,q})², (14)

where M^θ_{i,j} is a binary mask at the pixel (i, j), which is one for the sampled pixels (i.e., where I^θ_{i,j} has a sampled value) and zero for the others. ε is a regularization parameter penalizing large b^θ_{p,q} values; it ensures that the bias term b^θ_{p,q} is not too large and is only used to fit the nonideal measured value described in Equation (10). Equation (10) is the physical fact we deduce. In Equation (10), the coefficients a and c (analogous to a^θ_{p,q} in Equations (13) and (14)) determine the proportion of I^θ in I, and the bias term Δ (analogous to b^θ_{p,q} in Equations (13) and (14)) only characterizes a small error. Thus, in Equation (13), the output Ĩ^θ should be mainly determined by the coefficient a^θ_{p,q}, and the bias term b^θ_{p,q} representing the error should be small. That is, regularizing b^θ_{p,q} is appropriate and consistent with the physical facts. Regularizing b^θ_{p,q} instead of a^θ_{p,q} in the cost function is an important difference between our method and the original guided filter [30]. Comparison experiment results are shown in Section 3.2.
Equation (14) has a closed-form solution. First, let the partial derivatives of the cost function with respect to a^θ_{p,q} and b^θ_{p,q} be zero:

Σ_{(i,j)∈ω_{p,q}} M^θ_{i,j} (a^θ_{p,q} Î_{i,j} + b^θ_{p,q} − I^θ_{i,j}) Î_{i,j} = 0, (15)

Σ_{(i,j)∈ω_{p,q}} M^θ_{i,j} (a^θ_{p,q} Î_{i,j} + b^θ_{p,q} − I^θ_{i,j}) + ε² b^θ_{p,q} = 0. (16)

Then, from Equation (16), b^θ_{p,q} can be determined:

b^θ_{p,q} = N_{p,q} (⟨I^θ⟩_{p,q} − a^θ_{p,q} ⟨Î⟩_{p,q}) / (N_{p,q} + ε²), (17)

where ⟨I^θ⟩_{p,q} and ⟨Î⟩_{p,q} are the mean values of I^θ_{i,j} and Î_{i,j}, respectively, in the window ω_{p,q} under the mask M^θ_{i,j}, and N_{p,q} = Σ M^θ_{i,j} is the number of sampled pixels in the window. Finally, by substituting Equation (17) into Equation (15), a^θ_{p,q} can be obtained:

a^θ_{p,q} = [Σ M^θ_{i,j} Î_{i,j} I^θ_{i,j} − N²_{p,q} ⟨Î⟩_{p,q} ⟨I^θ⟩_{p,q} / (N_{p,q} + ε²)] / [Σ M^θ_{i,j} Î²_{i,j} − N²_{p,q} ⟨Î⟩²_{p,q} / (N_{p,q} + ε²)]. (18)

In each pixel (i, j), the linear coefficients (a, b) differ across the overlapping windows ω_{p,q} that cover (i, j). Thus, the coefficients of all windows overlapping (i, j) are averaged, and Equation (13) can then be rewritten as:

Ĩ^θ_{i,j} = ā^θ_{i,j} Î_{i,j} + b̄^θ_{i,j}, (19)

where ā^θ_{i,j} and b̄^θ_{i,j} are the means of a^θ_{p,q} and b^θ_{p,q} over all windows ω_{p,q} containing (i, j). Based on Equation (19), each polarization channel Ĩ^θ with the polarization direction θ = 0°, 45°, 90°, 135° can be interpolated. The algorithm's full steps are shown in Algorithm 1.
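Under the assumptions above, the closed-form solution can be sketched in NumPy. This is an illustrative implementation, not the authors' code: window sums are computed with a zero-padded integral image (so the mask count automatically handles image borders), and the per-window coefficients are averaged over all windows covering each pixel before the output is formed:

```python
import numpy as np

def box_sum(x, r):
    """Sliding (2r+1) x (2r+1) window sum with zero padding (integral image)."""
    size = 2 * r + 1
    h, w = x.shape
    p = np.pad(x, r + 1)                 # extra zero row/col for the shift
    ii = p.cumsum(axis=0).cumsum(axis=1)
    return (ii[size:size + h, size:size + w] - ii[:h, size:size + w]
            - ii[size:size + h, :w] + ii[:h, :w])

def pirc_channel(guide, sparse, mask, r=2, eps=0.005):
    """Recover one polarization channel with the mutated guided filter:
    fit output = a * guide + b per window using only the sampled pixels
    (mask == 1), penalizing (eps * b)^2 rather than the original guided
    filter's (eps * a)^2.
    """
    n   = box_sum(mask, r)                    # samples per window, N_{p,q}
    sx  = box_sum(mask * guide, r)
    sy  = box_sum(mask * sparse, r)
    sxx = box_sum(mask * guide * guide, r)
    sxy = box_sum(mask * guide * sparse, r)
    # Normal equations of sum M*(a*g + b - y)^2 + (eps*b)^2:
    #   a*sxx + b*sx          = sxy
    #   a*sx  + b*(n + eps^2) = sy
    denom = sxx * (n + eps**2) - sx**2        # > 0 where nonzero samples exist
    a = (sxy * (n + eps**2) - sx * sy) / denom
    b = (sy - a * sx) / (n + eps**2)
    # Average the coefficients of all windows covering each pixel (Eq. (19)).
    cnt   = box_sum(np.ones_like(guide), r)
    a_bar = box_sum(a, r) / cnt
    b_bar = box_sum(b, r) / cnt
    return a_bar * guide + b_bar
```

With sparse = 0.5 * guide the fit is exact (a = 0.5, b = 0), so the output reproduces 0.5 * guide, whether the mask is full or quarter-sampled, which matches the intuition behind Equation (13).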

Algorithm 1 Polarized Intensity Ratio Constraint Demosaic for Division-of-Focal-Plane Polarimetric Image
Input: raw mosaic polarization image I_raw
1: I_raw → D_E, D_N, D_NE, D_NW, by Equation (12)

Dataset
Ground-truth polarization images were required for a full-reference evaluation of the proposed method. One method used to acquire a full-resolution polarization image is to install a polarizer in front of an ordinary camera and obtain four-channel polarization images with polarization angles of 0°, 45°, 90°, and 135° by rotating the polarizer. The literature [39] provides a dataset of polarization images of 10 scenes obtained by this method. Each scene includes a set of 0°, 45°, 90°, and 135° polarized images in the near-infrared band. The full-resolution images of each channel were down-sampled to generate an artificial mosaic image, which was then used as the input of the demosaicing algorithm.
In the polarizer rotation method for polarization imaging, it is necessary to ensure that the lighting conditions do not change and the target does not move during the process of shooting four independent images. As a result, this method is only suitable for shooting stationary objects indoors under stable lighting conditions and cannot be adapted to outdoor shooting, significantly limiting polarization image acquisition. Therefore, images collected with the DoFP polarization detector were also used as the dataset in our experiments.
However, such mosaic images cannot be directly used as ground truth. To address this, we treated each group of four pixels of the original mosaic image as one pixel, creating a synthesized pixel with four polarization channels. The resolution of the original image was halved in this process; that is, the image size was changed from M × N to (M/2) × (N/2) × 4. The synthesized four-channel polarization image can therefore be treated in the same way as the above-mentioned full-size polarization images as ground truth and can then be used as the input of the demosaicing algorithm after down-sampling (see Figure 3).

We collected a polarization image dataset using a Lucid Vision Labs Triton TRI050S-P DoFP polarization camera. Three sets of polarization images were collected in different situations, including seven stationary object scenes illuminated by indoor directional light sources (named "still"), five indoor environment scenes illuminated by natural light (named "indoor"), and five outdoor environment scenes (named "outdoor"). The images are provided in Figure 4.
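The 2 × 2 grouping used to build the ground truth can be sketched as follows. This is an illustration; the (90, 45 / 135, 0) block layout is an assumption to be matched to the camera's actual pattern:

```python
import numpy as np

def mosaic_to_groundtruth(raw, layout=((90, 45), (135, 0))):
    """Group each 2 x 2 block of a DoFP mosaic into one synthesized pixel
    with four polarization channels (channel order 0, 45, 90, 135).

    An M x N mosaic becomes an (M/2) x (N/2) x 4 ground-truth image; the
    (90, 45 / 135, 0) block layout is an assumption, not a fixed standard.
    """
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0
    order = {0: 0, 45: 1, 90: 2, 135: 3}
    gt = np.empty((h // 2, w // 2, 4), dtype=raw.dtype)
    for di in range(2):
        for dj in range(2):
            # Each channel is the subgrid at offset (di, dj) of the mosaic.
            gt[..., order[layout[di][dj]]] = raw[di::2, dj::2]
    return gt
```

Down-sampling this synthesized image again yields a mosaic input whose demosaiced result can be scored against the synthesized ground truth, which is what enables raw DoFP captures to serve directly as reference data.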


Evaluation Metrics
The well-known evaluation metrics peak signal-to-noise ratio (PSNR) [40] and correlated peak signal-to-noise ratio (CPSNR) were used to measure the accuracy of the polarization information between the reconstructed image and the corresponding ground truth. Between each pair of reference (R) and estimated (E) channels, the PSNR is defined as:

PSNR(R, E) = 10 · log₁₀(MAX² / MSE(R, E)), (20)

where MAX is the maximum possible pixel value and MSE(R, E) denotes the mean squared error between R and E in one channel. If multiple channels are calculated together when computing the MSE:

MSE(R, E) = (1 / (4MN)) Σ_θ Σ_{i,j} (R^θ_{i,j} − E^θ_{i,j})², (21)

Equation (20) then becomes the CPSNR. The Stokes vector (S₀, S₁, S₂), degree of linear polarization (DoLP), and angle of linear polarization (AoLP) are usually used to characterize linear polarization information. The Stokes vector is calculated by Equation (22):

S₀ = (I^0 + I^45 + I^90 + I^135) / 2, S₁ = I^0 − I^90, S₂ = I^45 − I^135. (22)

DoLP and AoLP are calculated by Equations (24) and (25), respectively:

DoLP = √(S₁² + S₂²) / S₀, (24)

AoLP = (1/2) arctan2(S₂, S₁), (25)

where arctan2(·) is the four-quadrant arctangent function.
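These metrics and polarization quantities can be sketched as follows (an illustration of Equations (20), (22), (24), and (25); the peak value of 255 assumes 8-bit images):

```python
import numpy as np

def stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from the four analyzer intensities (Eq. (22))."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2

def dolp(s0, s1, s2):
    """Degree of linear polarization (Eq. (24)); s0 clamped to avoid /0."""
    return np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)

def aolp(s1, s2):
    """Angle of linear polarization via the four-quadrant arctangent (Eq. (25))."""
    return 0.5 * np.arctan2(s2, s1)

def psnr(ref, est, peak=255.0):
    """PSNR between one reference and one estimated channel (Eq. (20))."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(est, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

For fully polarized light at angle φ, these formulas return DoLP = 1 and AoLP = φ, which is a convenient sanity check on channel ordering.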

Comparative Experiments
The proposed method was compared with the bicubic and bilinear baseline interpolation algorithms [25], and with the gradient-based interpolation method (GBI) [26], interpolation with intensity correlation (IPIC) [27], and edge-aware residual interpolation (EARI) [29]. The experiments were carried out on seven datasets; the four existing databases were PSD [39], JCPD [41], Qiu [42], and EARI [29], and the other three (i.e., still, indoor, and outdoor) were collected by us. Each dataset included several scenes, and the average PSNR and CPSNR values were calculated in each dataset.
Each of the methods compared in the experiments had a different processing scheme for the border pixels around the image. These schemes greatly affected the processing results of boundary pixels. In order to avoid the adverse effect of boundary pixels on the evaluation of the overall performance of the algorithm, we removed the image strips with a width of 4 pixels at the image boundary before calculating the PSNR. In other words, boundary pixels are not included in the PSNR calculation.
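The border-exclusion protocol can be sketched as follows (an illustration; the 4-pixel strip width follows the text, and an 8-bit peak value is assumed):

```python
import numpy as np

def psnr_no_border(ref, est, border=4, peak=255.0):
    """PSNR with a `border`-pixel frame excluded, mirroring the evaluation
    protocol in the text (boundary pixels are not scored)."""
    r = np.asarray(ref, float)[border:-border, border:-border]
    e = np.asarray(est, float)[border:-border, border:-border]
    mse = np.mean((r - e) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Because the cropped frame is discarded, two reconstructions that differ only in their boundary handling receive identical scores.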
The results for these datasets (i.e., the average PSNR and CPSNR of each dataset) are provided in Tables 1-8. The results show that our method and the EARI [29] method alternately achieve state-of-the-art performance (the state-of-the-art results are shown in bold in the tables). More detailed results for each image in all datasets are presented in Appendix A. We performed our experiments using MATLAB on an Intel Core(TM) i7-8700 @3.20-GHz CPU with 16 GB RAM. The processing time of each method on images of different sizes is shown in Table 9. None of the methods were specifically optimized for acceleration. It can be seen that our method runs considerably faster than EARI.

Controlled Experiments
The regularization parameter ε in Equation (14) can affect the performance of our proposed method. Therefore, we carried out controlled experiments using the dataset from [39] and our "indoor" dataset. As shown in Table 10, superior results were obtained when ε = 0.005. To compare our mutated guided filter with the original guided filter, the (ε b^θ_{p,q})² term in the cost function of Equation (14) was replaced with (ε a^θ_{p,q})²:

E(a^θ_{p,q}, b^θ_{p,q}) = Σ_{(i,j)∈ω_{p,q}} M^θ_{i,j} (a^θ_{p,q} Î_{i,j} + b^θ_{p,q} − I^θ_{i,j})² + (ε a^θ_{p,q})². (26)

Equation (26) is the cost function of the original guided filter. The other processing steps of PIRC remained the same, and the results with different regularization parameters ε were obtained and are presented in Table 11. Comparing Tables 10 and 11, it can be seen that our mutated guided filter performs better than the original guided filter in this demosaicing task (the superior results are shown in bold in each table).

Application Experiments
We conducted visual application experiments to demonstrate the importance and effectiveness of PIRC demosaicing. As the polarization reflection characteristics of the surfaces of different materials vary, polarization imaging can serve visual tasks such as target detection and scene segmentation. Here, we focused on the potential of distinguishing objects by the difference in their polarization characteristics and did not consider target detection or image segmentation algorithms. Figure 5 shows the pseudo-color images synthesized from the polarization images after PIRC demosaicing. The synthesis method directly normalized the calculated DoLP, AoLP, and I, and then filled them into the R, G, and B channels of a color image. This article provides a simple illustration of this process and does not discuss other, more advanced fusion methods.

Discussion
In Tables 1-8, the results show that our method and the EARI [29] method alternately achieved state-of-the-art performance, and both of them outperformed the other methods. Their performances varied on different datasets, which is mainly due to the different texture conditions of the experimental images.
In Table 9, the results show that, across the different algorithms, improved accuracy also brought increased computation time. Bilinear interpolation and bicubic interpolation were the fastest; in contrast, EARI and our PIRC took more time. However, PIRC was still considerably faster than EARI, although their accuracies were similar. In the experiments of this article, we did not apply any special acceleration optimizations to these algorithms. We expect that our method can meet the needs of real-time applications after the necessary acceleration optimization.
Figure 5 is an outdoor scene. Figure 5a is the light intensity image, and Figure 5b is the pseudo-color image synthesized from the polarization information. Figure 5c-g show the targets marked with colored boxes: water on the ground, buses, cars, sewer manhole covers, and cars hidden under a canopy, respectively. It can be observed that targets that are difficult to distinguish in the light intensity image are clearly distinguished in the polarization information fusion image. For example, in Figure 5c, the surface of the stagnant water is smooth and the specular reflection is obvious, thus its polarization characteristics differ significantly from those of the surrounding environment. In Figure 5e-g, the cars' glass and the manhole cover on the road have a red tone due to their strong degree of polarization. These polarization characteristics can effectively aid visual tasks such as detecting, recognizing, and tracking vehicles by UAV, intersection monitoring, and detecting road surface water by unmanned vehicles.

Conclusions
This work presented a new polarized intensity ratio constraint (PIRC) demosaicing method for division-of-focal-plane (DoFP) polarimetric images. The method efficiently utilizes both the interchannel and intrachannel correlations and retains the characteristics of polarization detection. Our method first restores the light intensity image following the edges and textures. It then further restores the image of each channel according to the unique constraint relationship between the polarization channels. We directly used the mosaic image obtained by the DoFP sensor as the ground truth for the comparison experiments, which greatly facilitates data collection and enriches the sources of experimental data. The experimental results demonstrated that our proposed method is both effective and practical. The findings also showed how polarimetric imaging can benefit computer vision and remote sensing tasks. In the future, we will continue to improve imaging quality. Other future research directions include multi-frame demosaicing, polarized 3D reconstruction, polarized target detection, and polarized target tracking.

Acknowledgments: Thanks to Lapray, P.J. [39], Wen, S. [41], Qiu, S.M. [42], and Morimatsu, M. [29] for providing the public databases.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A. Detailed Results for Each Image in All Datasets
Tables 1-8 present the average PSNR and CPSNR on each dataset. Here, Figures A1-A6 present the detailed results for each image in all datasets. The results show that our method and the EARI [29] method alternately achieved state-of-the-art performance.
Figure A6. The PSNR and CPSNR of each image on the "still" (numbers 1-7), "indoor" (numbers 8-12), and "outdoor" (numbers 13-17) datasets.
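For reference, the PSNR and CPSNR metrics reported in these tables and figures can be computed as follows. This is the standard formulation; the peak value and the pooling of the error over polarization channels are our assumptions, not taken from the paper:

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio for a single polarization channel."""
    mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def cpsnr(ref_channels, rec_channels, peak=255.0):
    """CPSNR: PSNR with the mean squared error pooled over all
    polarization channels (e.g., the 0, 45, 90, 135 degree images)."""
    mse = np.mean([np.mean((r.astype(float) - t.astype(float)) ** 2)
                   for r, t in zip(ref_channels, rec_channels)])
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```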