Article

Polarized Intensity Ratio Constraint Demosaicing for the Division of a Focal-Plane Polarimetric Image

Lei Yan, Kaiwen Jiang, Yi Lin, Hongying Zhao, Ruihua Zhang and Fangang Zeng
1 Guangxi Key Lab of UAV Remote Sensing, Guilin University of Aerospace Technology, Guilin 541004, China
2 Spatial Information Integration and 3S Engineering Application Beijing Key Laboratory, Institute of Remote Sensing and Geographic Information System, Peking University, Beijing 100871, China
3 School of Environment and Natural Resources, Renmin University of China, Beijing 100872, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Remote Sens. 2022, 14(14), 3268; https://doi.org/10.3390/rs14143268
Submission received: 19 April 2022 / Revised: 16 June 2022 / Accepted: 19 June 2022 / Published: 7 July 2022
(This article belongs to the Special Issue Advanced Light Vector Field Remote Sensing)

Abstract

Polarization is an independent dimension of light-wave information with broad application prospects in machine vision and remote sensing. Polarization imaging with a division-of-focal-plane (DoFP) polarimetric sensor can meet lightweight and real-time application requirements. As with Bayer filter-based color imaging, demosaicing is a basic and important processing step in DoFP polarization imaging. Because polarization and color are physically different properties of light waves, widely studied color demosaicing methods cannot be applied directly to polarization demosaicing. In this work, we propose a polarized intensity ratio constraint demosaicing model that efficiently accounts for the characteristics of polarization detection. First, we discuss the special constraint relationship between the polarization channels, which can be stated simply: for a beam of light, the sum of the intensities detected by any two orthogonal ideal analyzers equals the total light intensity. Then, based on this constraint and drawing on the concept of guided filtering, a new polarization demosaicing method is developed. We also construct a scheme that directly uses raw images captured by the DoFP detector as the ground truth for comparison experiments, which simplifies the collection of experimental data and broadens the range of image scenarios. Results of both qualitative and quantitative experiments illustrate that our method is an effective and practical way to faithfully recover the full polarization information of each pixel from a single mosaic input image.

1. Introduction

1.1. Background

From the wave perspective, polarization, together with intensity, frequency, and phase, constitutes the basic properties of light. Both intensity, which corresponds to brightness, and frequency, which corresponds to color or spectrum, have been widely researched and applied in the fields of vision and remote sensing. Research on and application of polarization imaging and processing have developed gradually in recent years. As a new information dimension, polarization plays a significant role in computer vision and remote sensing tasks, in some cases providing information that intensity and color alone cannot. It has been widely used in tasks such as object detection [1,2], image haze removal [3,4], underwater image enhancement [5], and 3D reconstruction [6,7].
Polarization imaging methods mainly include division of time (DoT), division of amplitude (DoAM), division of aperture (DoAP), and division of focal plane (DoFP) [8]. DoT is not capable of real-time imaging, whereas DoAM and DoAP suffer from complex and heavy structures. The DoFP polarization imaging sensor is composed of a micropolarization array (MPA) oriented at 0°, 45°, 90°, and 135° (Figure 1, left); it can therefore capture linear polarization information in one shot with a simple structure. However, the DoFP sensor gains polarization information at the expense of spatial resolution, similar to color detectors based on the Bayer filter. The raw image obtained by a DoFP or Bayer filter-based sensor is called a mosaic image (Figure 1, right). The goal of demosaicing is to restore a full-size multi-channel image from a raw mosaic image.
Bayer filter-based color image demosaicing has been widely studied and applied in recent decades and is an important component of color image processing. In contrast, research on polarized image demosaicing is still relatively scarce. Although polarized image demosaicing can borrow from color image demosaicing, the two problems are not identical because color and polarization impose different constraints between adjacent pixels. Comprehensive color science research provides prior information for color image demosaicing, such as the color difference model [9,10]. Polarized image demosaicing should likewise fully utilize the inherent physical priors of polarization detection; traditional interpolation methods do not fully consider them, which explains why they perform poorly on polarized mosaic images.

1.2. Related Work

Demosaicing originated from color image processing. Single-sensor color imaging technology with a color filter array (CFA) is widely used in the digital camera industry. The most popular and widely used CFA is the Bayer CFA, which was released in 1976 [11]. Polarization filter array (PFA) technology was patented in 1995 [12], but most of the practical implementations and technology advances were made between 2009 and now [13]. In recent years, some PFA cameras have appeared on the market, such as 4D Technology’s PolarCam device [14] and Sony’s IMX250MZR polarization-dedicated sensor (Tokyo, Japan) [15]. Although research on color image demosaicing has a long history, research into polarization demosaicing has only recently begun. In this section, we provide an overview of both color image and polarized image demosaicing.

1.2.1. Color Image Demosaicing

Color image demosaicing methods can be divided into spatial interpolation, frequency-based, and data-driven methods, where data-driven methods include sparse representation and neural networks. Spatial interpolation-based demosaicing interpolates along multiple directions to efficiently utilize both interchannel and intrachannel correlations [10,16,17,18]. Frequency selection-based demosaicing exploits the spectral characteristics of raw images [19,20,21]. Sparse representation-based demosaicing treats demosaicing as an inverse problem and exploits a sparsity prior by decomposing each image patch into a sparse representation over a dictionary [22,23]. Neural network methods use large amounts of data to train a network to estimate the missing pixels [24]. Past experiments and studies have shown that spatial domain methods are more advantageous than frequency domain methods [9]. Although data-driven methods perform well, their recovery quality depends strongly on how relevant the training data are to the test data, making it difficult for them to adapt widely to complex and changeable real scenes.

1.2.2. Polarized Image Demosaicing

Polarized image demosaicing is different from color image demosaicing in two main ways. (1) CFA has three channels, so there are two G pixels in every 2 × 2 pixel block. That is, the sampling rate of G is twice that of R and B. In contrast, PFA has four channels, thus the sampling rate of each channel is the same. (2) The signal between CFA channels is constrained by spectral information, and the signal between PFA channels is constrained by polarization information. The first difference means that the common method of first restoring the double-sampled G channel in color image demosaicing is not applicable to polarized images. The second difference means that the color difference model [9,10] of color images does not apply to polarized images.
In recent years, several polarization demosaicing algorithms have been proposed with reference to successful algorithms in color image demosaicing: traditional bilinear and bicubic interpolation were first used in [25], a gradient-based interpolation method was proposed in [26], an intensity correlation interpolation method in [27], a polarization channel difference prior method in [28], and an edge-aware residual interpolation in [29]. The guided filter [30] works well for color image demosaicing [10] and has also been used in polarization demosaicing [31,32,33,34]. In this paper, a modified guided filtering method is proposed that differs from the original guided filter [30] and is better suited to the constraint relationship between polarization channels. Learning-based image processing methods, both neural networks [35,36] and sparse representation-based methods [37,38], have also been applied to this problem. However, such methods usually require supporting datasets, so the quality of their results depends on the similarity between the image to be processed and the training set.

1.3. Contribution

In this paper, we propose a polarized intensity ratio constraint (PIRC) demosaicing method to restore high-quality four-channel polarized images from one-channel mosaic observations captured by a single-chip DoFP polarized sensor. The physical constraint of the PIRC is simple: for a beam of light, the sum of the intensities detected by any two orthogonal ideal analyzers equals the total light intensity. Based on this constraint, this paper draws on the guided filtering method and proposes a cost function specific to polarization demosaicing. This technique considers not only the texture relationship between pixels but also the relationship of polarization information between channels.
The main contributions of this paper are summarized as follows: (1) The actual physical constraints between polarization imaging channels are identified. (2) A PIRC polarization demosaicing method is proposed that considers both the constraints between channels and the relationship between pixels. (3) A method is designed that directly employs raw images as ground truth for comparison experiments, instead of using additional methods to obtain the ground truth. This process facilitates the convenient collection of experimental data with more extensive data scenarios. (4) Extensive experiments are carried out to demonstrate that our proposed method achieves state-of-the-art results.
The remainder of this paper is organized as follows. Section 2 presents the polarized intensity ratio constraint of the polarized image and details the proposed method, Section 3 shows the experiment results, Section 4 discusses the results, and Section 5 concludes the paper.

2. Materials and Methods

2.1. The Constraint of Detected Polarized Intensity

The special constraint relationship between the polarization channels is discussed before introducing our polarization demosaicing method. This relationship is typically overlooked by researchers. Natural light can be regarded as a superposition of a large number of single polarization-state monochromatic lights. A single polarization-state monochromatic plane light wave propagating along the z-axis can be expressed as:
$$\mathbf{E} = \mathbf{A}\cos(kz - \omega t) \tag{1}$$
where $\mathbf{E}$ is the wave function, $\mathbf{A} = A\mathbf{a}$ is the amplitude vector of the electric field, $\mathbf{a}$ is the unit vector describing the vibration direction, $A$ is the scalar amplitude, $k = 2\pi/\lambda$ is the wavenumber, $\omega = 2\pi/T$ is the angular frequency, and $T$ is the vibration period of the light wave. The light intensity $I$ is equal to the square of the electric field amplitude:
$$I = |\mathbf{A}|^2 \tag{2}$$
$\mathbf{E}$ can be decomposed into any two orthogonal components in the O-xy plane. If the decomposition is along the x- and y-axes, Equation (1) can be expressed in scalar form:
$$\begin{cases} E_x = A_x\cos(kz - \omega t + \delta_x) \\ E_y = A_y\cos(kz - \omega t + \delta_y) \end{cases} \tag{3}$$
where $A_x = |\mathbf{A}|\cos\theta$, $A_y = |\mathbf{A}|\cos\left(\frac{\pi}{2} - \theta\right)$, and $\theta$ is the angle between $\mathbf{A}$ and the x-axis. If the light intensity in each polarization direction is detected under ideal conditions, the following relationship holds:
$$I_x + I_y = A_x^2 + A_y^2 = (|\mathbf{A}|\cos\theta)^2 + \left(|\mathbf{A}|\cos\left(\tfrac{\pi}{2} - \theta\right)\right)^2 = |\mathbf{A}|^2(\cos^2\theta + \sin^2\theta) = I \tag{4}$$
According to the above principle, when a beam of linearly polarized light passes through the analyzers with the optical axis directions of 0°, 45°, 90°, and 135° (the 0° direction coincides with the x-axis), the transmitted light intensity should be:
$$\begin{cases} I_0 = I\cos^2(\theta - 0^\circ) = I\cos^2\theta = aI \\ I_{45} = I\cos^2(\theta - 45^\circ) = I\cos^2\left(\theta - \tfrac{\pi}{4}\right) = cI \\ I_{90} = I\cos^2(\theta - 90^\circ) = I\cos^2\left(\theta - \tfrac{\pi}{2}\right) = (1-a)I \\ I_{135} = I\cos^2(\theta - 135^\circ) = I\cos^2\left(\theta - \tfrac{\pi}{4} - \tfrac{\pi}{2}\right) = (1-c)I \end{cases} \tag{5}$$
where $a = \cos^2\theta$ and $c = \cos^2\left(\theta - \frac{\pi}{4}\right)$. From Equation (5), we can obtain:
$$I_0 + I_{90} = I_{45} + I_{135} = I \tag{6}$$
If we consider the natural light formed by the noninterference superposition of light of multiple polarization states, Equation (5) can be expressed as:
$$\begin{cases} I_0 = \sum_k^N I_k\cos^2\theta_k = \sum_k^N a_k I_k \\ I_{45} = \sum_k^N I_k\cos^2\left(\theta_k - \tfrac{\pi}{4}\right) = \sum_k^N c_k I_k \\ I_{90} = \sum_k^N I_k\cos^2\left(\theta_k - \tfrac{\pi}{2}\right) = \sum_k^N (1 - a_k) I_k \\ I_{135} = \sum_k^N I_k\cos^2\left(\theta_k - \tfrac{\pi}{4} - \tfrac{\pi}{2}\right) = \sum_k^N (1 - c_k) I_k \end{cases} \tag{7}$$
According to Equation (7), when natural light is incident, Equation (6) still holds:
$$I_0 + I_{90} = I_{45} + I_{135} = \sum_k^N I_k = I \tag{8}$$
that is:
$$\begin{cases} I_0 = aI \\ I_{45} = cI \\ I_{90} = (1-a)I \\ I_{135} = (1-c)I \end{cases} \tag{9}$$
In the above discussion, the polarimetric extinction ratio and transmittance of the analyzer are assumed to be ideal. A real analyzer is not ideal, and to describe this error a small offset term is added to Equation (9):
$$\begin{cases} I_0 = aI + \Delta_1 \\ I_{45} = cI + \Delta_2 \\ I_{90} = (1-a)I + \Delta_3 \\ I_{135} = (1-c)I + \Delta_4 \end{cases} \tag{10}$$
Equation (10) describes the constraint relationship between the polarized light intensity observed in the 0°, 45°, 90°, and 135° directions and the total light intensity, which is the starting point of the DoFP polarimetric demosaicing method proposed in this paper.
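As a quick numerical sanity check on Equations (7)–(10) (an illustration of ours, not part of the original experiments), the following Python sketch models natural light as a noninterfering superposition of randomly polarized components and verifies that both orthogonal analyzer pairs recover the total intensity under ideal conditions (all Δ = 0); the component count and random seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Natural light modeled as N non-interfering linearly polarized components,
# each with its own intensity I_k and polarization angle theta_k (Equation (7)).
N = 1000
I_k = rng.uniform(0.0, 1.0, N)        # component intensities
theta_k = rng.uniform(0.0, np.pi, N)  # component polarization angles

def analyzer(phi):
    """Total intensity passed by an ideal analyzer at angle phi (Malus's law)."""
    return np.sum(I_k * np.cos(theta_k - phi) ** 2)

I0, I45 = analyzer(0.0), analyzer(np.pi / 4)
I90, I135 = analyzer(np.pi / 2), analyzer(3 * np.pi / 4)
I_total = I_k.sum()

# Equation (8): both orthogonal pairs sum to the total intensity.
assert np.isclose(I0 + I90, I_total) and np.isclose(I45 + I135, I_total)
```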

2.2. Polarized Intensity Ratio Constraint Demosaicing Method

Based on the constraint relationship between the polarization channels discussed in Section 2.1, we propose a new polarized intensity ratio constraint (PIRC) demosaicing method. Our fundamental goal is to preserve both the image texture and the polarization state of each pixel. The proposed PIRC demosaicing method therefore has two main steps. First, the intensity image is obtained. The recovered intensity image should retain the true image texture, so the relationship between neighboring pixels must be fully considered. To achieve this, a directional gradient-based method is applied to interpolate each polarization channel, a tentative estimate of each channel is obtained, and the full intensity is taken as half the sum of the four channels. Second, based on the full intensity image, each polarization channel is recovered by a modified guided filter. The full process is shown in Figure 2. In brief, we apply gradient-based interpolation to the raw image to obtain tentative intensity values for each polarization azimuth at every pixel, and then apply the modified guided filter to calculate the final estimates.
In the subsequent description, $I^{\theta}_{i,j}$ (no accent) denotes the polarized light intensity actually detected in the $\theta$ direction at pixel (i, j); $\hat{I}^{\theta}_{i,j}$ (hat accent) denotes a tentative estimate; and $\bar{I}^{\theta}_{i,j}$ (bar accent) denotes the final estimate.

2.2.1. Recover the Intensity Image by Image Gradient

Only one polarization direction is detected at each pixel (i, j) of the raw image. Based on the relationship in Equation (10), if the intensity in the perpendicular direction is estimated, the full intensity $I_{i,j}$ at location (i, j) can be estimated. For example, when $I^{45}_{i,j}$ is detected at location (i, j), the full intensity $\hat{I}_{i,j}$ can be obtained by adding the detected $I^{45}_{i,j}$ to the tentatively estimated $\hat{I}^{135}_{i,j}$. As illustrated in Figure 1, for each pixel (i, j), the four diagonal neighbors carry the perpendicular polarization direction, while the vertical and horizontal neighbors carry the two remaining directions. To make full use of the relationship between adjacent pixels, the three nondetected polarization directions are tentatively estimated at each pixel. For a pixel detecting the 45° channel, the full intensity $\hat{I}_{i,j}$ can thus be estimated by:
$$\hat{I}_{i,j} = \left(I^{45}_{i,j} + \hat{I}^{0}_{i,j} + \hat{I}^{90}_{i,j} + \hat{I}^{135}_{i,j}\right)/2 \tag{11}$$
where $\hat{I}^{0}_{i,j}$, $\hat{I}^{90}_{i,j}$, and $\hat{I}^{135}_{i,j}$ are the tentatively estimated polarization channel values.
Since the recovered intensity image should convey the actual image texture, image edges and gradients must be fully considered during interpolation. The fundamental idea is that interpolation should be performed along an edge, not across it. We first evaluate the gradient of the raw image in four directions: east (horizontal), northeast (diagonal with positive tangent), north (vertical), and northwest (diagonal with negative tangent). For each pixel (i, j) of the raw image, four gradient values are calculated on a 7 × 7 window using Equation (12).
$$\begin{cases} D_E(i,j) = \sum\limits_{m=-1,1,3}\ \sum\limits_{n=0,2,4} \left|I^{raw}_{i+m,\,j+n} - I^{raw}_{i+m,\,j+n-2}\right| \\ D_{NE}(i,j) = \sum\limits_{m=-2,0,2}\ \sum\limits_{n=0,2,4} \left|I^{raw}_{i+m,\,j+n} - I^{raw}_{i+m+2,\,j+n-2}\right| \\ D_N(i,j) = \sum\limits_{m=0,2,4}\ \sum\limits_{n=-1,1,3} \left|I^{raw}_{i+m,\,j+n} - I^{raw}_{i+m-2,\,j+n}\right| \\ D_{NW}(i,j) = \sum\limits_{m=-2,0,2}\ \sum\limits_{n=-2,0,2} \left|I^{raw}_{i+m,\,j+n} - I^{raw}_{i+m+2,\,j+n+2}\right| \end{cases} \tag{12}$$
The tentative estimation of the nondetected polarization intensities $\hat{I}^{\theta}_{i,j}$ has two steps: diagonal interpolation, followed by vertical and horizontal interpolation (a code sketch of the gradient step follows below).
Diagonal interpolation. At each pixel, diagonal interpolation recovers the dual value (i.e., the value of the direction perpendicular) to the detected one. If the gradient in the NE direction is larger than that in the NW direction, i.e., $D_{NE} > D_{NW}$, bicubic interpolation is applied to the target pixel along the NW direction. If the gradient in the NW direction is larger, i.e., $D_{NW} > D_{NE}$, bicubic interpolation is applied along the NE direction. If the two gradients are equal, i.e., $D_{NE} = D_{NW}$, the average of the two bicubic interpolation values is taken. Diagonal interpolation of all four polarization channels must be completed before the next step.
Vertical and horizontal interpolation. As shown in Figure 1, each polarization channel is detected on every second row and every second column. Thus, when bicubic interpolation is performed in the N and E directions for a target pixel (i, j), the required adjacent pixels have detected values in only one of the two directions; in the other direction, only the estimates from the diagonal interpolation above can be used. For example, when $I^{45}_{i,j}$ is detected at pixel (i, j), detected $I^{135}$ values are available in both the NE and NW (diagonal) directions, so $\hat{I}^{135}_{i,j}$ can be estimated by diagonal interpolation. However, for $\hat{I}^{0}_{i,j}$ and $\hat{I}^{90}_{i,j}$, there are no detected $I^{0}$ values in the E (horizontal) direction and no detected $I^{90}$ values in the N (vertical) direction; in those directions, the $\hat{I}^{0}$ and $\hat{I}^{90}$ values from their own diagonal interpolations are used. If the gradient in the E direction is larger than that in the N direction, i.e., $D_E > D_N$, bicubic interpolation is applied to the target pixel along the N direction. If the gradient in the N direction is larger, i.e., $D_N > D_E$, bicubic interpolation is applied along the E direction. If the two gradients are equal, i.e., $D_N = D_E$, the average of the two is taken.
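For concreteness, the directional gradients of Equation (12) can be sketched in a few lines of NumPy. The function below is our illustrative re-implementation (the paper's experiments were run in MATLAB), it processes a single pixel, and it omits border handling, so (i, j) must lie sufficiently far from the image edges:

```python
import numpy as np

def directional_gradients(raw, i, j):
    """Four directional gradients of Equation (12) at pixel (i, j) of the
    single-channel mosaic `raw` (cast to float to avoid uint wraparound)."""
    raw = raw.astype(np.float64)
    D_E = sum(abs(raw[i + m, j + n] - raw[i + m, j + n - 2])
              for m in (-1, 1, 3) for n in (0, 2, 4))
    D_NE = sum(abs(raw[i + m, j + n] - raw[i + m + 2, j + n - 2])
               for m in (-2, 0, 2) for n in (0, 2, 4))
    D_N = sum(abs(raw[i + m, j + n] - raw[i + m - 2, j + n])
              for m in (0, 2, 4) for n in (-1, 1, 3))
    D_NW = sum(abs(raw[i + m, j + n] - raw[i + m + 2, j + n + 2])
               for m in (-2, 0, 2) for n in (-2, 0, 2))
    return D_E, D_NE, D_N, D_NW
```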

2.2.2. Interpolate Each Polarization Channel by Intensity Ratio Constraint

After obtaining the tentative intensity image $\hat{I}$, each polarization channel $\bar{I}^{0}$, $\bar{I}^{45}$, $\bar{I}^{90}$, and $\bar{I}^{135}$ can be calculated. Considering the constraint of Equation (10), a method derived from the guided filter technique [30] is proposed. This method lets each polarization channel $I^{\theta}$ adhere to the texture of the intensity image $I$ while fully retaining the relationship between the channels, which ensures the correct recovery of polarization information.
In the proposed method, the intensity image $\hat{I}$ is employed as the guidance image and serves as a reference for exploiting image structures. The input sparse polarization image is then accurately upsampled by the modified guided filter.
For each polarization channel image $I^{\theta}$, $\theta = 0, 45, 90, 135$, we define the filter as:
$$\bar{I}^{\theta}_{i,j} = a^{\theta}_{p,q}\hat{I}_{i,j} + b^{\theta}_{p,q}, \quad \forall (i,j) \in \omega_{p,q} \tag{13}$$
where $\bar{I}^{\theta}$ is the filtering output and $\hat{I}$ is the guidance image. Equation (13) assumes that $\bar{I}^{\theta}$ is a linear transform of $\hat{I}$ in a window $\omega_{p,q}$ centered at the pixel (p, q), where $(a^{\theta}_{p,q}, b^{\theta}_{p,q})$ are linear coefficients assumed to be constant in $\omega_{p,q}$. A square window of radius $r$ is used, i.e., the side length is $2r + 1$. This local linear model ensures that $\bar{I}^{\theta}$ has an edge only where $\hat{I}$ has an edge, because $\nabla\bar{I}^{\theta} = a^{\theta}_{p,q}\nabla\hat{I}$. At the same time, Equation (13) is consistent with Equation (10).
To determine the linear coefficients $(a^{\theta}_{p,q}, b^{\theta}_{p,q})$, we minimize the following cost function in the window $\omega_{p,q}$:
$$E\left(a^{\theta}_{p,q}, b^{\theta}_{p,q}\right) = \sum_{(i,j)\in\omega_{p,q}} M^{\theta}_{i,j}\left(\left(a^{\theta}_{p,q}\hat{I}_{i,j} + b^{\theta}_{p,q} - I^{\theta}_{i,j}\right)^2 + \varepsilon\left(b^{\theta}_{p,q}\right)^2\right) \tag{14}$$
where $M^{\theta}_{i,j}$ is a binary mask that is one at the sampled pixels (i.e., where $I^{\theta}_{i,j}$ has a detected value) and zero elsewhere, and $\varepsilon$ is a regularization parameter penalizing a large $b^{\theta}_{p,q}$, so that the bias term only fits the nonideal measurement offset described in Equation (10). Equation (10) is the physical relationship we deduced: the coefficients $a$ and $c$ (analogous to $a^{\theta}_{p,q}$ in Equations (13) and (14)) determine the proportion of $I^{\theta}$ in $I$, while the bias term $\Delta$ (analogous to $b^{\theta}_{p,q}$) only characterizes a small error. Thus, in Equation (13), the output $I^{\theta}$ should be determined mainly by the coefficient $a^{\theta}_{p,q}$, and the bias term $b^{\theta}_{p,q}$ representing the error should remain small. Regularizing $b^{\theta}_{p,q}$ is therefore appropriate and consistent with the physical facts. Regularizing $b^{\theta}_{p,q}$ instead of $a^{\theta}_{p,q}$ in the cost function is an important difference between our method and the original guided filter [30]; comparative experimental results are shown in Section 3.2.
Equation (14) has a closed-form solution. First, set the partial derivatives of the function with respect to $a^{\theta}_{p,q}$ and $b^{\theta}_{p,q}$ to zero:
$$\frac{\partial E}{\partial a^{\theta}_{p,q}} = 0 = 2\sum_{(i,j)\in\omega_{p,q}} M^{\theta}_{i,j}\left(a^{\theta}_{p,q}\hat{I}_{i,j} + b^{\theta}_{p,q} - I^{\theta}_{i,j}\right)\hat{I}_{i,j} \tag{15}$$
$$\frac{\partial E}{\partial b^{\theta}_{p,q}} = 0 = 2\sum_{(i,j)\in\omega_{p,q}} M^{\theta}_{i,j}\left(a^{\theta}_{p,q}\hat{I}_{i,j} + (1 + \varepsilon)b^{\theta}_{p,q} - I^{\theta}_{i,j}\right) \tag{16}$$
Then, from Equation (16), $b^{\theta}_{p,q}$ can be determined:
$$b^{\theta}_{p,q} = \frac{1}{1+\varepsilon}\cdot\frac{1}{\sum_{(i,j)\in\omega_{p,q}} M^{\theta}_{i,j}}\left(\sum_{(i,j)\in\omega_{p,q}} M^{\theta}_{i,j} I^{\theta}_{i,j} - a^{\theta}_{p,q}\sum_{(i,j)\in\omega_{p,q}} M^{\theta}_{i,j}\hat{I}_{i,j}\right) = \frac{1}{1+\varepsilon}\left(\overline{I^{\theta}_{p,q}} - a^{\theta}_{p,q} u_{p,q}\right) \tag{17}$$
where $\overline{I^{\theta}_{p,q}} = \frac{\sum_{(i,j)\in\omega_{p,q}} M^{\theta}_{i,j} I^{\theta}_{i,j}}{\sum_{(i,j)\in\omega_{p,q}} M^{\theta}_{i,j}}$ and $u_{p,q} = \frac{\sum_{(i,j)\in\omega_{p,q}} M^{\theta}_{i,j}\hat{I}_{i,j}}{\sum_{(i,j)\in\omega_{p,q}} M^{\theta}_{i,j}}$ are the mean values of $I^{\theta}_{i,j}$ and $\hat{I}_{i,j}$, respectively, in the window $\omega_{p,q}$ under the mask $M^{\theta}_{i,j}$.
Finally, by substituting Equation (17) into Equation (15), $a^{\theta}_{p,q}$ can be obtained:
$$a^{\theta}_{p,q} = \frac{\sum_{(i,j)\in\omega_{p,q}} M^{\theta}_{i,j} I^{\theta}_{i,j}\hat{I}_{i,j} - \frac{1}{1+\varepsilon}\sum_{(i,j)\in\omega_{p,q}} M^{\theta}_{i,j}\,\overline{I^{\theta}_{p,q}}\,\hat{I}_{i,j}}{\sum_{(i,j)\in\omega_{p,q}} M^{\theta}_{i,j}\hat{I}^{2}_{i,j} - \frac{1}{1+\varepsilon}\sum_{(i,j)\in\omega_{p,q}} M^{\theta}_{i,j}\, u_{p,q}\,\hat{I}_{i,j}} \tag{18}$$
At each pixel (i, j), the linear coefficients (a, b) differ across the overlapping windows $\omega_{p,q}$ that cover (i, j). Thus, the coefficients of all windows overlapping (i, j) are averaged, i.e., $\bar{a}^{\theta}_{i,j} = \frac{1}{|\omega|}\sum_{(p,q)\in\omega_{i,j}} a^{\theta}_{p,q}$ and $\bar{b}^{\theta}_{i,j} = \frac{1}{|\omega|}\sum_{(p,q)\in\omega_{i,j}} b^{\theta}_{p,q}$. Equation (13) can then be rewritten as:
$$\bar{I}^{\theta}_{i,j} = \bar{a}^{\theta}_{i,j}\hat{I}_{i,j} + \bar{b}^{\theta}_{i,j} \tag{19}$$
Based on Equation (19), each polarization channel I ¯ θ with the polarization direction θ = 0 , 45 , 90 , 135 can be interpolated. The algorithm’s full steps are shown in Algorithm 1.
Algorithm 1 Polarized Intensity Ratio Constraint Demosaicing for a Division-of-Focal-Plane Polarimetric Image
Input: raw mosaic polarization image $I^{raw}$;
1: $I^{raw} \rightarrow D_E, D_N, D_{NE}, D_{NW}$, by Equation (12);
2: $D_E, D_N, D_{NE}, D_{NW}$ and $I^{raw} \rightarrow \hat{I}^{\theta}$; $\hat{I}^{\theta} \rightarrow \hat{I}$, by the method in Section 2.2.1;
3: $\hat{I}$ and $I^{raw} \rightarrow \bar{I}^{\theta}$, by Equation (19).
Output: four-channel polarization images $\bar{I}^{\theta}$, $\theta = 0, 45, 90, 135$.
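To make the channel recovery step concrete, the following is a minimal, brute-force sketch of the modified guided filter of Equations (14)–(19), written by us for illustration; the window radius and regularization value are assumptions, and a practical implementation would vectorize the window sums with box filters, as in [30]:

```python
import numpy as np

def pirc_guided_filter(I_hat, I_sparse, M, r=2, eps=0.005):
    """Sketch of Equations (14)-(19).

    I_hat    : tentative full-intensity (guidance) image, H x W
    I_sparse : one polarization channel holding only the detected samples
    M        : binary mask, 1 where I_sparse has a detected value
    r, eps   : window radius and regularization parameter (illustrative values)
    """
    H, W = I_hat.shape
    a_sum = np.zeros((H, W)); b_sum = np.zeros((H, W)); n_win = np.zeros((H, W))
    k = 1.0 / (1.0 + eps)
    for p in range(r, H - r):
        for q in range(r, W - r):
            sl = np.s_[p - r:p + r + 1, q - r:q + r + 1]
            m, g, s = M[sl].astype(float), I_hat[sl], I_sparse[sl]
            mean_s = (m * s).sum() / m.sum()   # masked mean of I^theta
            mean_g = (m * g).sum() / m.sum()   # masked mean of I_hat (u_{p,q})
            # Closed-form coefficients, Equations (17) and (18)
            a = (((m * s * g).sum() - k * mean_s * (m * g).sum())
                 / ((m * g * g).sum() - k * mean_g * (m * g).sum()))
            b = k * (mean_s - a * mean_g)
            # Accumulate for the per-pixel averages used in Equation (19)
            a_sum[sl] += a; b_sum[sl] += b; n_win[sl] += 1.0
    n_win = np.maximum(n_win, 1.0)  # border pixels are covered by no window here
    return (a_sum / n_win) * I_hat + b_sum / n_win  # Equation (19)
```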

2.3. Experiment Settings

2.3.1. Dataset

Ground-truth polarization images were required for a full-reference evaluation of the proposed method. One way to acquire a full-resolution polarization image is to install a polarizer in front of an ordinary camera and obtain four-channel polarization images with polarization angles of 0°, 45°, 90°, and 135° by rotating the polarizer. The study in [39] provides a dataset of polarization images of 10 scenes obtained by this method, where each scene includes a set of 0°, 45°, 90°, and 135° polarized images in the near-infrared band. The full-resolution images of each channel were down-sampled to generate an artificial mosaic image, which was then used as the input of the demosaicing algorithm.
In the polarizer-rotation method, the lighting conditions must not change and the target must not move while the four independent images are captured. As a result, this method is only suitable for shooting stationary objects indoors under stable lighting and cannot be adapted to outdoor shooting, which significantly limits polarization image acquisition. Therefore, images collected with a DoFP polarization detector were also used as a dataset in our experiments. Such mosaic images cannot be used directly as ground truth, so we treated each 2 × 2 block of pixels of the original mosaic image as one synthesized pixel with four polarization channels. This halves the resolution of the original image; that is, the pixel array changes from M × N to (M/2) × (N/2) × 4. The synthesized four-channel polarization image can then be treated in the same way as the full-size polarization images described above: it serves as the ground truth and, after down-sampling, as the input of the demosaicing algorithm (see Figure 3).
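The synthesis in Figure 3 amounts to a simple reshaping of the mosaic. Below is a minimal NumPy sketch under the assumption that the image height and width are even; the angle-to-offset assignment in the comments is a placeholder that must be matched to the actual MPA layout of the sensor:

```python
import numpy as np

def mosaic_to_ground_truth(raw):
    """Collapse each 2 x 2 superpixel of a DoFP mosaic into one 4-channel pixel,
    turning an M x N raw image into an (M/2) x (N/2) x 4 ground-truth image."""
    assert raw.shape[0] % 2 == 0 and raw.shape[1] % 2 == 0
    return np.stack([raw[0::2, 0::2],   # e.g. the 90-degree samples
                     raw[0::2, 1::2],   # e.g. the 45-degree samples
                     raw[1::2, 0::2],   # e.g. the 135-degree samples
                     raw[1::2, 1::2]],  # e.g. the 0-degree samples
                    axis=-1)
```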
We collected a polarization image dataset using a Lucid Vision Labs Triton TRI050S-P DoFP polarization camera. Three sets of polarization images were collected in different situations, including seven stationary object scenes illuminated by indoor directional light sources (named “still”), five indoor environment scenes illuminated by natural light (named “indoor”), and five outdoor environment scenes (named “outdoor”). The images are provided in Figure 4.

2.3.2. Evaluation Metrics

The well-known evaluation metrics peak signal-to-noise ratio (PSNR) [40] and correlated peak signal-to-noise ratio (CPSNR) were used to measure the accuracy of the polarization information between the reconstructed image and the corresponding ground truth. Between each pair of reference (R) and estimated (E) channels, the PSNR is defined as:
$$PSNR(R,E) = 10\log_{10}\left(\frac{(\max R)^2}{MSE(R,E)}\right) \tag{20}$$
$$MSE(R,E) = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(R_{i,j} - E_{i,j}\right)^2 \tag{21}$$
where $MSE(R,E)$ denotes the mean squared error between R and E in one channel. If multiple channels are evaluated together, the MSE becomes:
$$MSE(R,E) = \frac{1}{KMN}\sum_{k=1}^{K}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(R^{k}_{i,j} - E^{k}_{i,j}\right)^2 \tag{22}$$
and Equation (20) then gives the CPSNR.
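A direct transcription of Equations (20)–(22) into NumPy, for reference (our sketch, not the authors' evaluation code); when R and E are stacked as (M, N, K) arrays, the same expression yields the CPSNR:

```python
import numpy as np

def psnr(R, E):
    """PSNR of Equations (20)-(21); with (M, N, K) inputs it is the CPSNR."""
    R = R.astype(np.float64)
    E = E.astype(np.float64)
    mse = np.mean((R - E) ** 2)                 # Equation (21) or (22)
    return 10.0 * np.log10(R.max() ** 2 / mse)  # Equation (20)
```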
The Stokes vector (S0, S1, S2), the degree of linear polarization (DoLP), and the angle of linear polarization (AoLP) are usually used to characterize linear polarization information. The Stokes vector is calculated by Equation (23):
$$\begin{cases} S_0 = (I_0 + I_{45} + I_{90} + I_{135})/2 \\ S_1 = I_0 - I_{90} \\ S_2 = I_{45} - I_{135} \end{cases} \tag{23}$$
DoLP and AoLP are calculated by Equations (24) and (25), respectively:
$$DoLP = \frac{\sqrt{S_1^2 + S_2^2}}{S_0} \tag{24}$$
$$AoLP = \frac{1}{2}\,\mathrm{arctan2}(S_2, S_1) \tag{25}$$
where $\mathrm{arctan2}(\cdot)$ is the four-quadrant arctangent function.
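The polarization quantities of Equations (23)–(25) follow directly from the four demosaiced channels; here is a minimal NumPy sketch (the small guard on S0 is our addition to avoid division by zero in dark pixels):

```python
import numpy as np

def linear_polarization(I0, I45, I90, I135):
    """Stokes components, DoLP, and AoLP (Equations (23)-(25))."""
    S0 = (I0 + I45 + I90 + I135) / 2.0
    S1 = I0 - I90
    S2 = I45 - I135
    dolp = np.sqrt(S1 ** 2 + S2 ** 2) / np.maximum(S0, 1e-12)
    aolp = 0.5 * np.arctan2(S2, S1)
    return S0, S1, S2, dolp, aolp
```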

3. Results

3.1. Comparative Experiments

The proposed method was compared with the bicubic and bilinear baseline interpolation algorithms [25], and with the gradient-based interpolation method (GBI) [26], interpolation with intensity correlation (IPIC) [27], and edge-aware residual interpolation (EARI) [29]. The experiments were carried out on seven datasets; the four existing databases were PSD [39], JCPD [41], Qiu [42], and EARI [29], and the other three (i.e., still, indoor, and outdoor) were collected by us. Each dataset included several scenes, and the average PSNR and CPSNR values were calculated in each dataset.
Each of the compared methods uses a different scheme to process the border pixels of the image, and these schemes greatly affect the results at the boundary. To avoid the adverse effect of boundary pixels on the evaluation of overall algorithm performance, we removed a strip 4 pixels wide along each image boundary before calculating the PSNR; in other words, boundary pixels are not included in the PSNR calculation.
The results of these datasets (i.e., the average PSNR and CPSNR of each dataset) are provided in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8. The results show that our method and the EARI [29] method alternately achieve state-of-the-art performance (the state-of-the-art results are bold in the tables). More detailed results for each image in all datasets are presented in Appendix A.
We performed our experiments using MATLAB on an Intel Core(TM) i7-8700 @ 3.20 GHz CPU with 16 GB RAM. The processing time of each method on images of different sizes is shown in Table 9. None of the methods was specifically optimized for speed. It can be seen that our method runs considerably faster than EARI.

3.2. Controlled Experiments

The regularization parameter ε in Equation (14) can affect the performance of our proposed method. Therefore, we carried out controlled experiments using the dataset from [39] and our “indoor” dataset. As shown in Table 10, superior results were obtained when ε = 0.005 .
To compare our modified guided filter with the original guided filter, the $\varepsilon\left(b^{\theta}_{p,q}\right)^2$ term in the cost function of Equation (14) was replaced with $\varepsilon\left(a^{\theta}_{p,q}\right)^2$:
$$E\left(a^{\theta}_{p,q}, b^{\theta}_{p,q}\right) = \sum_{(i,j)\in\omega_{p,q}} M^{\theta}_{i,j}\left(\left(a^{\theta}_{p,q}\hat{I}_{i,j} + b^{\theta}_{p,q} - I^{\theta}_{i,j}\right)^2 + \varepsilon\left(a^{\theta}_{p,q}\right)^2\right) \tag{26}$$
Equation (26) is the cost function of the original guided filter. All other processing steps of PIRC remained the same, and the results for different regularization parameters $\varepsilon$ are presented in Table 11. Comparing Table 10 and Table 11 shows that our modified guided filter outperforms the original guided filter in this demosaicing task (the superior results are bold in each table).

3.3. Application Experiments

We conducted visual application experiments to demonstrate the importance and effectiveness of PIRC demosaicing. Because different surface materials have different polarization reflection characteristics, polarization imaging can serve visual tasks such as target detection and scene segmentation. Here, we focused on the potential for distinguishing objects by their differing polarization characteristics and did not consider target detection or image segmentation algorithms.
Figure 5 shows the pseudo-color images synthesized from the polarization images after PIRC demosaicing. The synthesis method directly normalizes the calculated DoLP, AoLP, and I and fills them into the R, G, and B channels of a color image. This article provides a simple illustration of this process and does not discuss more advanced fusion methods.
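A minimal sketch of this synthesis, assuming float arrays for I, DoLP, and AoLP; the assignment of DoLP, AoLP, and I to the R, G, and B channels below is an illustrative choice, since the text only states that the three normalized quantities fill the three color channels:

```python
import numpy as np

def polarization_pseudocolor(I, dolp, aolp):
    """Fuse intensity and polarization into an RGB pseudo-color image in [0, 1]."""
    def norm(x):
        x = x.astype(np.float64)
        return (x - x.min()) / (x.max() - x.min() + 1e-12)
    # Channel assignment (R = DoLP, G = AoLP, B = I) is an assumption.
    return np.stack([norm(dolp), norm(aolp), norm(I)], axis=-1)
```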

4. Discussion

In Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8, the results show that our method and the EARI [29] method alternately achieved state-of-the-art performance, and both of them outperformed the other methods. Their performances varied on different datasets, which is mainly due to the different texture conditions of the experimental images.
In Table 9, the results show that improved accuracy came at the cost of increased computation time. Bilinear and bicubic interpolation were the fastest; in contrast, EARI and our PIRC took more time. However, PIRC was still considerably faster than EARI while achieving similar accuracy. In these experiments, we did not apply any special acceleration optimization to the algorithms; we expect that our method can meet real-time application requirements after the necessary optimization.
Figure 5 is an outdoor scene. Figure 5a is the light intensity image, and Figure 5b is the pseudo-color image synthesized by polarization information. Figure 5c–g show the targets marked with colored boxes. They are water on the ground, buses, cars, sewer manhole covers, and cars hidden under the canopy, respectively. It can be observed that the targets that are difficult to distinguish in the light intensity image are clearly distinguished in the polarization information fusion image. For example, in Figure 5c, the surface of the stagnant water is smooth, and the specular reflection is obvious, thus the polarization characteristics are significantly different from the surrounding environment. In Figure 5e–g, the car’s glass and the manhole cover on the road are a red tone due to the strong degree of polarization. These polarization characteristics effectively aid in visual tasks, such as detecting, recognizing, and tracking vehicles by UAV, intersection monitoring, and detecting road surface water by unmanned vehicles.

5. Conclusions

This work presented a new polarized intensity ratio constraint demosaicing method for division-of-focal-plane polarimetric images. The method efficiently utilizes both interchannel and intrachannel correlations and retains the characteristics of polarization detection. Our method first restores the light intensity image following the edges and texture, and then restores each channel image according to the unique constraint relationship between the polarization channels. We directly used the mosaic images obtained by the DoFP sensor as the ground truth for the comparison experiments, which greatly facilitates data collection and enriches the sources of experimental data. The experimental results demonstrated that our proposed method is both effective and practical. The findings also showed how polarimetric imaging can benefit computer vision and remote sensing tasks. In the future, we will continue to improve imaging quality; other future research directions include multi-frame demosaicing, polarized 3D reconstruction, polarized target detection, and polarized target tracking.

Author Contributions

Conceptualization, K.J.; methodology, K.J.; software, K.J.; validation, Y.L.; formal analysis, L.Y.; investigation, K.J.; resources, H.Z.; data curation, R.Z. and K.J.; writing—original draft preparation, K.J.; writing—review and editing, Y.L.; visualization, H.Z.; supervision, L.Y.; project administration, L.Y.; funding acquisition, L.Y. and F.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China, grant numbers 2017YFB0503004 and 2017YFB0503003.

Data Availability Statement

The code is openly available at https://github.com/JKevinCH/PIRC (accessed on 16 April 2022).

Acknowledgments

Thanks to Lapray, P.J. [39], Wen, S. [41], Qiu, S.M. [42], and Morimatsu, M. [29] for providing the public database.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Detailed Results for Each Image in All Datasets

Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 present the average PSNR and CPSNR on each dataset. Here, Figure A1, Figure A2, Figure A3, Figure A4, Figure A5 and Figure A6 present detailed results for each image in all datasets. The results show that our method and the EARI [29] method alternately achieve state-of-the-art performance.
Figure A1. The PSNR of each image on the PSD [39] dataset.
Figure A2. The PSNR of each image on the JCPD [41] "indoor light" dataset.
Figure A3. The PSNR and CPSNR of each image on the JCPD [41] "polar light" dataset.
Figure A4. The PSNR and CPSNR of each image on the Qiu [42] dataset.
Figure A5. The PSNR of each image on the EARI [29] dataset.
Figure A6. The PSNR and CPSNR of each image on the "still" (numbers 1–7), "indoor" (numbers 8–12), and "outdoor" (numbers 13–17) datasets.

References

  1. Gurton, K.; Felton, M.; Mack, R.; LeMaster, D.; Farlow, C.; Kudenov, M.; Pezzaniti, L. MidIR and LWIR polarimetric sensor comparison study. In Proceedings of the SPIE Defense, Security, and Sensing, Orlando, FL, USA, 5–9 April 2010; Volume 7664.
  2. Zhou, Y.W.; Li, Z.F.; Zhou, J.; Li, N.; Zhou, X.H.; Chen, P.P.; Zheng, Y.L.; Chen, X.S.; Lu, W. High extinction ratio super pixel for long wavelength infrared polarization imaging detection based on plasmonic microcavity quantum well infrared photodetectors. Sci. Rep. 2018, 8, 15070.
  3. Schechner, Y.Y.; Narasimhan, S.G.; Nayar, S.K. Polarization-based vision through haze. Appl. Opt. 2003, 42, 511–525.
  4. Zhang, W.F.; Lang, J.; Ren, L.Y. Haze-removal polarimetric imaging schemes with the consideration of airlight's circular polarization effect. Optik 2019, 182, 1099–1105.
  5. Liu, F.; Han, P.L.; Wei, Y.; Yang, K.; Huang, S.Z.; Li, X.; Zhang, G.; Bai, L.; Shao, X.P. Deeply seeing through highly turbid water by active polarization imaging. Opt. Lett. 2018, 43, 4903–4906.
  6. Reda, M.; Zhao, Y.; Chan, J.C.-W. Polarization Guided Autoregressive Model for Depth Recovery. IEEE Photon. J. 2017, 9, 6803016.
  7. Kadambi, A.; Taamazyan, V.; Shi, B.; Raskar, R. Polarized 3D: High-Quality Depth Sensing with Polarization Cues. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 3370–3378.
  8. Gruev, V.; Perkins, R.; York, T. CCD polarization imaging sensor with aluminum nanowire optical filters. Opt. Express 2010, 18, 19087–19094.
  9. Li, X.; Gunturk, B.; Zhang, L. Image demosaicing: A systematic survey. In Proceedings of the Electronic Imaging, San Jose, CA, USA, 27–31 January 2008; Volume 6822.
  10. Kiku, D.; Monno, Y.; Tanaka, M.; Okutomi, M. Minimized-Laplacian Residual Interpolation for Color Image Demosaicking. In Proceedings of the IS&T/SPIE Electronic Imaging, San Francisco, CA, USA, 2–6 February 2014; Volume 9023.
  11. Bayer, B.E. Color Imaging Array. U.S. Patent 3,971,065A, 5 March 1975.
  12. Rust, D.M. Integrated Dual Imaging Detector. U.S. Patent 5,438,414, 22 January 1995.
  13. Tokuda, T.; Sato, S.; Yamada, H.; Sasagawa, K.; Ohta, J. Polarisation-analysing CMOS photosensor with monolithically embedded wire grid polariser. Electron. Lett. 2009, 45, 228–229.
  14. Brock, N.J.; Crandall, C.; Millerd, J.E. Snap-shot Imaging Polarimeter: Performance and Applications. In Proceedings of the SPIE Sensing Technology + Applications, Baltimore, MD, USA, 5–9 May 2014; Volume 9099.
  15. Mihoubi, S.; Lapray, P.-J.; Bigué, L. Survey of Demosaicking Methods for Polarization Filter Array Images. Sensors 2018, 18, 3688.
  16. Paliy, D.; Katkovnik, V.; Bilcu, R.; Alenius, S.; Egiazarian, K. Spatially adaptive color filter array interpolation for noiseless and noisy data. Int. J. Imag. Syst. Tech. 2007, 17, 105–122.
  17. Pekkucuksen, I.; Altunbasak, Y. Multiscale Gradients-Based Color Filter Array Interpolation. IEEE Trans. Image Process. 2013, 22, 157–165.
  18. Kiku, D.; Monno, Y.; Tanaka, M.; Okutomi, M. Residual Interpolation for Color Image Demosaicking. In Proceedings of the 2013 20th IEEE International Conference on Image Processing (ICIP 2013), Melbourne, Australia, 15–18 September 2013; pp. 2304–2308.
  19. Alleysson, D.; Susstrunk, S.; Herault, J. Linear demosaicing inspired by the human visual system. IEEE Trans. Image Process. 2005, 14, 439–449.
  20. Dubois, E. Frequency-domain methods for demosaicking of Bayer-sampled color images. IEEE Signal Proc. Lett. 2005, 12, 847–850.
  21. Leung, B.; Jeon, G.; Dubois, E. Least-Squares Luma-Chroma Demultiplexing Algorithm for Bayer Demosaicking. IEEE Trans. Image Process. 2011, 20, 1885–1894.
  22. Mairal, J.; Elad, M.; Sapiro, G. Sparse representation for color image restoration. IEEE Trans. Image Process. 2008, 17, 53–69.
  23. Moghadam, A.A.; Aghagolzadeh, M.; Kumar, M.; Radha, H. Compressive Framework for Demosaicing of Natural Images. IEEE Trans. Image Process. 2013, 22, 2356–2371.
  24. Kokkinos, F.; Lefkimmiatis, S. Deep Image Demosaicking Using a Cascade of Convolutional Residual Denoising Networks. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 317–333.
  25. Gao, S.K.; Gruev, V. Bilinear and bicubic interpolation methods for division of focal plane polarimeters. Opt. Express 2011, 19, 26161–26173.
  26. Gao, S.K.; Gruev, V. Gradient-based interpolation method for division-of-focal-plane polarimeters. Opt. Express 2013, 21, 1137–1151.
  27. Zhang, J.C.; Luo, H.B.; Hui, B.; Chang, Z. Image interpolation for division of focal plane polarimeters with intensity correlation. Opt. Express 2016, 24, 20799–20807.
  28. Wu, R.Y.; Zhao, Y.Q.; Li, N.; Kong, S.G. Polarization image demosaicking using polarization channel difference prior. Opt. Express 2021, 29, 22066–22079.
  29. Morimatsu, M.; Monno, Y.; Tanaka, M.; Okutomi, M. Monochrome and Color Polarization Demosaicking Using Edge-Aware Residual Interpolation. In Proceedings of the 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25–28 October 2020; pp. 2571–2575.
  30. He, K.M.; Sun, J.; Tang, X.O. Guided Image Filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409.
  31. Ahmed, A.; Zhao, X.J.; Gruev, V.; Zhang, J.C.; Bermak, A. Residual interpolation for division of focal plane polarization image sensors. Opt. Express 2017, 25, 10651–10662.
  32. Li, N.; Zhao, Y.Q.; Pan, Q.; Kong, S.G. Demosaicking DoFP images using Newton's polynomial interpolation and polarization difference model. Opt. Express 2019, 27, 1376–1391.
  33. Jiang, T.C.; Wen, D.S.; Song, Z.X.; Zhang, W.K.; Li, Z.X.; Wei, X.; Liu, G. Minimized Laplacian residual interpolation for DoFP polarization image demosaicking. Appl. Opt. 2019, 58, 7367–7374.
  34. Liu, S.M.; Chen, J.J.; Xun, Y.; Zhao, X.J.; Chang, C.H. A New Polarization Image Demosaicking Algorithm by Exploiting Inter-Channel Correlations With Guided Filtering. IEEE Trans. Image Process. 2020, 29, 7076–7089.
  35. Zhang, J.C.; Shao, J.B.; Luo, H.B.; Zhang, X.Y.; Hui, B.; Chang, Z.; Liang, R.G. Learning a convolutional demosaicing network for microgrid polarimeter imagery. Opt. Lett. 2018, 43, 4534–4537.
  36. Wen, S.J.; Zheng, Y.Q.; Lu, F.; Zhao, Q.P. Convolutional demosaicing network for joint chromatic and polarimetric imagery. Opt. Lett. 2019, 44, 5646–5649.
  37. Elad, M.; Aharon, M. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 2006, 15, 3736–3745.
  38. Huang, L.; Xiao, L.; Wei, Z. A Nonlocal Sparse Representation Method for Color Demosaicking. Acta Electron. Sin. 2014, 42, 66–73.
  39. Lapray, P.J.; Gendre, L.; Foulonneau, A.; Bigue, L. A database of polarimetric and multispectral images in the visible and NIR regions. In Proceedings of the SPIE Photonics Europe, Strasbourg, France, 22–26 April 2018; Volume 10677.
  40. Mahalanobis, A.; Vijaya Kumar, B.V.K.; Juday, R.D. Correlation Pattern Recognition; Cambridge University Press: Cambridge, UK, 2005.
  41. Wen, S.; Zheng, Y.; Lu, F. A Sparse Representation Based Joint Demosaicing Method for Single-Chip Polarized Color Sensor. IEEE Trans. Image Process. 2021, 30, 4171–4182.
  42. Qiu, S.M.; Fu, Q.; Wang, C.L.; Heidrich, W. Linear Polarization Demosaicking for Monochrome and Colour Polarization Focal Plane Arrays. Comput. Graph. Forum 2021, 40, 77–89.
Figure 1. Micropolarization array (a) and polarization of raw mosaic image (b).
Figure 2. PIRC demosaicing process.
Figure 3. Using a raw mosaic image as the ground truth for reference evaluation.
Figure 4. Polarization image dataset collected by our DoFP polarization camera.
Figure 5. Outdoor scene pseudo-color images synthesized by the polarization images after PIRC demosaicing. (a) is the light intensity image; (b) is the pseudo-color image synthesized by polarization information; (c–g) show the targets marked with colored boxes: water on the ground, buses, cars, sewer manhole covers, and cars hidden under the canopy, respectively.
Table 1. The average PSNR and CPSNR of the comparative results on the PSD [39] dataset.

| Metric | Bilinear | Bicubic | GBI | IPIC | EARI | Ours |
|---|---|---|---|---|---|---|
| PSNR I90° | 45.9743 | 47.7189 | 47.5476 | 47.0923 | 49.2188 | **49.5521** |
| PSNR I0° | 45.7130 | 47.4576 | 47.2032 | 46.7278 | **49.4986** | 49.4521 |
| PSNR I45° | 40.8210 | 41.5623 | 41.0495 | 40.9053 | **42.7548** | 42.5748 |
| PSNR I135° | 44.1407 | 46.1404 | 45.0915 | 44.8752 | **48.0021** | 47.7359 |
| PSNR I | 47.1367 | 48.9923 | 48.0670 | 48.0852 | **50.7618** | 50.5035 |
| CPSNR | 44.0212 | 45.3788 | 44.7658 | 44.5683 | **46.8622** | 46.7302 |
Table 2. The average PSNR and CPSNR of the comparative results on the JCPD [41] "indoor light" dataset.

| Metric | Bilinear | Bicubic | GBI | IPIC | EARI | Ours |
|---|---|---|---|---|---|---|
| PSNR I90° | 37.8308 | 39.0858 | 39.0689 | 38.8231 | 41.9451 | **42.5955** |
| PSNR I0° | 38.0426 | 39.2896 | 39.2959 | 39.0383 | **42.3034** | 42.1324 |
| PSNR I45° | 38.0524 | 39.2908 | 39.3114 | 39.0639 | **41.8135** | 41.6400 |
| PSNR I135° | 37.9027 | 39.1411 | 39.1632 | 38.9212 | **41.8940** | 41.6849 |
| PSNR I | 41.0207 | 42.6195 | 42.2517 | 42.3376 | **45.6459** | 45.4912 |
| CPSNR | 38.4160 | 39.6978 | 39.6642 | 39.4527 | **42.4921** | 42.3369 |
Table 3. The average PSNR and CPSNR of the comparative results on the JCPD [41] "polar light" dataset.

| Metric | Bilinear | Bicubic | GBI | IPIC | EARI | Ours |
|---|---|---|---|---|---|---|
| PSNR I90° | 43.3966 | 44.2719 | 44.4778 | 44.2141 | 45.3170 | **45.9523** |
| PSNR I0° | 41.8220 | 42.8536 | 43.0237 | 42.8074 | 44.2339 | **44.4133** |
| PSNR I45° | 40.2640 | 41.2586 | 41.0635 | 41.0114 | 41.7744 | **42.1586** |
| PSNR I135° | 42.2719 | 43.3034 | 43.4551 | 43.1951 | 44.4374 | **44.6750** |
| PSNR I | 45.4879 | 46.8313 | 46.6407 | 46.6710 | 47.9939 | **48.3255** |
| CPSNR | 42.2504 | 43.2872 | 43.2981 | 43.1510 | 44.2396 | **44.5935** |
Table 4. The average PSNR and CPSNR of the comparative results on the Qiu [42] dataset.

| Metric | Bilinear | Bicubic | GBI | IPIC | EARI | Ours |
|---|---|---|---|---|---|---|
| PSNR I90° | 45.4405 | 45.9100 | 46.1176 | 45.8781 | **47.7405** | 47.7238 |
| PSNR I0° | 45.1386 | 45.5986 | 45.8165 | 45.5700 | **47.5268** | 47.4604 |
| PSNR I45° | 43.8359 | 44.2798 | 44.4799 | 44.2611 | **46.3113** | 46.1561 |
| PSNR I135° | 44.1497 | 44.5925 | 44.8264 | 44.5903 | **46.7437** | 46.4882 |
| PSNR I | 47.5776 | 48.2071 | 48.2397 | 48.2368 | **50.2006** | 50.0745 |
| CPSNR | 44.7896 | 45.2569 | 45.4470 | 45.2367 | **47.2583** | 47.1266 |
Table 5. The average PSNR and CPSNR of the comparative results on the EARI [29] dataset.

| Metric | Bilinear | Bicubic | GBI | IPIC | EARI | Ours |
|---|---|---|---|---|---|---|
| PSNR I90° | 42.5052 | 43.6349 | 43.5218 | 43.1572 | **46.8653** | 46.5659 |
| PSNR I0° | 41.5901 | 42.4932 | 42.4689 | 42.0089 | 44.5156 | **44.5318** |
| PSNR I45° | 42.3522 | 43.4642 | 43.3320 | 42.9762 | **46.5403** | 46.2573 |
| PSNR I135° | 41.5906 | 42.4905 | 42.4565 | 42.0234 | **44.3447** | 44.3395 |
| PSNR I | 44.9012 | 46.2316 | 45.7717 | 45.7528 | **48.9070** | 48.7685 |
| CPSNR | 42.4162 | 43.4488 | 43.3319 | 42.9681 | **45.8718** | 45.7766 |
Table 6. The average PSNR and CPSNR of the comparative results on the "still" dataset.

| Metric | Bilinear | Bicubic | GBI | IPIC | EARI | Ours |
|---|---|---|---|---|---|---|
| PSNR I90° | 42.6294 | 42.7586 | 43.1299 | 42.7421 | 43.3345 | **43.5794** |
| PSNR I0° | 42.6440 | 42.7406 | 43.1475 | 42.7236 | 43.3126 | **43.5161** |
| PSNR I45° | 42.3462 | 42.4418 | 42.8330 | 42.4047 | 43.0599 | **43.2954** |
| PSNR I135° | 42.2083 | 42.3328 | 42.7339 | 42.3044 | 42.9512 | **43.1924** |
| PSNR I | 46.7591 | 47.2200 | 47.5390 | 47.2189 | 47.6745 | **48.0496** |
| CPSNR | 43.0325 | 43.1731 | 43.5595 | 43.1493 | 43.7586 | **44.0018** |
Table 7. The average PSNR and CPSNR of the comparative results on the "indoor" dataset.

| Metric | Bilinear | Bicubic | GBI | IPIC | EARI | Ours |
|---|---|---|---|---|---|---|
| PSNR I90° | 42.0446 | 42.7083 | 43.1710 | 42.9867 | 43.4097 | **43.9911** |
| PSNR I0° | 42.7276 | 43.3628 | 43.8270 | 43.6124 | 44.3127 | **44.6737** |
| PSNR I45° | 43.3046 | 43.9254 | 44.4055 | 44.1752 | 44.7508 | **45.1523** |
| PSNR I135° | 42.4136 | 43.0817 | 43.5635 | 43.3420 | 43.8367 | **44.3540** |
| PSNR I | 46.3890 | 47.4204 | 47.9224 | 47.7924 | 48.1915 | **48.8117** |
| CPSNR | 43.1252 | 43.8093 | 44.2832 | 44.0801 | 44.6053 | **45.0960** |
Table 8. The average PSNR and CPSNR of the comparative results on the "outdoor" dataset.

| Metric | Bilinear | Bicubic | GBI | IPIC | EARI | Ours |
|---|---|---|---|---|---|---|
| PSNR I90° | 28.1875 | 28.4304 | 28.3835 | 27.9342 | **29.6191** | 29.4768 |
| PSNR I0° | 29.1214 | 29.3598 | 29.3973 | 28.9050 | **30.4234** | 30.3209 |
| PSNR I45° | 28.7657 | 29.0162 | 29.0302 | 28.5602 | **30.2432** | 30.0793 |
| PSNR I135° | 27.7849 | 28.0249 | 27.9010 | 27.4841 | **29.1247** | 28.9946 |
| PSNR I | 31.6101 | 32.0732 | 31.7485 | 31.5990 | **33.2384** | 33.0928 |
| CPSNR | 28.9001 | 29.1659 | 29.0954 | 28.6748 | **30.3123** | 30.1762 |
Table 9. Comparison of time cost (processing time in seconds).

| Image Size | Bilinear | Bicubic | GBI | IPIC | EARI | Ours |
|---|---|---|---|---|---|---|
| 1244 × 1024 | 0.0737 | 0.0949 | 0.2542 | 0.3357 | 2.0328 | 1.3846 |
| 1024 × 1024 | 0.0613 | 0.0800 | 0.2190 | 0.2777 | 1.7020 | 1.1751 |
| 1024 × 768 | 0.0444 | 0.0595 | 0.1856 | 0.2319 | 1.2074 | 0.8586 |
| 720 × 540 | 0.0222 | 0.0286 | 0.0794 | 0.1021 | 0.5698 | 0.4334 |
Table 10. The average PSNR and CPSNR of the controlled experiment results on the PSD [39] dataset.

| ε | PSNR I90° | PSNR I0° | PSNR I45° | PSNR I135° | PSNR I | CPSNR |
|---|---|---|---|---|---|---|
| 0 | 49.4534 | 49.2667 | 42.4710 | 47.5507 | 50.3324 | 46.6011 |
| 0.001 | **49.5673** | 49.4367 | 42.5582 | 47.7009 | 50.4826 | 46.7139 |
| 0.005 | 49.5521 | **49.4521** | **42.5748** | **47.7359** | **50.5035** | **46.7302** |
| 0.01 | 49.4934 | 49.4143 | 42.5702 | 47.7339 | 50.4982 | 46.7172 |
| 0.1 | 49.4406 | 49.3700 | 42.5614 | 47.7119 | 50.4832 | 46.6978 |
| 1 | 49.3792 | 49.3305 | 42.5566 | 47.7177 | 50.4820 | 46.6857 |
| 10 | 49.3490 | 49.3045 | 42.5505 | 47.6994 | 50.4708 | 46.6724 |
| 100 | 49.3595 | 49.3158 | 42.5539 | 47.7150 | 50.4789 | 46.6801 |
Table 11. The results of the original guided filter with different ε on the PSD [39] dataset.

| ε | PSNR I90° | PSNR I0° | PSNR I45° | PSNR I135° | PSNR I | CPSNR |
|---|---|---|---|---|---|---|
| 0 | 45.5900 | 45.7077 | 42.2698 | 45.7752 | 48.2528 | 45.0309 |
| 0.001 | 45.5987 | 45.7166 | 42.2721 | 45.7821 | 48.2603 | 45.0367 |
| 0.01 | 45.6763 | 45.7954 | 42.2918 | 45.8434 | 48.3269 | 45.0881 |
| 0.05 | 46.0070 | 46.1320 | 42.3697 | 46.0980 | 48.6067 | 45.3019 |
| 0.1 | 46.3239 | 46.4516 | 42.4275 | 46.3039 | 48.8561 | 45.4872 |
| 1 | **48.8572** | **48.9465** | **42.4346** | **47.3465** | **50.2805** | **46.4396** |
| 10 | 47.5377 | 47.2827 | 41.4780 | 45.6522 | 48.5795 | 45.1483 |
| 100 | 46.6212 | 46.3342 | 41.0616 | 44.7765 | 47.7319 | 44.4821 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
