Article

Multispectral Demosaicing Based on Iterative-Linear-Regression Model for Estimating Pseudo-Panchromatic Image

School of Electrical and Electronic Engineering, Yonsei University, Seoul 03722, Republic of Korea
* Author to whom correspondence should be addressed.
Sensors 2024, 24(3), 760; https://doi.org/10.3390/s24030760
Submission received: 14 December 2023 / Revised: 13 January 2024 / Accepted: 23 January 2024 / Published: 24 January 2024
(This article belongs to the Special Issue Image Processing in Sensors and Communication Systems)

Abstract

This paper proposes a method for demosaicing raw images captured by multispectral cameras. The proposed method estimates a pseudo-panchromatic image (PPI) via an iterative-linear-regression model and utilizes the estimated PPI for multispectral demosaicing. The PPI is estimated through horizontal and vertical guided filtering, with the subsampled multispectral-filter-array (MSFA) image and the low-pass-filtered MSFA image as the guide image and filtering input, respectively. The number of iterations is determined automatically according to a predetermined criterion. Spectral differences between the estimated PPI and the MSFA are calculated for each channel, each spectral difference is interpolated using directional interpolation, and the interpolated spectral differences are combined using a weighted sum, with the weights calculated from the estimated PPI. The experimental results indicate that the proposed method outperforms state-of-the-art methods with regard to spatial and spectral fidelity for both synthetic and real-world images.

1. Introduction

Commercial cameras, which capture traditional red-green-blue-(RGB) images, typically record only three colors in the visible band, and they are commonly used for general landscapes and portraits, making them one of the most popular camera types. However, with the development of various industries, there is a growing need to record or identify objects that are not easily discernible in RGB images. To meet this demand, multispectral cameras have been developed. Multispectral imaging has become an increasingly important tool in various fields, such as remote sensing [1], agriculture [2], and biomedical imaging [3]. These imaging systems capture information from multiple spectral bands, thereby providing valuable information that is not visible in traditional grayscale or RGB imaging.
There are various methods for acquiring multispectral images, including rotating a set of optical filters, one per band, in front of the sensor. Although this approach can capture multispectral images at full resolution for each channel, it is unsuitable for capturing moving subjects. To address this issue, cameras employing the one-snapshot method are used. These cameras acquire mosaic images when a photograph is captured. The resulting mosaic image appears similar to the Bayer pattern [4] used in commercial RGB cameras, as shown in Figure 1a. The mosaic patterns of multispectral-filter arrays (MSFAs) vary depending on the manufacturer. The most commonly used pattern is a 4 × 4 array, which can be divided into two cases: one where there is a dominant channel, e.g., green, as in Figure 1b [5], and another where all the channels have the same probability of appearance of 1/16, as shown in Figure 1c [6].
A mosaic image is a two-dimensional image in which not every channel is measured at every pixel and, therefore, requires demosaicing to estimate unmeasured pixels. Since the introduction of the original snapshot camera for Bayer patterns, several demosaicing methods have been developed. There are three main traditional approaches: using the color-ratio domain, which assumes a constant ratio between colors in the local region [7]; using the color-difference domain, which assumes a constant difference between colors in the local region [8]; and using the residual domain [9] with guided filtering [10]. These methods first interpolate the dominant green channel and then interpolate the remaining R and B channels, using the interpolated G channel. Each channel is interpolated to restore high frequencies through edge estimation. Recent advances in demosaicing have led to the emergence of techniques based on deep learning, in addition to the aforementioned traditional methods. These methods typically use convolutional neural networks (CNNs) to train a network to generate an original image from a raw-image input. Gharbi et al. [11] proposed a joint-demosaicing-and-denoising method using residual networks.
Compared with Bayer filters, the MSFA is a relatively new technology. Therefore, demosaicing methods for MSFAs have been developed by modifying and extending Bayer-pattern-based demosaicing methods. The simplest method of demosaicing MSFAs is to use weighted bilinear filters for each channel. However, this approach blurs the image. To overcome this limitation, a method using the spectral-difference domain, which is similar to the color-difference domain in Bayer-pattern-based methods, was developed [12]. Additionally, the binary-tree-based-edge-sensing (BTES) method [13] was developed, which first interpolates the centers of the unoccupied pixels. The multispectral-local-directional-interpolation (MLDI) method [14] was also developed, which combines spectral-difference domains with BTES. However, the MLDI method was optimized for its proposed MSFA rather than a general MSFA, because the order of adjacent spectral bands must be offset to match the BTES order. Moreover, a method was developed for interpolating multispectral channels by creating a pseudo-panchromatic image (PPI) as a guide [15]. This method is suitable for any non-redundant 4 × 4 MSFA. In addition, deep-learning-based multispectral-demosaicing methods have been developed [16,17,18,19], which typically produce better results than traditional methods. However, considerably less training data are available for multispectral demosaicing than for Bayer-filter demosaicing. Consequently, the data are often insufficient for training a complex network, and if the filter arrangement changes, the network must be retrained. In this paper, we propose a method that addresses the inaccurate estimation of high frequencies in PPI estimation. Additionally, conventional studies used only directional information from the raw image during demosaicing, which is insufficient for estimating the PPI. Because the PPI is an image representing all channels, directional information derived from it must be included in the demosaicing process. Our approach builds on the following observations: (1) prior research [15] has demonstrated the usefulness of the PPI for multispectral demosaicing; (2) the PPI can be estimated from the high frequencies of the MSFA image; (3) guided filtering restores the high-frequency components of a guide image while preserving details. To this end, we propose a method that uses guided filtering to estimate the PPI and then restores high frequencies for each channel by identifying edges according to the estimated PPI. Our approach is optimized for 4 × 4 MSFA patterns without a dominant band but can be adapted to other patterns.
The main contributions of this study are as follows:
  • We propose a novel method for iterative-guided-filtering-pseudo-panchromatic-image-(IGFPPI) estimation that involves performing iterative guided filtering in both the horizontal and vertical directions, and combining the results.
  • The proposed guided-filtering technique is iterative and automatically determines the stopping criterion for each image.
  • We use the estimated IGFPPI to determine the weights of each channel, and we obtain the interpolated spectral-difference domain through a weighted sum of the differences between the IGFPPI and the spectral channels. Finally, we add the IGFPPI to obtain the demosaicing result, following the demosaicing order of the BTES method.
We conducted extensive experiments to compare the quantitative and qualitative results of the proposed method for the peak-signal-to-noise ratio (PSNR), the structural-similarity-index measure (SSIM) [20], and the spectral-angle-mapper-(SAM) [21] metrics to those of previously reported methods. The results indicated that the proposed method outperformed both traditional and deep-learning methods. In addition to using the synthesized data, we conducted experiments on actual images captured by IMEC cameras. The demosaicing results for these real-world images suggest that the proposed method performs well in practical situations.
The remainder of this paper is organized as follows: Section 2 presents related work. Section 3 describes the proposed method. Section 4 presents the experimental procedures and results. Section 5 presents our conclusions.

2. Related Work

The proposed algorithm is designed to be effective for multispectral cameras that acquire images in multiple spectral bands. This section presents an observation model that describes the image-acquisition process of a multispectral camera. Our algorithm builds on the principles of guided image filtering and PPI estimation, which allow accurate demosaicing of multispectral images. Herein, we review these methods.

2.1. Observation Model

The observation model of a multispectral camera can be expressed as follows:
$$I_k^c = Q\left( \int_{\lambda=a}^{b} E(\lambda)\, R_k(\lambda)\, T^c(\lambda)\, \mathrm{d}\lambda \right),$$
where $I_k^c$ represents the acquired pixel value of channel $c$ at pixel $k$; $Q(\cdot)$ is the quantization function; $a$ and $b$ represent the minimum and maximum of the spectral range of the multispectral camera, respectively; $E(\lambda)$ represents the relative spectral power distribution of the light source; $R_k(\lambda)$ is the spectral-reflectance factor of the subject at pixel $k$; and $T^c(\lambda)$ represents the transmittance of MSFA channel $c$.
From the observation model, a raw image of a multispectral camera with N channels is defined as follows:
$$I_k^{MSFA} = \sum_{c=1}^{N} I_k^c\, M_k^c,$$
where $I_k^{MSFA}$ represents the raw image, $I_k^c$ represents the full-resolution value of channel $c$ at pixel $k$, and $M^c$ is the binary mask (an image containing only 0s and 1s) whose value $M_k^c$ indicates whether pixel $k$ belongs to MSFA channel $c$.
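As a concrete illustration of Equation (2), the following sketch builds a raw MSFA image from a full-resolution multispectral cube by applying per-channel binary masks. It is a minimal example, assuming a non-redundant 4 × 4 pattern given as a matrix of band indices; the function name, the random test cube, and the index convention are illustrative only.

```python
import numpy as np

def mosaic_from_cube(cube, pattern):
    """Sample a full-resolution cube (H, W, N) into a single-plane raw MSFA image,
    following I^MSFA_k = sum_c I^c_k * M^c_k with binary masks M^c."""
    H, W, _ = cube.shape
    # Tile the 4x4 pattern of band indices over the whole image.
    tiled = np.tile(pattern, (H // 4 + 1, W // 4 + 1))[:H, :W]
    raw = np.zeros((H, W), dtype=cube.dtype)
    for c in range(cube.shape[2]):
        mask = (tiled == c)              # binary mask M^c for channel c
        raw[mask] = cube[:, :, c][mask]  # keep channel c only at its filter positions
    return raw

# Example with a 16-band cube and the band indices 0..15 arranged in a 4x4 array.
pattern = np.arange(16).reshape(4, 4)
cube = np.random.rand(64, 64, 16).astype(np.float32)
raw = mosaic_from_cube(cube, pattern)
```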

2.2. Pseudo-Panchromatic Image

Mihoubi et al. proposed PPI estimation as a guide for multispectral demosaicing [15]. The PPI $I_k^M$ at pixel $k$ is defined as the average over all channels of the multispectral image, as follows:
$$I_k^M = \frac{1}{N} \sum_{c=1}^{N} I_k^c.$$
They developed a two-step algorithm for estimating the PPI. The first step creates an initial PPI containing the low-frequency components of the raw image. The initial PPI $\bar{I}^M$ is estimated using a simple Gaussian filter $M$, as follows:
$$\bar{I}^M = I^{MSFA} \ast M,$$
$$M = \frac{1}{64} \begin{bmatrix} 1 & 2 & 2 & 2 & 1 \\ 2 & 4 & 4 & 4 & 2 \\ 2 & 4 & 4 & 4 & 2 \\ 2 & 4 & 4 & 4 & 2 \\ 1 & 2 & 2 & 2 & 1 \end{bmatrix},$$
where $I^{MSFA}$ represents the raw image. In the second step, a high-frequency component is added to the initial PPI. The high-frequency component is calculated under the assumption that the local difference of the initial PPI is similar to that of the raw image, where the local difference is the difference between the value at an arbitrary pixel $k$ and the weighted average of its eight nearest neighbors $q \in \tilde{N}_k$ with the same channel. The final PPI $\hat{I}_k^M$ at pixel $k$ is defined as follows:
$$\hat{I}_k^M = I_k^{MSFA} + \frac{\sum_{q \in \tilde{N}_k} \gamma_q \left( \bar{I}_q^M - I_q^{MSFA} \right)}{\sum_{q \in \tilde{N}_k} \gamma_q},$$
where $\gamma_q$ is a weight calculated as the reciprocal of the difference between the raw-image values at $k$ and $q$.
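The following sketch illustrates this two-step PPID-style estimate: low-pass filtering of the raw mosaic with the kernel above, followed by the local-difference correction. It assumes a non-redundant 4 × 4 MSFA, so the eight same-channel neighbors lie four pixels away, and it adds a small constant to the reciprocal weights to avoid division by zero; image borders are skipped for brevity.

```python
import numpy as np
from scipy.ndimage import convolve

# 5x5 low-pass kernel M used to obtain the initial (low-frequency) PPI.
M = np.array([[1, 2, 2, 2, 1],
              [2, 4, 4, 4, 2],
              [2, 4, 4, 4, 2],
              [2, 4, 4, 4, 2],
              [1, 2, 2, 2, 1]], dtype=np.float64) / 64.0

def estimate_ppi(raw):
    """Initial PPI by convolution, then a per-pixel high-frequency correction."""
    ppi0 = convolve(raw.astype(np.float64), M, mode='mirror')
    out = raw.astype(np.float64).copy()
    H, W = raw.shape
    # Same-channel neighbours of a non-redundant 4x4 MSFA are 4 pixels away.
    offsets = [(-4, -4), (-4, 0), (-4, 4), (0, -4), (0, 4), (4, -4), (4, 0), (4, 4)]
    for i in range(4, H - 4):
        for j in range(4, W - 4):
            num = den = 0.0
            for di, dj in offsets:
                q = (i + di, j + dj)
                gamma = 1.0 / (abs(raw[i, j] - raw[q]) + 1e-6)  # reciprocal weight
                num += gamma * (ppi0[q] - raw[q])
                den += gamma
            out[i, j] = raw[i, j] + num / den
    return out
```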

2.3. Guided Filtering

Guided filtering is a powerful and versatile technique for image processing with numerous applications, including denoising, deblurring, edge-preserving smoothing, and tone mapping. It is particularly useful for images with textures, where traditional filters may not preserve important features. The guided filter assumes a local linear model between the guidance image and the filtering output, which can be expressed as
$$q_l = a_k I_l + b_k, \quad \forall\, l \in \omega_k,$$
where $I_l$ represents the guidance image, $q_l$ represents the filtered output, $a_k$ and $b_k$ are the filter coefficients, and $l$ is a pixel coordinate in a local window $\omega_k$ centered at pixel $k$. The filter coefficients are determined by minimizing the following cost function within the window:
$$E(a_k, b_k) = \sum_{l \in \omega_k} \left( a_k I_l + b_k - p_l \right)^2,$$
and its solution is given as
$$a_k = \frac{\frac{1}{N_\omega} \sum_{l \in \omega_k} I_l\, p_l - \mu_k \bar{p}_k}{\sigma_k^2}, \qquad b_k = \bar{p}_k - a_k \mu_k,$$
where $\mu_k$ and $\sigma_k^2$ represent the mean and variance of the guidance image $I$ in the local window $\omega_k$, $\bar{p}_k$ represents the mean of the filtering input $p$ in $\omega_k$, and $N_\omega$ represents the number of pixels in $\omega_k$.
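A compact sketch of this filter with square box windows is given below. It follows the closed-form coefficients above and, as in the original formulation of He et al., averages the coefficients of all windows covering a pixel before producing the output. The `eps` regularizer is not part of the cost function written above, so it defaults here to a tiny value only to avoid division by zero in flat regions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=2, eps=1e-6):
    """Guided filtering of input p with guide I over (2*radius+1)^2 box windows."""
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size=size, mode='mirror')

    mu = mean(I)                    # mu_k: window mean of the guide
    p_bar = mean(p)                 # window mean of the filtering input
    corr = mean(I * p)              # (1/N_w) * sum_l I_l p_l
    var = mean(I * I) - mu ** 2     # sigma_k^2

    a = (corr - mu * p_bar) / (var + eps)
    b = p_bar - a * mu
    # Average a and b over the overlapping windows, then apply the linear model.
    return mean(a) * I + mean(b)
```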

3. Proposed Algorithm

In this section, we describe the two main components of the proposed method. First, we explain the process of estimating the PPI from the raw image $I^{MSFA}$. Then, we describe the process of performing directional multispectral demosaicing using the estimated PPI.

3.1. Iterative Guided Filtering for Estimating PPI

The proposed IGFPPI framework comprises three steps, as shown in Figure 2. First, a low-pass filter is applied to the MSFA image to generate an initial image $\bar{I}$. Then, subsampling is performed, followed by iterative guided filtering. Finally, upsampling is performed to obtain the estimated PPI image.
The initial PPI, denoted as $\bar{I}$, includes the low-frequency components of all channels that contribute to the final PPI image; it is obtained with Equation (4), i.e., the low-pass filtering of Section 2.2. Next, we subsample both $\bar{I}$ and $I^{MSFA}$ for each channel as a preprocessing step for restoring the high frequencies of the final PPI. The subsampled versions of the raw image $I^{MSFA}$ and the initial PPI $\bar{I}$ in channel $c$ are denoted as $\dot{I}^c$ and $\dot{\bar{I}}^c$, respectively. The sizes of $I^{MSFA}$ and $\bar{I}$ are $(W \times H)$, and the sizes of $\dot{I}^c$ and $\dot{\bar{I}}^c$ are $(\frac{W}{4} \times \frac{H}{4})$, where $W$ and $H$ represent the width and height of the image, respectively. We use the subsampled $\dot{I}^c$ as the guidance image and the subsampled $\dot{\bar{I}}^c$ as the filtering input for the iterative guided filtering. Iterative guided filtering is performed separately in the horizontal and vertical directions. If the window size is increased to estimate more precise high frequencies, the estimate approaches the MSFA image, which is the guide image. To prevent this, we filter the horizontal and vertical directions separately and combine the two results to obtain the estimate. The window size used to calculate the linear coefficients is denoted as $(h \times v)$; horizontal guided filtering is used when $h > v$, and vertical guided filtering is used when $v > h$.
In the first iteration, $t = 0$, guided filtering is performed in the horizontal and vertical directions, using the subsampled $\dot{I}^c$ as the guidance image and the subsampled $\dot{\bar{I}}_0^c$ as the filtering input. The equations for this process are as follows:
$$\dot{\bar{I}}_1^{c,h}(i,j) = a_0^{c,h}(i,j)\, \dot{I}^c(i,j) + b_0^{c,h}(i,j), \qquad \dot{\bar{I}}_1^{c,v}(i,j) = a_0^{c,v}(i,j)\, \dot{I}^c(i,j) + b_0^{c,v}(i,j).$$
The pixel coordinates are represented by $(i,j)$. For $t \geq 1$, the iterative guided filtering is repeated using the following expressions:
$$\dot{\bar{I}}_{t+1}^{c,h}(i,j) = a_t^{c,h}(i,j)\, \dot{I}^c(i,j) + b_t^{c,h}(i,j), \qquad \dot{\bar{I}}_{t+1}^{c,v}(i,j) = a_t^{c,v}(i,j)\, \dot{I}^c(i,j) + b_t^{c,v}(i,j),$$
where $(a_t^{c,h}, b_t^{c,h})$ and $(a_t^{c,v}, b_t^{c,v})$ are the linear coefficients in the horizontal and vertical directions, respectively. The filtering inputs for iteration $t+1$ are the outputs of iteration $t$, i.e., $\dot{\bar{I}}_t^{c,h}$ and $\dot{\bar{I}}_t^{c,v}$, respectively.
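The sketch below shows one directional pass of this scheme on the subsampled planes: a guided-filter update with a rectangular window, wide for the horizontal direction and tall for the vertical one. The subsampling helper assumes the channel offsets can be read from a 4 × 4 matrix of band indices, and the window sizes follow the settings reported in Section 4.2; all names are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def directional_guided_step(guide, inp, h, v, eps=1e-6):
    """One guided-filtering pass with a (v x h) window: horizontal when h > v,
    vertical when v > h. Returns the filtered plane and the coefficients (a, b)."""
    mean = lambda x: uniform_filter(x, size=(v, h), mode='mirror')
    mu_I, mu_p = mean(guide), mean(inp)
    a = (mean(guide * inp) - mu_I * mu_p) / (mean(guide * guide) - mu_I ** 2 + eps)
    b = mu_p - a * mu_I
    return a * guide + b, a, b

def subsample(img, pattern, c):
    """Extract the (H/4 x W/4) plane of channel c, given a 4x4 matrix of band indices."""
    i0, j0 = np.argwhere(pattern == c)[0]
    return img[i0::4, j0::4]

# One iteration (t = 0) for channel c in both directions:
# guide_c = subsample(raw, pattern, c); inp_c = subsample(ppi0, pattern, c)
# out_h, a_h, b_h = directional_guided_step(guide_c, inp_c, h=7, v=3)
# out_v, a_v, b_v = directional_guided_step(guide_c, inp_c, h=3, v=7)
# The next iteration feeds out_h / out_v back in as the filtering inputs.
```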
Next, we describe the criterion block in Figure 2, which determines when the loop stops. The iterator has two conditions for stopping: (1) when each pixel stops changing, and (2) when the entire image stops changing. The loop stops when both of these conditions are satisfied.
The condition for each pixel to stop changing is determined by the following expressions:
$$d_t^{c,h}(i,j) = \left| \dot{\bar{I}}_t^{c,h}(i,j) - \dot{\bar{I}}_{t-1}^{c,h}(i,j) \right|, \qquad \delta_t^{c,h}(i,j) = \left| \dot{\bar{I}}_t^{c,h}(i,j-1) - \dot{\bar{I}}_t^{c,h}(i,j+1) \right|,$$
$$d_t^{c,v}(i,j) = \left| \dot{\bar{I}}_t^{c,v}(i,j) - \dot{\bar{I}}_{t-1}^{c,v}(i,j) \right|, \qquad \delta_t^{c,v}(i,j) = \left| \dot{\bar{I}}_t^{c,v}(i-1,j) - \dot{\bar{I}}_t^{c,v}(i+1,j) \right|,$$
where $d_t^{c,h}(i,j)$ represents the absolute difference between the horizontal results of the previous and current iterations, and $d_t^{c,v}(i,j)$ represents the absolute difference between the vertical results of the previous and current iterations. These two values indicate how much the image changes; as they converge to zero, there is little change at pixel $(i,j)$. Additionally, $\delta_t^{c,h}(i,j)$ represents the horizontal change within the current horizontal result; a value close to zero indicates that there is no change in the horizontal direction. Similarly, $\delta_t^{c,v}(i,j)$ represents the vertical change within the current vertical result. The criterion for pixel change is obtained by multiplying these two quantities, as follows:
$$D_t^{c,h}(i,j) = d_t^{c,h}(i,j) \cdot \delta_t^{c,h}(i,j), \qquad D_t^{c,v}(i,j) = d_t^{c,v}(i,j) \cdot \delta_t^{c,v}(i,j).$$
A pixel is considered to have stopped changing when $D_t^{c,h}(i,j) < \epsilon_{pixel}$ for the horizontal direction and $D_t^{c,v}(i,j) < \epsilon_{pixel}$ for the vertical direction, where $\epsilon_{pixel}$ is a predefined threshold.
The global condition under which the loop stops is calculated using the following expressions:
$$MAD^{c,h}(t) = \frac{1}{\dot{W} \times \dot{H}} \sum_{i=1}^{\dot{H}} \sum_{j=1}^{\dot{W}} d_t^{c,h}(i,j), \qquad MAD^{c,v}(t) = \frac{1}{\dot{W} \times \dot{H}} \sum_{i=1}^{\dot{H}} \sum_{j=1}^{\dot{W}} d_t^{c,v}(i,j),$$
where $\dot{W}$ and $\dot{H}$ represent the width and height of the subsampled image, respectively. The mean absolute difference (MAD) measures the extent to which the entire image changes and is calculated as the average absolute difference between the results of the previous and current iterations. Ye et al. determined convergence based solely on the MAD value [22]. However, our focus is the convergence of the difference between the current and previous MADs to zero, rather than the MAD itself approaching zero, because the MAD may not converge to zero owing to the conditions that prevent each pixel from changing. The difference in MAD between the current and previous iterations is calculated as follows:
$$\Delta MAD^{c,h}(t) = \left| MAD^{c,h}(t) - MAD^{c,h}(t-1) \right|, \qquad \Delta MAD^{c,v}(t) = \left| MAD^{c,v}(t) - MAD^{c,v}(t-1) \right|.$$
The final number of iterations, defined as $T$, is the smallest value of $t$ that satisfies both $\Delta MAD^{c,h}(t) < \epsilon_{global}$ and $\Delta MAD^{c,v}(t) < \epsilon_{global}$.
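For concreteness, the per-pixel and global tests can be evaluated as in the sketch below, which operates on one direction of one channel at a time. The thresholds match the values reported in Section 4.2; the use of `np.roll` (which wraps at the borders) and the handling of the first iteration are simplifications of this sketch.

```python
import numpy as np

def stopping_tests(prev, curr, direction, prev_mad=None,
                   eps_pixel=1e-4, eps_global=1e-3):
    """Per-pixel criterion D_t and MAD-based global criterion for one direction."""
    d = np.abs(curr - prev)                          # d_t: change between iterations
    axis = 1 if direction == 'h' else 0              # horizontal vs. vertical neighbours
    delta = np.abs(np.roll(curr, 1, axis=axis) - np.roll(curr, -1, axis=axis))
    D = d * delta                                    # per-pixel criterion D_t
    pixels_stopped = D < eps_pixel

    mad = d.mean()                                   # MAD(t)
    globally_stopped = (prev_mad is not None) and (abs(mad - prev_mad) < eps_global)
    return D, pixels_stopped, mad, globally_stopped
```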
The horizontal and vertical results at the final iteration $T$ are then combined via a weighted sum, as follows:
$$\dot{\hat{I}}^c(i,j) = \frac{w^{c,h}(i,j)\, \dot{\bar{I}}_T^{c,h}(i,j) + w^{c,v}(i,j)\, \dot{\bar{I}}_T^{c,v}(i,j)}{w^{c,h}(i,j) + w^{c,v}(i,j)},$$
where $w^{c,h}(i,j)$ and $w^{c,v}(i,j)$ are the weights in the horizontal and vertical directions, respectively, defined as follows:
$$w^{c,h}(i,j) = \frac{1}{D_T^{c,h}(i,j)}, \qquad w^{c,v}(i,j) = \frac{1}{D_T^{c,v}(i,j)},$$
so that a small criterion value contributes a large weight.
The final step involves guided upsampling of the subsampled channel $\dot{\hat{I}}^c$ to generate the final PPI image. To achieve this, we set the window size for the linear coefficients in guided filtering to $h = v$, and we then upsample the image of each channel to its position in the raw image. The guided upsampling is expressed by the following equation:
$$\hat{I}^{PPI}(4i+m, 4j+n) = a_T^c(i,j)\, \dot{I}^c(i,j) + b_T^c(i,j), \qquad (m,n) \in \{0,1,2,3\}^2,$$
where $(m,n) \in \{0,1,2,3\}^2$ determines the grid for upsampling and depends on the subsampled channel $c$. The indices $(m,n)$ represent the position of a pixel within a 4 × 4 block. For example, if $c = 1$ in Figure 1c, $(m,n)$ is $(3,3)$.
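The sketch below shows the direction fusion and one literal reading of the upsampling equation, in which the linear model evaluated at each subsampled pixel fills its 4 × 4 block of the full-resolution grid. A small constant is added to the reciprocal weights to avoid division by zero; how the per-channel upsampled planes are merged into a single PPI is not spelled out above, so that step is left outside this sketch.

```python
import numpy as np

def combine_directions(out_h, out_v, D_h, D_v):
    """Weighted fusion of the horizontal and vertical results at iteration T,
    with weights equal to the reciprocals of the per-pixel criteria."""
    w_h = 1.0 / (D_h + 1e-12)
    w_v = 1.0 / (D_v + 1e-12)
    return (w_h * out_h + w_v * out_v) / (w_h + w_v)

def guided_upsample(a_T, b_T, guide_sub):
    """Evaluate a_T * guide + b_T at each subsampled pixel (i, j) and copy the value
    to every position (4i + m, 4j + n), m, n in {0, 1, 2, 3}, of the full-size grid."""
    val = a_T * guide_sub + b_T
    return np.kron(val, np.ones((4, 4)))
```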

3.2. Directional Multispectral Demosaicing

In this section, we present the proposed multispectral-demosaicing method that utilizes the outcome of IGFPPI. The overall framework of the method is illustrated in Figure 3. We use the differences between the estimated PPI and each channel to generate the spectral-difference domain. We then perform directional interpolation of the unoccupied pixels in the spectral-difference domain. Finally, we add the interpolated image and the estimated PPI to obtain the final multispectral-demosaicing result. In Figure 3, the masking block sets the raw image $I^{MSFA}$ to zero everywhere except at the positions of the corresponding channel $c$.
The proposed directional-interpolation technique follows the interpolation order of the BTES method and calculates weights using the PPI. The BTES method first interpolates the center pixel in each step, resulting in four steps for a 4 × 4 MSFA, as shown in Figure 3. Here, the WC & WS block represents the weight calculation (WC) and the weighted sum (WS). Let $\Delta_{s1}^c(i,j)$ represent the center pixel of channel $c$ requiring interpolation in the first step. The weight and weighted-sum expressions in step 1 are given by Equations (18) and (19), where $s0$ refers to step 0, $s1$ refers to step 1, and $\gamma$ represents the weights:
$$\Delta_{s1}^c(i,j) = \frac{\gamma_{s0}^{NW}\, \Delta_{s0}^c(i-2,j-2) + \gamma_{s0}^{NE}\, \Delta_{s0}^c(i-2,j+2) + \gamma_{s0}^{SE}\, \Delta_{s0}^c(i+2,j+2) + \gamma_{s0}^{SW}\, \Delta_{s0}^c(i+2,j-2)}{\gamma_{s0}^{NW} + \gamma_{s0}^{NE} + \gamma_{s0}^{SE} + \gamma_{s0}^{SW}},$$
$$\gamma_{s0}^{NW} = \frac{1}{2\left| \hat{I}^{PPI}(i-2,j-2) - \hat{I}^{PPI}(i,j) \right| + \left| \hat{I}^{PPI}(i-1,j-1) - \hat{I}^{PPI}(i+1,j+1) \right|},$$
$$\gamma_{s0}^{NE} = \frac{1}{2\left| \hat{I}^{PPI}(i-2,j+2) - \hat{I}^{PPI}(i,j) \right| + \left| \hat{I}^{PPI}(i-1,j+1) - \hat{I}^{PPI}(i+1,j-1) \right|},$$
$$\gamma_{s0}^{SE} = \frac{1}{2\left| \hat{I}^{PPI}(i+2,j+2) - \hat{I}^{PPI}(i,j) \right| + \left| \hat{I}^{PPI}(i+1,j+1) - \hat{I}^{PPI}(i-1,j-1) \right|},$$
$$\gamma_{s0}^{SW} = \frac{1}{2\left| \hat{I}^{PPI}(i+2,j-2) - \hat{I}^{PPI}(i,j) \right| + \left| \hat{I}^{PPI}(i+1,j-1) - \hat{I}^{PPI}(i-1,j+1) \right|}.$$
The expressions for step 2 are given by Equations (20) and (21):
$$\Delta_{s2}^c(i,j) = \frac{\gamma_{s1}^{N}\, \Delta_{s1}^c(i-2,j) + \gamma_{s1}^{E}\, \Delta_{s1}^c(i,j+2) + \gamma_{s1}^{S}\, \Delta_{s1}^c(i+2,j) + \gamma_{s1}^{W}\, \Delta_{s1}^c(i,j-2)}{\gamma_{s1}^{N} + \gamma_{s1}^{E} + \gamma_{s1}^{S} + \gamma_{s1}^{W}},$$
$$\gamma_{s1}^{N} = \frac{1}{2\left| \hat{I}^{PPI}(i-2,j) - \hat{I}^{PPI}(i,j) \right| + \left| \hat{I}^{PPI}(i-1,j) - \hat{I}^{PPI}(i+1,j) \right|},$$
$$\gamma_{s1}^{E} = \frac{1}{2\left| \hat{I}^{PPI}(i,j+2) - \hat{I}^{PPI}(i,j) \right| + \left| \hat{I}^{PPI}(i,j+1) - \hat{I}^{PPI}(i,j-1) \right|},$$
$$\gamma_{s1}^{S} = \frac{1}{2\left| \hat{I}^{PPI}(i+2,j) - \hat{I}^{PPI}(i,j) \right| + \left| \hat{I}^{PPI}(i+1,j) - \hat{I}^{PPI}(i-1,j) \right|},$$
$$\gamma_{s1}^{W} = \frac{1}{2\left| \hat{I}^{PPI}(i,j-2) - \hat{I}^{PPI}(i,j) \right| + \left| \hat{I}^{PPI}(i,j-1) - \hat{I}^{PPI}(i,j+1) \right|}.$$
Steps 3 and 4 are performed in the same manner as steps 1 and 2. Finally, the multispectral image is obtained by adding the estimated PPI to the spectral-difference image obtained through the BTES ordering and directional interpolation, as follows:
$$\hat{I}^c = \hat{I}^{PPI} + \Delta^c.$$
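As an illustration, a step-1 interpolation of one unoccupied center pixel can be written as below. It assumes the reciprocal-weight form reconstructed above (with a small constant added to avoid division by zero) and that `delta0` already holds the known spectral differences (channel value minus estimated PPI) at the occupied positions; boundary handling is omitted.

```python
import numpy as np

def step1_interpolate(delta0, ppi, i, j):
    """Interpolate the spectral difference at a step-1 centre (i, j) from its four
    diagonal neighbours, weighted by PPI-based directional gradients."""
    def gamma(di, dj):
        far = abs(ppi[i + 2 * di, j + 2 * dj] - ppi[i, j])    # PPI difference to the neighbour
        near = abs(ppi[i + di, j + dj] - ppi[i - di, j - dj])  # PPI difference across the centre
        return 1.0 / (2.0 * far + near + 1e-6)

    dirs = [(-1, -1), (-1, 1), (1, 1), (1, -1)]                # NW, NE, SE, SW
    weights = [gamma(di, dj) for di, dj in dirs]
    values = [delta0[i + 2 * di, j + 2 * dj] for di, dj in dirs]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# After all four BTES steps fill the spectral-difference plane delta_c,
# the demosaiced channel is recovered as:  I_hat_c = ppi + delta_c.
```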

4. Experimental Results

4.1. Metrics

To evaluate the quality of the demosaicing results, we used quantitative metrics, such as the PSNR, SSIM, and SAM.
The PSNR, which measures the ratio between the maximum possible pixel value and the mean squared error between the reference and estimated images on a logarithmic scale, was calculated as follows:
$$PSNR(x, \hat{x}) = 10 \log_{10} \frac{MAX^2}{MSE(x, \hat{x})}, \qquad MSE(x, \hat{x}) = \frac{\left\| x - \hat{x} \right\|^2}{WH},$$
where $MAX$ represents the maximum value of the image, $MSE$ represents the mean squared error between the reference image $x$ and the estimated image $\hat{x}$, and $W$ and $H$ represent the width and height of the image, respectively.
The SSIM was used to evaluate the similarity between the reference image x and the estimated image x ^ . It was calculated using the following equation:
$$SSIM(x, \hat{x}) = \frac{\left( 2 \mu_x \mu_{\hat{x}} + c_1 \right) \left( 2 \sigma_{x\hat{x}} + c_2 \right)}{\left( \mu_x^2 + \mu_{\hat{x}}^2 + c_1 \right) \left( \sigma_x^2 + \sigma_{\hat{x}}^2 + c_2 \right)},$$
where $\mu_x$ and $\mu_{\hat{x}}$ represent the means of the image vectors $x$ and $\hat{x}$, respectively; $\sigma_x$ and $\sigma_{\hat{x}}$ represent their standard deviations; $\sigma_{x\hat{x}}$ represents the covariance between $x$ and $\hat{x}$; and $c_1$ and $c_2$ are constants used to prevent the denominator from approaching zero.
The SAM is commonly used to evaluate multispectral images. It represents the average of the angles formed by the reference and estimated image vectors and is calculated using the following formula:
$$SAM(x, \hat{x}) = \cos^{-1} \left( \frac{x \cdot \hat{x}}{\left\| x \right\| \left\| \hat{x} \right\|} \right).$$
For the PSNR and SSIM, larger values indicated better performance, and for the SAM, smaller values indicated better performance.
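The three metrics can be computed directly from the formulas above, as in the following sketch. The SSIM here is the single-window (global) form written above rather than the usual locally windowed average, and the SAM is averaged over the per-pixel spectral vectors of (H, W, N) cubes; the constants c1 and c2 follow the common defaults for a unit dynamic range and are assumptions of this sketch.

```python
import numpy as np

def psnr(x, x_hat, max_val=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((x - x_hat) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x, x_hat, c1=1e-4, c2=9e-4):
    """Single-window (global) SSIM following the formula above."""
    mu_x, mu_y = x.mean(), x_hat.mean()
    var_x, var_y = x.var(), x_hat.var()
    cov = np.mean((x - mu_x) * (x_hat - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def sam(x, x_hat):
    """Mean spectral angle (radians) between per-pixel spectra of (H, W, N) cubes."""
    dot = np.sum(x * x_hat, axis=-1)
    norms = np.linalg.norm(x, axis=-1) * np.linalg.norm(x_hat, axis=-1)
    return np.mean(np.arccos(np.clip(dot / (norms + 1e-12), -1.0, 1.0)))
```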

4.2. Dataset and Implementation Detail

In our experiments, we compared the proposed method to previously reported methods using the TokyoTech-31band (TT31) [23] and TokyoTech-59band (TT59) [24] datasets. The TT31 dataset includes 35 multispectral images, each containing 31 spectral bands ranging from 420 to 720 nm. The TT59 dataset includes 16 multispectral images with 59 spectral bands ranging from 420 to 1000 nm, with the bands spaced 10 nm apart. We excluded the popular CAVE dataset [25] from our experiments because it was used to train the conventional deep-learning methods. To generate the synthetic dataset, we used IMEC's "snapshot mosaic" multispectral camera sensor, i.e., XIMEA's xiSpec [26]. We utilized the publicly available normalized transmittance of this camera [15]. The camera uses a 4 × 4 MSFA whose central spectral bands $\lambda_c$ are 469, 480, 489, 499, 513, 524, 537, 551, 552, 566, 580, 590, 602, 613, 621, and 633 nm, arranged in ascending order as in Figure 1c. We used the normalized transmittance and the D65 illuminant to obtain multispectral images for each band, in accordance with (1). The obtained images were then sampled using (2) to generate the raw MSFA images.
The pixel values of the synthetic datasets ranged from 0 to 1. The window size was set to $h = 7$ and $v = 3$ for horizontal guided filtering, $h = 3$ and $v = 7$ for vertical guided filtering, and $h = 5$ and $v = 5$ for the final guided upsampling. We set $\epsilon_{pixel}$, which determines the change in each pixel, to $10^{-4}$, and $\epsilon_{global}$, which determines the change in the entire image, to $10^{-3}$.
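For convenience when running the sketches from Section 3, these settings can be collected in a single configuration object; the dictionary below simply restates the reported values, and the key names are illustrative rather than taken from a released implementation.

```python
# Parameter settings reported above, gathered in one place.
PARAMS = {
    "window_horizontal": (3, 7),   # (v, h): wide window for horizontal guided filtering
    "window_vertical":   (7, 3),   # (v, h): tall window for vertical guided filtering
    "window_upsample":   (5, 5),   # h = v = 5 for the final guided upsampling
    "eps_pixel":  1e-4,            # per-pixel stopping threshold
    "eps_global": 1e-3,            # global (MAD-based) stopping threshold
}
```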

4.3. Results for Synthetic Dataset and Real-World Images

For a quantitative evaluation of the proposed method, we compared it to six other methods. The first conventional method (CM1) was the BTES method [13], which prioritizes the interpolation of the empty center pixel in the spatial domain of each channel; interpolation is performed using a weighted sum, with weights calculated as the reciprocals of the differences between neighboring pixels. The second conventional method (CM2) was the spectral-difference (SD) method, which employs weighted bilinear filtering in the spectral-difference domain [27]. The third conventional method (CM3) was the iterative-spectral-difference (ItSD) method [28], in which the CM2 procedure is applied repeatedly for each channel. The fourth conventional method (CM4) was the MLDI method [14], which is similar to BTES except that the interpolation is performed in the spectral-difference domain instead of the spatial domain. The fifth conventional method (CM5) was the PPID method, which estimates the PPI as a guide image [15] and performs interpolation in the spectral-difference domain based on the PPI. The sixth conventional method (CM6) was the mosaic-convolution-attention-network (MCAN) method, an end-to-end demosaicing network that removes the mosaic pattern [16]. This deep-learning method was implemented using the code published online by the authors.
Figure 4 shows the results of the estimated PPIs as a guide image. Figure 4a displays the average of the original multispectral cube, Figure 4b shows the estimated PPI of PPID [15], and Figure 4c shows the estimated PPI of the proposed IGFPPI. The estimated PPI of PPID is blurred and has low contrast. However, the proposed IGFPPI restored high-frequency components better than PPID, and the contrast is also close to the original.
The results for the PSNR, SSIM, and SAM of TT31 are presented in Table 1, Table 2 and Table 3, respectively. In the tables, a dark-gray background indicates the best score and a light-gray background indicates the second-best score. Of the 35 images in the TT31 dataset, the proposed method had the best PSNR for 19 images and the second-best PSNR for 16 images. Additionally, it had the best SSIM for 20 images, the second-best SSIM for 15 images, the best SAM for 18 images, and the second-best SAM for 17 images. The average PSNR, SSIM, and SAM values for the TT31 dataset indicated that the proposed method outperformed the other methods.
Figure 5 and Figure 6 present the qualitative evaluation results for TT31, including the Butterfly and ChartCZP images, with the images cropped to highlight differences. For the qualitative evaluation, we formed RGB images from the demosaiced multispectral cube by taking channel 16 for red, channel 6 for green, and channel 1 for blue. Figure 5a–h and Figure 6a–h show these RGB images, and Figure 5i–p and Figure 6i–p show the corresponding error maps. CM1 produced the blurriest images; CM2 and CM3 estimated high frequencies reasonably well, but artifacts are visible. CM4 and CM6 nearly perfectly restored high frequencies in the resolution chart; however, the mosaic pattern was not entirely removed from the general color images. CM6 performs demosaicing using a network that erases the mosaic pattern for each channel; although it handles the 16 channels of an MSFA, the arrangement differs from that used in the original CM6 paper. In our experiments, only the chart images, which are largely monochrome, obtained high scores with this method, because the mosaic pattern is easily erased in images where all channels change consistently but remains in images where a large change occurs in a specific color. In general, the outcomes of CM5 and PM (the proposed method) appeared similar. However, for images such as the resolution chart, PM exhibited superior high-frequency recovery and less color aliasing than CM5. Overall, the images produced by PM had fewer mosaic-pattern artifacts and less color aliasing than those produced by the conventional methods.
For quantitative evaluation of the TT59 dataset, we computed the PSNR, SSIM, and SAM values, which are presented in Table 4, Table 5 and Table 6, respectively. Of the 16 images in the TT59 dataset, the proposed method had the best PSNR for 10 images, and the second-best PSNR for 4 images. Moreover, it had the best SSIM for 8 images, the second-best SSIM for 7 images, the best SAM for 12 images and the second-best SAM for 4 images. The average PSNR, SSIM, and SAM values for the TT59 dataset indicated that the proposed method achieved the best results.
The results for the TT59 dataset were similar to those for the TT31 dataset. In the gray areas, CM4 and CM6 effectively recovered the high frequencies. However, in the colored sections, MSFA pattern artifacts were introduced, resulting in grid-like artifacts. By comparison, CM5 and PM performed better overall, with PM recovering high frequencies better than CM5, as shown in the resolution chart.
Figure 7 shows the demosaicing results for different MSFA arrangements. Figure 7a–h shows the results for an MSFA in which adjacent spectral bands are grouped in 2 × 2 blocks, and Figure 7i–p shows the results for the original IMEC MSFA. The proposed method is more robust and exhibits fewer artifacts than the conventional methods. In particular, Figure 7c,d,f show grid artifacts in which the black line of the butterfly is broken, whereas the proposed method shows reduced grid artifacts compared with the other methods.
Table 7 presents a comparison of the execution times, measured on a desktop with an Intel i7-11700k processor, 32 GB of memory, and an Nvidia RTX 3090 GPU. CM6 was tested using PyTorch, whereas the remaining methods were tested using MATLAB R2021a. The reported values are execution times averaged over all the datasets. The method with the shortest execution time was CM1, followed by CM5, PM, CM4, CM2, CM6, and CM3.
In addition, as shown in Figure 8, the methods were tested on images captured using an IMEC camera. To qualitatively evaluate the real-world multispectral image cube, we used the same procedure that was employed for the synthetic dataset: channels 16, 6, and 1 of the multispectral image cube were extracted as the R, G, and B images, respectively, as shown in Figure 8. The results were similar to those obtained for the synthetic dataset. CM1, CM2, and CM3 exhibited blurred images and strong color aliasing, whereas CM4 exhibited MSFA-pattern artifacts. Among the conventional methods, CM5 achieved the best results. CM6, the deep-learning method, performed well for the resolution chart. However, the proposed method exhibited better high-frequency recovery and less color aliasing.

5. Conclusions

We propose an IGFPPI method for PPI estimation and a directional-multispectral-demosaicing method using the estimated PPI obtained from IGFPPI. Guided filtering was used to estimate the PPI from the raw image of the MSFA, where a Gaussian filter was used to obtain the PPI of the low-frequency components, and horizontal and vertical guided filtering was used to estimate the high-frequency components. Using the estimated PPI, we performed directional interpolation in the spectral-difference domain to obtain the final demosaiced multispectral image.
In extensive experiments, among the methods tested, the proposed method achieved the best quantitative scores for the PSNR, SSIM, and SAM and exhibited the best restoration of high frequencies and the least color artifacts in a qualitative evaluation, with a reasonable computation time. The proposed method also achieved good results for real-world images. Furthermore, our proposed method can be adapted to perform multispectral demosaicing in the case of a periodic MSFA and when the spectral transmittance of the MSFA varies. In future research, we will focus on image-fusion demosaicing using both multispectral and color filter arrays.

Author Contributions

Conceptualization, K.J.; methodology, K.J.; software, K.J. and S.K.; validation, K.J., S.K. and M.G.K.; funding acquisition, M.G.K.; supervision, K.J. and M.G.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. 2022R1A2C200289711).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Adam, E.; Mutanga, O.; Rugege, D. Multispectral and hyperspectral remote sensing for identification and mapping of wetland vegetation: A review. Wetl. Ecol. Manag. 2010, 18, 281–296. [Google Scholar] [CrossRef]
  2. Deng, L.; Mao, Z.; Li, X.; Hu, Z.; Duan, F.; Yan, Y. UAV-based multispectral remote sensing for precision agriculture: A comparison between different cameras. ISPRS J. Photogramm. Remote Sens. 2018, 146, 124–136. [Google Scholar] [CrossRef]
  3. Wu, Y.; Zeng, F.; Zhao, Y.; Wu, S. Emerging contrast agents for multispectral optoacoustic imaging and their biomedical applications. Chem. Soc. Rev. 2021, 50, 7924–7940. [Google Scholar] [CrossRef] [PubMed]
  4. Bayer, B.E. Color Imaging Array. U.S. Patent 3,971,065, 20 July 1976. [Google Scholar]
  5. Monno, Y.; Tanaka, M.; Okutomi, M. Multispectral demosaicking using adaptive kernel upsampling. In Proceedings of the 2011 18th IEEE International Conference on Image Processing, Brussels, Belgium, 11–14 September 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 3157–3160. [Google Scholar]
  6. Geelen, B.; Tack, N.; Lambrechts, A. A compact snapshot multispectral imager with a monolithically integrated per-pixel filter mosaic. In Advanced Fabrication Technologies for Micro/Nano Optics and Photonics VII; SPIE: Bellingham, WA, USA, 2014; Volume 8974, pp. 80–87. [Google Scholar]
  7. Kimmel, R. Demosaicing: Image reconstruction from color CCD samples. IEEE Trans. Image Process. 1999, 8, 1221–1228. [Google Scholar] [CrossRef] [PubMed]
  8. Lu, W.; Tan, Y.P. Color filter array demosaicking: New method and performance measures. IEEE Trans. Image Process. 2003, 12, 1194–1210. [Google Scholar] [PubMed]
  9. Kiku, D.; Monno, Y.; Tanaka, M.; Okutomi, M. Residual interpolation for color image demosaicking. In Proceedings of the 2013 IEEE International Conference on Image Processing, Melbourne, VIC, Australia, 15–18 September 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 2304–2308. [Google Scholar]
  10. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 1397–1409. [Google Scholar] [CrossRef] [PubMed]
  11. Gharbi, M.; Chaurasia, G.; Paris, S.; Durand, F. Deep joint demosaicking and denoising. ACM Trans. Graph. (ToG) 2016, 35, 1–12. [Google Scholar] [CrossRef]
  12. Rathi, V.; Goyal, P. Generic multispectral demosaicking based on directional interpolation. IEEE Access 2022, 10, 64715–64728. [Google Scholar] [CrossRef]
  13. Miao, L.; Qi, H.; Ramanath, R.; Snyder, W.E. Binary tree-based generic demosaicking algorithm for multispectral filter arrays. IEEE Trans. Image Process. 2006, 15, 3550–3558. [Google Scholar] [CrossRef] [PubMed]
  14. Shinoda, K.; Ogawa, S.; Yanagi, Y.; Hasegawa, M.; Kato, S.; Ishikawa, M.; Komagata, H.; Kobayashi, N. Multispectral filter array and demosaicking for pathological images. In Proceedings of the 2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), Hong Kong, China, 16–19 December 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 697–703. [Google Scholar]
  15. Mihoubi, S.; Losson, O.; Mathon, B.; Macaire, L. Multispectral demosaicing using pseudo-panchromatic image. IEEE Trans. Comput. Imaging 2017, 3, 982–995. [Google Scholar] [CrossRef]
  16. Feng, K.; Zhao, Y.; Chan, J.C.W.; Kong, S.G.; Zhang, X.; Wang, B. Mosaic convolution-attention network for demosaicing multispectral filter array images. IEEE Trans. Comput. Imaging 2021, 7, 864–878. [Google Scholar] [CrossRef]
  17. Liu, S.; Zhang, Y.; Chen, J.; Lim, K.P.; Rahardja, S. A Deep Joint Network for Multispectral Demosaicking Based on Pseudo-Panchromatic Images. IEEE J. Sel. Top. Signal Process. 2022, 16, 622–635. [Google Scholar] [CrossRef]
  18. Zhao, B.; Zheng, J.; Dong, Y.; Shen, N.; Yang, J.; Cao, Y.; Cao, Y. PPI Edge Infused Spatial-Spectral Adaptive Residual Network for Multispectral Filter Array Image Demosaicing. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5405214. [Google Scholar] [CrossRef]
  19. Chen, Y.; Zhang, H.; Wang, Y.; Ying, A.; Zhao, B. ADMM-DSP: A Deep Spectral Image Prior for Snapshot Spectral Image Demosaicing. IEEE Trans. Ind. Inform. 2023; early access. [Google Scholar]
  20. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
  21. Kruse, F.A.; Lefkoff, A.; Boardman, J.; Heidebrecht, K.; Shapiro, A.; Barloon, P.; Goetz, A. The spectral image processing system (SIPS)—Interactive visualization and analysis of imaging spectrometer data. Remote Sens. Environ. 1993, 44, 145–163. [Google Scholar] [CrossRef]
  22. Ye, W.; Ma, K.K. Color image demosaicing using iterative residual interpolation. IEEE Trans. Image Process. 2015, 24, 5879–5891. [Google Scholar] [CrossRef] [PubMed]
  23. Monno, Y.; Kikuchi, S.; Tanaka, M.; Okutomi, M. A practical one-shot multispectral imaging system using a single image sensor. IEEE Trans. Image Process. 2015, 24, 3048–3059. [Google Scholar] [CrossRef] [PubMed]
  24. Monno, Y.; Teranaka, H.; Yoshizaki, K.; Tanaka, M.; Okutomi, M. Single-sensor RGB-NIR imaging: High-quality system design and prototype implementation. IEEE Sens. J. 2018, 19, 497–507. [Google Scholar] [CrossRef]
  25. Yasuma, F.; Mitsunaga, T.; Iso, D.; Nayar, S.K. Generalized assorted pixel camera: Postcapture control of resolution, dynamic range, and spectrum. IEEE Trans. Image Process. 2010, 19, 2241–2253. [Google Scholar] [CrossRef] [PubMed]
  26. Pichette, J.; Laurence, A.; Angulo, L.; Lesage, F.; Bouthillier, A.; Nguyen, D.K.; Leblond, F. Intraoperative video-rate hemodynamic response assessment in human cortex using snapshot hyperspectral optical imaging. Neurophotonics 2016, 3, 045003. [Google Scholar] [CrossRef] [PubMed]
  27. Brauers, J.; Aach, T. A color filter array based multispectral camera. In 12. Workshop Farbbildverarbeitung; Lehrstuhl für Bildverarbeitung: Ilmenau, Germany, 2006; pp. 55–64. [Google Scholar]
  28. Mizutani, J.; Ogawa, S.; Shinoda, K.; Hasegawa, M.; Kato, S. Multispectral demosaicking algorithm based on inter-channel correlation. In Proceedings of the 2014 IEEE Visual Communications and Image Processing Conference, Valletta, Malta, 7–10 December 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 474–477. [Google Scholar]
Figure 1. Basic CFA and MSFA patterns: (a) Bayer pattern [4]. (b) MSFA with one dominant band [5]. (c) MSFA with no dominant band in IMEC camera [6]. The numbers are the band numbers.
Figure 2. Proposed IGFPPI framework.
Figure 3. Proposed framework for directional multispectral demosaicing.
Figure 4. Experimental results for PPI estimation: (a) Original. (b) PPID. (c) IGFPPI.
Figure 5. Experimental results for TT31: (a–h) Butterfly and (i–p) error maps of (a–h).
Figure 6. Experimental results for TT31: (a–h) ChartCZP and (i–p) error maps of (a–h).
Figure 7. Experimental results for various MSFAs: (a–h) Demosaicing results for different arrangement MSFAs. (i–p) Demosaicing results for original MSFA.
Figure 8. Real-world image.
Table 1. PSNR (dB) Comparison for TT31.
Image | CM1 | CM2 | CM3 | CM4 | CM5 | CM6 | PM
Butterfly | 32.28 | 36.07 | 37.50 | 25.17 | 39.95 | 19.94 | 41.85
Butterfly2 | 27.91 | 30.64 | 31.28 | 23.50 | 32.90 | 15.85 | 35.19
Butterfly3 | 34.05 | 38.40 | 40.53 | 29.96 | 43.84 | 26.39 | 43.73
Butterfly4 | 33.22 | 37.54 | 40.07 | 32.69 | 41.92 | 27.75 | 42.44
Butterfly5 | 33.26 | 37.92 | 41.37 | 36.16 | 43.64 | 31.96 | 43.89
Butterfly6 | 30.61 | 34.92 | 37.52 | 32.84 | 39.27 | 27.41 | 40.36
Butterfly7 | 33.89 | 38.59 | 41.60 | 33.72 | 43.45 | 27.56 | 43.51
Butterfly8 | 32.67 | 36.83 | 39.41 | 33.43 | 42.36 | 29.58 | 42.50
CD | 38.69 | 39.31 | 37.28 | 24.32 | 40.54 | 24.81 | 38.77
Character | 25.83 | 30.69 | 34.32 | 31.03 | 34.55 | 27.04 | 36.70
Chart24 | 34.65 | 38.43 | 40.50 | 28.01 | 41.71 | 23.38 | 42.65
ChartCZP | 17.34 | 21.33 | 25.79 | 27.36 | 22.14 | 32.14 | 31.93
ChartDC | 33.96 | 37.54 | 39.52 | 27.91 | 40.65 | 23.18 | 42.26
ChartRes | 23.42 | 27.24 | 30.42 | 34.13 | 29.61 | 38.21 | 34.28
ChartSG | 36.26 | 39.90 | 42.14 | 28.96 | 43.57 | 24.63 | 45.07
Cloth | 26.95 | 31.89 | 34.77 | 31.41 | 35.93 | 26.84 | 35.77
Cloth2 | 31.64 | 35.16 | 37.44 | 27.75 | 39.58 | 22.37 | 38.68
Cloth3 | 32.07 | 34.97 | 36.34 | 29.09 | 37.30 | 21.64 | 37.76
Cloth4 | 29.69 | 34.07 | 36.58 | 29.59 | 37.77 | 24.44 | 39.57
Cloth5 | 34.31 | 36.11 | 37.00 | 21.13 | 37.57 | 21.41 | 39.62
Cloth6 | 38.50 | 41.25 | 41.95 | 32.21 | 43.47 | 25.90 | 45.54
Color | 35.32 | 37.51 | 38.17 | 26.18 | 40.29 | 21.59 | 41.16
Colorchart | 40.77 | 42.94 | 42.99 | 27.24 | 46.47 | 23.38 | 48.11
Doll | 24.93 | 28.10 | 29.94 | 22.44 | 31.80 | 20.59 | 30.06
Fan | 25.33 | 28.56 | 30.27 | 24.95 | 31.77 | 20.98 | 32.31
Fan2 | 26.83 | 30.91 | 32.96 | 26.17 | 34.80 | 20.73 | 34.58
Fan3 | 26.62 | 30.61 | 32.50 | 26.14 | 34.31 | 21.02 | 33.12
Flower | 41.93 | 45.79 | 47.09 | 36.11 | 48.96 | 31.25 | 48.60
Flower2 | 44.00 | 46.51 | 46.27 | 30.83 | 48.02 | 27.34 | 48.42
Flower3 | 42.65 | 45.98 | 46.65 | 35.78 | 48.67 | 31.08 | 48.53
Party | 29.49 | 32.62 | 33.45 | 26.10 | 34.64 | 21.75 | 33.39
Tape | 30.24 | 32.62 | 33.84 | 21.17 | 35.19 | 16.36 | 33.83
Tape2 | 31.31 | 34.18 | 35.47 | 19.34 | 36.15 | 15.03 | 34.72
Tshirts | 22.35 | 27.04 | 30.47 | 27.38 | 33.92 | 20.59 | 30.55
Tshirts2 | 25.21 | 29.15 | 32.21 | 28.54 | 34.79 | 21.92 | 32.91
Avg. | 31.66 | 35.18 | 37.02 | 28.54 | 38.61 | 24.46 | 39.21
Table 2. SSIM Comparison for TT31.
Image | CM1 | CM2 | CM3 | CM4 | CM5 | CM6 | PM
Butterfly | 0.924 | 0.959 | 0.961 | 0.656 | 0.978 | 0.441 | 0.984
Butterfly2 | 0.848 | 0.922 | 0.927 | 0.766 | 0.952 | 0.489 | 0.969
Butterfly3 | 0.961 | 0.973 | 0.975 | 0.851 | 0.988 | 0.775 | 0.990
Butterfly4 | 0.944 | 0.968 | 0.974 | 0.905 | 0.984 | 0.814 | 0.986
Butterfly5 | 0.958 | 0.980 | 0.986 | 0.928 | 0.991 | 0.851 | 0.991
Butterfly6 | 0.910 | 0.959 | 0.968 | 0.890 | 0.978 | 0.705 | 0.982
Butterfly7 | 0.960 | 0.980 | 0.984 | 0.892 | 0.990 | 0.727 | 0.990
Butterfly8 | 0.927 | 0.965 | 0.973 | 0.936 | 0.989 | 0.904 | 0.991
CD | 0.983 | 0.974 | 0.957 | 0.823 | 0.984 | 0.823 | 0.976
Character | 0.875 | 0.934 | 0.959 | 0.918 | 0.979 | 0.878 | 0.979
Chart24 | 0.956 | 0.970 | 0.975 | 0.784 | 0.986 | 0.715 | 0.987
ChartCZP | 0.386 | 0.802 | 0.932 | 0.921 | 0.700 | 0.974 | 0.976
ChartDC | 0.968 | 0.972 | 0.976 | 0.809 | 0.990 | 0.733 | 0.991
ChartRes | 0.828 | 0.907 | 0.945 | 0.958 | 0.943 | 0.975 | 0.976
ChartSG | 0.976 | 0.979 | 0.982 | 0.841 | 0.993 | 0.785 | 0.994
Cloth | 0.775 | 0.929 | 0.953 | 0.920 | 0.965 | 0.842 | 0.961
Cloth2 | 0.843 | 0.929 | 0.948 | 0.670 | 0.962 | 0.398 | 0.973
Cloth3 | 0.847 | 0.924 | 0.937 | 0.736 | 0.947 | 0.386 | 0.959
Cloth4 | 0.732 | 0.922 | 0.949 | 0.798 | 0.956 | 0.579 | 0.972
Cloth5 | 0.831 | 0.911 | 0.922 | 0.444 | 0.929 | 0.519 | 0.955
Cloth6 | 0.924 | 0.967 | 0.972 | 0.829 | 0.980 | 0.545 | 0.986
Color | 0.963 | 0.959 | 0.956 | 0.604 | 0.984 | 0.438 | 0.984
Colorchart | 0.983 | 0.981 | 0.979 | 0.768 | 0.993 | 0.715 | 0.994
Doll | 0.763 | 0.886 | 0.897 | 0.601 | 0.928 | 0.558 | 0.927
Fan | 0.736 | 0.894 | 0.914 | 0.782 | 0.941 | 0.605 | 0.936
Fan2 | 0.857 | 0.934 | 0.943 | 0.749 | 0.962 | 0.568 | 0.953
Fan3 | 0.827 | 0.932 | 0.945 | 0.778 | 0.964 | 0.572 | 0.950
Flower | 0.969 | 0.985 | 0.987 | 0.896 | 0.991 | 0.731 | 0.991
Flower2 | 0.975 | 0.985 | 0.983 | 0.754 | 0.988 | 0.592 | 0.988
Flower3 | 0.978 | 0.986 | 0.986 | 0.883 | 0.990 | 0.708 | 0.990
Party | 0.93 | 0.956 | 0.957 | 0.778 | 0.972 | 0.620 | 0.971
Tape | 0.879 | 0.934 | 0.941 | 0.593 | 0.956 | 0.447 | 0.949
Tape2 | 0.83 | 0.922 | 0.937 | 0.645 | 0.950 | 0.427 | 0.933
Tshirts | 0.689 | 0.877 | 0.931 | 0.766 | 0.968 | 0.467 | 0.965
Tshirts2 | 0.676 | 0.876 | 0.935 | 0.765 | 0.963 | 0.434 | 0.967
Avg. | 0.869 | 0.941 | 0.956 | 0.790 | 0.963 | 0.650 | 0.973
Table 3. SAM Comparison for TT31.
Image | CM1 | CM2 | CM3 | CM4 | CM5 | CM6 | PM
Butterfly | 0.026 | 0.038 | 0.034 | 0.113 | 0.022 | 0.191 | 0.018
Butterfly2 | 0.059 | 0.087 | 0.078 | 0.136 | 0.049 | 0.290 | 0.045
Butterfly3 | 0.041 | 0.065 | 0.059 | 0.096 | 0.035 | 0.119 | 0.035
Butterfly4 | 0.072 | 0.097 | 0.084 | 0.114 | 0.059 | 0.128 | 0.058
Butterfly5 | 0.042 | 0.052 | 0.044 | 0.099 | 0.034 | 0.107 | 0.034
Butterfly6 | 0.040 | 0.050 | 0.039 | 0.064 | 0.027 | 0.067 | 0.027
Butterfly7 | 0.033 | 0.040 | 0.033 | 0.097 | 0.025 | 0.080 | 0.024
Butterfly8 | 0.076 | 0.117 | 0.096 | 0.092 | 0.055 | 0.090 | 0.051
CD | 0.034 | 0.048 | 0.059 | 0.153 | 0.037 | 0.176 | 0.043
Character | 0.084 | 0.155 | 0.118 | 0.088 | 0.061 | 0.095 | 0.057
Chart24 | 0.048 | 0.072 | 0.064 | 0.112 | 0.039 | 0.111 | 0.038
ChartCZP | 0.198 | 0.274 | 0.141 | 0.125 | 0.149 | 0.058 | 0.059
ChartDC | 0.041 | 0.066 | 0.060 | 0.101 | 0.035 | 0.106 | 0.035
ChartRes | 0.050 | 0.077 | 0.049 | 0.029 | 0.034 | 0.019 | 0.019
ChartSG | 0.051 | 0.084 | 0.075 | 0.108 | 0.045 | 0.119 | 0.043
Cloth | 0.122 | 0.170 | 0.121 | 0.128 | 0.078 | 0.139 | 0.088
Cloth2 | 0.048 | 0.055 | 0.045 | 0.127 | 0.032 | 0.245 | 0.029
Cloth3 | 0.077 | 0.101 | 0.086 | 0.198 | 0.060 | 0.412 | 0.055
Cloth4 | 0.066 | 0.073 | 0.056 | 0.120 | 0.044 | 0.151 | 0.037
Cloth5 | 0.053 | 0.058 | 0.052 | 0.283 | 0.045 | 0.261 | 0.042
Cloth6 | 0.055 | 0.062 | 0.056 | 0.139 | 0.043 | 0.245 | 0.042
Color | 0.029 | 0.039 | 0.038 | 0.168 | 0.024 | 0.172 | 0.024
Colorchart | 0.042 | 0.068 | 0.069 | 0.134 | 0.039 | 0.139 | 0.037
Doll | 0.086 | 0.110 | 0.102 | 0.270 | 0.073 | 0.246 | 0.078
Fan | 0.074 | 0.100 | 0.078 | 0.101 | 0.047 | 0.179 | 0.044
Fan2 | 0.051 | 0.074 | 0.057 | 0.088 | 0.033 | 0.124 | 0.032
Fan3 | 0.061 | 0.079 | 0.062 | 0.119 | 0.038 | 0.157 | 0.040
Flower | 0.064 | 0.083 | 0.079 | 0.188 | 0.057 | 0.419 | 0.061
Flower2 | 0.059 | 0.074 | 0.075 | 0.257 | 0.059 | 0.356 | 0.059
Flower3 | 0.070 | 0.089 | 0.089 | 0.223 | 0.068 | 0.385 | 0.073
Party | 0.059 | 0.080 | 0.077 | 0.212 | 0.052 | 0.215 | 0.058
Tape | 0.030 | 0.034 | 0.030 | 0.155 | 0.023 | 0.167 | 0.025
Tape2 | 0.052 | 0.074 | 0.064 | 0.155 | 0.041 | 0.307 | 0.044
Tshirts | 0.099 | 0.171 | 0.131 | 0.115 | 0.051 | 0.188 | 0.062
Tshirts2 | 0.086 | 0.129 | 0.101 | 0.110 | 0.046 | 0.192 | 0.051
Avg. | 0.062 | 0.087 | 0.072 | 0.138 | 0.047 | 0.184 | 0.045
Table 4. PSNR (dB) Comparison for TT59.
Image | CM1 | CM2 | CM3 | CM4 | CM5 | CM6 | PM
Butterfly | 26.27 | 29.38 | 31.23 | 24.57 | 32.37 | 25.22 | 35.04
Butterfly2 | 30.78 | 34.19 | 36.58 | 32.33 | 38.21 | 34.7 | 41.89
Chart | 21.75 | 25.40 | 28.37 | 34.09 | 27.58 | 41.31 | 32.54
Chart2 | 23.07 | 26.71 | 29.98 | 34.92 | 30.33 | 42.59 | 37.55
Chart3 | 20.84 | 24.08 | 26.87 | 32.94 | 25.60 | 41.91 | 30.43
Cloth | 23.86 | 27.37 | 29.59 | 26.24 | 31.37 | 26.00 | 31.38
Cloth2 | 31.33 | 34.38 | 36.01 | 27.13 | 37.52 | 29.13 | 38.40
Cloth3 | 26.15 | 29.18 | 31.35 | 27.74 | 33.43 | 27.97 | 33.87
Doll | 27.31 | 30.46 | 32.31 | 26.26 | 33.88 | 27.59 | 34.48
Doll2 | 30.10 | 33.75 | 35.92 | 30.27 | 37.66 | 31.82 | 38.20
Fan | 29.24 | 33.16 | 34.88 | 25.87 | 36.36 | 27.85 | 36.53
Fan2 | 31.79 | 35.20 | 36.43 | 28.17 | 37.70 | 30.10 | 37.64
Fan3 | 26.62 | 30.18 | 32.03 | 26.52 | 33.45 | 27.89 | 33.02
Origami | 27.00 | 30.19 | 31.95 | 25.90 | 33.70 | 28.14 | 35.48
Paint | 25.05 | 28.61 | 30.8 | 22.84 | 32.13 | 24.68 | 31.10
Spray | 24.80 | 27.95 | 30.29 | 27.71 | 30.67 | 28.18 | 33.95
Avg. | 26.62 | 30.01 | 32.16 | 28.34 | 33.25 | 30.94 | 35.09
Table 5. SSIM Comparison for TT59.
Image | CM1 | CM2 | CM3 | CM4 | CM5 | CM6 | PM
Butterfly | 0.851 | 0.929 | 0.944 | 0.826 | 0.964 | 0.873 | 0.972
Butterfly2 | 0.927 | 0.960 | 0.970 | 0.939 | 0.983 | 0.959 | 0.989
Chart | 0.831 | 0.903 | 0.939 | 0.982 | 0.953 | 0.990 | 0.983
Chart2 | 0.851 | 0.912 | 0.943 | 0.983 | 0.968 | 0.990 | 0.989
Chart3 | 0.809 | 0.889 | 0.927 | 0.981 | 0.927 | 0.993 | 0.977
Cloth | 0.699 | 0.888 | 0.918 | 0.624 | 0.943 | 0.639 | 0.941
Cloth2 | 0.841 | 0.932 | 0.945 | 0.625 | 0.958 | 0.784 | 0.971
Cloth3 | 0.801 | 0.898 | 0.921 | 0.784 | 0.954 | 0.796 | 0.959
Doll | 0.845 | 0.910 | 0.923 | 0.792 | 0.955 | 0.826 | 0.963
Doll2 | 0.857 | 0.937 | 0.954 | 0.900 | 0.971 | 0.924 | 0.975
Fan | 0.845 | 0.928 | 0.936 | 0.685 | 0.956 | 0.774 | 0.955
Fan2 | 0.876 | 0.937 | 0.940 | 0.750 | 0.957 | 0.808 | 0.952
Fan3 | 0.743 | 0.896 | 0.915 | 0.761 | 0.938 | 0.810 | 0.929
Origami | 0.866 | 0.882 | 0.895 | 0.708 | 0.965 | 0.778 | 0.970
Paint | 0.642 | 0.879 | 0.916 | 0.672 | 0.935 | 0.741 | 0.923
Spray | 0.776 | 0.903 | 0.931 | 0.863 | 0.951 | 0.884 | 0.970
Avg. | 0.816 | 0.911 | 0.932 | 0.805 | 0.955 | 0.848 | 0.964
Table 6. SAM Comparison for TT59.
Image | CM1 | CM2 | CM3 | CM4 | CM5 | CM6 | PM
Butterfly | 0.047 | 0.067 | 0.054 | 0.084 | 0.034 | 0.067 | 0.028
Butterfly2 | 0.040 | 0.065 | 0.053 | 0.051 | 0.028 | 0.036 | 0.023
Chart | 0.044 | 0.056 | 0.037 | 0.019 | 0.029 | 0.010 | 0.014
Chart2 | 0.037 | 0.053 | 0.036 | 0.016 | 0.023 | 0.009 | 0.011
Chart3 | 0.065 | 0.111 | 0.081 | 0.032 | 0.054 | 0.015 | 0.025
Cloth | 0.121 | 0.158 | 0.127 | 0.229 | 0.084 | 0.213 | 0.077
Cloth2 | 0.066 | 0.079 | 0.066 | 0.244 | 0.049 | 0.161 | 0.042
Cloth3 | 0.128 | 0.190 | 0.169 | 0.181 | 0.100 | 0.163 | 0.088
Doll | 0.131 | 0.187 | 0.173 | 0.195 | 0.110 | 0.160 | 0.099
Doll2 | 0.221 | 0.283 | 0.254 | 0.261 | 0.195 | 0.231 | 0.189
Fan | 0.073 | 0.115 | 0.104 | 0.168 | 0.061 | 0.135 | 0.057
Fan2 | 0.092 | 0.136 | 0.122 | 0.169 | 0.084 | 0.129 | 0.078
Fan3 | 0.075 | 0.094 | 0.075 | 0.116 | 0.051 | 0.089 | 0.050
Origami | 0.090 | 0.181 | 0.164 | 0.145 | 0.071 | 0.122 | 0.049
Paint | 0.064 | 0.071 | 0.055 | 0.128 | 0.042 | 0.108 | 0.046
Spray | 0.094 | 0.129 | 0.110 | 0.085 | 0.069 | 0.067 | 0.054
Avg. | 0.087 | 0.123 | 0.105 | 0.133 | 0.068 | 0.107 | 0.058
Table 7. Computation Time of Methods.
CM1 | CM2 | CM3 | CM4 | CM5 | CM6 | PM
Avg. | 0.714 s | 1.229 s | 2.871 s | 1.068 s | 0.822 s | 1.998 s | 0.999 s