Brief Report

Comparison of Deep Learning and Conventional Demosaicing Algorithms for Mastcam Images

1 Applied Research LLC, Rockville, MD 20850, USA
2 School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287, USA
* Author to whom correspondence should be addressed.
Electronics 2019, 8(3), 308; https://doi.org/10.3390/electronics8030308
Submission received: 11 February 2019 / Revised: 27 February 2019 / Accepted: 6 March 2019 / Published: 11 March 2019
(This article belongs to the Section Computer Science & Engineering)

Abstract

Bayer pattern filters are used in many commercial digital cameras. In the National Aeronautics and Space Administration’s (NASA) mast camera (Mastcam) imaging system onboard the Mars Science Laboratory (MSL) rover Curiosity, a Bayer pattern filter is used to capture the RGB (red, green, and blue) color of scenes on Mars. The Mastcam has two cameras, left and right; the right camera has three times the resolution of the left. It is well known that demosaicing introduces color and zipper artifacts. Here, we present a comparative study of demosaicing results using conventional and deep learning algorithms. Sixteen left and fifteen right Mastcam images were used in our experiments. Because no ground truth images exist for Mastcam data from Mars, we compared the various algorithms using a blind image quality assessment model. We observed that no single algorithm works best for all images. In particular, a deep learning-based algorithm worked best for the right Mastcam images, while a conventional algorithm achieved the best results for the left Mastcam images. Moreover, subjective evaluation of five demosaiced Mastcam images was also used to compare the various algorithms.

1. Introduction

After an eight-month journey, the NASA Mars Science Laboratory (MSL) Curiosity rover landed on Mars in 2012 [1]. The rover carries several instruments for characterizing the Martian surface and environment. The Alpha Particle X-Ray Spectrometer (APXS) [2] and the Laser Induced Breakdown Spectroscopy (LIBS) instrument [3,4] are used for rock sample analysis. The Mastcam multispectral imagers [5,6,7,8,9,10] perform surface characterization by acquiring images with resolutions from a few mm/pixel in the foreground to several m/pixel for distant features. There are two Mastcam multispectral imagers, each capable of imaging in nine spectral bands and separated by 24.2 cm for stereo imaging [1]. The right imager has three times the resolution of the left and is mainly used for near-field image acquisition; the left imager has a field of view three times wider than that of the right and is mainly used for rover navigation. Of the nine bands in each camera, three are RGB bands, which are generated by a Bayer pattern filter superimposed on the detector. As in many commercial digital cameras, the use of a Bayer pattern reduces the overall cost of the Mastcam.
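To make the Bayer sampling concrete, the short sketch below simulates an RGGB color filter array in NumPy: each pixel retains only one of the three color samples, and demosaicing is the task of estimating the two missing samples at every pixel. The RGGB ordering and the function name are illustrative assumptions for this sketch, not details taken from the Mastcam hardware.

```python
import numpy as np

def bayer_mosaic_rggb(rgb):
    """Simulate an RGGB Bayer color filter array: keep one color sample per pixel.

    rgb: (H, W, 3) array with channels R, G, B.
    Returns a single-channel (H, W) mosaic, i.e., the kind of raw frame a
    demosaicing algorithm must expand back into three full-resolution channels.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even columns
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd columns
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even columns
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd columns
    return mosaic
```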
The Bayer pattern filter was invented in 1976 [11], and many debayering/demosaicing algorithms have been developed over the last few decades (see [12,13,14,15,16] and references therein). For demosaicing the RGB bands of Mastcam, the current data processing strategy uses the Malvar–He–Cutler (MHC) algorithm [17], developed in 2004, because it is relatively simple to implement in the rover camera’s control electronics. In [1], a Directional Linear Minimum Mean Square-Error Estimation (DLMMSE) [18] demosaicing algorithm was also evaluated and found to perform better than MHC in some situations.
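For reference, MHC is a gradient-corrected linear interpolation: each missing sample is a bilinear estimate plus a scaled Laplacian correction from the channel actually measured at that pixel, computed with fixed 5 × 5 kernels. The minimal sketch below shows this idea for the green channel only; the kernel is the commonly quoted green-at-red/blue filter attributed to [17] and should be verified against the paper before reuse, and the red/blue kernels (not shown) follow the same pattern with different coefficients.

```python
import numpy as np
from scipy.signal import convolve2d

# Green-at-R/B kernel of gradient-corrected (MHC-style) interpolation:
# bilinear green plus a Laplacian correction from the co-sited channel.
# Coefficients as commonly quoted for [17]; verify against the paper before reuse.
G_AT_RB = np.array([
    [ 0,  0, -1,  0,  0],
    [ 0,  0,  2,  0,  0],
    [-1,  2,  4,  2, -1],
    [ 0,  0,  2,  0,  0],
    [ 0,  0, -1,  0,  0],
], dtype=float) / 8.0

def green_at_red_blue(mosaic, red_blue_mask):
    """Estimate the green channel at the red/blue sites of a Bayer mosaic.

    mosaic: (H, W) raw Bayer frame; red_blue_mask: boolean (H, W) marking R/B sites.
    """
    est = convolve2d(mosaic, G_AT_RB, mode="same", boundary="symm")
    green = mosaic.astype(float).copy()   # green sites already hold green samples
    green[red_blue_mask] = est[red_blue_mask]
    return green
```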
Since 2012, deep neural networks, also known as deep learning, have attracted a great deal of attention, and deep learning algorithms have achieved excellent results in many applications. In [19], for example, a deep learning-based algorithm was proposed for joint demosaicing and denoising; we refer to this algorithm as Demosaicnet (DEMONET). The method achieved strong performance in the tests reported in [19]. We also identified two other deep learning-based algorithms [20,21] with open-source code.
The first key contribution of our research was to investigate whether algorithms developed after 2004 are more effective for demosaicing Mastcam images. Our recent paper [22] initiated this effort and investigated the performance of several pixel-level fusion approaches, including equal weighting, unequal weighting, random weighting, and a fusion scheme known as alpha-trimmed mean filtering (ATMF) [23]. Several algorithms [24,25,26,27] with publicly available code were identified and compared. We applied the various demosaicing algorithms to 31 Mastcam images retrieved from the NASA Planetary Data System (PDS).
The second major contribution of our research is the comparison of conventional and deep learning-based demosaicing algorithms for Mastcam images. In addition to the algorithms considered in [16,17,18,19,20,21,22,23,24,25,26,27], we included four recent conventional algorithms [28,29,30,31] that were shown to perform well in recent studies. From this comparative study, we made several observations. First, the MHC algorithm still yields reasonable performance on Mastcam images, but more recent algorithms can do better. Second, not all deep learning algorithms performed well; among the deep learning algorithms we tested, only DEMONET was clearly competitive on Mastcam images. This demonstrates that the choice of demosaicing algorithm depends on the application, and one should not blindly adopt an algorithm based on its performance elsewhere. Third, DEMONET had the best performance only for the right Mastcam images, based on both subjective and objective evaluations; for the left Mastcam images, its performance was close to that of a conventional algorithm known as exploitation of color correlation (ECC) [28]. Without a systematic study such as this one, algorithms other than DEMONET (for the right images) and ECC (for the left images) might be recommended to NASA or other agencies for future space mission imaging developments.
It should be noted that this paper is a natural extension of our earlier paper [22], with three major differences. First, we used a blind image quality assessment model to generate objective performance metrics for the demosaiced Mastcam images produced by the various algorithms, because we do not have ground truth Mastcam images; this part is completely new and was not done in [22]. Second, we added more demosaiced images to the subjective evaluations, which helps readers appreciate the results of the best-performing approaches. Third, in addition to the methods in [22], we included three deep learning-based algorithms and four conventional algorithms [28,29,30,31] in our comparative study. Altogether, 17 methods were compared.
This paper is organized as follows. In Section 2, we summarize the various conventional and deep learning approaches. Section 3 provides background on Mastcam images and performance metrics, and summarizes the extensive experimental results. Actual Mastcam images were used in our experiments. Finally, concluding remarks and future research directions are provided in Section 4.

2. Demosaicing Algorithms and Performance Metrics

2.1. Algorithms

The following algorithms were evaluated in our experiments and they are briefly summarized below:
  • Linear Directional Interpolation and Nonlocal Adaptive Thresholding (LDI-NAT): This algorithm is simple but the non-local search is time consuming [16].
  • MHC: It is the Malvar–He–Cutler algorithm in [17]. This is the default method for demosaicing Mastcam images used by NASA. The algorithm is very efficient and simple to implement.
  • Directional Linear Minimum Mean Square-Error Estimation (DLMMSE): It is the Zhang and Wu algorithm in [18]. This method was investigated in Bell et al.’s paper [1].
  • Lu and Tan Interpolation (LT): It is from [24].
  • Adaptive Frequency Domain (AFD): It is a frequency domain approach from Dubois [25]. The algorithm can also be used for other mosaicking patterns.
  • Alternate Projection (AP): It is the algorithm from Gunturk et al. [26].
  • Primary-Consistent Soft-Decision (PCSD): It is Wu and Zhang’s algorithm from [27].
  • ATMF: This method is from [23]. At each pixel location, we take the demosaiced pixels from seven methods, remove the largest and smallest values, and use the mean of the remaining values. This method fuses the results from AFD, AP, LT, DLMMSE, MHC, PCSD, and LDI-NAT (see the fusion sketch after this list).
  • Demosaicnet (DEMONET): In [19], a feed-forward network architecture was proposed for demosaicing. There are D + 1 convolutional layers, and each layer has W outputs and uses K × K kernels. An initial model was trained using 1.3 million images from ImageNet and 1 million images from MirFlickr, and additional challenging images were collected to further enhance the training model; details can be found in [19]. It should be noted that we also performed some training using only Mastcam images. However, the customized model did not perform as well as the original one, probably due to a lack of training data, as we have fewer than 100 high-quality Mastcam images.
  • Fusion using 3 best (F3) [22]: We used F3 only for Mastcam images. The mean of the pixels from the demosaiced images of LT, MHC, and LDI-NAT is used (see the fusion sketch after this list).
  • Bilinear: We used bilinear interpolation for Mastcam images because it is the simplest algorithm.
  • Sequential Energy Minimization (SEM) [21]: A deep learning approach based on sequential energy minimization was proposed in [21]. The performance was reasonable except that the computation takes a long time due to sequential optimization.
  • Deep Residual Network (DRL) [20]: DRL is a deep learning-based approach to demosaicing built on a customized convolutional neural network (CNN) with a depth of 10 and a receptive field of size 21 × 21.
  • Exploitation of Color Correlation (ECC) [28]: The authors of [28] proposed a scheme that exploits the correlation between different color channels much more effectively than some of the existing algorithms.
  • Minimized-Laplacian Residual Interpolation (MLRI) [31]: This is a residual interpolation (RI)-based algorithm based on a minimized-Laplacian version.
  • Adaptive Residual Interpolation (ARI) [29]: ARI adaptively combines RI and MLRI at each pixel, and adaptively selects a suitable iteration number for each pixel, instead of using a common iteration number for all of the pixels.
  • Directional Difference Regression (DDR) [30]: DDR obtains the regression models using directional color differences of the training images. Once models are learned, they will be used for demosaicing.
It should be noted that F3 and ATMF are both pixel-level fusion methods. Details can be found in [22].
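Because ATMF and F3 operate independently at each pixel, they are easy to state in code. The sketch below is a minimal NumPy version, assuming each demosaiced result is an (H, W, 3) array; the function names are ours and are not taken from [22] or [23].

```python
import numpy as np

def atmf_fusion(demosaiced_list):
    """Alpha-trimmed mean fusion: at every pixel and band, drop the largest and
    smallest candidate values and average the remaining ones.

    demosaiced_list: list of (H, W, 3) arrays, e.g., the outputs of
    AFD, AP, LT, DLMMSE, MHC, PCSD, and LDI-NAT.
    """
    stack = np.stack(demosaiced_list, axis=0).astype(float)  # (N, H, W, 3)
    stack = np.sort(stack, axis=0)   # sort the candidates at each pixel/band
    trimmed = stack[1:-1]            # discard the minimum and maximum
    return trimmed.mean(axis=0)

def f3_fusion(lt, mhc, ldi_nat):
    """F3 fusion: plain per-pixel mean of the LT, MHC, and LDI-NAT outputs."""
    return np.mean(np.stack([lt, mhc, ldi_nat], axis=0), axis=0)
```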

2.2. Performance Metrics

Since there is no ground truth, we used a blind quality metric, the Natural Image Quality Evaluator (NIQE). NIQE is a “completely blind” image quality assessment (IQA) model [32] that measures deviations from the statistical regularities observed in high-quality natural images. The package can be downloaded from Prof. A. C. Bovik’s website; details can be found in [32,33].

3. Experimental Results

3.1. Data

Mastcam has two imagers, as shown in Figure 1. The left imager has three times lower resolution than the right. The left imager is usually used for long-range image acquisition, and the right imager for near-field data collection.
Mastcam images from NASA’s PDS data repository were used in our experiments. From the PDS, we retrieved 31 actual Mastcam images, all in raw Bayer-pattern form. There are no ground truth Mastcam images. For visualization purposes, Figure 2 and Figure 3 show some of the left and right Mastcam images, respectively, demosaiced with the MHC algorithm.

3.2. Left Mastcam Image Demosaicing Results

Due to the lack of ground truth Mastcam images, it is not possible to compute the peak signal-to-noise ratio (PSNR) and CIELAB [34] metrics that are widely used in demosaicing research. Instead, we applied a blind image quality assessment model, the Natural Image Quality Evaluator (NIQE), to assess the demosaiced images. NIQE was developed by researchers at the University of Texas at Austin [32], and we have used it in a previous project [33]. Here, we customized the NIQE model by training it on high-quality right Mastcam images; each Mastcam image cube contains six bands of high-quality non-RGB images that can serve this purpose. A sketch of the resulting scoring workflow is given below.
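The scoring workflow is: fit a NIQE model from the high-quality non-RGB Mastcam bands, then score each demosaiced R, G, and B band and average the scores. The sketch below assumes a NIQE implementation that exposes fit and score functions; `fit_niqe_model` and `niqe_score` are hypothetical names standing in for the released package of [32], not its actual API.

```python
import numpy as np

# Hypothetical NIQE interface; the released package from [32] provides the
# equivalent fit/score functionality under different names.
from niqe_lib import fit_niqe_model, niqe_score  # hypothetical module and API

def build_custom_model(high_quality_bands):
    """Fit a NIQE model from pristine frames (here: the six high-quality
    non-RGB Mastcam bands) instead of the default natural-image corpus."""
    return fit_niqe_model(high_quality_bands)

def score_demosaiced(demosaiced_rgb, model):
    """Score each of the R, G, and B bands of a demosaiced image and return
    the per-band scores plus their average (lower NIQE means better quality)."""
    per_band = [niqe_score(demosaiced_rgb[..., b], model) for b in range(3)]
    return per_band, float(np.mean(per_band))
```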
All 16 left images were processed by the 17 demosaicing algorithms: AFD, AP, bilinear, DEMONET, DLMMSE, MHC, F3, ATMF, LDI-NAT, LT, PCSD, SEM, DRL, ECC, ARI, MLRI, and DDR. F3 denotes the mean of the pixels from the demosaiced images of three algorithms (LT, LDI-NAT, and MHC).
Table 1, Table 2, and Table 3 show the NIQE scores of the various methods for the R, G, and B bands, respectively. Table 4 shows the NIQE scores averaged over the R, G, and B results for each method. Lower NIQE scores mean better performance, and bold numbers in each table indicate the best-performing methods. For the left images, we observed the following:
  • The MHC method, which was developed in 2004 and is currently being used by NASA, is mediocre in terms of average scores. Seven algorithms have better results.
  • The non-deep learning-based method ECC achieved the best performance for the left images in the red and blue bands, whereas MLRI performed best in the green band.
  • From Table 4, DEMONET performed the best amongst the deep learning algorithms. Its averaged score (5.98) is close to that of ECC (5.51).
Figure 4 and Figure 5 further corroborate the above observations. Indeed, ECC and DEMONET have comparable performance across the R, G, and B bands.
Figure 6, Figure 7 and Figure 8 visually compare the demosaiced outputs from the 17 methods. In Figure 6, one can see that bilinear, MHC, AP, LT, LDI-NAT, F3, and ATMF all have strong color distortions, and strong zipper artifacts can be seen in the results from AFD, AP, DLMMSE, PCSD, LDI-NAT, F3, and ATMF. The results of ECC and MLRI also show slight color distortions. The most perceptually pleasing results are those of DEMONET, ARI, DRL, and SEM. In Figure 7, strong color distortions can be seen in the results of AP, LT, MHC, LDI-NAT, and bilinear, and zipper artifacts are strong in the results of AFD, ATMF, AP, DLMMSE, PCSD, and LDI-NAT. Minor color distortions can be seen in MLRI, ECC, and DDR, while good results are obtained by DEMONET, ARI, DRL, and SEM. In Figure 8, AFD, AP, bilinear, DLMMSE, F3, ATMF, LDI-NAT, LT, and PCSD all show strong color and zipper artifacts, whereas perceptually good results are obtained by DEMONET, ARI, DDR, DRL, ECC, SEM, and MLRI.
In terms of computational complexity, ARI is the slowest of the non-deep learning methods, taking approximately three minutes per image. Among the deep learning-based methods, SEM is the slowest because, for some large images, we needed to divide the image into quadrants, and each quadrant takes a few minutes to process. In any event, computational efficiency is not a concern for this Mastcam project, as the demosaicing is done off-line; image quality is the primary concern.

3.3. Right Mastcam Image Demosaicing Results

Table 5, Table 6, and Table 7 summarize the NIQE scores of the various methods for the R, G, and B bands, respectively, of the 15 right images. Table 8 summarizes the scores averaged over the R, G, and B bands from the individual tables. We made the following observations:
  • In general, the NIQE scores are lower for the right images than for the left images. This is because the right images have three times higher resolution than the left, so neighboring pixels are more strongly correlated and the right images are easier to demosaic.
  • DEMONET has the best performance for the right images.
  • MHC is again mediocre, as seven other algorithms performed better.
  • Among the three deep learning methods (SEM, DEMONET, DRL), DEMONET performed the best.
  • There are several non-deep learning based algorithms (ECC, ARI, MLRI) that performed better than two of the deep learning based methods (SEM and DRL).
  • Among the non-deep learning based algorithms, ECC is the best performing one.
Figure 9 plots the averaged NIQE scores of the different methods for each of the R, G, and B bands across all right images, and Figure 10 shows the NIQE scores averaged over all bands and all right images. They further demonstrate that DEMONET is the best-performing method for the right images.
In addition to the above objective evaluations, all of the demosaiced images produced by the various algorithms were evaluated subjectively. Here, we include two sets of demosaiced right images for subjective evaluation, shown in Figure 11 and Figure 12. In Figure 11, it is hard to see color distortions in any of the methods; however, noticeable zipper artifacts can be observed in the results of AFD, AP, LT, DLMMSE, PCSD, LDI-NAT, ATMF, F3, and bilinear. The perceptually good algorithms include DEMONET, ARI, DDR, DRL, ECC, SEM, and MLRI. It can be seen that although the MHC algorithm was developed in 2004, it still performs reasonably well on this image. In Figure 12, we observe larger performance variations among the algorithms due to the presence of sharp edges in the scene. Strong color distortion can be seen in the results of the AP, LT, LDI-NAT, and bilinear algorithms, and strong zipper artifacts in the results of the AFD, AP, LT, DLMMSE, PCSD, LDI-NAT, ATMF, F3, and bilinear algorithms. Perceptually good algorithms include MHC, DEMONET, ARI, DDR, DRL, ECC, SEM, and MLRI.
In short, the DEMONET method yielded the best overall subjective and objective performance for the right Mastcam images. Figure 13 summarizes all methods, with the left and right results shown together in one chart. It can be seen that the right images score better than the left, owing to the right camera’s spatial resolution being three times higher than that of the left.

4. Conclusions

In this paper, we presented a comparative study of conventional and deep learning-based demosaicing algorithms for NASA Mastcam images, using 31 Mastcam images. From the comparative study, we concluded that the DEMONET algorithm performed the best for the right Mastcam images and that a conventional algorithm known as ECC performed the best for the left images. Not all deep learning algorithms performed better than non-deep learning algorithms, and ECC also worked quite well on the right images. We believe the time may have come for NASA to consider deep learning-based demosaicing algorithms for the right Mastcam images and ECC for the left images in the near future.

Author Contributions

C.K. conceived the concept and wrote the paper. B.C. generated all the figures and tables. J.F.B. provided feedback, comments, and suggestions.

Funding

NASA Jet Propulsion Laboratory supported this research (contract number 80NSSC17C0035).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bell, J.F., III; Godber, A.; McNair, S.; Caplinger, M.A.; Maki, J.N.; Lemmon, M.T.; Van Beek, J.; Malin, M.C.; Wellington, D.; Kinch, K.; et al. The Mars Science Laboratory Curiosity rover Mastcam instruments: Preflight and in-flight calibration, validation, and data archiving. Earth Space Sci. 2017, 4, 396–452.
  2. Ayhan, B.; Kwan, C.; Vance, S. On the Use of a Linear Spectral Unmixing Technique for Concentration Estimation of APXS Spectrum. J. Multidiscip. Eng. Sci. Technol. 2015, 2, 2469–2474.
  3. Wang, W.; Li, S.; Qi, H.; Ayhan, B.; Kwan, C.; Vance, S. Revisiting the Preprocessing Procedures for Elemental Concentration Estimation based on CHEMCAM LIBS on MARS Rover. In Proceedings of the 6th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Lausanne, Switzerland, 24–27 June 2014.
  4. Wang, W.; Ayhan, B.; Kwan, C.; Qi, H.; Vance, S. A Novel and Effective Multivariate Method for Compositional Analysis using Laser Induced Breakdown Spectroscopy. In Proceedings of the 35th International Symposium on Remote Sensing of Environment, Beijing, China, 22–26 April 2013.
  5. Dao, M.; Kwan, C.; Ayhan, B.; Bell, J.F., III. Enhancing Mastcam Images for Mars Rover Mission. In Proceedings of the 14th International Symposium on Neural Networks, Hokkaido, Japan, 21–26 June 2017; pp. 197–206.
  6. Ayhan, B.; Dao, M.; Kwan, C.; Chen, H.-M.; Bell, J.F., III; Kidd, R. A Novel Utilization of Image Registration Techniques to Process Mastcam Images in Mars Rover with Applications to Image Fusion, Pixel Clustering, and Anomaly Detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 4553–4564.
  7. Kwan, C.; Budavari, B.; Dao, M.; Ayhan, B.; Bell, J.F., III. Pansharpening of Mastcam images. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Fort Worth, TX, USA, 23–28 July 2017; pp. 5117–5120.
  8. Kwan, C.; Dao, M.; Chou, B.; Kwan, L.M.; Ayhan, B. Mastcam Image Enhancement Using Estimated Point Spread Functions. In Proceedings of the IEEE 8th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference (UEMCON), New York City, NY, USA, 19–21 October 2017.
  9. Malin, M.C.; Ravine, M.A.; Caplinger, M.A.; Ghaemi, F.T.; Schaffner, J.A.; Maki, J.N.; Bell, J.F., III; Cameron, J.F.; Dietrich, W.E.; Edgett, K.S.; et al. The Mars Science Laboratory (MSL) mast cameras and descent imager: I. Investigation and instrument descriptions. Earth Space Sci. 2017, 4, 2.
  10. Qu, Y.; Guo, R.; Wang, W.; Qi, H.; Ayhan, B.; Kwan, C.; Vance, S. Anomaly Detection in Hyperspectral Images Through Spectral Unmixing and Low Rank Decomposition. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016.
  11. Bayer, B.E. Color Imaging Array. U.S. Patent 3,971,065, 20 July 1976.
  12. Li, X.; Gunturk, B.; Zhang, L. Image demosaicing: A systematic survey. In Proceedings of the SPIE Visual Communications and Image Processing, San Jose, CA, USA, 27–31 January 2008; p. 6822.
  13. Losson, O.; Macaire, L.; Yang, Y. Comparison of color demosaicing methods. Adv. Imaging Electron Phys. 2010, 162, 173–265.
  14. Kwan, C.; Chou, B.; Kwan, L.-Y.M.; Budavari, B. Debayering RGBW color filter arrays: A pansharpening approach. In Proceedings of the IEEE 8th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference (UEMCON), New York, NY, USA, 19–21 October 2017; pp. 94–100.
  15. Kwan, C.; Chou, B. Further Improvement of Debayering Performance of RGBW Color Filter Arrays Using Deep Learning and Pansharpening Techniques. J. Imaging 2019, submitted.
  16. Zhang, L.; Wu, X.; Buades, A.; Li, X. Color demosaicking by local directional interpolation and nonlocal adaptive thresholding. J. Electron. Imaging 2011, 20.
  17. Malvar, H.S.; He, L.-W.; Cutler, R. High-quality linear interpolation for demosaicking of color images. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Montreal, QC, Canada, 17–21 May 2004; pp. 485–488.
  18. Zhang, L.; Wu, X. Color demosaicking via directional linear minimum mean square-error estimation. IEEE Trans. Image Process. 2005, 14, 2167–2178.
  19. Gharbi, M.; Chaurasia, G.; Paris, S.; Durand, F. Deep joint demosaicking and denoising. ACM Trans. Graph. 2016, 35, 191.
  20. Tan, R.; Zhang, K.; Zuo, W.; Zhang, L. Color image demosaicking via deep residual learning. In Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), Hong Kong, China, 14–17 July 2017; pp. 793–798.
  21. Klatzer, T.; Hammernik, K.; Knobelreiter, P.; Pock, T. Learning joint demosaicing and denoising based on sequential energy minimization. In Proceedings of the IEEE International Conference on Computational Photography (ICCP), Evanston, IL, USA, 13–15 May 2016; pp. 1–11.
  22. Kwan, C.; Chou, B.; Kwan, L.M.; Larkin, J.; Ayhan, B.; Bell, J.F., III; Kerner, H. Demosaicking enhancement using pixel-level fusion. Signal Image Video Process. 2018, 12, 749–756.
  23. Bednar, J.; Watt, T. Alpha-trimmed means and their relationship to median filters. IEEE Trans. Acoust. Speech Signal Process. 1984, 32, 145–153.
  24. Lu, W.; Tan, Y.-P. Color filter array demosaicking: New method and performance measures. IEEE Trans. Image Process. 2003, 12, 1194–1210.
  25. Dubois, E. Frequency-domain methods for demosaicking of Bayer-sampled color images. IEEE Signal Process. Lett. 2005, 12, 847–850.
  26. Gunturk, B.K.; Altunbasak, Y.; Mersereau, R.M. Color plane interpolation using alternating projections. IEEE Trans. Image Process. 2002, 11, 997–1013.
  27. Wu, X.; Zhang, N. Primary-consistent soft-decision color demosaicking for digital cameras. IEEE Trans. Image Process. 2004, 13, 1263–1274.
  28. Jaiswal, S.P.; Au, O.C.; Jakhetiya, V.; Yuan, Y.; Yang, H. Exploitation of inter-color correlation for color image demosaicking. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 1812–1816.
  29. Monno, Y.; Kiku, D.; Tanaka, M.; Okutomi, M. Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking. Sensors 2017, 17, 2787.
  30. Wu, J.; Timofte, R.; Van Gool, L. Demosaicing based on directional difference regression and efficient regression priors. IEEE Trans. Image Process. 2016, 25, 3862–3874.
  31. Kiku, D.; Monno, Y.; Tanaka, M.; Okutomi, M. Beyond color difference: Residual interpolation for color image demosaicking. IEEE Trans. Image Process. 2016, 25, 1288–1300.
  32. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a completely blind image quality analyzer. IEEE Signal Process. Lett. 2013, 22, 209–212.
  33. Kwan, C.; Budavari, B.; Bovik, A.; Marchisio, G. Blind quality assessment of fused WorldView-3 images by using the combinations of pansharpening and hypersharpening paradigms. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1835–1839.
  34. Zhang, X.; Wandell, B.A. A spatial extension of cielab for digital color image reproduction. J. Soc. Inf. Disp. 1997, 5, 61–63.
Figure 1. Mars rover Curiosity and its onboard cameras. Mastcam imagers act as eyes of the rover for rock sample selection and rover guidance.
Figure 2. Images from the left Mastcam.
Figure 3. Right Mastcam images.
Figure 4. Averaged NIQE scores of different bands for left images using all methods.
Figure 5. Averaged NIQE scores of R, G, and B bands for left images using all methods.
Figure 6. Subjective comparison of demosaiced images of different algorithms for left Mastcam image 1.
Figure 7. Subjective comparison of demosaiced images of different algorithms for left Mastcam image 5.
Figure 8. Subjective comparison of demosaiced images of different algorithms for left Mastcam image 6.
Figure 9. Averaged NIQE scores of different bands using all methods for right images.
Figure 10. Averaged NIQE scores of all bands (R, G, and B combined) using all methods for right images.
Figure 11. Subjective comparison of demosaiced images of different algorithms for right Mastcam image 1.
Figure 12. Demosaiced images of right Mastcam image 7.
Figure 13. Averaged NIQE scores (RGB scores are averaged) of different methods for left and right images.
Table 1. Summary of Natural Image Quality Evaluator (NIQE) scores (lower means better). Sixteen left images: Red band.
Method | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | Average Score
AFD | 12.6157 | 12.3485 | 12.1242 | 13.3303 | 13.2126 | 13.5321 | 14.7531 | 15.1553 | 18.6010 | 10.9798 | 10.1270 | 9.9302 | 9.9043 | 17.9503 | 15.2261 | 20.0289 | 13.7387
AP | 18.2010 | 18.0048 | 19.4864 | 17.5491 | 15.8528 | 16.5753 | 18.1528 | 16.4546 | 16.8276 | 13.0121 | 11.9930 | 13.4964 | 14.7574 | 17.2369 | 18.4499 | 23.5608 | 16.8507
BILINEAR | 24.3861 | 22.7447 | 24.9261 | 21.5132 | 27.4676 | 23.5540 | 20.5943 | 32.9269 | 36.3999 | 19.8422 | 19.0712 | 18.7308 | 22.4578 | 27.4981 | 29.3003 | 22.2328 | 24.6029
DEMONET | 6.3888 | 6.2854 | 6.2401 | 4.9996 | 6.4177 | 5.2067 | 8.1258 | 7.0335 | 7.5684 | 2.3706 | 2.5799 | 2.7853 | 3.1433 | 6.0430 | 5.4516 | 9.7968 | 5.6523
DLMMSE | 9.4969 | 9.8020 | 8.9876 | 9.9526 | 10.6303 | 9.5661 | 11.9049 | 15.9125 | 19.0670 | 9.2243 | 8.8326 | 8.5922 | 8.3645 | 17.2780 | 16.6125 | 16.5366 | 11.9225
MHC | 8.8028 | 8.4880 | 8.4972 | 6.5740 | 7.7228 | 6.8218 | 8.0547 | 10.2574 | 9.1416 | 5.8709 | 6.0534 | 5.7102 | 6.4907 | 8.9529 | 8.4660 | 10.3326 | 7.8898
F3 | 9.9487 | 9.4586 | 9.8147 | 8.3603 | 9.7182 | 8.0768 | 10.3871 | 12.0146 | 10.6184 | 7.5044 | 7.6903 | 7.8860 | 7.9320 | 10.0270 | 9.5595 | 11.9133 | 9.4319
ATMF | 7.5744 | 7.5672 | 7.4194 | 6.6482 | 8.0566 | 7.0440 | 9.2500 | 12.5748 | 12.2874 | 6.5111 | 6.8925 | 6.4804 | 6.9250 | 10.6036 | 10.0881 | 10.1432 | 8.5041
LDI-NAT | 15.2696 | 13.8418 | 14.1918 | 13.0392 | 16.0519 | 11.4435 | 14.1575 | 18.6315 | 13.4045 | 9.9377 | 10.4555 | 10.7327 | 10.2377 | 13.7882 | 14.1493 | 17.9184 | 13.5782
LT | 16.2990 | 16.1489 | 14.7652 | 14.5088 | 15.4279 | 12.9803 | 16.4634 | 16.2705 | 15.0118 | 10.6679 | 10.5675 | 10.7996 | 11.0427 | 14.2605 | 14.5101 | 19.9576 | 14.3551
PCSD | 10.6626 | 9.5610 | 10.5258 | 11.1919 | 11.5831 | 11.3474 | 13.2739 | 15.1180 | 16.6131 | 10.3915 | 10.0033 | 9.4892 | 10.4619 | 16.1212 | 15.0266 | 16.0245 | 12.3372
ARI | 7.8307 | 7.7902 | 7.6098 | 6.4998 | 7.1966 | 5.5262 | 8.8072 | 7.5124 | 8.4496 | 2.7482 | 3.0267 | 3.1441 | 3.7501 | 6.7718 | 6.8560 | 10.4665 | 6.4991
DDR | 7.3324 | 6.5304 | 7.4349 | 5.4152 | 7.4710 | 5.3931 | 7.4163 | 6.7847 | 8.1394 | 2.8828 | 3.3034 | 3.5171 | 3.8357 | 6.0895 | 6.0212 | 11.0807 | 6.1655
DRL | 6.6865 | 6.4329 | 6.6929 | 5.4711 | 6.9394 | 6.1882 | 8.1536 | 7.3102 | 7.3954 | 2.5361 | 2.9083 | 3.1997 | 3.4499 | 6.2047 | 6.3336 | 10.7963 | 6.0437
ECC | 6.1652 | 5.9417 | 5.7833 | 4.4966 | 5.8008 | 4.6530 | 7.1787 | 5.7831 | 8.0893 | 2.0770 | 2.4203 | 2.7926 | 2.8901 | 5.0870 | 4.8711 | 8.8361 | 5.1791
SEM | 8.2160 | 7.3065 | 6.8650 | 6.0870 | 6.4451 | 6.4901 | 6.2254 | 6.5468 | 7.4197 | 4.2544 | 4.1907 | 4.6557 | 4.6339 | 6.3160 | 6.9174 | 13.9538 | 6.6577
MLRI | 6.2695 | 6.1110 | 6.3910 | 5.0862 | 6.4207 | 5.0127 | 8.0620 | 6.1697 | 8.5279 | 2.3714 | 2.6752 | 2.8781 | 3.2375 | 5.6602 | 5.5172 | 9.5773 | 5.6230
Table 2. Summary of NIQE scores (lower means better). Sixteen left images: Green band.
Method | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | Average Score
AFD | 12.6703 | 12.6311 | 12.3865 | 13.8310 | 14.3111 | 12.9213 | 18.3620 | 16.2041 | 19.5074 | 11.5416 | 10.3887 | 10.2301 | 10.1888 | 18.5937 | 15.9739 | 17.7493 | 14.2182
AP | 20.1798 | 20.9515 | 21.5002 | 19.1573 | 19.5768 | 17.2300 | 18.3290 | 16.6963 | 17.4608 | 13.0597 | 12.6967 | 14.2266 | 14.9722 | 18.0224 | 19.4218 | 26.8274 | 18.1443
BILINEAR | 40.7608 | 39.2980 | 41.1265 | 41.9653 | 45.0438 | 34.1691 | 31.3486 | 41.9754 | 35.3423 | 29.9190 | 27.3598 | 25.3460 | 32.7174 | 37.4109 | 37.1828 | 29.1994 | 35.6353
DEMONET | 7.0640 | 6.9449 | 6.6178 | 5.5862 | 6.8468 | 5.3198 | 9.3102 | 7.2925 | 7.4448 | 2.5248 | 2.8198 | 3.1476 | 3.2791 | 6.3934 | 5.8190 | 9.9502 | 6.0226
DLMMSE | 9.5654 | 10.0626 | 9.2360 | 9.9096 | 12.2185 | 9.2016 | 14.1905 | 16.9864 | 19.5058 | 9.2222 | 8.6287 | 8.3050 | 8.0422 | 17.2357 | 15.9294 | 15.1912 | 12.0894
MHC | 11.0402 | 10.1916 | 9.6517 | 7.3656 | 9.3626 | 6.9627 | 9.1531 | 9.2700 | 9.3979 | 5.3081 | 5.7778 | 5.8543 | 6.2014 | 7.9290 | 8.0201 | 12.1286 | 8.3509
F3 | 13.7498 | 12.3342 | 12.8677 | 11.6409 | 12.3959 | 10.2167 | 13.8089 | 15.8334 | 12.2554 | 9.1715 | 9.0990 | 9.3589 | 10.1732 | 11.6633 | 11.2368 | 13.6424 | 11.8405
ATMF | 8.5114 | 8.4356 | 7.8041 | 7.1378 | 9.8430 | 7.3690 | 10.7015 | 13.0503 | 12.5598 | 6.6034 | 6.8257 | 6.7711 | 6.9102 | 10.2502 | 9.7555 | 10.4899 | 8.9387
LDI-NAT | 20.1918 | 19.6658 | 18.8845 | 17.4088 | 20.7436 | 15.1318 | 18.3224 | 23.7702 | 15.2608 | 12.3106 | 12.2848 | 12.0722 | 12.5477 | 16.1586 | 17.3386 | 20.0246 | 17.0073
LT | 22.7623 | 22.6173 | 19.8021 | 17.5254 | 18.0994 | 15.9179 | 18.1351 | 20.6919 | 16.1980 | 11.9799 | 12.3960 | 12.9244 | 13.2472 | 16.6548 | 17.3158 | 20.7084 | 17.3110
PCSD | 9.0432 | 8.8867 | 8.8222 | 9.6742 | 11.5718 | 10.1917 | 13.6950 | 14.8333 | 18.6161 | 10.1609 | 9.2967 | 8.9027 | 9.2410 | 16.6378 | 14.1500 | 14.2234 | 11.7467
ARI | 7.6321 | 7.7683 | 6.9852 | 6.1566 | 7.7425 | 5.7470 | 10.4436 | 7.2718 | 7.6983 | 2.9622 | 3.1619 | 3.4485 | 3.7585 | 6.5774 | 6.8704 | 12.1917 | 6.6510
DDR | 10.4109 | 8.9291 | 9.4875 | 7.5457 | 9.3295 | 6.9483 | 9.7873 | 8.2933 | 8.1583 | 3.8448 | 4.3701 | 4.7332 | 5.0439 | 7.1323 | 7.0400 | 15.1946 | 7.8905
DRL | 7.9368 | 7.4567 | 7.3435 | 6.3023 | 7.9834 | 5.7111 | 9.0479 | 7.7064 | 7.1232 | 2.8032 | 3.0707 | 3.4920 | 3.6821 | 6.6151 | 6.5001 | 14.0444 | 6.6762
ECC | 7.7006 | 6.8815 | 6.2364 | 5.0943 | 6.8910 | 5.6091 | 8.5025 | 5.7033 | 7.9690 | 2.6572 | 2.8357 | 3.2388 | 3.2777 | 5.2952 | 5.2841 | 12.5825 | 5.9849
SEM | 7.7693 | 6.8767 | 6.5279 | 6.1042 | 6.3507 | 6.2772 | 6.1881 | 6.7558 | 7.0327 | 4.3574 | 4.1155 | 4.7189 | 4.7257 | 6.5027 | 6.8619 | 13.6116 | 6.5485
MLRI | 6.3992 | 6.6364 | 5.7877 | 4.8804 | 6.2366 | 5.1377 | 8.4950 | 5.9310 | 8.0001 | 2.7079 | 2.6687 | 2.9962 | 3.2730 | 5.4421 | 5.5927 | 11.6218 | 5.7379
Table 3. Summary of NIQE scores (lower means better). Sixteen left images: Blue band.
Method | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | Average Score
AFD | 14.5864 | 13.9231 | 14.0585 | 15.3444 | 18.5497 | 15.5466 | 18.7568 | 17.3559 | 19.8981 | 13.6431 | 12.4304 | 12.0565 | 12.4879 | 18.7383 | 16.9312 | 18.7096 | 15.8135
AP | 19.2670 | 19.9871 | 21.3419 | 19.3088 | 19.3347 | 17.5124 | 19.2153 | 17.6247 | 18.2528 | 13.3385 | 13.2465 | 14.5793 | 15.2625 | 19.7025 | 20.2802 | 24.2563 | 18.2819
BILINEAR | 23.8308 | 20.8149 | 24.6371 | 21.6647 | 24.1133 | 19.8129 | 20.3025 | 40.5546 | 35.3253 | 18.3366 | 20.0293 | 18.0607 | 19.3285 | 32.7385 | 31.7767 | 20.6639 | 24.4994
DEMONET | 7.2567 | 6.6519 | 6.6649 | 5.9576 | 7.4158 | 5.6743 | 8.8996 | 7.8555 | 7.5587 | 3.1536 | 3.1706 | 3.4218 | 3.6073 | 7.1296 | 6.2235 | 9.9246 | 6.2854
DLMMSE | 10.9357 | 10.7937 | 10.0314 | 11.7147 | 15.2816 | 11.0944 | 18.2778 | 20.5719 | 20.2985 | 11.2449 | 10.0133 | 9.9414 | 9.3345 | 20.4301 | 17.6373 | 14.8350 | 13.9023
MHC | 8.7606 | 8.0585 | 8.0009 | 6.8194 | 10.9248 | 7.0562 | 11.1300 | 10.7483 | 9.0195 | 5.8184 | 5.9876 | 5.6556 | 5.6390 | 8.3535 | 8.1691 | 10.5054 | 8.1654
F3 | 13.7223 | 12.4385 | 13.0771 | 10.6563 | 15.8835 | 9.5145 | 14.1971 | 16.4787 | 11.8411 | 8.8933 | 9.0591 | 9.0519 | 9.5685 | 11.7786 | 12.5635 | 12.2013 | 11.9328
ATMF | 9.3318 | 9.2339 | 9.0547 | 8.7640 | 12.2586 | 8.2443 | 13.3870 | 14.7837 | 14.6702 | 8.4970 | 8.0584 | 8.0154 | 7.6959 | 12.8762 | 12.4575 | 10.9242 | 10.5158
LDI-NAT | 19.5850 | 18.1983 | 19.8057 | 16.8455 | 21.4798 | 13.4044 | 18.7152 | 24.5553 | 15.6810 | 12.0656 | 12.5384 | 13.3025 | 12.3000 | 17.8838 | 18.9332 | 18.0008 | 17.0809
LT | 18.3088 | 19.1614 | 18.4159 | 16.9973 | 18.7274 | 14.4921 | 16.7091 | 18.8571 | 15.7428 | 12.1769 | 12.2158 | 13.1784 | 12.7667 | 19.0153 | 18.0692 | 18.9609 | 16.4872
PCSD | 9.5105 | 9.6314 | 9.6460 | 10.8682 | 13.5822 | 10.7944 | 14.1071 | 16.8850 | 19.9891 | 11.2238 | 9.7191 | 9.3630 | 9.6811 | 18.1334 | 16.8122 | 14.6453 | 12.7870
ARI | 7.5236 | 7.5462 | 6.7914 | 6.1973 | 6.5027 | 5.2371 | 8.9051 | 7.1753 | 7.4235 | 2.8278 | 2.9863 | 3.2339 | 3.6053 | 6.1213 | 6.1758 | 11.1646 | 6.2136
DDR | 8.8834 | 8.5042 | 8.5348 | 7.1919 | 8.8674 | 7.0034 | 9.5343 | 8.6183 | 8.1761 | 4.0088 | 4.4167 | 4.7057 | 4.9941 | 7.5178 | 7.5042 | 12.1168 | 7.5361
DRL | 7.5994 | 7.7727 | 7.2424 | 6.2941 | 8.5154 | 6.1257 | 10.1854 | 8.4451 | 8.0437 | 3.4442 | 3.5534 | 3.8158 | 4.1140 | 7.3694 | 7.1362 | 10.2393 | 6.8685
ECC | 6.4315 | 6.1222 | 5.7949 | 5.1305 | 5.9310 | 4.9685 | 7.5694 | 6.1103 | 7.2993 | 2.4650 | 2.6389 | 2.9609 | 3.0864 | 5.1252 | 5.1081 | 9.2109 | 5.3721
SEM | 8.8906 | 7.8784 | 7.2367 | 6.6777 | 7.2827 | 6.8169 | 6.7955 | 7.3035 | 7.0705 | 4.9536 | 4.8367 | 5.4966 | 5.3667 | 6.9520 | 7.4696 | 14.4549 | 7.2177
MLRI | 6.7813 | 6.5032 | 6.1632 | 5.2871 | 6.4288 | 5.3377 | 8.1457 | 6.5769 | 7.5407 | 2.6612 | 2.8198 | 3.0884 | 3.2593 | 5.4085 | 5.5196 | 9.3803 | 5.6814
Table 4. Summary of NIQE scores (lower means better). Sixteen left images: Averaged over R, G, and B bands.
Method | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | Average Score
AFD | 13.2908 | 12.9676 | 12.8564 | 14.1686 | 15.3578 | 14.0000 | 17.2906 | 16.2384 | 19.3355 | 12.0549 | 10.9820 | 10.7389 | 10.8603 | 18.4274 | 16.0437 | 18.8293 | 14.5901
AP | 19.2159 | 19.6478 | 20.7761 | 18.6718 | 18.2548 | 17.1059 | 18.5657 | 16.9252 | 17.5137 | 13.1368 | 12.6454 | 14.1008 | 14.9974 | 18.3206 | 19.3840 | 24.8815 | 17.7590
BILINEAR | 29.6592 | 27.6192 | 30.2299 | 28.3811 | 32.2082 | 25.8453 | 24.0818 | 38.4856 | 35.6892 | 22.6992 | 22.1534 | 20.7125 | 24.8346 | 32.5492 | 32.7533 | 24.0321 | 28.2459
DEMONET | 6.9032 | 6.6274 | 6.5076 | 5.5145 | 6.8934 | 5.4003 | 8.7785 | 7.3938 | 7.5240 | 2.6830 | 2.8568 | 3.1182 | 3.3432 | 6.5220 | 5.8313 | 9.8905 | 5.9867
DLMMSE | 9.9994 | 10.2194 | 9.4183 | 10.5256 | 12.7101 | 9.9540 | 14.7911 | 17.8236 | 19.6238 | 9.8971 | 9.1582 | 8.9462 | 8.5804 | 18.3146 | 16.7264 | 15.5209 | 12.6381
MHC | 9.5345 | 8.9127 | 8.7166 | 6.9196 | 9.3367 | 6.9469 | 9.4459 | 10.0919 | 9.1864 | 5.6658 | 5.9396 | 5.7400 | 6.1104 | 8.4118 | 8.2184 | 10.9889 | 8.1354
F3 | 12.4736 | 11.4104 | 11.9199 | 10.2192 | 12.6659 | 9.2693 | 12.7977 | 14.7756 | 11.5717 | 8.5231 | 8.6161 | 8.7656 | 9.2246 | 11.1563 | 11.1199 | 12.5856 | 11.0684
ATMF | 8.4725 | 8.4122 | 8.0927 | 7.5167 | 10.0527 | 7.5524 | 11.1128 | 13.4696 | 13.1725 | 7.2038 | 7.2589 | 7.0890 | 7.1770 | 11.2433 | 10.7670 | 10.5191 | 9.3195
LDI-NAT | 18.3488 | 17.2353 | 17.6273 | 15.7645 | 19.4251 | 13.3266 | 17.0650 | 22.3190 | 14.7821 | 11.4379 | 11.7596 | 12.0358 | 11.6951 | 15.9435 | 16.8070 | 18.6479 | 15.8888
LT | 19.1234 | 19.3092 | 17.6610 | 16.3438 | 17.4182 | 14.4634 | 17.1025 | 18.6065 | 15.6509 | 11.6083 | 11.7264 | 12.3008 | 12.3522 | 16.6435 | 16.6317 | 19.8757 | 16.0511
PCSD | 9.7388 | 9.3597 | 9.6647 | 10.5781 | 12.2457 | 10.7778 | 13.6920 | 15.6121 | 18.4061 | 10.5921 | 9.6730 | 9.2516 | 9.7947 | 16.9642 | 15.3296 | 14.9644 | 12.2903
ARI | 7.6621 | 7.7016 | 7.1288 | 6.2846 | 7.1473 | 5.5034 | 9.3853 | 7.3199 | 7.8571 | 2.8460 | 3.0583 | 3.2755 | 3.7046 | 6.4901 | 6.6341 | 11.2743 | 6.4546
DDR | 8.8756 | 7.9879 | 8.4857 | 6.7176 | 8.5560 | 6.4483 | 8.9126 | 7.8988 | 8.1580 | 3.5788 | 4.0301 | 4.3187 | 4.6246 | 6.9132 | 6.8551 | 12.7974 | 7.1974
DRL | 7.4076 | 7.2208 | 7.0930 | 6.0225 | 7.8127 | 6.0083 | 9.1290 | 7.8206 | 7.5207 | 2.9279 | 3.1774 | 3.5025 | 3.7487 | 6.7297 | 6.6566 | 11.6933 | 6.5295
ECC | 6.7658 | 6.3151 | 5.9382 | 4.9071 | 6.2076 | 5.0769 | 7.7502 | 5.8655 | 7.7859 | 2.3998 | 2.6317 | 2.9974 | 3.0847 | 5.1692 | 5.0878 | 10.2098 | 5.5120
SEM | 8.2920 | 7.3539 | 6.8765 | 6.2896 | 6.6928 | 6.5281 | 6.4030 | 6.8687 | 7.1743 | 4.5218 | 4.3810 | 4.9571 | 4.9088 | 6.5902 | 7.0829 | 14.0068 | 6.8080
MLRI | 6.4833 | 6.4169 | 6.1140 | 5.0846 | 6.3620 | 5.1627 | 8.2342 | 6.2259 | 8.0229 | 2.5802 | 2.7212 | 2.9876 | 3.2566 | 5.5036 | 5.5432 | 10.1931 | 5.6807
Table 5. Summary of NIQE scores (lower means better). Fifteen right images: Red band.
Method | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Average Score
AFD | 8.9373 | 8.7759 | 8.4147 | 7.8949 | 9.9891 | 11.7386 | 12.8431 | 11.4313 | 9.5332 | 8.0114 | 7.8637 | 7.4557 | 10.9986 | 9.4828 | 10.4834 | 9.5902
AP | 11.7636 | 12.0465 | 11.5401 | 11.1604 | 12.1845 | 15.8165 | 13.9067 | 13.4301 | 10.4149 | 12.9089 | 12.6933 | 13.0260 | 13.8336 | 12.8938 | 14.1931 | 12.7874
BILINEAR | 28.8126 | 28.7810 | 30.4590 | 28.4978 | 25.9336 | 14.7615 | 19.0792 | 27.5918 | 30.6576 | 17.3245 | 15.8286 | 17.4058 | 18.1779 | 28.7096 | 29.6889 | 24.1140
DEMONET | 2.9675 | 3.0637 | 3.3210 | 3.2762 | 3.3405 | 2.8246 | 2.1394 | 2.4837 | 4.7563 | 2.9169 | 2.8299 | 3.1901 | 5.1096 | 3.4419 | 2.8021 | 3.2309
DLMMSE | 7.8138 | 8.3816 | 7.8904 | 7.4860 | 9.7922 | 9.7494 | 9.5454 | 11.8459 | 8.3049 | 7.3125 | 6.2269 | 6.0346 | 8.1370 | 7.9481 | 11.2107 | 8.5120
MHC | 5.3547 | 5.9482 | 5.4479 | 5.4481 | 7.3170 | 6.3724 | 6.1576 | 7.5658 | 5.6747 | 5.0677 | 4.8282 | 4.3008 | 5.1746 | 6.0824 | 7.6604 | 5.8934
F3 | 8.7640 | 8.7259 | 8.4800 | 7.7582 | 9.5665 | 10.2421 | 10.1394 | 11.7106 | 7.7857 | 8.3091 | 7.4829 | 7.1931 | 7.8775 | 8.0355 | 9.4304 | 8.7667
ATMF | 6.6743 | 6.6595 | 6.4038 | 5.8089 | 7.2439 | 7.0425 | 7.1295 | 9.2204 | 6.8400 | 5.2451 | 4.7606 | 4.7055 | 5.2618 | 5.9143 | 7.8990 | 6.4539
LDI-NAT | 9.2965 | 9.7399 | 9.3430 | 9.5120 | 10.8810 | 12.0612 | 11.2467 | 13.7934 | 8.2312 | 10.3715 | 9.1168 | 9.1195 | 10.6429 | 8.6974 | 10.9420 | 10.1997
LT | 10.6002 | 10.7115 | 10.2027 | 10.3402 | 10.6377 | 12.9606 | 11.8765 | 13.6656 | 8.9471 | 10.3509 | 9.5417 | 9.6351 | 11.1062 | 9.5586 | 12.0835 | 10.8145
PCSD | 8.4732 | 8.1432 | 8.2082 | 8.0098 | 9.3474 | 10.5576 | 10.6419 | 10.9943 | 11.5698 | 7.6042 | 7.0000 | 6.8201 | 8.1520 | 10.0265 | 10.6231 | 9.0781
ARI | 3.8643 | 3.8832 | 3.7584 | 3.6326 | 3.6848 | 3.7225 | 3.4176 | 4.0140 | 4.8334 | 3.6318 | 3.5733 | 3.7651 | 4.8328 | 3.4707 | 3.4006 | 3.8323
DDR | 3.5730 | 3.6052 | 3.4872 | 3.3958 | 3.7071 | 4.2683 | 3.3111 | 3.2884 | 4.4908 | 3.5341 | 3.5813 | 3.8145 | 5.2901 | 3.1349 | 3.6475 | 3.7420
DRL | 4.3763 | 4.6736 | 4.1437 | 4.3016 | 4.4772 | 4.6836 | 3.9870 | 3.6093 | 5.9920 | 4.2541 | 4.1115 | 4.3783 | 5.6772 | 3.9266 | 4.3381 | 4.4620
ECC | 3.3683 | 3.3292 | 3.3833 | 3.4280 | 3.6901 | 3.3630 | 3.1023 | 2.8801 | 5.0862 | 3.1662 | 2.9537 | 3.1822 | 4.1965 | 3.2978 | 3.2007 | 3.4418
SEM | 6.4175 | 6.7539 | 6.2270 | 7.0319 | 6.6929 | 3.9100 | 3.7460 | 5.0608 | 5.7326 | 4.1348 | 4.2521 | 5.4342 | 7.4946 | 5.5833 | 6.2933 | 5.6510
MLRI | 3.3507 | 3.3433 | 3.2649 | 3.1270 | 3.2436 | 3.7696 | 3.2913 | 3.1710 | 4.8223 | 3.4419 | 3.1855 | 3.3245 | 4.2431 | 3.1748 | 2.9723 | 3.4484
Table 6. Summary of NIQE scores (lower means better). Fifteen right images: Green band.
Method | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Average Score
AFD | 10.2283 | 9.9450 | 10.0469 | 8.9331 | 11.0748 | 12.3874 | 12.5055 | 12.6413 | 9.6722 | 8.8446 | 8.1703 | 7.5573 | 11.4522 | 10.3896 | 12.0499 | 10.3932
AP | 13.2241 | 12.7496 | 12.6944 | 13.2560 | 14.8596 | 16.9524 | 15.3472 | 13.9157 | 12.4708 | 13.9361 | 13.5262 | 13.8971 | 15.2199 | 14.5889 | 13.9971 | 14.0423
BILINEAR | 35.4112 | 33.4531 | 33.5375 | 32.1076 | 32.0812 | 23.6092 | 28.2909 | 38.5868 | 32.8474 | 27.5611 | 19.2977 | 20.2725 | 23.6938 | 33.8324 | 38.2297 | 30.1875
DEMONET | 2.8300 | 2.8854 | 2.9407 | 3.0127 | 3.1893 | 3.0187 | 2.0252 | 2.3617 | 4.0645 | 3.0288 | 2.7233 | 2.8771 | 4.2383 | 3.3213 | 3.0590 | 3.0384
DLMMSE | 7.3204 | 7.5524 | 7.6667 | 7.4579 | 8.8528 | 9.6833 | 9.3913 | 11.3895 | 8.3860 | 7.0783 | 5.9558 | 5.4538 | 7.2602 | 8.1940 | 10.6806 | 8.1549
MHC | 4.7724 | 4.8642 | 4.8707 | 4.8521 | 5.1707 | 6.2827 | 5.2994 | 5.7075 | 5.1135 | 5.2209 | 4.7714 | 4.5538 | 6.7551 | 4.9248 | 5.8384 | 5.2665
F3 | 9.0362 | 9.0262 | 8.9231 | 8.0461 | 10.0338 | 11.1550 | 10.3404 | 12.5021 | 8.3595 | 9.8269 | 8.9642 | 8.0894 | 9.6839 | 8.9083 | 10.5739 | 9.5646
ATMF | 6.2075 | 6.4776 | 6.2349 | 5.7986 | 6.8841 | 7.5712 | 7.2522 | 9.1094 | 6.5474 | 5.9517 | 5.2739 | 5.0625 | 5.9398 | 6.0583 | 7.9680 | 6.5558
LDI-NAT | 12.0319 | 11.0222 | 11.7852 | 11.7070 | 12.9594 | 13.7746 | 13.0479 | 16.4539 | 9.7382 | 13.0068 | 11.5174 | 11.5833 | 13.3256 | 11.8053 | 13.6218 | 12.4920
LT | 12.6544 | 11.7003 | 12.4646 | 12.2405 | 13.8234 | 13.6690 | 13.6470 | 16.7674 | 10.6262 | 15.1693 | 14.2864 | 13.7754 | 14.7637 | 12.5160 | 14.6491 | 13.5168
PCSD | 7.9138 | 7.9461 | 8.3462 | 8.0029 | 8.9711 | 9.9546 | 10.0704 | 11.0980 | 10.9626 | 7.3233 | 6.2532 | 5.9437 | 7.2752 | 9.4385 | 10.3597 | 8.6573
ARI | 3.6773 | 3.6637 | 3.5379 | 3.3950 | 3.5790 | 4.5649 | 3.9592 | 3.7183 | 4.9729 | 4.1258 | 3.9438 | 4.2372 | 5.8258 | 3.4715 | 3.5483 | 4.0147
DDR | 4.6053 | 5.2221 | 4.7598 | 4.5665 | 4.3752 | 5.4972 | 4.0005 | 4.1847 | 5.4951 | 4.8847 | 4.8642 | 5.1785 | 7.2650 | 4.1581 | 4.7003 | 4.9171
DRL | 3.2993 | 3.4929 | 3.3397 | 3.2732 | 3.3959 | 4.2842 | 3.2201 | 2.9978 | 4.5287 | 3.5500 | 3.6291 | 4.1059 | 6.0238 | 3.2160 | 3.3621 | 3.7146
ECC | 3.0159 | 3.0625 | 3.1197 | 3.2723 | 3.2585 | 3.3437 | 2.6735 | 2.6450 | 4.3873 | 3.3267 | 2.9858 | 3.4501 | 5.6797 | 2.8642 | 2.8602 | 3.3297
SEM | 5.5644 | 5.9796 | 5.4204 | 6.4380 | 6.1974 | 3.7708 | 3.5829 | 4.6566 | 5.5745 | 4.2964 | 4.3139 | 5.3388 | 7.3200 | 5.4434 | 5.9784 | 5.3250
MLRI | 3.1872 | 3.1842 | 3.1559 | 3.3401 | 3.3998 | 3.8686 | 3.3537 | 3.1066 | 4.5308 | 3.6670 | 3.4663 | 3.7024 | 5.6801 | 3.2627 | 3.0902 | 3.5997
Table 7. Summary of NIQE scores (lower means better). Fifteen right images: Blue band.
Method | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Average Score
AFD | 12.5477 | 11.9409 | 12.4119 | 9.9442 | 12.8377 | 13.9556 | 14.2960 | 14.9113 | 10.4462 | 10.9007 | 10.0915 | 8.7691 | 13.1742 | 12.4651 | 13.8381 | 12.1687
AP | 13.5296 | 13.2034 | 12.5964 | 12.7902 | 14.7256 | 17.1475 | 15.9003 | 13.3022 | 12.5033 | 14.2503 | 13.1017 | 13.3199 | 14.8402 | 15.2030 | 14.6520 | 14.0710
BILINEAR | 23.8644 | 26.6160 | 27.5043 | 25.1093 | 28.0884 | 16.6282 | 20.4687 | 30.4048 | 27.4614 | 21.8034 | 14.2912 | 16.0282 | 16.4314 | 28.5738 | 32.3832 | 23.7105
DEMONET | 3.6025 | 3.3205 | 3.4325 | 3.3256 | 4.0589 | 3.0237 | 2.4499 | 2.7884 | 4.0108 | 3.3536 | 3.1538 | 3.3053 | 4.6468 | 3.8813 | 4.3289 | 3.5122
DLMMSE | 8.5264 | 8.7596 | 8.6707 | 8.5476 | 9.6663 | 10.5780 | 10.7446 | 12.5295 | 9.6187 | 8.1350 | 6.6076 | 6.2440 | 9.3913 | 9.8253 | 11.5818 | 9.2951
MHC | 6.2451 | 6.3690 | 6.5548 | 6.0205 | 7.2500 | 6.5242 | 6.0671 | 7.5240 | 7.1293 | 5.3466 | 4.5403 | 4.1923 | 5.3854 | 5.7723 | 7.2084 | 6.1419
F3 | 8.3102 | 8.3781 | 8.4115 | 8.2123 | 9.3660 | 11.4006 | 10.7494 | 14.1100 | 8.1449 | 10.0275 | 8.6781 | 8.1412 | 10.1294 | 9.2251 | 11.2337 | 9.6345
ATMF | 7.1593 | 6.9848 | 7.2341 | 6.5064 | 7.7006 | 8.6775 | 8.4664 | 10.6989 | 7.4679 | 6.6607 | 5.5655 | 5.2368 | 6.7328 | 7.2673 | 9.3764 | 7.4490
LDI-NAT | 11.1834 | 10.2198 | 10.1846 | 10.7598 | 12.7046 | 14.1219 | 14.0287 | 16.5706 | 8.7091 | 14.3474 | 12.0921 | 11.7074 | 13.6881 | 12.1656 | 13.3946 | 12.3919
LT | 11.8153 | 11.1057 | 11.2251 | 11.6244 | 13.2907 | 13.9863 | 13.8112 | 16.8207 | 9.9657 | 14.3183 | 12.9410 | 13.0936 | 14.3593 | 12.8745 | 14.0450 | 13.0184
PCSD | 9.2291 | 9.0793 | 9.8087 | 9.0227 | 10.3275 | 10.2666 | 10.6311 | 12.7562 | 10.8984 | 7.7754 | 6.8724 | 6.6299 | 7.9110 | 10.3900 | 11.7432 | 9.5561
ARI | 3.4140 | 3.4395 | 3.2791 | 3.4732 | 3.7545 | 3.8988 | 3.5656 | 3.8133 | 4.9650 | 4.0305 | 4.1304 | 4.2627 | 5.5859 | 3.7664 | 4.0767 | 3.9637
DDR | 4.2288 | 4.5550 | 4.2588 | 4.1113 | 4.3614 | 5.7041 | 4.7395 | 4.3866 | 4.4837 | 4.9152 | 4.5909 | 4.7278 | 6.6523 | 3.8431 | 4.4323 | 4.6660
DRL | 3.6402 | 3.6145 | 3.7252 | 3.5532 | 3.3148 | 5.0437 | 4.0088 | 3.4239 | 5.0221 | 4.4532 | 4.0147 | 4.1556 | 5.4063 | 3.3822 | 3.6686 | 4.0285
ECC | 3.3025 | 3.3867 | 3.2667 | 3.6631 | 3.7915 | 3.6648 | 2.9638 | 3.1258 | 4.7044 | 3.5719 | 3.5386 | 3.7569 | 5.1194 | 3.5656 | 3.7446 | 3.6777
SEM | 6.1757 | 6.4499 | 5.8948 | 6.7091 | 6.8396 | 4.3956 | 4.5851 | 5.2785 | 5.7554 | 4.8414 | 4.6764 | 5.6456 | 7.5500 | 6.0228 | 7.0262 | 5.8564
MLRI | 3.4477 | 3.6087 | 3.3960 | 3.6005 | 3.8101 | 4.0409 | 3.4529 | 3.4557 | 4.4991 | 3.8203 | 3.6343 | 3.8192 | 4.9023 | 3.5789 | 3.8469 | 3.7942
Table 8. Summary of NIQE scores (lower means better). Fifteen right images: Average of the R, G, and B bands.
Method | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Average Score
AFD | 10.5711 | 10.2206 | 10.2911 | 8.9241 | 11.3005 | 12.6939 | 13.2149 | 12.9947 | 9.8839 | 9.2522 | 8.7085 | 7.9274 | 11.8750 | 10.7792 | 12.1238 | 10.7174
AP | 12.8391 | 12.6665 | 12.2770 | 12.4022 | 13.9232 | 16.6388 | 15.0514 | 13.5493 | 11.7963 | 13.6984 | 13.1071 | 13.4143 | 14.6312 | 14.2286 | 14.2807 | 13.6336
BILINEAR | 29.3628 | 29.6167 | 30.5003 | 28.5716 | 28.7011 | 18.3330 | 22.6129 | 32.1945 | 30.3221 | 22.2297 | 16.4725 | 17.9022 | 19.4344 | 30.3719 | 33.4339 | 26.0040
DEMONET | 3.1333 | 3.0899 | 3.2314 | 3.2048 | 3.5296 | 2.9557 | 2.2048 | 2.5446 | 4.2772 | 3.0998 | 2.9023 | 3.1242 | 4.6649 | 3.5482 | 3.3967 | 3.2605
DLMMSE | 7.8869 | 8.2312 | 8.0759 | 7.8305 | 9.4371 | 10.0036 | 9.8938 | 11.9216 | 8.7699 | 7.5086 | 6.2634 | 5.9108 | 8.2628 | 8.6558 | 11.1577 | 8.6540
MHC | 5.4574 | 5.7271 | 5.6244 | 5.4403 | 6.5792 | 6.3931 | 5.8414 | 6.9325 | 5.9725 | 5.2117 | 4.7133 | 4.3490 | 5.7717 | 5.5932 | 6.9024 | 5.7673
F3 | 8.7034 | 8.7101 | 8.6049 | 8.0055 | 9.6554 | 10.9326 | 10.4097 | 12.7743 | 8.0967 | 9.3878 | 8.3751 | 7.8079 | 9.2303 | 8.7230 | 10.4127 | 9.3220
ATMF | 6.6804 | 6.7073 | 6.6243 | 6.0380 | 7.2762 | 7.7637 | 7.6160 | 9.6762 | 6.9518 | 5.9525 | 5.2000 | 5.0016 | 5.9781 | 6.4133 | 8.4145 | 6.8196
LDI-NAT | 10.8373 | 10.3273 | 10.4376 | 10.6596 | 12.1817 | 13.3192 | 12.7744 | 15.6060 | 8.8929 | 12.5752 | 10.9088 | 10.8034 | 12.5522 | 10.8894 | 12.6528 | 11.6945
LT | 11.6900 | 11.1725 | 11.2975 | 11.4017 | 12.5839 | 13.5386 | 13.1116 | 15.7512 | 9.8464 | 13.2795 | 12.2564 | 12.1680 | 13.4097 | 11.6497 | 13.5925 | 12.4499
PCSD | 8.5387 | 8.3895 | 8.7877 | 8.3452 | 9.5487 | 10.2596 | 10.4478 | 11.6161 | 11.1436 | 7.5676 | 6.7085 | 6.4646 | 7.7794 | 9.9517 | 10.9087 | 9.0972
ARI | 3.6519 | 3.6621 | 3.5252 | 3.5003 | 3.6728 | 4.0621 | 3.6474 | 3.8485 | 4.9238 | 3.9294 | 3.8825 | 4.0883 | 5.4148 | 3.5695 | 3.6752 | 3.9369
DDR | 4.1357 | 4.4608 | 4.1686 | 4.0245 | 4.1479 | 5.1565 | 4.0170 | 3.9532 | 4.8232 | 4.4447 | 4.3454 | 4.5736 | 6.4025 | 3.7120 | 4.2600 | 4.4417
DRL | 3.7719 | 3.9270 | 3.7362 | 3.7093 | 3.7293 | 4.6705 | 3.7386 | 3.3437 | 5.1809 | 4.0857 | 3.9184 | 4.2133 | 5.7024 | 3.5083 | 3.7896 | 4.0684
ECC | 3.2289 | 3.2595 | 3.2566 | 3.4545 | 3.5800 | 3.4572 | 2.9132 | 2.8836 | 4.7260 | 3.3549 | 3.1594 | 3.4630 | 4.9985 | 3.2425 | 3.2685 | 3.4831
SEM | 6.0525 | 6.3945 | 5.8474 | 6.7263 | 6.5767 | 4.0255 | 3.9714 | 4.9986 | 5.6875 | 4.4242 | 4.4142 | 5.4728 | 7.4549 | 5.6832 | 6.4326 | 5.6108
MLRI | 3.3285 | 3.3787 | 3.2723 | 3.3559 | 3.4845 | 3.8930 | 3.3660 | 3.2444 | 4.6174 | 3.6431 | 3.4287 | 3.6154 | 4.9418 | 3.3388 | 3.3031 | 3.6141
