Article

First Demonstration of Calibrated Color Imaging by the CAOS Camera

School of Engineering, University College Cork, T12 K8AF Cork, Ireland
*
Author to whom correspondence should be addressed.
Photonics 2021, 8(12), 538; https://doi.org/10.3390/photonics8120538
Submission received: 23 September 2021 / Revised: 25 November 2021 / Accepted: 26 November 2021 / Published: 28 November 2021

Abstract
The Coded Access Optical Sensor (CAOS) camera is a novel, single unit, full spectrum (UV to short-wave IR bands), linear, high dynamic range (HDR) camera. In this paper, calibrated color target imaging using the CAOS camera and a comparison to a commercial HDR CMOS camera is demonstrated for the first time. The first experiment using a calibrated color checker chart indicates that although the CMOS sensor-based camera has an 87 dB manufacturer-specified HDR range, unrestricted usage of this CMOS camera’s output range largely fails to deliver quality color recovery. On the other hand, the intrinsically linear full dynamic range operation CAOS camera color image recovery generally matches the restricted linear-mode commercial CMOS sensor-based camera recovery for the presented 39.5 dB non-HDR target that also matches the near 40 dB linear camera response function (CRF) range of the CMOS camera. Specifically, compared to the color checker chart manufacturer-provided XYZ values for the calibrated target, percentage XYZ mean errors of 8.3% and 10.9% are achieved for the restricted linear range CMOS camera and the CAOS camera, respectively. An alternate color camera assessment gives CIE ΔE00 mean values of 4.59 and 5.7 for the restricted linear range CMOS camera and the CAOS camera, respectively. Unlike the CMOS camera lens optics and its photo-detection electronics, no special linear response optics and photo-detector designs were used for the experimental CAOS camera; nevertheless, a good and equivalent color recovery was achieved. Given the limited HDR linear range capabilities of a CMOS camera and the intrinsically wide linear HDR capability of a CAOS camera, a combined CAOS-CMOS mode of the CAOS smart camera is prudent and can empower HDR color imaging.
Applications for such a hybrid camera include still photography imaging, especially quantitative imaging of biological samples, valuable artworks, and archaeological artefacts that require authentic color data generation for reliable medical decisions as well as forgery-preventing verifications.

1. Introduction

There are many natural and human-made scenes with HDR near-continuous linear irradiance variation image content. To capture and display this linear HDR irradiance information, both linear HDR cameras as well as linear HDR displays are needed [1,2,3]. Specifically, to enable low contrast authentic target quantitative information detection embedded within these HDR scenes, it is critical that the camera have a linear camera response function (CRF) over the entire irradiance detection range. Recent results have shown that although classic silicon CMOS sensor-based cameras have excellent positive attributes for visible light operations such as high sensitivity, small pixel sizes, low power consumption, compact size, and large pixel counts [4], they can have a difficult time recovering low contrast (<6 dB step) targets within a linear HDR scene such as within a 90 dB range [5]. It is also known that both motion picture digital cinema production studios (e.g., use of 16-bit encoding for up to 180 dB HDR scenes) [6] and display manufacturers (e.g., Apple Liquid Crystal HDR = 120 dB color display) [7] are looking to transition from classic 8-bit limited dynamic range sRGB or equivalent standard color images to 10-bit and higher color image generation and viewing to produce higher quality color scenes using a wider color gamut of the CIE 1931 XYZ color space. In fact, it is known that today’s color image capture and generation technologies do not have the capability to produce all perceivable colors, called the human gamut, in the CIE XYZ color space [8]. In addition, recent studies indicate that for HDR displays, conventional (gamma encoding) mapping “increasingly diverges from a perceptually uniform mapping, so 8-bit gray-levels are increasingly inadequate” [9]. Thus, it would be desirable to have an HDR linear response camera to enable the future potential for a near full human gamut color image capture and display.
In addition, such a linear HDR camera, if operating over a full spectrum from UV to short-wave IR (SWIR), i.e., 350 nm to 2600 nm, can empower quantitative imaging of biological samples, precious artworks and archaeological artefacts that need authentic color data generation for accurate medical decisions and forgery prevention.
To provide an overall perspective on robust imager design using computational processing, early works using moving spatial masks such as via a spinning disk date back to 1949, when simultaneous capture of multiple optical wavelength channels using a single point photodetector was used to make a noise-robust infrared spectrometer [10,11]. Efforts to realize irradiance mappers using these spatial coding concepts evolved in the 1960s and 1970s [12,13,14,15]. In the late 1990s, with the emergence of the Texas Instruments (TI) digital micro-mirror device (DMD) as a highly programmable spatial mask generator, researchers proposed and demonstrated a variety of DMD-based imaging systems [16,17,18] including those based on the use of single point photodetectors for classical imaging [19,20] as well as compressive sensing imaging [21]. In particular, works popularly called “single pixel imaging” linked to the compressive camera described in [21] continue to evolve using the DMD to implement various compressive sensing algorithms for imaging [21,22,23,24].
Motivated by advances in the TI DMD technology with device pixel counts reaching 2 million micromirrors and micromirror temporal on-off modulation rates reaching 25 kHz with large (e.g., 170 × 1024 pixels) region-of-interest (ROI) frame rates of 50 kHz, proposed and demonstrated is a 177 dB HDR linear response [25] full spectrum capability high dynamic range camera called the coded access optical sensor or CAOS camera that is designed on the principles of the HDR RF mobile wireless cellular phone and data network. In the CAOS design, each pixel in the selected optical image space is treated like an independent cellular phone transmitting its unique data on a specific time-frequency RF code. Using the current DMD and the passive operational mode of the CAOS camera where the light source entering the camera aperture is temporally unmodulated, the unique CAOS RF frequency code assigned to a given pixel in the imaged irradiance can be designed to be as high as 25 kHz, providing ample temporal bandwidth for low noise high dynamic range photo-detection for the time-frequency image coded signal. With many (e.g., 4096) optical image RF coded data signals simultaneously detected by a few (e.g., 2) optical-to-RF antenna transducers, i.e., large area moderately high-speed point photodetectors, the time-frequency RF coded signals carrying all the harvested selected image pixel information are generated. Much like RF receivers in mobile phones, these RF signals from the point photodetectors are digitized and next undergo low noise filtering with time correlation signal processing and RF spectrum analysis using digital signal processing (DSP) techniques. These processing steps are used to recover the full optical spectrum image data over a linear HDR with signal-to-noise ratio (SNR) control for both pixel crosstalk reduction and HDR optimization.
In effect, this single camera unit can simultaneously engage multiple spectral-band sensitive point detectors including using a DSLR-mode CMOS/CCD silicon sensor to form a full spectrum smart CAOS camera that can provide UV-SWIR imaging capabilities.
Most importantly, since the point photodetectors in the CAOS camera operate as linear response detectors with linear correlation and spectrum analysis signal processing low noise operations, a fundamentally linear camera with high linear dynamic range is possible. Given these capabilities with broadband optical spectrum operations, a hybrid CAOS-CMOS camera [26] for linear high dynamic range color imaging is a potential application where the linear dynamic range limitations (e.g., <60 dB) of the CMOS camera can be offset by the higher (i.e., >60 dB) linear dynamic range capabilities of the pixels-of-interest CAOS camera. In effect, the CMOS camera provides the fast access (e.g., 30 ms global frame time) high pixel count limited dynamic range color image while the CAOS camera recovers the much smaller pixel count (e.g., 1000) selective higher linear dynamic range pixel irradiances of the color image. CAOS image extraction time depends on the specific CAOS mode most effective for the given imaging scenario. Considering that the frequency modulated (FM) code division multiple access (CDMA) mode [25] of the CAOS camera is extracting 1024 CAOS pixels with an FM carrier rate of 25 kHz and 10 FM cycles allocated per CDMA bit, a (1024 × 10)/25,000 = 0.4096 s CAOS frame time is achieved. In other words, there is a tradeoff between the CAOS pixel count requiring high linear dynamic range access and the CAOS pixels frame recovery time. Hence, the hybrid CAOS-CMOS camera using the current DMD technology is suited for non-dynamic or slower speed applications such as high linear dynamic range color still photography.
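The frame-time tradeoff described above can be sketched numerically. This is an illustrative calculation only; the function name below is ours, not the paper's, and it simply encodes the stated relationship between CAOS pixel count, FM carrier rate, and cycles per CDMA bit.

```python
def caos_frame_time(n_pixels: int, fm_carrier_hz: float, cycles_per_bit: int) -> float:
    """CAOS CDMA-mode frame time in seconds: each CAOS pixel requires one
    CDMA bit of cycles_per_bit FM carrier cycles at fm_carrier_hz."""
    return n_pixels * cycles_per_bit / fm_carrier_hz

# Values from the text: 1024 pixels, 25 kHz carrier, 10 cycles per bit
print(caos_frame_time(1024, 25_000, 10))  # 0.4096 s
```

Doubling the pixel count doubles the frame time, which is the pixel-count versus frame-recovery-time tradeoff the text describes.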
Previously, basic uncalibrated CAOS white light color imaging was demonstrated without deploying camera color correction matrix and coordinate matrix transformations for color constancy and s-RGB format 8-bit displaying [27,28]. Given the potential of the CAOS camera for high quality quantitative color imaging including for human viewing, the present paper takes the first step towards this journey by designing and demonstrating the first calibrated white light color target CAOS imaging that engages color constancy matrix transformations. Furthermore, given the CAOS smart camera approach combines both CAOS and CMOS camera modes, the present paper also demonstrates CMOS-mode color imaging for comparison with CAOS color imaging. The rest of the paper provides details of these experiments and their data analysis highlighting the linear operation of the CAOS camera that is critical for linear HDR quantitative color imaging.

2. CMOS and CAOS Camera Color Imaging Testing Methods

Classic CMOS and CCD sensor cameras follow a specific signal processing chain when producing color images for human vision [29]. Consistent with prior camera testing methods, the methodology described next is deployed for testing both the CMOS camera and the CAOS camera. A calibrated 24-patch color checker test chart, model TE 188 from Image Engineering (Kerpen, Germany), is chosen. This commercial vendor also provided the LG3 LED light source used in the experiments. The vendor uses a spectrophotometer to provide the CIE XYZ values for each patch of the TE 188 target using the LG3 5000 K illumination at 5000 lux. The following detailed steps are followed in the camera testing operations:
(1)
Use the camera under test to image the LG3 lightbox source illuminated test area without the transmissive color chart target placed in the lightbox. Confirm ≥ 95% spatial response uniformity of camera output over the white light screen test area given the LG3 lightbox is specified with a ≥95% spatial response uniformity over its illumination area. It has been pointed out earlier that camera spatial response uniformity is important for color imaging camera testing [30].
(2)
Using the TE 188 color checker chart placed at the LG3 lightbox illuminated test area, use the camera under test to take the three red (R), green (G), and blue (B) primary color images in time sequence using the selected Thorlabs (Newton, NJ, USA) R, G, B color filters models FD1R, FD1G, FD1B, respectively. The raw RGB pixel data is next averaged over its specific test patch area to provide averaged raw RGB data values Rraw, Graw, and Braw for the 24 test patches.
(3)
The test chart must include a white color patch that is needed for white balancing the raw RGB image data to introduce color constancy [31]. The averaged raw RGB data for the white patch is used to compute the camera raw RGB data weight balancing factors labelled as wR = 1/[Rraw(white)], wG = 1/[Graw(white)], wB = 1/[Braw(white)]. For example, with a camera-provided white patch raw vector [R, G, B] = [5 7 9], the white balancing weights are wR = 1/5, wG = 1/7, wB = 1/9. One implements the white balancing operation on the raw tristimulus RGB data by using the formulas RWB = wR Rraw, GWB = wG Graw, and BWB = wB Braw.
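The white balancing operation of this step can be sketched as follows; this is an illustrative NumPy snippet with a function name of our choosing, not code from the paper.

```python
import numpy as np

def white_balance(raw_rgb, white_patch_rgb):
    """White-balance patch-averaged raw RGB data: scale each channel by
    wR = 1/Rraw(white), wG = 1/Graw(white), wB = 1/Braw(white) so the
    white patch maps to [1, 1, 1]."""
    raw_rgb = np.asarray(raw_rgb, dtype=float)
    weights = 1.0 / np.asarray(white_patch_rgb, dtype=float)
    return raw_rgb * weights  # per-channel weights broadcast over all patches

# Worked example from the text: white patch raw vector [5, 7, 9]
print(white_balance([[5, 7, 9], [2.5, 3.5, 4.5]], [5, 7, 9]))
```

The first row (the white patch itself) balances to [1, 1, 1]; every other patch is scaled by the same per-channel weights.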
(4)
Using the camera acquired RGB image data with the pre-calibrated CIE XYZ 3-D color space data provided by Image Engineering for the 24 patches of the TE-188 chart under LG3 illumination, next compute via an irradiance independent least squares regression optimization technique [32] the deployed camera color correction matrix that maps the camera white balanced RGB values to the CIE XYZ color space standard. The basics of the computation of the camera under test color correction matrix are as follows:
-
Let A represent the 3 × N matrix of experimentally acquired white balanced RGB (RWB, GWB, BWB) camera outputs for N known color patches. Let B represent the 3 × N matrix of the known corresponding tristimulus X, Y, Z values provided by the commercial vendor using the measured spectral power distribution of the illuminant, the spectral function of each patch, and the CIE color matching functions. Using the known A and B matrices, compute the color correction matrix as the least squares solution Φ = BAᵀ(AAᵀ)⁻¹, which minimizes the error between ΦA and B.
-
Find the XYZ values for unknown (or new color patches) using the same illumination and deployed color filters for the test camera by deploying the computed 3 × 3 color correction Φ-matrix and using the following equation:
\[
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}
=
\begin{bmatrix}
\Phi_{11} & \Phi_{12} & \Phi_{13} \\
\Phi_{21} & \Phi_{22} & \Phi_{23} \\
\Phi_{31} & \Phi_{32} & \Phi_{33}
\end{bmatrix}
\begin{bmatrix} R_{WB} \\ G_{WB} \\ B_{WB} \end{bmatrix}
\]
For example, with the TE188 24 color patches chart, 23 colors are used to compute a Φ color correction matrix, and the one color left out of the test chart is used as the test color, transformed with the computed Φ matrix to generate the test color XYZ values. These experimentally determined XYZ values are compared with the ground truth XYZ values provided by the commercial vendor for the deployed test color. Such a testing and comparison method is called leave-one-out cross-validation (LOOCV) [33] and its implementation using the TE188 24 color patches test chart is explained as follows:
LOOCV is a particular case of leave-p-out cross-validation with p = 1. LOOCV involves using one observation as the validation set and the remaining observations as the training set. This process is repeated for all N observations and generates N test results. Given that the TE188 color checker target has 24 average RGB patch values, engaging LOOCV implies that one color from the chart is taken out and used for validation. This means that the other 23 colors’ average RGB values and the corresponding X, Y, Z Image Engineering-provided ground truth values are used to find the color transformation matrix. Next, one uses this specific Φ color correction matrix to find the X, Y, Z values for the left-out color, given its average RGB patch value provided by the test camera. Hence, one can compare the camera-provided X, Y, Z values with the ground truth X, Y, Z values for the left-out color. This color patch validation process is repeated, one test color at a time, for all 24 patches in the test camera-viewed pre-calibrated color test chart. Completion of this process generates measured XYZ values for all 24 colors of the TE 188 target seen by the test camera.
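The color correction fit of step (4) and the LOOCV procedure above can be sketched with NumPy. This is an illustrative implementation under the assumption of the least squares form Φ = BAᵀ(AAᵀ)⁻¹; the synthetic data at the end is ours, not the paper's measurements.

```python
import numpy as np

def color_correction_matrix(A, B):
    """Least squares 3x3 color correction matrix mapping white-balanced RGB
    columns (A, 3xN) to ground-truth XYZ columns (B, 3xN):
    Phi = B A^T (A A^T)^-1."""
    return B @ A.T @ np.linalg.inv(A @ A.T)

def loocv_predict(A, B):
    """Leave-one-out cross-validation: for each patch i, fit Phi on the
    other N-1 patches and predict patch i's XYZ values. Returns 3xN."""
    n = A.shape[1]
    pred = np.empty_like(B, dtype=float)
    for i in range(n):
        keep = [j for j in range(n) if j != i]
        phi = color_correction_matrix(A[:, keep], B[:, keep])
        pred[:, i] = phi @ A[:, i]
    return pred

# Synthetic sanity check: if XYZ truly is a linear map of the balanced RGB,
# LOOCV recovers the ground truth exactly.
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, (3, 24))
B = np.array([[0.4, 0.3, 0.2], [0.2, 0.7, 0.1], [0.0, 0.1, 0.9]]) @ A
assert np.allclose(loocv_predict(A, B), B)
```

With real camera data the relation is only approximately linear, so the LOOCV predictions differ from ground truth; those residuals are exactly what the error metrics of step (5) quantify.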
(5)
Both CIE XYZ and CIE L*a*b* are used for color camera performance analysis. First, to assess the calibration error numerically, Image Engineering-provided ground truth XYZ values are compared with the test camera-measured XYZ values, denoted X′, Y′, Z′, using the following root mean square (RMS) error and percentage error metrics [34]:
\[
\text{RMS Error} = \sqrt{\frac{\Delta X^{2} + \Delta Y^{2} + \Delta Z^{2}}{3}}, \qquad
\%\,\text{Error} = 100 \times \sqrt{\frac{\Delta X^{2} + \Delta Y^{2} + \Delta Z^{2}}{X^{2} + Y^{2} + Z^{2}}}
\]
where ΔX = X − X′, ΔY = Y − Y′, and ΔZ = Z − Z′.
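These two error metrics can be sketched as a small NumPy helper; the function name is ours, and the example values are illustrative only.

```python
import numpy as np

def xyz_error_metrics(xyz_true, xyz_meas):
    """Per-patch RMS and percentage XYZ error metrics of step (5).
    Inputs are (N, 3) arrays of ground-truth and measured XYZ values."""
    t = np.asarray(xyz_true, dtype=float)
    d = t - np.asarray(xyz_meas, dtype=float)        # ΔX, ΔY, ΔZ per patch
    sum_sq = np.sum(d**2, axis=-1)
    rms = np.sqrt(sum_sq / 3.0)                       # RMS error
    pct = 100.0 * np.sqrt(sum_sq / np.sum(t**2, axis=-1))  # % error
    return rms, pct

rms, pct = xyz_error_metrics([[3.0, 0.0, 0.0]], [[0.0, 0.0, 0.0]])
print(rms, pct)  # sqrt(3) ≈ 1.732 and 100%
```

Averaging the percentage errors over all 24 patches gives the XYZ mean error figures quoted later (8.3% and 10.9%).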
(6)
In addition, to calculate Delta E (CIE 2000), which is often considered for assessing the calibration error in terms of visual performance, we convert the XYZ values to L*a*b* values [35]. This conversion requires a reference white Xr, Yr, Zr, which in our case is the CIE D65 white point:
Xr = 95.047
Yr = 100
Zr = 108.883
\[
L^{*} = 116\,f_{y} - 16, \qquad a^{*} = 500\,(f_{x} - f_{y}), \qquad b^{*} = 200\,(f_{y} - f_{z})
\]
where:
\[
f_{x} = \begin{cases} \sqrt[3]{x_{r}} & \text{if } x_{r} > \epsilon \\[4pt] \dfrac{\kappa\, x_{r} + 16}{116} & \text{otherwise} \end{cases}
\qquad
f_{y} = \begin{cases} \sqrt[3]{y_{r}} & \text{if } y_{r} > \epsilon \\[4pt] \dfrac{\kappa\, y_{r} + 16}{116} & \text{otherwise} \end{cases}
\qquad
f_{z} = \begin{cases} \sqrt[3]{z_{r}} & \text{if } z_{r} > \epsilon \\[4pt] \dfrac{\kappa\, z_{r} + 16}{116} & \text{otherwise} \end{cases}
\]
\[
x_{r} = \frac{X}{X_{r}}, \qquad y_{r} = \frac{Y}{Y_{r}}, \qquad z_{r} = \frac{Z}{Z_{r}}, \qquad \epsilon = 0.008856, \qquad \kappa = 903.3
\]
Given Image Engineering-provided ground truth L*a*b* values and the test camera-measured L*a*b* values, we calculate the ΔE00 (CIE 2000) color difference [36,37], which measures the difference or distance between two colors. A small value for ΔE00 indicates that the colors are similar, while a high value means that the colors are different.
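The XYZ to L*a*b* conversion of this step can be sketched directly from the equations above; the helper names are ours, and the D65 reference white values are those listed in the text.

```python
import numpy as np

EPS, KAPPA = 0.008856, 903.3  # CIE constants from the equations above

def _f(t):
    # Piecewise cube-root function shared by f_x, f_y, f_z
    return np.where(t > EPS, np.cbrt(t), (KAPPA * t + 16.0) / 116.0)

def xyz_to_lab(X, Y, Z, white=(95.047, 100.0, 108.883)):
    """Convert CIE XYZ to CIE L*a*b* relative to a reference white (D65)."""
    fx, fy, fz = _f(X / white[0]), _f(Y / white[1]), _f(Z / white[2])
    return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)

# The reference white itself maps to L* = 100, a* = b* = 0
print(xyz_to_lab(95.047, 100.0, 108.883))
```

The ΔE00 computation on the resulting L*a*b* pairs follows the CIEDE2000 formula of [36,37], which is lengthy and omitted here.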
(7)
In order to visually observe both the Image Engineering-provided TE188 chart XYZ values and the test camera captured TE188 target 24 color patches on a standard 8-bit computer display, the following procedure is implemented [38]:
Find the chromaticity coordinates of an RGB system (xR, yR), (xG, yG) and (xB, yB) where:
xR = XR/(XR + YR + ZR); yR = YR/(XR + YR + ZR)
xG = XG/(XG + YG + ZG); yG = YG/(XG + YG + ZG)
xB = XB/(XB + YB + ZB); yB = YB/(XB + YB + ZB)
Here XR, YR, ZR are the X, Y, and Z values of the red primary color patch, XG, YG, ZG are the X, Y, Z values of the green primary color patch, and XB, YB, ZB are the X, Y, Z values of the blue primary color patch. Given the XYZ values of the reference white patch for the LG3 light source as XW, YW, ZW, one can write:
\[
X_{R} = \frac{x_{R}}{y_{R}}, \qquad Y_{R} = 1, \qquad Z_{R} = \frac{1 - x_{R} - y_{R}}{y_{R}}
\]
and similarly for the green primary (XG, YG, ZG) using (xG, yG) and the blue primary (XB, YB, ZB) using (xB, yB).
Next, the following equations are used to find the RGB values for both the test camera-measured and the Image Engineering-provided XYZ values:
\[
\begin{bmatrix} S_{R} \\ S_{G} \\ S_{B} \end{bmatrix}
=
\begin{bmatrix} X_{R} & X_{G} & X_{B} \\ 1 & 1 & 1 \\ Z_{R} & Z_{G} & Z_{B} \end{bmatrix}^{-1}
\begin{bmatrix} X_{W} \\ Y_{W} \\ Z_{W} \end{bmatrix}
\quad \text{and} \quad
M =
\begin{bmatrix} S_{R} X_{R} & S_{G} X_{G} & S_{B} X_{B} \\ S_{R} & S_{G} & S_{B} \\ S_{R} Z_{R} & S_{G} Z_{G} & S_{B} Z_{B} \end{bmatrix}
\]
The computed color coordinate matrix M is inverted to give M⁻¹, which is then used to find the measured linear RGB values for display operations via the equation given next:
\[
\begin{bmatrix} R_{Linear} \\ G_{Linear} \\ B_{Linear} \end{bmatrix} = M^{-1} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}
\]
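The construction of M from the primaries' chromaticities and the white point, and its inversion to recover linear RGB, can be sketched as follows. The sRGB primaries and D65 white point used in the example are illustrative assumptions only; the paper derives its values from the LG3-illuminated primary color patches.

```python
import numpy as np

def coordinate_matrix(xy_r, xy_g, xy_b, xyz_white):
    """Build the RGB -> XYZ color coordinate matrix M of step (7) from the
    primaries' chromaticities (x, y) and the reference white XYZ values."""
    # Each primary's XYZ with Y normalized to 1: X = x/y, Y = 1, Z = (1-x-y)/y
    P = np.array([[x / y, 1.0, (1.0 - x - y) / y]
                  for x, y in (xy_r, xy_g, xy_b)]).T
    S = np.linalg.solve(P, np.asarray(xyz_white, dtype=float))  # S_R, S_G, S_B
    return P * S  # scale each primary column so that M @ [1, 1, 1] = white

# Illustrative values only: sRGB primaries with a D65 white point
M = coordinate_matrix((0.64, 0.33), (0.30, 0.60), (0.15, 0.06),
                      (0.95047, 1.0, 1.08883))
rgb_linear = np.linalg.solve(M, np.array([0.5, 0.5, 0.5]))  # XYZ -> linear RGB
```

By construction, M maps the linear RGB white [1, 1, 1] to the white point XYZ, which is the defining property of the scaling vector S.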
(8)
Implemented is gamma encoding and 8-bit scale conversion for the computed linear RGB values to display via the s-RGB standard on a commercial 8-bit color computer display. This operation is done using the following equations with a 2.4 gamma rating for the display [39]:
\[
C_{sRGB} =
\begin{cases}
12.92\, C_{Linear} & \text{if } C_{Linear} \le 0.0031308 \\[4pt]
1.055\, C_{Linear}^{1/2.4} - 0.055 & \text{otherwise}
\end{cases}
\qquad \text{for } C \in \{R, G, B\}
\]
As the deployed color display is rated as a standard 8-bit display, the RsRGB, GsRGB and BsRGB values provided by the earlier equations are multiplied by 255 and rounded off to the nearest integer value to provide the final sRGB values that form the final 8-bit color images.
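Steps (8) and the 8-bit quantization above can be sketched together in one small function; the name and example inputs are ours, and the piecewise form follows the standard sRGB transfer function with the 0.0031308 linear-segment threshold.

```python
import numpy as np

def linear_to_srgb8(c_linear):
    """Piecewise sRGB gamma encoding of step (8), followed by the 8-bit
    scale conversion: multiply by 255 and round to the nearest integer."""
    c = np.asarray(c_linear, dtype=float)
    srgb = np.where(c <= 0.0031308,
                    12.92 * c,                              # linear segment
                    1.055 * np.power(c, 1.0 / 2.4) - 0.055)  # gamma segment
    return np.rint(255.0 * srgb).astype(np.uint8)

print(linear_to_srgb8([0.0, 0.5, 1.0]))  # 0 -> 0 and 1.0 -> 255
```

Note the strong nonlinearity: a mid-scale linear value of 0.5 already maps to an 8-bit value near 188, which is why linear camera data must be gamma-encoded before display.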
(9)
As the final step in the test camera pre-calibrated color imaging evaluation, in order to visually observe and compare the ground truth and test camera 24 color patches on a standard 8-bit computer display, the final 8-bit s-RGB data for the two comparative images is fed to a display. One should note that for regular camera color imaging operations with unknown color scenes, versus a comparative color checking operation using LOOCV with a known color checker, one deploys a single color correction matrix derived from a large (e.g., 140) set of known colors, such as the Calibrite Digital ColorChecker SG [40], that can enable accurate and robust recovery of unknown colors on a per pixel basis [41].

3. CMOS Camera Color Imaging Experiment

The CMOS camera is set up in the laboratory to image the TE188 color chart under LG3 lightbox illumination. This camera uses a Thorlabs monochrome CMOS sensor model Quantalux S2100-M with 5.04 μm pixel pitch, 2.1 Mpixels and up to an 87 dB HDR rating with a 16-bit (i.e., 0 to 65,535 levels) vp output. The camera is fitted with a C-mount model GMZ18108 1.8 cm to 10.8 cm high quality multi-lens element color autofocus zoom lens that images the target screen illumination zone placed 234.7 cm from the sensor end of the camera. As the CMOS sensor size and target screen size are known, a demagnification factor of 95.75 is realized from target to sensor plane using the autofocus zoom lens. The LG3 illumination is set to 5000 lux and the CMOS camera with an exposure time of 4 ms is used to image the illumination screen without the color checker target. Figure 1 shows the captured CMOS camera image that confirms a high-quality spatial illumination uniformity of 96% given that the LG3 is specified with a ≥95% uniformity. In other words, pixel irradiance data analysis shows that 96% of the pixels in the CMOS camera imaged LG3 screen zone without placement of the color chart have the same irradiance values within a 0.25% irradiance fluctuation level set by the LG3 illumination controls. The lighter zone in Figure 1 is the illuminated LG3 screen zone where later the transmissive TE188 color target is placed for imaging. The average CMOS sensor output vp(avg) is ~40,000 in a maximum 16-bit output range, keeping the CMOS sensor well under saturation for a robust spatial uniformity test.
The TE188 color checker chart is placed on the LG3 illumination screen and the CMOS camera takes the R, G, B images of the color target using a rotating color filter wheel fitted with the R, G and B optical filters. The HDR CMOS sensor exposure time is set to 50 ms as the brightest patch in all three images (i.e., the A4 white patch) does not saturate the 16-bit CMOS sensor output. Specifically, the highest CMOS pixel vp value recorded is 46,801, indicating no saturated pixels in the three images. Unrestricted use of all the CMOS sensor 16-bit vp outputs is deployed for the Section 2 described color image processing for 8-bit display.
Figure 2a shows the reconstructed 8-bit color image from the TE188 chart-provided calibrated XYZ values for the 24 color patches while Figure 2b shows the reconstructed 8-bit color image using the unrestricted full 16-bit CMOS sensor output (i.e., vp values ranging from 0 to 2^16 − 1 = 65,535). When comparing with the actual ground truth patch colors shown in Figure 2a, the image in Figure 2b shows poor color recovery of the target patches. This poor recovery is due to the unrestricted use of the HDR CMOS vp outputs in the 16-bit range. Recent experimental work with this specific CMOS sensor camera has shown an overall non-linear CRF over the 16-bit full range [5,41]; hence, one should expect a poor color recovery, as proved by the current experiment. Earlier experiments for this CMOS sensor have also shown a highly improved linear response for the CMOS sensor when the maximum vp output is <40,000 [5,42]. Clearly, linear CRF operations of a camera are vital to accurate color imaging. In order to demonstrate this point in the context of the experiments in this paper, the CMOS camera exposure time is reduced to 30 ms giving a restricted maximum vp ≤ 38,045 regime. This vp maximum restricted RGB data set is deployed to engage the linear region of the CMOS sensor’s CRF, with Table 1 and Table 2 and Figure 3 showing the XYZ values, L*a*b* values, and recovered color image, respectively.
As expected, compared to the Figure 2b recovered color image using the unrestricted full 16-bit HDR CMOS sensor output data that uses a nonlinear CRF, the Figure 3b color image with restricted CMOS sensor data processing using only the smaller but linear zone of the CRF creates a greatly improved color image when compared to the ground truth image. This experiment also shows that although many CMOS cameras may be specified as HDR cameras, their high-quality color image recovery is restricted to a non-HDR zone, as achieving continuous linear photo-pixel operation over an HDR range is highly limited due to various factors, ranging from silicon CMOS array sensor device fabrication physics limits to optoelectronic and electronic circuit readout signal noise issues.

4. CAOS Camera Color Imaging Experiment

The CAOS camera design shown in Figure 4 is implemented in the laboratory using the following key components: National Instruments (Austin, TX, USA) 16-bit Analog-to-Digital Converter (ADC) model 6366, LG3 5000 K Lightbox, Vialux (Chemnitz, Germany) DMD model V-700, DELL (Round Rock, TX, USA) Latitude model 5480 laptop for control and DSP, and Thorlabs components that include model PDA100A2 silicon point photo-detectors (PDs) set at 40 dB electronic gain, 1.62 cm diameter Iris A1, 5.08 cm diameter 10 cm focal length front imaging lens L1, 5.08 cm diameter spherical mirrors SM1 and SM2 with 3.81 cm focal lengths, and 2.54 cm diameter FD1 model RGB color filters within the F1 filter wheel. The inter-component distances are: L1 to DMD, 10.4 cm; SM1/SM2 to DMD, 9.8 cm; SM1/SM2 to PD1/PD2, 6.3 cm; and target to L1, 265 cm. A demagnification factor of 25.5 takes place between the target plane and the DMD plane and a demagnification factor of 1.56 between the DMD and PD1/PD2 planes.
The basics of the CAOS camera operations are as follows: Light from the target is imaged on to the DMD plane using lens L1 that is a basic visible light spherical lens. The on/off binary tilt state micromirrors in the DMD are programmed in the CAOS code-division multiple access (CDMA) imaged light time-frequency encoding mode such that each CAOS pixel on the DMD has its unique 4096-bit Walsh time sequence code [25]. The encoded image zone on the DMD consists of 4060 = 70 × 58 CAOS pixels with each CAOS pixel containing 13 × 13 micromirrors. Each micromirror is 13.68 μm × 13.68 μm in size. The first experimental step for the CAOS camera is to conduct the camera spatial uniformity check by imaging the white light screen on the LG3 without placing the TE188 target. This is done without using any color filter inside the F1 color filter wheel. A 1 kHz CDMA bit rate is used for an ADC rate of 1 Msps with the LG3 illumination at 65.8 klux. The recovered CAOS camera image is shown in Figure 5 using a color bar for irradiance normalized gray-scaling with a mean scaled irradiance of 0.76. Irradiance data analysis from the CAOS camera Figure 5 image for the LG3 white screen indicates a 95% uniformity suited for TE188 color chart imaging. In other words, pixel irradiance data analysis shows that 95% of the pixels in the CAOS camera imaged LG3 screen zone without placement of the color chart have the same irradiance values within a 0.25% (i.e., 1/400) irradiance fluctuation level set by the LG3 illumination controls.
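The unique Walsh time sequence codes assigned to the CAOS pixels can be generated in the standard way as rows of a Sylvester-type Hadamard matrix; this is a sketch of that construction only, and the paper's exact code assignment and ordering may differ.

```python
import numpy as np

def walsh_codes(n_bits):
    """Generate n_bits mutually orthogonal Walsh codes of length n_bits
    (rows of a Sylvester Hadamard matrix), mapped to 1/0 values suitable
    for DMD micromirror on/off tilt states. n_bits must be a power of two
    (4096 bits per CAOS pixel in the experiment; 8 here for brevity)."""
    h = np.array([[1]])
    while h.shape[0] < n_bits:
        h = np.block([[h, h], [h, -h]])  # Sylvester doubling construction
    return (h + 1) // 2                  # map +/-1 values to on/off states

codes = walsh_codes(8)
pm = 2 * codes - 1                       # back to +/-1 for the check below
assert np.allclose(pm @ pm.T, 8 * np.eye(8))  # distinct codes are orthogonal
```

The orthogonality of the ±1 code sequences is what lets correlation processing separate the simultaneously detected per-pixel signals at the point photodetectors with low crosstalk.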
Next, the TE188 color chart is imaged by the CAOS camera with the F1 color filter wheel containing the R, G, B filters and CAOS encoding using a 16 kHz CDMA bit rate, an ADC rate of 1.6 Msps, and the LG3 illumination at 39.5 klux.
Table 3 shows the CAOS camera-provided experimental data along with the comparative error metrics based on the Image Engineering-provided calibrated XYZ values. In addition, Table 4 shows the CAOS camera color imaging experimental data, including patch-averaged raw R, G, B values, and the comparative error metrics based on the TE188 chart-provided calibrated L*a*b* values.
A percentage XYZ mean error of 10.9% is achieved for the CAOS camera. The alternate color camera assessment gives a ΔE00 mean value of 5.7 for the CAOS camera.
Figure 6 shows the reconstructed 8-bit color images from the Image Engineering-provided calibrated XYZ values for the 24 color patches versus the CAOS camera-captured image. Review of Table 3 and Table 4 and Figure 6 indicates the color patch recovery of the presented 39.5 dB dynamic range color checker target to be of similarly good quality to the restricted CMOS camera mode recovery. In particular, these comparisons indicate the robustness of the intrinsically linear operation CAOS camera that allows unrestricted usage of its output, which is highly desirable when operating with HDR color targets. Given that the CAOS smart camera design engages both the CMOS and CAOS modes of the camera system, a combined CAOS-CMOS mode would combine the best attributes of CMOS and CAOS. One should note that none of the lens and mirror optics as well as the large area silicon point detectors and electronics in the CAOS camera assembly were optimized for linear and uniform gain operations over the visible color band. On the contrary, the commercial CMOS camera uses a high quality commercial multi-element zoom lens designed for color imaging. These aspects can be linked to the slight mismatch in color imaging performance metrics for the restricted range CMOS camera and the unrestricted range CAOS camera.

5. Conclusions

Presented is the first experimental proof-of-concept color imaging CAOS camera. For first stage comparison studies with the CAOS camera, a commercial CMOS sensor-based camera with a monochrome CMOS sensor specified with up to an 87 dB dynamic range is tested in the laboratory. R, G, B color filters deployed time-sequentially provide raw RGB image data via the two cameras under test. The experiments use a commercial imaging test house's LG3 LED white light 5000 K spectrum source that illuminates a calibrated 24 color patches target with a 39.5 dB maximum instantaneous dynamic range that matches the restricted linear CRF region of the CMOS camera. After raw RGB data white balancing and camera color correction matrix computations for the two cameras for color constancy processing, CIE standard XYZ values are computed for the observed target. Compared to the TE188 chart-provided XYZ values for the calibrated target, percentage XYZ mean errors of 8.3% and 10.9% are achieved for the restricted linear range CMOS camera and the unrestricted range CAOS camera, respectively. An alternate color camera assessment gives ΔE00 mean values of 4.59 and 5.7 for the restricted linear range CMOS camera and unrestricted range CAOS camera, respectively.
For visual display on a standard computer display, the four different experimental XYZ data sets are used to produce comparative 8-bit color images for the 24-patch target using XYZ to sRGB LG3 source color coordinate matrix-based conversion with gamma encoding. Results indicate that although the CMOS sensor has an HDR range, unrestricted usage of the sensor output fails color recovery, while the intrinsically linear operation unrestricted range CAOS camera color image recovery generally matches the restricted linear-mode commercial CMOS sensor-based camera recovery for the presented non-HDR target. It is to be noted that unlike the CMOS camera lens optics and its photo-detection electronics, no special linear response optics and photo-detector designs were used for the CAOS camera experiment. Nevertheless, an equivalent color recovery was achieved, indicating that given the limited HDR linear range capabilities of a CMOS camera and the intrinsically wide linear HDR capability of a CAOS camera, a combined hybrid CAOS-CMOS mode is an optimal approach to achieve linear HDR color imaging. Hence, the CAOS smart camera that combines CAOS and CMOS modes can empower high quality HDR color imaging such as for HDR color still photography. Future work relates to using a calibrated HDR color target with a linearity optimized CAOS camera design for color imaging tests and deployment of this CAOS camera for testing of HDR colored human-made [43] and natural scenes.

Author Contributions

N.A.R. performed the conceptualization, experimental design, experiments, and overall manuscript writing; N.A. developed the software for validation, color image processing, and formal analysis of the color data. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data can be provided on request.

Acknowledgments

The authors thank student M.A. Mazhar for additional experimental data support. N.A. is also with Forman Christian College University, Lahore.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. McCann, J.J.; Rizzi, A. The Art and Science of HDR Imaging; John Wiley & Sons: Hoboken, NJ, USA, 2012.
2. Reinhard, E.; Ward, G.; Pattanaik, S.; Debevec, P. High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting; Elsevier Morgan Kaufmann: Burlington, MA, USA, 2005.
3. Banterle, F.; Artusi, A.; Debattista, K.; Chalmers, A. Advanced High Dynamic Range Imaging—Theory and Practice; CRC Press, Taylor & Francis Group: Oxfordshire, UK, 2011.
4. Gouveia, L.C.P.; Choubey, B. Advances on CMOS image sensors. Sens. Rev. 2016, 36, 231–239.
5. Riza, N.A.; Mazhar, M.A. Robust Low Contrast Image Recovery over 90 dB Scene Linear Dynamic Range using CAOS Camera. In Proceedings of the IS&T Electronic Imaging; Society for Imaging Science and Technology: Burlingame, CA, USA, 2020; pp. 1–10.
6. Academy of Motion Picture Arts and Sciences (AMPAS). S-2008-001. Academy Color Encoding Specification (ACES) Version 1.0; AMPAS: Los Angeles, CA, USA, 2008.
7. Apple. Pro Display XDR. Available online: https://www.apple.com/ie/pro-display-xdr (accessed on 20 February 2021).
8. Hoffmann, G. CIE Color Space; Technical Report; University of Applied Sciences: Emden, Germany, 2000.
9. Vargas, A.; Johnson, P.; Kim, J.; Hoffman, D. A perceptually uniform tone curve for OLED and other high dynamic range displays. J. Vis. 2014, 14, 83.
10. Golay, M.J.E. Multi-Slit Spectrometry. J. Opt. Soc. Am. 1949, 39, 437–444.
11. Fellgett, P.I. Les principes généraux des méthodes nouvelles en spectroscopie interférentielle: A propos de la théorie du spectromètre interférentiel multiplex. J. Phys. Radium 1958, 19, 187–191.
12. Gottlieb, P. A television scanning scheme for a detector-noise limited system. IEEE Trans. Inform. Theory 1968, 14, 428–433.
13. Ibbett, R.N.; Aspinall, D.; Grainger, J.F. Real-Time Multiplexing of Dispersed Spectra in Any Wavelength Region. Appl. Opt. 1968, 7, 1089–1093.
14. Decker, J.A., Jr.; Harwit, M.O. Sequential Encoding with Multislit Spectrometers. Appl. Opt. 1968, 7, 2205–2209.
15. Decker, J.A., Jr. Hadamard-Transform Image Scanning. Appl. Opt. 1970, 9, 1392–1395.
16. Kearney, K.; Ninkov, Z. Characterization of a digital micro-mirror device for use as an optical mask in imaging and spectroscopy. Proc. SPIE 1998, 3292, 81–92.
17. Castracane, J.; Gutin, M. DMD-based bloom control for intensified imaging systems. Proc. SPIE 1999, 3633, 234–242.
18. Nayar, S.; Branzoi, V.; Boult, T. Programmable imaging using a digital micro-mirror array. Proc. IEEE Conf. Comput. Vis. Pattern Recognit. 2004, 1, 436–443.
19. Sumriddetchkajorn, S.; Riza, N.A. Micro-electro-mechanical system-based digitally controlled optical beam profiler. Appl. Opt. 2002, 41, 3506–3510.
20. Riza, N.A.; Mughal, M.J. Optical power independent optical beam profiler. Opt. Eng. 2004, 43, 793–797.
21. Takhar, D.; Laska, J.N.; Wakin, M.; Duarte, M.; Baron, D.; Sarvotham, S.; Kelly, K.; Baraniuk, R.G. A new compressive imaging camera architecture using optical-domain compression. Proc. SPIE 2006, 6065, 606509.
22. Durán, V.; Clemente, P.; Fernández-Alonso, M.; Tajahuerce, E.; Lancis, J. Single-pixel polarimetric imaging. Opt. Lett. 2012, 37, 824–826.
23. Shin, J.; Bosworth, B.T.; Foster, M.A. Single-pixel imaging using compressed sensing and wavelength-dependent scattering. Opt. Lett. 2016, 41, 886–889.
24. Zhang, Z.; Liu, S.; Peng, J.; Yao, M.; Zheng, G.; Zhong, J. Simultaneous spatial, spectral, and 3D compressive imaging via efficient Fourier single-pixel measurements. Optica 2018, 5, 315–319.
25. Riza, N.A.; Mazhar, M.A. 177 dB Linear Dynamic Range Pixels of Interest DSLR CAOS Camera. IEEE Photonics J. 2019, 11, 1–10.
26. Riza, N.A.; La Torre, J.P.; Amin, M.J. CAOS-CMOS camera. Opt. Express 2016, 24, 13444–13458.
27. Riza, N.A.; Mazhar, M.A. Demonstration of CAOS Smart Camera Imaging for Color and Super Blue Moon Targets. In Proceedings of the OSA Advanced Photonics Congress on Sensors, Zurich, Switzerland, 2–5 July 2018; paper SeW2E.3.
28. Riza, N.A.; Mazhar, M.A.; Ashraf, N. Solar Limb Darkening Color Imaging of the Sun with the Extreme Brightness Capability CAOS Camera. In London Imaging Meeting; Society for Imaging Science and Technology: London, UK, 2020; Volume 2020, pp. 69–73.
29. Ramanath, R.; Snyder, W.E.; Yoo, Y.; Drew, M.S. Color Image Processing Pipeline: A general survey of digital still cameras. IEEE Signal Process. Mag. 2005, 22, 34–43.
30. Pointer, M.R.; Attridge, G.G.; Jacobson, R.E. Practical camera characterization for colour measurement. Imaging Sci. J. 2001, 49, 63–80.
31. Ebner, M. Color Constancy; Kriss, M.A., Ed.; Wiley-IS&T Series in Imaging Science and Technology; Wiley: Hoboken, NJ, USA, 2007.
32. Bastani, P.; Funt, B. Simplifying irradiance independent color calibration. Proc. SPIE 2014, 9015, 90150N.
33. Celisse, A. Optimal cross-validation in density estimation with the L2-loss. Ann. Stat. 2014, 42, 1879–1910.
34. Hyndman, R.; Koehler, A.B. Another look at measures of forecast accuracy. Int. J. Forecast. 2006, 22, 679–688.
35. Lindbloom, B. XYZ to Lab. Available online: http://www.brucelindbloom.com/index.html?Eqn_XYZ_to_Lab.html (accessed on 15 January 2021).
36. Fraser, B.; Murphy, C.; Bunting, F. Real World Color Management; Peachpit Press: San Francisco, CA, USA, 2004.
37. Lindbloom, B. Delta E (CIE 2000). Available online: http://www.brucelindbloom.com/index.html?Eqn_DeltaE_CIE2000.htm (accessed on 15 January 2021).
38. Lindbloom, B. RGB/XYZ Matrices. Available online: http://www.brucelindbloom.com/Eqn_RGB_XYZ_Matrix.html (accessed on 15 January 2021).
39. IEC 61966-2-1:1999; International Electrotechnical Commission: Geneva, Switzerland, 1999.
40. Imatest LLC. Calibrite Digital ColorChecker SG. Available online: https://store.imatest.com/test-charts/color-charts/colorchecker-sg.html (accessed on 15 January 2021).
41. Imatest LLC. Color Correction Matrix. Available online: http://www.imatest.com/docs/colormatrix (accessed on 15 January 2021).
42. Riza, N.A.; Ashraf, N. Calibration Empowered Minimalistic Multi-Exposure Image Processing Technique for Camera Linear Dynamic Range Extension. In Proceedings of the IS&T Electronic Imaging Conference; Society for Imaging Science and Technology: Burlingame, CA, USA, 2020; pp. 2131–2136.
43. Riza, N.A.; Mazhar, M.A. Robust Testing of Displays using the Extreme Linear Dynamic Range CAOS Camera. In Proceedings of the 2019 IEEE 2nd British and Irish Conference on Optics and Photonics (BICOP), London, UK, 11–13 December 2019.
Figure 1. CMOS camera image of the illumination screen to confirm a camera spatial response uniformity of 96% desirable for camera color imaging testing.
Figure 2. Reconstructed 8-bit color images from (a) TE188 chart provided calibrated XYZ values for the 24 color patches and (b) Un-Restricted Full 16-bit CMOS Sensor Output Usage CMOS camera operations engaging a nonlinear CRF.
Figure 3. Reconstructed 8-bit color images from (a) Image Engineering provided calibrated XYZ values for the 24 color patches and (b) Restricted 16-bit CMOS Sensor Output Usage CMOS camera linear CRF operations.
Figure 4. Basic CAOS camera design implemented in the laboratory.
Figure 5. CAOS camera image of the illumination screen to confirm a camera spatial response uniformity of 95% desirable for camera color imaging testing.
Figure 6. Reconstructed 8-bit color images from (a) TE188 chart provided calibrated XYZ values for the 24 color patches and (b) CAOS camera.
Table 1. Restricted linear CRF zone 16-bit CMOS sensor output usage CMOS camera color imaging experimental data table including patch averaged raw R, G, B Values and the TE188 chart provided calibrated XYZ values-based comparative error metrics data. CMOS sensor Exposure Time = 30 ms.
| Patch | Raw R | Raw G | Raw B | Est. X | Est. Y | Est. Z | Real X | Real Y | Real Z | RMS | % |
|---|---|---|---|---|---|---|---|---|---|---|---|
| A1 dark skin | 1665.06 | 3690.63 | 1485.27 | 0.14 | 0.12 | 0.09 | 0.14 | 0.12 | 0.09 | 0.0022 | 1.91 |
| B1 light skin | 4498.57 | 14,979.84 | 5808.80 | 0.44 | 0.43 | 0.36 | 0.49 | 0.45 | 0.36 | 0.03 | 6.17 |
| C1 blue sky | 1208.35 | 6426.06 | 4931.85 | 0.17 | 0.17 | 0.29 | 0.15 | 0.17 | 0.30 | 0.01 | 6.38 |
| D1 foliage | 580.55 | 6437.22 | 367.16 | 0.10 | 0.14 | 0.04 | 0.07 | 0.12 | 0.04 | 0.02 | 20.69 |
| E1 blue flower | 4106.65 | 15,479.12 | 13,703.65 | 0.49 | 0.46 | 0.79 | 0.47 | 0.44 | 0.79 | 0.02 | 3.50 |
| F1 bluish green | 2074.91 | 23,971.33 | 10,455.57 | 0.43 | 0.56 | 0.64 | 0.41 | 0.55 | 0.64 | 0.01 | 2.72 |
| A2 orange | 5998.90 | 9353.95 | 833.50 | 0.44 | 0.34 | 0.06 | 0.46 | 0.36 | 0.07 | 0.02 | 4.54 |
| B2 purplish blue | 904.73 | 6560.23 | 11,425.79 | 0.20 | 0.19 | 0.64 | 0.21 | 0.20 | 0.69 | 0.03 | 7.22 |
| C2 moderate red | 4471.01 | 3337.52 | 2797.74 | 0.31 | 0.19 | 0.16 | 0.34 | 0.21 | 0.16 | 0.02 | 8.61 |
| D2 purple | 667.83 | 1459.02 | 6238.31 | 0.10 | 0.07 | 0.34 | 0.11 | 0.06 | 0.34 | 0.01 | 3.11 |
| E2 yellow green | 4142.87 | 30,837.50 | 3639.99 | 0.57 | 0.72 | 0.27 | 0.54 | 0.73 | 0.33 | 0.04 | 6.34 |
| F2 orange yellow | 6024.13 | 14,545.68 | 1355.44 | 0.49 | 0.45 | 0.11 | 0.50 | 0.44 | 0.10 | 0.01 | 3.03 |
| A3 blue | 151.93 | 518.05 | 9545.91 | 0.09 | 0.05 | 0.53 | 0.09 | 0.04 | 0.50 | 0.02 | 6.20 |
| B3 green | 556.26 | 9678.52 | 297.83 | 0.13 | 0.20 | 0.05 | 0.09 | 0.21 | 0.06 | 0.02 | 16.20 |
| C3 red | 6436.24 | 520.18 | 513.38 | 0.39 | 0.18 | 0.02 | 0.35 | 0.17 | 0.02 | 0.03 | 12.22 |
| D3 yellow | 4884.49 | 24,368.69 | 321.45 | 0.52 | 0.62 | 0.10 | 0.50 | 0.57 | 0.05 | 0.04 | 8.71 |
| E3 magenta | 6392.00 | 1251.79 | 9251.63 | 0.47 | 0.24 | 0.50 | 0.41 | 0.20 | 0.50 | 0.04 | 10.45 |
| F3 cyan | 2358.29 | 26,752.05 | 17,481.68 | 0.53 | 0.66 | 1.04 | 0.50 | 0.62 | 1.01 | 0.03 | 4.22 |
| A4 white | 7373.14 | 34,331.59 | 18,062.25 | 0.87 | 0.91 | 1.08 | 0.95 | 1.00 | 1.09 | 0.07 | 6.85 |
| B4 neutral 65 | 5103.04 | 26,305.31 | 12,254.68 | 0.64 | 0.69 | 0.74 | 0.65 | 0.69 | 0.74 | 0.01 | 1.26 |
| C4 neutral 39 | 3189.23 | 16,336.23 | 7453.54 | 0.40 | 0.43 | 0.45 | 0.38 | 0.41 | 0.44 | 0.02 | 4.28 |
| D4 neutral 21 | 1817.73 | 9042.79 | 4101.63 | 0.22 | 0.24 | 0.25 | 0.21 | 0.22 | 0.24 | 0.01 | 6.65 |
| E4 neutral 10 | 992.74 | 4331.11 | 2038.16 | 0.11 | 0.12 | 0.12 | 0.10 | 0.10 | 0.11 | 0.01 | 14.29 |
| F4 neutral 3 | 401.19 | 1495.90 | 815.89 | 0.04 | 0.04 | 0.05 | 0.03 | 0.03 | 0.04 | 0.01 | 33.74 |

| Statistic | Mean | Median | 95% | Max |
|---|---|---|---|---|
| RMSE | 0.022 | 0.018 | 0.041 | 0.069 |
| Percentage Difference (%) | 8.30 | 6.36 | 20.02 | 33.74 |
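The per-patch error columns above can be sketched as follows. RMSE is the root-mean-square of the (estimated minus reference) XYZ differences; the percentage column is assumed here to be an RMS relative error, which is one plausible reading of the table (the paper does not spell out its exact formula, so treat `patch_percent_error` as an illustration, not the authors' definition).

```python
import numpy as np

def patch_rmse(estimated, reference):
    """Root-mean-square error over the X, Y, Z components of one patch."""
    est, ref = np.asarray(estimated, float), np.asarray(reference, float)
    return np.sqrt(np.mean((est - ref) ** 2, axis=-1))

def patch_percent_error(estimated, reference):
    """Assumed percentage metric: RMS of the per-component relative errors."""
    est, ref = np.asarray(estimated, float), np.asarray(reference, float)
    rel = (est - ref) / ref
    return 100.0 * np.sqrt(np.mean(rel ** 2, axis=-1))

# B1 "light skin" patch from Table 1
est = np.array([0.44, 0.43, 0.36])  # camera-estimated XYZ
ref = np.array([0.49, 0.45, 0.36])  # TE188 chart reference XYZ
rmse = patch_rmse(est, ref)         # ~0.03, matching the tabled RMS
```

Note that relative metrics blow up for dark patches with small XYZ components, which is consistent with the largest percentage errors appearing on the darkest neutral patches.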
Table 2. Restricted linear CRF zone 16-bit CMOS sensor output usage CMOS camera color imaging experimental data table including patch averaged raw R, G, B Values and the TE188 chart provided calibrated L*a*b* values-based comparative error metrics data. CMOS sensor Exposure Time = 30 ms.
| Patch | Raw R | Raw G | Raw B | L* (meas.) | a* (meas.) | b* (meas.) | L* (ref.) | a* (ref.) | b* (ref.) | ΔE00 |
|---|---|---|---|---|---|---|---|---|---|---|
| A1 | 1665.06 | 3690.63 | 1485.27 | 41.43 | 17.94 | 11.92 | 41.23 | 16.55 | 11.79 | 0.96 |
| B1 | 4498.57 | 14,979.84 | 5808.80 | 71.64 | 10.04 | 13.25 | 72.58 | 18.37 | 14.17 | 6.49 |
| C1 | 1208.35 | 6426.06 | 4931.85 | 48.87 | 1.66 | −16.45 | 48.39 | −7.26 | −19.36 | 8.36 |
| D1 | 580.55 | 6437.22 | 367.16 | 44.63 | −27.03 | 38.38 | 41.93 | −37.07 | 35.23 | 5.71 |
| E1 | 4106.65 | 15,479.12 | 13,703.65 | 73.69 | 15.19 | −25.10 | 72.11 | 14.68 | −27.82 | 1.90 |
| F1 | 2074.91 | 23,971.33 | 10,455.57 | 79.69 | −28.11 | −2.94 | 79.21 | −32.76 | −2.98 | 2.04 |
| A2 | 5998.90 | 9353.95 | 833.50 | 65.08 | 36.26 | 62.15 | 66.36 | 36.53 | 61.82 | 1.07 |
| B2 | 904.73 | 6560.23 | 11,425.79 | 50.72 | 11.18 | −52.14 | 52.18 | 6.59 | −54.09 | 3.55 |
| C2 | 4471.01 | 3337.52 | 2797.74 | 50.77 | 55.33 | 10.54 | 53.38 | 54.42 | 14.97 | 3.49 |
| D2 | 667.83 | 1459.02 | 6238.31 | 31.43 | 31.93 | −54.49 | 29.46 | 45.42 | −57.34 | 6.15 |
| E2 | 4142.87 | 30,837.50 | 3639.99 | 88.09 | −27.94 | 53.33 | 88.56 | −35.82 | 46.06 | 5.27 |
| F2 | 6024.13 | 14,545.68 | 1355.44 | 72.93 | 18.74 | 59.67 | 71.97 | 23.75 | 62.02 | 2.81 |
| A3 | 151.93 | 518.05 | 9545.91 | 26.47 | 42.63 | −83.80 | 22.95 | 61.04 | −86.92 | 8.20 |
| B3 | 556.26 | 9678.52 | 297.83 | 52.36 | −38.49 | 48.66 | 53.39 | −67.75 | 42.32 | 10.73 |
| C3 | 6436.24 | 520.18 | 513.38 | 50.09 | 87.03 | 62.97 | 48.76 | 77.63 | 58.60 | 2.45 |
| D3 | 4884.49 | 24,368.69 | 321.45 | 82.84 | −17.45 | 81.20 | 80.20 | −10.32 | 92.36 | 5.28 |
| E3 | 6392.00 | 1251.79 | 9251.63 | 55.90 | 85.78 | −29.91 | 51.94 | 85.25 | −37.50 | 4.56 |
| F3 | 2358.29 | 26,752.05 | 17,481.68 | 84.75 | −21.67 | −23.46 | 83.03 | −22.34 | −24.48 | 1.25 |
| A4 | 7373.14 | 34,331.59 | 18,062.25 | 96.35 | 1.75 | −5.78 | 100.00 | 0.00 | 0.00 | 5.94 |
| B4 | 5103.04 | 26,305.31 | 12,254.68 | 86.64 | −4.87 | 0.82 | 86.33 | −0.41 | 0.43 | 5.70 |
| C4 | 3189.23 | 16,336.23 | 7453.54 | 71.62 | −3.88 | 1.68 | 69.85 | −0.29 | −0.13 | 5.15 |
| D4 | 1817.73 | 9042.79 | 4101.63 | 56.03 | −2.34 | 1.79 | 54.04 | −0.28 | 0.00 | 3.76 |
| E4 | 992.74 | 4331.11 | 2038.16 | 40.91 | 1.35 | 1.35 | 38.27 | −0.22 | −0.15 | 3.53 |
| F4 | 401.19 | 1495.90 | 815.89 | 24.55 | 4.24 | −1.04 | 21.14 | 0.22 | −0.61 | 5.79 |

| Statistic | Mean | Median | 95% | Max |
|---|---|---|---|---|
| ΔE00 | 4.59 | 4.85 | 8.34 | 10.73 |
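The L*a*b* values tabulated above are obtained from the XYZ data via the CIE XYZ to L*a*b* conversion (per the Lindbloom reference [35]) before the ΔE00 differences are computed. A sketch follows; the D50 reference white is assumed here for illustration, whereas the paper's a* and b* values imply normalization to the LG3 source white, so only L* (which depends on the shared Y = 1 white normalization) should be compared against the table.

```python
import numpy as np

D50_WHITE = np.array([0.96422, 1.00000, 0.82521])  # assumed reference white
EPS, KAPPA = 216 / 24389, 24389 / 27               # CIE standard constants

def f(t):
    """Piecewise cube-root companding function of the L*a*b* model."""
    return np.where(t > EPS, np.cbrt(t), (KAPPA * t + 16) / 116)

def xyz_to_lab(xyz, white=D50_WHITE):
    """Convert one XYZ triplet to CIE L*a*b* relative to a reference white."""
    fx, fy, fz = f(np.asarray(xyz, float) / white)
    L = 116 * fy - 16
    a = 500 * (fx - fy)
    b = 200 * (fy - fz)
    return np.array([L, a, b])

# The reference white itself must map to L* = 100, a* = b* = 0
lab_white = xyz_to_lab(D50_WHITE)

# Reference XYZ for patch A1 (dark skin); L* comes out near the tabled 41.2
lab_a1 = xyz_to_lab(np.array([0.14, 0.12, 0.09]))
```

The resulting L*a*b* pairs then feed the CIE ΔE00 formula [37], which weights lightness, chroma, and hue differences perceptually.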
Table 3. CAOS camera color imaging experimental data table including patch averaged raw R, G, B values and the TE188 chart provided calibrated XYZ values-based comparative error metrics data.
| Patch | Raw R | Raw G | Raw B | Est. X | Est. Y | Est. Z | Real X | Real Y | Real Z | RMS | % |
|---|---|---|---|---|---|---|---|---|---|---|---|
| A1 | 0.173 | 0.078 | 0.064 | 0.12 | 0.11 | 0.08 | 0.14 | 0.12 | 0.09 | 0.01 | 11.82 |
| B1 | 0.519 | 0.327 | 0.298 | 0.43 | 0.40 | 0.35 | 0.49 | 0.45 | 0.36 | 0.04 | 9.83 |
| C1 | 0.114 | 0.149 | 0.270 | 0.16 | 0.17 | 0.31 | 0.15 | 0.17 | 0.30 | 0.01 | 3.79 |
| D1 | 0.041 | 0.151 | 0.016 | 0.09 | 0.13 | 0.04 | 0.07 | 0.12 | 0.04 | 0.01 | 10.93 |
| E1 | 0.428 | 0.339 | 0.660 | 0.46 | 0.43 | 0.73 | 0.47 | 0.44 | 0.79 | 0.03 | 5.76 |
| F1 | 0.191 | 0.485 | 0.444 | 0.37 | 0.47 | 0.54 | 0.41 | 0.55 | 0.64 | 0.08 | 14.54 |
| A2 | 0.693 | 0.206 | 0.039 | 0.41 | 0.31 | 0.05 | 0.46 | 0.36 | 0.07 | 0.04 | 12.13 |
| B2 | 0.100 | 0.157 | 0.649 | 0.23 | 0.21 | 0.71 | 0.21 | 0.20 | 0.69 | 0.02 | 4.84 |
| C2 | 0.545 | 0.080 | 0.161 | 0.31 | 0.19 | 0.16 | 0.34 | 0.21 | 0.16 | 0.02 | 7.52 |
| D2 | 0.062 | 0.031 | 0.352 | 0.10 | 0.07 | 0.38 | 0.11 | 0.06 | 0.34 | 0.02 | 10.83 |
| E2 | 0.463 | 0.742 | 0.190 | 0.57 | 0.72 | 0.31 | 0.54 | 0.73 | 0.33 | 0.02 | 3.88 |
| F2 | 0.630 | 0.318 | 0.054 | 0.43 | 0.39 | 0.09 | 0.50 | 0.44 | 0.10 | 0.05 | 12.17 |
| A3 | 0.014 | 0.010 | 0.492 | 0.10 | 0.06 | 0.52 | 0.09 | 0.04 | 0.50 | 0.02 | 6.69 |
| B3 | 0.033 | 0.238 | 0.011 | 0.12 | 0.20 | 0.05 | 0.09 | 0.21 | 0.06 | 0.02 | 12.41 |
| C3 | 0.845 | 0.008 | 0.021 | 0.43 | 0.19 | −0.02 | 0.35 | 0.17 | 0.02 | 0.06 | 24.69 |
| D3 | 0.598 | 0.638 | 0.009 | 0.57 | 0.68 | 0.12 | 0.50 | 0.57 | 0.05 | 0.08 | 18.65 |
| E3 | 0.735 | 0.034 | 0.495 | 0.46 | 0.24 | 0.50 | 0.41 | 0.20 | 0.50 | 0.04 | 8.99 |
| F3 | 0.237 | 0.606 | 0.823 | 0.51 | 0.62 | 0.95 | 0.50 | 0.62 | 1.01 | 0.03 | 4.60 |
| A4 | 0.817 | 0.870 | 0.910 | 0.89 | 0.96 | 1.08 | 0.95 | 1.00 | 1.09 | 0.04 | 4.15 |
| B4 | 0.603 | 0.654 | 0.671 | 0.68 | 0.73 | 0.81 | 0.65 | 0.69 | 0.74 | 0.05 | 7.09 |
| C4 | 0.380 | 0.428 | 0.430 | 0.44 | 0.47 | 0.52 | 0.38 | 0.41 | 0.44 | 0.07 | 16.07 |
| D4 | 0.214 | 0.241 | 0.234 | 0.24 | 0.26 | 0.28 | 0.21 | 0.22 | 0.24 | 0.04 | 18.12 |
| E4 | 0.107 | 0.108 | 0.101 | 0.11 | 0.12 | 0.12 | 0.10 | 0.10 | 0.11 | 0.02 | 14.46 |
| F4 | 0.016 | 0.035 | 0.022 | 0.03 | 0.03 | 0.03 | 0.03 | 0.03 | 0.04 | 0.01 | 17.28 |

| Statistic | Mean | Median | 95% | Max |
|---|---|---|---|---|
| RMSE | 0.034 | 0.034 | 0.077 | 0.082 |
| Percentage Difference (%) | 10.88 | 10.88 | 18.57 | 24.69 |
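The summary rows under the tables (mean, median, 95th percentile, maximum) can be reproduced directly from the per-patch columns. As a check, the sketch below recomputes the percentage-error summary for the CAOS camera data above; the 95% figure is consistent with a linear-interpolation percentile (numpy's default), though the authors' exact statistics tool is not stated.

```python
import numpy as np

# Percentage-error column of the 24 patches, in table order (A1..F4)
percent_errors = np.array([
    11.82, 9.83, 3.79, 10.93, 5.76, 14.54, 12.13, 4.84, 7.52, 10.83,
    3.88, 12.17, 6.69, 12.41, 24.69, 18.65, 8.99, 4.60, 4.15, 7.09,
    16.07, 18.12, 14.46, 17.28,
])

summary = {
    "mean": percent_errors.mean(),
    "median": np.median(percent_errors),
    "p95": np.percentile(percent_errors, 95),  # linear interpolation
    "max": percent_errors.max(),
}
```

Running this reproduces the printed summary row to two decimal places (mean 10.88, median 10.88, 95% 18.57, max 24.69), which also confirms the abstract's 10.9% mean XYZ error figure for the CAOS camera.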
Table 4. CAOS camera color imaging experimental data table including patch averaged raw R, G, B values and the TE188 chart provided calibrated L*a*b* values-based comparative error metrics data.
| Patch | Raw R | Raw G | Raw B | L* (meas.) | a* (meas.) | b* (meas.) | L* (ref.) | a* (ref.) | b* (ref.) | ΔE00 |
|---|---|---|---|---|---|---|---|---|---|---|
| A1 | 0.173 | 0.078 | 0.064 | 38.95 | 17.14 | 12.65 | 41.23 | 16.55 | 11.79 | 2.09 |
| B1 | 0.519 | 0.327 | 0.298 | 69.59 | 14.44 | 10.27 | 72.58 | 18.37 | 14.17 | 3.84 |
| C1 | 0.114 | 0.149 | 0.270 | 48.34 | 0.60 | −20.29 | 48.39 | −7.26 | −19.36 | 7.27 |
| D1 | 0.041 | 0.151 | 0.016 | 43.29 | −31.34 | 34.64 | 41.93 | −37.07 | 35.23 | 2.63 |
| E1 | 0.428 | 0.339 | 0.660 | 71.27 | 15.12 | −24.92 | 72.11 | 14.68 | −27.82 | 1.66 |
| F1 | 0.191 | 0.485 | 0.444 | 74.10 | −24.40 | −2.71 | 79.21 | −32.76 | −2.98 | 5.26 |
| A2 | 0.693 | 0.206 | 0.039 | 62.54 | 38.88 | 64.91 | 66.36 | 36.53 | 61.82 | 3.30 |
| B2 | 0.100 | 0.157 | 0.649 | 52.96 | 14.20 | −54.92 | 52.18 | 6.59 | −54.09 | 4.88 |
| C2 | 0.545 | 0.080 | 0.161 | 51.04 | 56.33 | 9.02 | 53.38 | 54.42 | 14.97 | 3.96 |
| D2 | 0.062 | 0.031 | 0.352 | 32.12 | 30.87 | −57.53 | 29.46 | 45.42 | −57.34 | 7.91 |
| E2 | 0.463 | 0.742 | 0.190 | 87.86 | −26.02 | 47.85 | 88.56 | −35.82 | 46.06 | 4.80 |
| F2 | 0.630 | 0.318 | 0.054 | 68.96 | 17.36 | 59.98 | 71.97 | 23.75 | 62.02 | 4.28 |
| A3 | 0.014 | 0.010 | 0.492 | 29.03 | 39.16 | −79.11 | 22.95 | 61.04 | −86.92 | 9.37 |
| B3 | 0.033 | 0.238 | 0.011 | 52.06 | −43.03 | 44.93 | 53.39 | −67.75 | 42.32 | 8.33 |
| C3 | 0.845 | 0.008 | 0.021 | 51.06 | 94.49 | 118.79 | 48.76 | 77.63 | 58.60 | 15.94 |
| D3 | 0.598 | 0.638 | 0.009 | 85.98 | −18.20 | 80.88 | 80.20 | −10.32 | 92.36 | 6.62 |
| E3 | 0.735 | 0.034 | 0.495 | 56.19 | 80.58 | −30.03 | 51.94 | 85.25 | −37.50 | 4.68 |
| F3 | 0.237 | 0.606 | 0.823 | 82.88 | −19.65 | −20.89 | 83.03 | −22.34 | −24.48 | 1.90 |
| A4 | 0.817 | 0.870 | 0.910 | 98.28 | −2.97 | −2.58 | 100.00 | 0.00 | 0.00 | 4.72 |
| B4 | 0.603 | 0.654 | 0.671 | 88.40 | −3.29 | −1.26 | 86.33 | −0.41 | 0.43 | 4.36 |
| C4 | 0.380 | 0.428 | 0.430 | 74.39 | −4.13 | −0.43 | 69.85 | −0.29 | −0.13 | 6.08 |
| D4 | 0.214 | 0.241 | 0.234 | 58.43 | −3.54 | 0.83 | 54.04 | −0.28 | 0.00 | 5.99 |
| E4 | 0.107 | 0.108 | 0.101 | 41.28 | −0.89 | 2.39 | 38.27 | −0.22 | −0.15 | 3.69 |
| F4 | 0.016 | 0.035 | 0.022 | 21.62 | −11.48 | 5.61 | 21.14 | 0.22 | −0.61 | 13.35 |

| Statistic | Mean | Median | 95% | Max |
|---|---|---|---|---|
| ΔE00 | 5.7042 | 4.7578 | 12.7495 | 15.9409 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
