Article

Improving Discrimination in Color Vision Deficiency by Image Re-Coloring

1 Department of Electrical Engineering, Advanced Institute of Manufacturing with High-Tech Innovation, National Chung Cheng University, Chiayi 621, Taiwan
2 Department of Electrical Engineering, National Chung Cheng University, Chiayi 621, Taiwan
3 Asian Institute of TeleSurgery/IRCAD-Taiwan, Changhua 505, Taiwan
* Author to whom correspondence should be addressed.
Sensors 2019, 19(10), 2250; https://doi.org/10.3390/s19102250
Submission received: 19 April 2019 / Revised: 8 May 2019 / Accepted: 13 May 2019 / Published: 15 May 2019

Abstract

People with color vision deficiency (CVD) cannot observe the colorful world due to damage to the color reception nerves. In this work, we present an image enhancement approach to assist colorblind people in identifying the colors they are not able to distinguish naturally. An image re-coloring algorithm based on eigenvector processing is proposed for robust color separation under the color deficiency transformation. It is shown that the eigenvector of color vision deficiency is rotated by an angle in the λ, Y-B, R-G color space. The experimental results show that our approach is useful for the recognition and separation of the CVD confusing colors in natural scene images. Compared to the existing techniques, our results on natural images with CVD simulation perform very well in terms of RMS, HDR-VDP-2 and an IRB-approved human test. Both the objective comparison with previous works and the subjective evaluation on human tests validate the effectiveness of the proposed method.

1. Introduction

Most human beings have the ability of color vision perception, which senses the frequency of the light reflected from object surfaces. However, color vision deficiency (CVD) is a common genetic condition [1]. It is in general not a fatal or serious disease, but it still brings inconvenience to many patients. People with color vision deficiency (so-called color blindness) cannot observe the colorful world due to damage to the color reception nerves. Whether caused by genetic problems or chemical injury, the damaged nerves are not able to distinguish certain colors. There are a few common types of color vision deficiency, such as protanomaly (red-weak), deuteranomaly (green-weak) and tritanomaly (blue-weak). They can be detected and verified easily by special color patterns (e.g., Ishihara plates [2]), but, unfortunately, cannot be cured by medical surgery or other treatments. People with color vision deficiency are a minority of the population, and they are sometimes ignored and restricted by our society.
In many places, colorblind people are not allowed to hold a driver’s license. A number of careers in engineering, medicine and other related fields set restrictions on the ability of color perception. The display and presentation of most media, on various devices and in many forms, do not specifically take color vision deficiency into consideration. Although the weakness in distinguishing different colors does not obviously affect people’s learning and cognition, it remains a challenge in color-related industries. In this work, we propose an approach to assist people with color vision deficiency to tell the difference among the confusing colors as much as possible. A simple yet reasonable technique, “color reprint”, is developed and used to represent the CVD-proof colors. The algorithm not only preserves the naturalness and details of the scenes, but also possesses real-time processing capability. It can, therefore, be implemented on low-cost or portable devices and brought into everyday life.
Human color vision is based on three light-sensitive pigments [3,4]. It is trichromatic and represented in three dimensions. A color stimulus is specified by the power contained at each wavelength. Normal trichromacy arises because the retina contains three classes of cone photo-pigment cells: L-, M-, and S-cones. Light of different wavelengths stimulates each of these receptor types to varying degrees. For example, yellowish green light stimulates both L- and M-cones equally strongly, but S-cones weakly. Red light stimulates more L-cones than M-cones, and S-cones hardly at all. Our brain combines the information from each type of cone cell, and responds to different wavelengths of the light as shown in Table 1. The color processing is carried out in two stages. First, the stimuli from the cones are recombined to form two color-opponent channels and luminance. Second, an adaptive signal regulation process keeps the response within the operating range and stabilizes the object appearance against illumination changes. When any of the sensitive pigments is damaged or loses its functionality [1], people can only view a part of the visible spectrum compared to those with normal vision [5] (see Figure 1).
Several studies in the literature have investigated the molecular genetics of human color vision. Nathans et al. described the isolation and sequencing of genomic and complementary DNA clones which encode the apoproteins of the red, green and blue pigments [4]. With refined methods, the number and ratio of these genes have been re-examined in men with normal color vision. A recent report reveals that many males have more pigment genes on the X chromosome than previously thought, and many have more than one long-wave pigment gene [7]. The loss of the characteristic sensitivities of the red and green receptors, introduced into the transformed sensitivity curves, also indicates the appropriate degrees of luminosity deficit for deuteranopes and protanopes [8].
Color vision deficiency is mainly caused by two factors: natural genetic factors and impaired nerves or brain. A protanope suffers from a lack of the L-cone photo-pigment, and is unable to discriminate reddish and greenish hues since the red–green opponent mechanism cannot be constructed. A deuteranope does not have sufficient M-cone photo-pigment, so reddish and greenish hues are likewise not distinguishable. People with tritanopia do not have the S-cone photo-pigment, and, therefore, cannot discriminate yellowish and bluish hues [9]. The literature shows that more than 8% of the world population suffers from color vision deficiency (see Table 2). For color vision correction, gene therapy which adds the missing genes is sufficient to restore full color vision without further rewiring of the brain; it has been tested on monkeys colorblind since birth [10]. Nevertheless, non-invasive alternatives are also available by means of computer vision techniques.
In [12], Huang et al. propose a fast re-coloring technique to improve accessibility for people with impaired color vision. They design a method to derive an optimal mapping that maintains the contrast between each pair of the representative colors [13]. In a subsequent work, an image re-coloring algorithm for dichromats using the concept of key color priority is presented [14]. A color blindness plate (CBP) is presented by Chen et al., which provides a satisfactory way to test color vision in the computer vision community [15]. The approach is adopted to demonstrate normal color vision as well as red–green color vision deficiency. Rasche et al. propose a method to preserve the image details while reducing the gamut dimension, and seek a color-to-gray mapping to maintain the contrast and luminance consistency [16]. They also describe a method which allows the re-colored images to deliver the content with increased information to color-deficient viewers [17]. In [18], Lau et al. present a cluster-based approach to optimize the transformation for individual images. The idea is to preserve the information from the source space as much as possible while maintaining the natural mapping as faithfully as possible. Lee et al. develop a technique based on fuzzy logic and correction of digital images to improve the visual quality for individuals with color vision disturbance [19]. Similarly, Poret et al. design a filter based on the Ishihara color test for color blindness correction [20].
Most algorithms for color transformation aim to preserve the color information in the original image while keeping the re-colored image as natural as possible. Unlike some other image processing and computer vision tasks, making the enhanced images appear natural is an important issue in color vision deficiency correction. The goal is not only to keep the image details intact, but also to keep the colors as smooth as in the unprocessed image. Under these constraints, the color distribution is re-arranged so that colorblind people can discriminate different colors [21,22]. Moreover, it is generally agreed that color perception is subjective and will not be exactly the same for different people. In this work, the proposed method is evaluated with color vision deficiency simulation tools and human tests. We use the RMS (root mean square) error to quantify the change after re-coloring, and HDR-VDP (visual difference predictor) [23] to compare the visibility and quality of subjective human perception. Our algorithm not only preserves the naturalness and details of the images, but also processes them almost in real time.

2. Approach

In this paper, a technique called color warping (CW) is proposed for effective image re-coloring. It uses the orientation of the eigenvectors of the color vision deficiency simulation results to warp the color distribution. In general, the acquired images are presented in the RGB color space for display. This is, however, not suitable for color vision-related processing. For human color perception related tasks, the images are first transformed to the λ, Y-B, R-G color space based on the CIECAM02 model [24]. It consists of a transformation from RGB to LMS [25] using
$$\begin{bmatrix} L \\ M \\ S \end{bmatrix} = \begin{bmatrix} 0.7328 & 0.4296 & -0.1624 \\ -0.7036 & 1.6975 & 0.0061 \\ 0.0030 & 0.0136 & 0.9834 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \tag{1}$$
followed by a second transformation from LMS to λ, Y-B, R-G with
$$\begin{bmatrix} \lambda \\ \text{Y-B} \\ \text{R-G} \end{bmatrix} = \begin{bmatrix} 0.600 & 0.400 & 0.000 \\ 0.240 & 0.105 & -0.700 \\ 1.200 & -1.600 & 0.400 \end{bmatrix} \begin{bmatrix} L \\ M \\ S \end{bmatrix}. \tag{2}$$
Since the above transformations are linear, it is easy to verify that the relationship between the RGB and λ, Y-B, R-G color spaces is given by
$$\begin{bmatrix} \lambda \\ \text{Y-B} \\ \text{R-G} \end{bmatrix} = \begin{bmatrix} 0.3479 & 0.5981 & -0.3657 \\ -0.0074 & -0.1130 & -1.1858 \\ 1.1851 & -1.5708 & 0.3838 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \tag{3}$$
and
$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 1.2256 & -0.2217 & 0.4826 \\ 0.9018 & -0.3645 & -0.2670 \\ -0.0936 & -0.8072 & 0.0224 \end{bmatrix} \begin{bmatrix} \lambda \\ \text{Y-B} \\ \text{R-G} \end{bmatrix}. \tag{4}$$
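To make the conversion concrete, the following minimal sketch (Python with NumPy is our choice for illustration; the function names are ours, not from the original implementation) applies Equations (3) and (4) to an image:

```python
import numpy as np

# RGB -> (lambda, Y-B, R-G) matrix of Eq. (3); its inverse reproduces Eq. (4).
RGB2OPP = np.array([[ 0.3479,  0.5981, -0.3657],
                    [-0.0074, -0.1130, -1.1858],
                    [ 1.1851, -1.5708,  0.3838]])
OPP2RGB = np.linalg.inv(RGB2OPP)

def rgb_to_opponent(img):
    """Convert an H x W x 3 RGB image (float) to the opponent color space."""
    return img @ RGB2OPP.T

def opponent_to_rgb(opp):
    """Convert an opponent-space image back to RGB for display."""
    return opp @ OPP2RGB.T
```

Applying the 3 × 3 matrix to the last axis of the image array keeps the conversion vectorized over all pixels.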
A flowchart of the proposed method is illustrated in Figure 2. The “Eigen-Pro” stage represents the eigenvector processing. The color warping is the key idea of this work, and the color constraints are used to reduce the distortion introduced by the color space transformation.

2.1. Color Transform

The physical property of the light used for color perception is the distribution of the spectral power [26]. In principle, there are many distinct spectral colors, and the set of all physical colors may be thought of as a large-dimensional vector space. A better alternative to the commonly adopted tristimulus coordinates for describing the spectral property of the light is to use the L-, M-, and S-cone responses as the coordinates of a 3-space. To form a model for human perceptual color space, we can consider all the resulting combinations as a subset of this 3-space. The cone responses occupy a region extending away from the origin in proportion to the intensities of the S, M and L stimuli. A digital image acquisition device consists of different elements [27,28]. The characteristics of the light and the material of the observed object determine the physical properties of its color [27,29,30]. For color transformation, Huang et al. [31] present a method that warps images in the CIELab color space using a rotation matrix. Ballard and Brown [32] and Swain and Ballard [33] propose to use an antagonist space which does not take the non-linear human eye response into consideration. Instead, we transform the color space to (WS, RG, BY) based on an electro-physiological study [34].

2.2. Eigenvector Processing

Color vision deficiency cannot be understood easily by most people with normal vision. Thus, it is necessary to use simulation tools to create synthetic images that show ordinary viewers what colorblind people see [35]. Some well-known tools include Colblindor [6] and LMS [25], and there are also several websites that perform the simulation online (for example, the Coblis Color Blindness Simulator (http://www.color-blindness.com/coblis-color-blindness-simulator), the Color Blindness Simulator (http://www.etre.com/tools/colourblindsimulator), and the Vision Simulator (http://www.webexhibits.org/causesofcolor/2.html)). In this work, Machado’s approach is adopted for our color vision deficiency simulation [35]. It utilizes a physiology-based simulation model to reproduce the cone sensations in human visual perception. The simulation shifts the peaks of the cone spectral sensitivity functions, as shown in Figure 3. Anomalous trichromacy can be simulated by shifting the sensitivity of the L, M, and S cones in the following ways (a code sketch is given after the list):
  • Protanomaly: shift the L cone toward the M cone, $L_a(\lambda) = L(\lambda + \Delta\lambda_L)$.
  • Deuteranomaly: shift the M cone toward the L cone, $M_a(\lambda) = M(\lambda + \Delta\lambda_M)$.
  • Tritanomaly: shift the S cone, $S_a(\lambda) = S(\lambda + \Delta\lambda_S)$.
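As a minimal sketch of the shifts above, assuming the cone sensitivity curves (e.g., the fundamentals in [36]) are available as sampled arrays, the shifted response can be obtained by interpolation:

```python
import numpy as np

def shift_sensitivity(wavelengths, sensitivity, delta_nm):
    """Shifted cone response, e.g. L_a(lambda) = L(lambda + delta_lambda_L).

    wavelengths : (n,) sampling grid in nm
    sensitivity : (n,) cone sensitivity sampled on the grid
    delta_nm    : shift applied to the curve, in nm
    """
    # Evaluate the original curve at (lambda + delta); zero outside the data.
    return np.interp(wavelengths + delta_nm, wavelengths, sensitivity,
                     left=0.0, right=0.0)
```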
The elements of the transformation matrix Γ can be derived by
$$f(\lambda, R, G, B)_{WS, YB, RG} = \rho_{R,G,B} \int \phi_{R,G,B}(\lambda)\, f(\lambda)_{WS, YB, RG}\, d\lambda \tag{5}$$
where $\phi_{R,G,B}$ is the spectral power distribution function, and $\rho_{R,G,B}$ is a normalization factor. Thus, Γ is the projection of the spectral power distributions of the RGB primaries onto a set of basis functions $f(\lambda)_{WS, YB, RG}$. That is,
$$\Gamma = \begin{bmatrix} f(R)_{WS} & f(G)_{WS} & f(B)_{WS} \\ f(R)_{YB} & f(G)_{YB} & f(B)_{YB} \\ f(R)_{RG} & f(G)_{RG} & f(B)_{RG} \end{bmatrix}. \tag{6}$$
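A sketch of how Γ could be evaluated numerically is given below. The spectral power distributions of the primaries and the opponent basis functions are assumed to be supplied as sampled arrays, and the normalization ρ is one plausible reading of Equation (5), not a detail confirmed by the paper:

```python
import numpy as np

def projection_matrix(wavelengths, phi_rgb, f_opp):
    """Eq. (6): project the SPDs of the R, G, B primaries onto the
    opponent basis functions (rows: WS, YB, RG).

    wavelengths : (n,) sampling grid in nm
    phi_rgb     : (3, n) SPDs of the primaries, rows = R, G, B
    f_opp       : (3, n) opponent basis functions, rows = WS, YB, RG
    """
    gamma = np.empty((3, 3))
    for i in range(3):        # opponent channel
        for j in range(3):    # display primary
            # Assumed normalization: unit area under each primary's SPD.
            rho = 1.0 / np.trapz(phi_rgb[j], wavelengths)
            gamma[i, j] = rho * np.trapz(phi_rgb[j] * f_opp[i], wavelengths)
    return gamma
```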
This model is based on the stage theory of human color vision, and is derived from the data reported in an electro-physiological study [34]. Let $\Phi_{CVD}$ be the matrix, derived in the opponent-color space of normal trichromacy, that maps RGB to the simulated view; the simulation of dichromatic vision is then obtained by the transformation
$$\begin{bmatrix} R_s \\ G_s \\ B_s \end{bmatrix} = \Phi_{CVD} \begin{bmatrix} R \\ G \\ B \end{bmatrix}. \tag{7}$$
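For illustration, a minimal sketch applying Equation (7) with the protanopia matrix of Table 3 (sensitivity 0.6) could be written as follows:

```python
import numpy as np

# Machado's protanopia simulation matrix at sensitivity 0.6 (Table 3).
PHI_CVD = np.array([[ 0.385,  0.769, -0.154],
                    [ 0.101,  0.830,  0.070],
                    [-0.007, -0.022,  1.030]])

def simulate_cvd(img, phi=PHI_CVD):
    """Eq. (7): per-pixel linear mapping of an RGB image (values in [0, 1])
    to the simulated dichromatic view."""
    return np.clip(img @ phi.T, 0.0, 1.0)
```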
By definition, an eigenvector of a linear transformation is a non-zero vector that is mapped to a scalar multiple of itself. Thus, the algorithm computes the eigenvectors of the covariance matrix of the Y-B and R-G components of the image in the λ, Y-B, R-G opponent color space, i.e.,
$$[v, d] = \mathrm{eig}(\mathrm{cov}(I_{YB}, I_{RG})) \tag{8}$$
where eig denotes the eigen-decomposition, and $I_{YB}$ and $I_{RG}$ are the Y-B and R-G images, respectively. On the left-hand side of the equation, d contains the generalized eigenvalues, and v is a 2 × 2 eigenvector matrix, since cov is the 2 × 2 covariance matrix derived from a pair of n × 1 vectorized images:
$$\mathrm{cov}(X, Y) = \frac{\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})}{n - 1}. \tag{9}$$
For the original and CVD simulation images shown in Figure 4, the characteristics of the associated eigenvectors are illustrated in Figure 5. The black line (at about 91°) indicates the eigenvector of the original image. For protanopia (red line, about 150°) and deuteranopia (green line, about 140°), the eigenvectors lead the one associated with the original image. The eigenvector of tritanopia (blue line, at about 80°) lags behind the original. Our objective is to recover the angle difference between the normal and color vision deficiency images, and use it to re-color the image. The difference image between the normal view and the color vision deficiency simulation is defined by
$$I_{diff} = \sqrt{\left(I_{n(YB)} - I_{c(YB)}\right)^2 + \left(I_{n(RG)} - I_{c(RG)}\right)^2} \tag{10}$$
where I n and I c represent the intensity observed by a normal viewer and obtained from the color vision deficiency simulation, respectively.
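A compact sketch of Equations (8)–(10), assuming the images are already in the opponent space, is:

```python
import numpy as np

def eigen_angle(opp_img):
    """Eqs. (8)-(9): angle of the dominant eigenvector of the covariance
    of the Y-B and R-G channels of an H x W x 3 opponent-space image."""
    yb = opp_img[..., 1].ravel()
    rg = opp_img[..., 2].ravel()
    d, v = np.linalg.eig(np.cov(np.stack([yb, rg])))  # 2 x 2 covariance
    principal = v[:, np.argmax(d)]                    # largest eigenvalue
    return np.arctan2(principal[1], principal[0])

def difference_image(opp_normal, opp_cvd):
    """Eq. (10): per-pixel chromatic difference between the normal view
    and the CVD simulation."""
    return np.hypot(opp_normal[..., 1] - opp_cvd[..., 1],
                    opp_normal[..., 2] - opp_cvd[..., 2])
```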
An example of protanopia simulation is illustrated in Figure 6. The difference image is shown in Figure 6c, and a binarized version for better illustration is shown in Figure 6d.

2.3. Color Warping

The color values of the images are transformed from RGB to the opponent color space λ, Y-B, R-G using Equations (3) and (4). It is assumed that color vision deficiency is not affected by brightness, so the value λ corresponding to the luminance is kept intact. To define the warping range with the angle of an eigenvector, we construct twelve pure colors in RGB using the values of 0, 150 and 255. Figure 7 shows the range of missing chroma of color vision deficiency. The area within the two green lines is the red chroma missing for protanopia and deuteranopia, and the area within the two purple lines is the blue chroma missing for tritanopia. The color points represented in Cartesian coordinates are then transformed to polar coordinates by
$$\theta = \tan^{-1}\frac{y}{x} \tag{11}$$
for processing. The angle θ associated with the eigenvector in the λ, Y-B, R-G color space is used to derive the range to be processed. Since the image is now in the opponent color space, the source range spans from the angle of the simulation vector to its opposite angle. The warping range then spans from the angle orthogonal to the original vector to the opposite angle of the simulation vector. An example is illustrated in Figure 8, where the green area is warped to the red area for image re-coloring.
The new color angle is derived from the original color angle by
$$\theta_{new} = \frac{\theta_{\perp} - \theta_{op}}{\pi} \cdot (\theta - \theta_{op}) \tag{12}$$
where the angles of color points are defined in the range of $[-\pi, \pi]$, $\theta_{\perp}$ is the angle of the vector orthogonal to the original eigenvector, and $\theta_{op}$ is the angle of the vector opposite to the color vision deficiency simulation vector.
When the image is converted from RGB to the λ, Y-B, R-G color space, it is in a limited range of the color space representation. We need a constraint to avoid the luminance overflow problem, which would make colors unsmooth after being converted back to the RGB representation. In our approach, a convex hull is adopted for the color constraint due to its simplicity for boundary derivation. Figure 9a–d illustrate the full-color images constructed using 256³ pixels, i.e., a resolution of 4096 × 4096, and the corresponding convex hull is shown in Figure 9e (the red lines). The formula used for the conversion is given by
$$\rho_{new} = \rho \times \frac{\rho(\theta_{new})}{\rho(\theta)} \tag{13}$$
where ρ is the original radial value, $\rho(\theta_{new})$ is the value of the convex hull boundary at $\theta_{new}$, and $\rho(\theta)$ is the value at θ. The resulting image in the λ, Y-B, R-G color space is then transformed back to the RGB color space for display.
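The warping step can be sketched as follows. The convex-hull boundary helper and the angle offset convention of Equation (12) are our assumptions for illustration; the luminance channel is left untouched, as described above:

```python
import numpy as np

def warp_colors(opp_img, theta_perp, theta_op, hull_radius):
    """Sketch of Eqs. (11)-(13) in the (lambda, Y-B, R-G) space.

    theta_perp  : angle orthogonal to the original eigenvector
    theta_op    : angle opposite to the CVD simulation eigenvector
    hull_radius : callable rho(theta) giving the convex-hull boundary of
                  the representable gamut (hypothetical helper)
    """
    yb, rg = opp_img[..., 1], opp_img[..., 2]
    theta = np.arctan2(rg, yb)        # Eq. (11), polar angle of each color
    rho = np.hypot(yb, rg)

    # Eq. (12): compress angles measured from theta_op into the warping
    # range; adding theta_op back (so theta_op maps to itself) is our
    # assumption about the offset convention.
    rel = np.angle(np.exp(1j * (theta - theta_op)))   # wrap to [-pi, pi]
    theta_new = theta_op + (theta_perp - theta_op) / np.pi * rel

    # Eq. (13): rescale the radius so warped colors stay inside the hull.
    rho_new = rho * hull_radius(theta_new) / hull_radius(theta)

    out = opp_img.copy()
    out[..., 1] = rho_new * np.cos(theta_new)
    out[..., 2] = rho_new * np.sin(theta_new)
    return out
```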

3. Experiments

The proposed method has been tested on natural images including flowers, fruits, pedestrians and landscapes, as well as synthetic images such as patterns with pure colors (see Figure 10). The experiments were carried out with both simulated views and human tests. Figure 11 shows the images of protanopia color vision deficiency with sensitivity values from 0.3 to 0.9 after our re-coloring technique. For the color vision deficiency view simulation, we compared the results of the proposed approach with the methods presented by Kuhn et al. [37], Rasche et al. [17] and Huang et al. [12]. Figure 12 shows the results of the deuteranopia color vision deficiency simulation and re-coloring using different algorithms. While all methods are able to separate the flower from the leaves, our result is more distinguishable and much closer to the original colors.

3.1. Root Mean Square

We use the root mean square (RMS) value to measure the difference between two images, that is, to evaluate how far apart the CVD view simulation and the image processed by our re-coloring algorithm are. We calculate the RMS value over a k-neighborhood, defined by
$$RMS_i = \left\{ \frac{1}{N} \sum_{j=-k}^{k} \left[ \left(a_{i+j}^r - a_{i+j}^t\right)^2 + \left(b_{i+j}^r - b_{i+j}^t\right)^2 \right] \right\}^{1/2} \tag{14}$$
where $a_{i+j}^r$ and $b_{i+j}^r$ are the a* and b* values (in L*a*b*) of the reference image, $a_{i+j}^t$ and $b_{i+j}^t$ are those of the target image, and N is the number of elements in the k-neighborhood.
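A sketch of Equation (14) is shown below; whether the neighborhood is one- or two-dimensional is not fully specified in the text, so a square window (consistent with the k = 11 used in Section 3.2) is assumed:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def rms_map(ref_ab, tgt_ab, k=11):
    """Eq. (14): per-pixel RMS over a k-neighborhood of the a*, b* channels.

    ref_ab, tgt_ab : H x W x 2 arrays holding the a* and b* channels of
                     the reference and target images (the CIELAB conversion
                     is assumed to be done beforehand).
    """
    sq = np.sum((ref_ab - tgt_ab) ** 2, axis=-1)      # (da)^2 + (db)^2
    # Mean of squared differences over a (2k+1)-sized window, then the root.
    return np.sqrt(uniform_filter(sq, size=2 * k + 1))
```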
An example of tritanopia CVD simulation and the re-coloring results is shown in Figure 13. Compared to the results obtained from Kuhn’s and Huang’s methods, our approach provides better contrast between the colors. Figure 14 shows the comparison of the RMS values on several test images using the proposed technique and Kuhn’s method. Higher RMS values are displayed in dark blue, and lower values in white. The figures indicate that, although the distributions of our results and Kuhn’s are similar, our RMS values are higher than Kuhn’s, which implies a better separation in colors. Additional results on various types of test images are shown in Figure 15. The results of CVD simulation, re-coloring using the proposed technique, and CVD simulation on the re-colored images are shown in the first, second and third columns, respectively.

3.2. Visual Difference Predictor

HDR-VDP is a visual metric that compares a pair of images and predicts their visibility (the probability of detecting the differences between the images) and quality (the quality degradation with respect to the reference image). In this paper, Mantiuk et al.’s HDR-VDP-2 [23] is adopted to evaluate our re-coloring technique and Kuhn’s method. HDR-VDP-2 is a major revision which improves the accuracy of the prediction and changes the metric to predict the visibility (detection/discrimination) and image quality (mean opinion score). The new metric also models long-, middle- and short-cone and rod sensitivities for different spectral characteristics of the incoming light. As shown in Figure 16, the first and fourth rows are two test images and their CVD simulation. The images from left to right are the original images, the CVD simulation results using our re-coloring technique, and the results obtained from Kuhn’s method. The second and fifth rows are the RMS values with k = 11. Higher RMS values are displayed in deeper blue, and lower values in white. The third and sixth rows are the visibility test results using HDR-VDP-2. The probability-of-detection map tells us how likely we are to notice the difference between two images: red denotes a high probability and green a low probability. As shown in the second and fifth rows, the distributions of our results and Kuhn’s are almost the same. However, the third and sixth rows indicate that our re-coloring technique provides more distinguishable colors in the CVD simulation results.

3.3. Human Subjective Evaluation

Color sensation is commonly considered a subjective feeling of human beings. In this work, a human test is carried out to evaluate our color enhancement approach. The procedure consists of an image re-coloring stage to produce the CVD-friendly output, and an evaluation stage to analyze the responses collected from the volunteers. In the human test approved by an IRB (institutional review board) [38], people with color vision deficiency were asked to give judgments on the images enhanced by the re-coloring algorithms. We first made sure the subjects clearly understood the purpose and process of the color test. Three types of color vision deficiency (protanopia, deuteranopia and tritanopia) are considered, and the subjects are classified into groups for testing. Four different approaches, M1, M2, M3, M4, are then evaluated as follows.
  • M1: The input image is converted to the L*u*v* color space, projected onto u*v*, and the u* and v* coordinates are equalized.
  • M2: The input image is used to simulate the CVD view, and the (R, G, B) difference between the input and simulation images is computed. A matrix is then used to enhance the color difference regions.
  • M3: The input image is converted to the L*u*v* color space, and rotated to the non-confused color position.
  • M4: The input image is used to simulate the CVD view, and the distances among the colors are used to obtain the discrepancy. The image is then converted to the λ, Y-B, R-G color space, and the color difference regions are rotated.
We collected responses from 55 valid subjects in the test. The results are tabulated in Table 4. In the table, i indexes the method of the different research stages, j indexes the test image, and the letters A, B, C, D denote the preference levels (from best to worst) reported by the subjects. The summary row reports the cases in which the proportion of method Mi chosen by the subjects for test image Fj is over 1/3. As shown in Table 4, 83.64% (marked in blue) of the 55 subjects selected level A for method M2 on test image F1. The numbers marked in red indicate the method Mi with the highest proportion in each of the levels A, B, C, D, and the associated methods are the most representative of each level. Thus, the levels A, B, C, D are represented by the methods M2, M4, M3 and M1, respectively. It also shows that, from the best to the worst perceived quality for color vision deficiency, the four methods are ranked M2, M4, M3, M1.

4. Conclusions

In this paper, we present an image enhancement approach to provide colorblind people with a better viewing experience. An image re-coloring method based on eigenvector processing is proposed for robust color separation under the color deficiency transformation. It is shown that the eigenvector of color vision deficiency is rotated by an angle in the λ, Y-B, R-G color space. The proposed method performs well in terms of both subjective image quality and objective evaluation. Compared to the existing techniques, our results on natural images with CVD simulation work very well in terms of RMS, HDR-VDP-2 and an IRB-approved human test. Both the objective comparison with previous works and the subjective evaluation on human tests validate the effectiveness of the proposed technique.

Author Contributions

H.-Y.L. proposed the idea, formulated the model, conducted the research and wrote the paper. L.-Q.C. developed the software programs, performed experiments and data analysis, and wrote the paper. M.-L.W. helped with the human test experiments.

Funding

The support of this work is in part by the Ministry of Science and Technology of Taiwan under Grant MOST 106-2221-E-194-004 and the Advanced Institute of Manufacturing with High-tech Innovations (AIM-HI) from The Featured Areas Research Center Program within the framework of the Higher Education Sprout Project by the Ministry of Education (MOE) in Taiwan.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wong, B. Points of view: Color blindness. Nat. Methods 2011, 8, 441. [Google Scholar] [CrossRef] [PubMed]
  2. Ishihara, S. Ishihara’s Tests for Color-Blindness, 38th ed.; Kanehara, Shuppan: Tokyo, Japan, 1990. [Google Scholar]
  3. Hunt, R. Colour Standards and Calculations. In The Reproduction of Colour; John Wiley and Sons, Ltd.: Hoboken, NJ, USA, 2005; pp. 92–125. [Google Scholar] [CrossRef]
  4. Nathans, J.; Thomas, D.; Hogness, D.S. Molecular genetics of human color vision: The genes encoding blue, green, and red pigments. Science 1986, 232, 193–202. [Google Scholar] [CrossRef]
  5. Michael, K.; Charles, L. Psychophysics of Vision: The Perception of Color. Available online: https://www.ncbi.nlm.nih.gov/books/NBK11538/ (accessed on 30 April 2019).
  6. Colblindor Web Site. Available online: https://www.color-blindness.com/category/tools/ (accessed on 30 April 2019).
  7. Neitz, M.; Neitz, J. Numbers and ratios of visual pigment genes for normal red-green color vision. Science 1995, 267, 1013–1016. [Google Scholar] [CrossRef]
  8. Graham, C.; Hsia, Y. Color Defect and Color Theory Studies of normal and color-blind persons, including a subject color-blind in one eye but not in the other. Science 1958, 127, 675–682. [Google Scholar] [CrossRef]
  9. Fairchild, M. Color Appearance Models; The Wiley-IS&T Series in Imaging Science and Technology; Wiley: London, UK, 2013. [Google Scholar]
  10. Dolgin, E. Colour blindness corrected by gene therapy. Nature 2009, 2, 66–69. [Google Scholar] [CrossRef]
  11. Hunt, R.W.G.; Pointer, M.R. Measuring Colour; John Wiley & Sons: Hoboken, NJ, USA, 2011. [Google Scholar]
  12. Huang, J.B.; Wu, S.Y.; Chen, C.S. Enhancing Color Representation for the Color Vision Impaired. In Proceedings of the Workshop on Computer Vision Applications for the Visually Impaired, Marseille, France, 12–18 October 2008. [Google Scholar]
  13. Huang, J.B.; Chen, C.S.; Jen, T.C.; Wang, S.J. Image recolorization for the colorblind. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Taipei, Taiwan, 19–24 April 2009; pp. 1161–1164. [Google Scholar]
  14. Huang, C.R.; Chiu, K.C.; Chen, C.S. Key Color Priority Based Image Recoloring for Dichromats. In Advances in Multimedia Information Processing, Proceedings of the 11th Pacific Rim Conference on Multimedia, Shanghai, China, 21–24 September 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 637–647. [Google Scholar] [CrossRef]
  15. Chen, Y.S.; Hsu, Y.C. Computer vision on a colour blindness plate. Image Vis. Comput. 1995, 13, 463–478. [Google Scholar] [CrossRef]
  16. Rasche, K.; Geist, R.; Westall, J. Re-coloring Images for Gamuts of Lower Dimension. Comput. Graph. Forum 2005, 24, 423–432. [Google Scholar] [CrossRef] [Green Version]
  17. Rasche, K.; Geist, R.; Westall, J. Detail preserving reproduction of color images for monochromats and dichromats. IEEE Comput. Graph. Appl. 2005, 25, 22–30. [Google Scholar] [CrossRef]
  18. Lau, C.; Heidrich, W.; Mantiuk, R. Cluster-based color space optimizations. In Proceedings of the 2011 IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011; pp. 1172–1179. [Google Scholar]
  19. Lee, J.; Santos, W. An adaptative fuzzy-based system to evaluate color blindness. In Proceedings of the 17th International Conference on Systems, Signals and Image Processing (IWSSIP 2010), Rio de Janeiro, Brazil, 17–19 June 2010. [Google Scholar]
  20. Poret, S.; Dony, R.; Gregori, S. Image processing for colour blindness correction. In Proceedings of the 2009 IEEE Toronto International Conference Science and Technology for Humanity (TIC-STH), Toronto, ON, Canada, 26–27 September 2009; pp. 539–544. [Google Scholar]
  21. CIE Web Site. Available online: http://cie.co.at/ (accessed on 30 April 2019).
  22. Wright, W.D. Color Science, Concepts and Methods. Quantitative Data and Formulas. Phys. Bull. 1967, 18, 353. [Google Scholar] [CrossRef]
  23. Mantiuk, R.; Kim, K.J.; Rempel, A.G.; Heidrich, W. HDR-VDP-2: A Calibrated Visual Metric for Visibility and Quality Predictions in All Luminance Conditions. ACM Trans. Graph. 2011, 30, 40:1–40:14. [Google Scholar] [CrossRef]
  24. Moroney, N.; Fairchild, M.D.; Hunt, R.W.; Li, C.; Luo, M.R.; Newman, T. The CIECAM02 Color Appearance Model. Color Imaging Conf. 2002, 2002, 23–27. [Google Scholar]
  25. Brettel, H.; Viénot, F.; Mollon, J.D. Computerized simulation of color appearance for dichromats. J. Opt. Soc. Am. A 1997, 14, 2647–2655. [Google Scholar] [CrossRef]
  26. Wild, F. Outline of a Computational Theory of Human Vision. In Proceedings of the KI 2005 Workshop 7 Mixed-Reality as a Challenge to Image Understanding and Artificial Intelligence, Koblenz, Germany, 11 September 2005; p. 55. [Google Scholar]
  27. Busin, L.; Vandenbroucke, N.; Macaire, L. Color spaces and image segmentation. Adv. Imaging Electron Phys. 2008, 151, 65–168. [Google Scholar]
  28. Vrhel, M.; Saber, E.; Trussell, H. Color image generation and display technologies. IEEE Signal Process. Mag. 2005, 22, 23–33. [Google Scholar] [CrossRef]
  29. Sharma, G.; Trussell, H. Digital color imaging. IEEE Trans. Image Process. 1997, 6, 901–932. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  30. Marguier, J.; Süsstrunk, S. Color matching functions for a perceptually uniform RGB space. In Proceedings of the ISCC/CIE Expert Symposium, Ottawa, ON, Canada, 16–17 May 2006. [Google Scholar]
  31. Huang, J.B.; Tseng, Y.C.; Wu, S.I.; Wang, S.J. Information preserving color transformation for protanopia and deuteranopia. IEEE Signal Process. Lett. 2007, 14, 711–714. [Google Scholar] [CrossRef]
  32. Ballard, D.H.; Brown, C.M. Computer Vision; Prentice Hall: Upper Saddle River, NJ, USA, 1982. [Google Scholar]
  33. Swain, M.J.; Ballard, D.H. Color indexing. Int. J. Comput. Vis. 1991, 7, 11–32. [Google Scholar] [CrossRef]
  34. Ingling, C.R.; Tsou, B.H.P. Orthogonal combination of the three visual channels. Vis. Res. 1977, 17, 1075–1082. [Google Scholar] [CrossRef]
  35. Machado, G.M.; Oliveira, M.M.; Fernandes, L.A. A physiologically-based model for simulation of color vision deficiency. IEEE Trans. Vis. Comput. Graph. 2009, 15, 1291–1298. [Google Scholar] [CrossRef]
  36. Smith, V.C.; Pokorny, J. Spectral sensitivity of the foveal cone photopigments between 400 and 500 nm. Vis. Res. 1975, 15, 161–171. [Google Scholar] [CrossRef]
  37. Kuhn, G.R.; Oliveira, M.M.; Fernandes, L.A. An efficient naturalness-preserving image-recoloring method for dichromats. IEEE Trans. Vis. Comput. Graph. 2008, 14, 1747–1754. [Google Scholar] [CrossRef] [PubMed]
  38. Wikipedia. Institutional Review Board—Wikipedia. The Free Encyclopedia. Available online: http://en.wikipedia.org/wiki/Institutional_review_board (accessed on 1 July 2013).
Figure 1. (a) An original image from Ishihara plates and the enhanced images using our re-coloring algorithms for protanomaly, deuteranomaly and tritanomaly. (b) The images generated from a color vision deficiency simulation tool [6]. The results show that our image enhancement technique is able to improve check pattern recognition under various types of color vision deficiency.
Figure 2. The flowchart of the proposed technique. In the pipeline, the images are first transformed to the λ , Y-B, R-G color space for the re-color processing, followed by a transformation back to the original RGB color space.
Figure 3. The cone spectral sensitivity functions at all wavelengths in the visible range. (a) Responding curve [36]. (b) Spectral response functions for the opponent channels [34].
Figure 4. The three types of color vision deficiency simulation using Machado’s approach [35] with sensitivity 0.6 and the matrix Φ_CVD as shown in Table 3.
Figure 5. Eigenvectors of the covariance matrix of the images shown in Figure 4.
Figure 6. An example of Protanopia simulation. (a) is the original image and (b) is the Protanopia simulation result. (c) is the difference of (a,b) computed in the λ, Y-B, R-G color space. (d) is the binarized version of (c) for better illustration.
Figure 7. The pure colors RGB (0, 150, and 255).
Figure 8. An illustration of the color warping range from the green area to the red area.
Figure 9. (a–d) Three types of color vision deficiency simulation using a full-color image with 256³ pixels; the image resolution is 4096 × 4096. (e) The convex hulls of the images in (a–d). All types of CVD simulation cover only a part of the convex hull of the original full-color image.
Figure 10. The test images used to evaluate the re-coloring techniques for color vision deficiency.
Figure 11. Enhancement of protanopia color vision deficiency with different sensitivity values: (a) sensitivity 0.3, (b) sensitivity 0.5, (c) sensitivity 0.7, (d) sensitivity 0.9.
Figure 12. The comparison of deuteranopia simulation of the flower image in Figure 9a. (a) Machado’s CVD simulation. (b) Our re-coloring technique after Machado’s CVD simulation. (c) Brettel’s CVD simulation. (d) Kuhn’s re-coloring technique after Brettel’s CVD simulation. (e) Rasche’s CVD simulation. (f) Rasche’s re-coloring after CVD simulation. (g) Huang’s CVD simulation. (h) Huang’s re-coloring after CVD simulation.
Figure 13. The comparison of tritanopia simulation of the pencil image. (a) The original image. (b) The CVD simulation using Machado’s method. (c) Machado’s CVD simulation on the image processed by the proposed re-coloring technique. (d) The CVD simulation using Brettel’s method. (e) Brettel’s CVD simulation on the image processed by the Kuhn’s re-coloring technique. (f,g) CVD simulation and re-coloring using Huang’s approach.
Figure 14. The comparison of RMS values between our method and Kuhn’s method. (a) The RMS value between Figure 6a and Figure 12a. (b) The RMS value between Figure 12b and Figure 12a. (c) The RMS value between Figure 6a and Figure 12c. (d) The RMS value between Figure 12d and Figure 12c. (e) The RMS value between Figure 13a and Figure 13b. (f) The RMS value between Figure 13c and Figure 13b. (g) The RMS value between Figure 13a and Figure 13d. (h) The RMS value between Figure 13e and Figure 13d.
Figure 15. The results of CVD simulation, re-coloring using the proposed technique and CVD simulation on the re-colored images for some test images in Figure 10 (the first two columns) and Figure 15 (the third column).
Figure 16. The comparison of CVD simulation results processed using our re-coloring technique and Kuhn’s method. The first and fourth rows are two test images and their CVD simulation results. The second and fifth rows are the visualized RMS values, and HDR-VDP evaluation is shown in the third and sixth rows.
Table 1. Cone cells in the human eyes and the response to the light wavelength.

Type    Range         Peak Wavelength
S       400–500 nm    420–440 nm
M       450–630 nm    534–555 nm
L       500–700 nm    564–580 nm
Table 2. Approximate percentage occurrences of various types of color vision deficiency [11].

Type             Male (%)    Female (%)
Protanopia       1.0         0.02
Deuteranopia     1.1         0.01
Tritanopia       0.002       0.001
Protanomaly      1.0         0.02
Deuteranomaly    4.9         0.38
Tritanomaly      ∼0          ∼0
Total            8.002       0.44
Table 3. Machado’s simulation matrices Φ_CVD (sensitivity 0.6).

Protanopia:
$$\begin{bmatrix} 0.385 & 0.769 & -0.154 \\ 0.101 & 0.830 & 0.070 \\ -0.007 & -0.022 & 1.030 \end{bmatrix}$$

Deuteranopia:
$$\begin{bmatrix} 0.499 & 0.675 & -0.174 \\ 0.205 & 0.755 & 0.040 \\ -0.011 & 0.031 & 0.980 \end{bmatrix}$$

Tritanopia:
$$\begin{bmatrix} 1.105 & -0.047 & -0.058 \\ -0.032 & 0.972 & 0.061 \\ 0.001 & 0.318 & 0.681 \end{bmatrix}$$
Table 4. The human test on 55 valid subjects with four different methods. The numbers are shown as percentages. The numbers marked in red indicate the proportion of method Mi in levels A, B, C, D with higher percentages, and the associated methods are more representative of the level.

Level      A                                  B
           M1      M2      M3      M4         M1      M2      M3      M4
F1         5.45    83.64   5.45    5.45       45.45   9.09    9.09    36.36
F2         5.45    90.91   0.00    3.64       3.64    5.45    21.82   69.09
F3         1.82    72.73   14.55   10.90      20.00   3.64    25.45   50.91
F4         1.82    94.55   1.82    1.82       9.09    1.82    25.45   63.64
F5         1.89    91.67   0.00    8.33       0.00    5.45    20.00   74.55
F6         0.00    41.82   12.73   45.45      5.36    8.93    44.64   41.07
F7         49.09   9.09    3.64    38.18      16.36   27.27   30.91   25.45
F8         14.81   20.37   16.67   48.15      12.37   9.09    50.91   27.27
Summary    7.14    64.29   3.57    25.00      11.11   3.70    33.33   51.85

Level      C                                  D
           M1      M2      M3      M4         M1      M2      M3      M4
F1         9.09    5.45    54.55   30.91
F2         0.00    0.00    76.36   23.64      90.91   3.64    3.64    1.82
F3         9.09    14.55   43.64   32.73      69.09   9.09    16.36   5.45
F4         18.18   0.00    49.09   32.73      70.91   3.64    23.64   1.82
F5         14.55   1.82    63.64   20.00      88.89   0.00    7.41    3.70
F6         21.82   50.91   14.55   12.73      75.47   0.00    24.53   0.00
F7         16.36   30.91   32.73   20.00      18.18   32.73   32.73   16.36
F8         41.82   16.00   32.00   20.00      31.48   51.85   5.56    11.11
Summary    9.68    12.90   58.06   19.35      76.00   12.00   8.00    4.00
