Special Issue "Color Image Processing"

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (30 June 2017)

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editor

Guest Editor
Prof. Dr. Edoardo Provenzi

IMB Institut de Mathématiques de Bordeaux, UMR 5251, Université de Bordeaux, 351 cours de la Libération, 33405 Talence, France
Interests: color image processing; variational principles; geometry of color spaces; high dynamic range imaging; statistics of natural images; contrast measures; multispectral imaging; color in art and science

Special Issue Information

Dear Colleagues,  

Color is one of the most important and fascinating attributes of the natural environment. Research on color is becoming more and more prevalent in image processing and computer vision, even though many models are still designed for grayscale pictures, and their extension to color images is not a trivial task. In fact, the intrinsically multidisciplinary character of color makes it difficult to model, at both the perceptual and the computational or mathematical level.

The intent of this Special Issue is to provide a framework where scientists in several different disciplines related to color can find a place to illustrate their ideas and results.

This Special Issue is primarily focused on the following topics; however, we encourage all submissions related to color in imaging:

  • Computational color vision models
  • Perceptually-inspired color image and video processing
  • Variational and patch-based techniques applied to color images
  • Color data compression and encoding
  • Color image/video indexing and retrieval
  • Color enhancement
  • Color constancy and saliency
  • Color texture
  • Color image and video watermarking
  • Color image/video quality assessment
  • Multispectral imaging
  • Geometry of color spaces
  • Interactions between color science and other disciplines such as art, medicine, psychology, and so on
  • Color imaging and technology for material appearance
  • Color and contrast measures
  • Statistics of natural images in color
  • High dynamic range imaging in color

Prof. Dr. Edoardo Provenzi
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) is waived for well-prepared manuscripts submitted to this issue. Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Color image processing
  • Spatial models of color vision
  • Color in art, psychology, medicine
  • Color imaging and technology for material appearance
  • Geometry of color spaces
  • Color image/video quality assessment
  • High dynamic range imaging

Published Papers (12 papers)


Research


Open Access Article: Analytical Study of Colour Spaces for Plant Pixel Detection
J. Imaging 2018, 4(2), 42; https://doi.org/10.3390/jimaging4020042
Received: 26 September 2017 / Revised: 12 February 2018 / Accepted: 12 February 2018 / Published: 16 February 2018
Abstract
Segmentation of regions of interest is an important pre-processing step in many colour image analysis procedures. Similarly, segmentation of plant objects in digital images is an important pre-processing step for effective phenotyping by image analysis. In this paper, we present results of a statistical analysis to establish the respective abilities of different colour space representations to detect plant pixels and separate them from background pixels. Our hypothesis is that the colour space representation for which the separation of the distributions representing object and background pixels is maximized is the best for the detection of plant pixels. The two pixel classes are modelled by Gaussian Mixture Models (GMMs). In our statistical modelling we make no prior assumptions on the number of Gaussians employed. Instead, a constant-bandwidth mean-shift filter is used to cluster the data, with the number of clusters, and hence the number of Gaussians, being determined automatically. We have analysed the following representative colour spaces: RGB, rgb, HSV, YCbCr and CIE-Lab. We have analysed the colour space features from a two-class variance ratio perspective and compared the results of our model with this metric. The dataset for our empirical study consisted of 378 digital images (and their manual segmentations) of a variety of plant species: Arabidopsis, tobacco, wheat, and rye grass, imaged under different lighting conditions, in either indoor or outdoor environments, and with either controlled or uncontrolled backgrounds. We have found that the best segmentation of plants is obtained in the HSV colour space. This is supported by measures of the Earth Mover's Distance (EMD) between the GMM distributions of plant and background pixels.
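As an aside, the two-class variance ratio used above to rank colour features admits a compact sketch. The toy pixel values and the use of Python's `colorsys` conversion are illustrative assumptions, not the authors' GMM/mean-shift pipeline:

```python
import colorsys
from statistics import mean, pvariance

def variance_ratio(fg, bg):
    """Two-class variance ratio for one colour feature: between-class
    scatter divided by pooled within-class scatter. Larger values
    suggest the feature separates the two classes better."""
    m_fg, m_bg = mean(fg), mean(bg)
    m_all = mean(fg + bg)
    between = len(fg) * (m_fg - m_all) ** 2 + len(bg) * (m_bg - m_all) ** 2
    within = len(fg) * pvariance(fg) + len(bg) * pvariance(bg)
    return between / within

# Toy data: green-ish plant pixels vs. grey-brown soil pixels, RGB in [0, 1].
plant = [(0.20, 0.70, 0.20), (0.25, 0.65, 0.30), (0.15, 0.80, 0.20)]
soil = [(0.50, 0.45, 0.40), (0.55, 0.50, 0.45), (0.60, 0.55, 0.50)]

# Compare the G channel (RGB) against the hue channel (HSV).
g_ratio = variance_ratio([p[1] for p in plant], [s[1] for s in soil])
h_ratio = variance_ratio([colorsys.rgb_to_hsv(*p)[0] for p in plant],
                         [colorsys.rgb_to_hsv(*s)[0] for s in soil])
```

On this toy data the hue feature separates the classes far more cleanly than the green channel, mirroring the paper's finding in favour of HSV.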
(This article belongs to the Special Issue Color Image Processing) Printed Edition available

Open Access Article: Illusion and Illusoriness of Color and Coloration
J. Imaging 2018, 4(2), 30; https://doi.org/10.3390/jimaging4020030
Received: 24 November 2017 / Revised: 27 December 2017 / Accepted: 22 January 2018 / Published: 30 January 2018
Abstract
In this work, through a phenomenological analysis, we studied the perception of chromatic illusion and illusoriness. The necessary condition for an illusion to occur is the discovery of a mismatch/disagreement between the geometrical/physical domain and the phenomenal one. Illusoriness is instead a phenomenal attribute related to a sense of strangeness, deception, singularity, mendacity, and oddity. The main purpose of this work is to study the phenomenology of chromatic illusion vs. illusoriness, which is useful for shedding new light on the no-man’s land between “sensory” and “cognitive” processes that has not been fully explored. Some basic psychological and biological implications for living organisms are deduced.

Open Access Feature Paper: Exemplar-Based Face Colorization Using Image Morphing
J. Imaging 2017, 3(4), 48; https://doi.org/10.3390/jimaging3040048
Received: 30 May 2017 / Revised: 18 September 2017 / Accepted: 19 October 2017 / Published: 31 October 2017
Abstract
Colorization of gray-scale images relies on prior color information. Exemplar-based methods use a color image as the source of such information; the colors of the source image are then transferred to the gray-scale target image. In the literature, this transfer is mainly guided by texture descriptors. Face images usually contain little texture, so that the common approaches frequently fail. In this paper, we propose a new method that takes the geometric structure of the images rather than their texture into account, which makes it more reliable for faces. Our approach is based on image morphing and relies on the YUV color space. First, a correspondence mapping between the luminance Y channel of the color source image and the gray-scale target image is computed. This mapping is based on the time-discrete metamorphosis model suggested by Berkels, Effland and Rumpf. We provide a new finite difference approach for the numerical computation of the mapping. Then, the chrominance U,V channels of the source image are transferred via this correspondence map to the target image. A possible postprocessing step by a variational model is developed to further improve the results. To preserve the contrast, special attention is paid to making the postprocessing unbiased. Our numerical experiments show that our morphing-based approach clearly outperforms state-of-the-art methods.
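The chrominance-transfer step described above can be sketched as follows. The BT.601-style conversion coefficients and the `colorize` helper are assumptions of this sketch; the paper only states that the method relies on the YUV space and a precomputed morphing map:

```python
# BT.601-style RGB <-> YUV conversion (the exact coefficients are an
# assumption here, chosen as a common convention).
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, 0.492 * (b - y), 0.877 * (r - y)

def yuv_to_rgb(y, u, v):
    r = y + v / 0.877
    b = y + u / 0.492
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

def colorize(gray_y, src_rgb):
    """Keep the target pixel's luminance, take U and V from the source
    pixel that the (precomputed) morphing map associates with it."""
    _, u, v = rgb_to_yuv(*src_rgb)
    return yuv_to_rgb(gray_y, u, v)
```

The target pixel keeps its own luminance, so the grayscale structure is untouched; only the chrominance is imported from the morphed source.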

Open Access Article: Histogram-Based Color Transfer for Image Stitching
J. Imaging 2017, 3(3), 38; https://doi.org/10.3390/jimaging3030038
Received: 5 July 2017 / Revised: 5 September 2017 / Accepted: 6 September 2017 / Published: 9 September 2017
Abstract
Color inconsistency often exists between the images to be stitched and reduces the visual quality of the stitching results. Color transfer plays an important role in image stitching: this kind of technique can produce corrected images that are color consistent. This paper presents a color transfer approach via histogram specification and global mapping. The proposed algorithm makes images share the same color style and achieves color consistency. There are four main steps in this algorithm. Firstly, overlapping regions between a reference image and a test image are obtained. Secondly, an exact histogram specification is conducted for the overlapping region in the test image using the histogram of the overlapping region in the reference image. Thirdly, a global mapping function is obtained by minimizing color differences with an iterative method. Lastly, the global mapping function is applied to the whole test image to produce a color-corrected image. Both synthetic and real datasets are tested. The experiments demonstrate that the proposed algorithm outperforms the compared methods both quantitatively and qualitatively.
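The second step, exact histogram specification on the overlap, reduces to a rank-based reassignment when the two regions contain the same number of pixels. A minimal sketch, using flattened single-channel pixel lists as a simplifying assumption:

```python
def exact_histogram_specification(test, ref):
    """Map the test values so their distribution exactly matches the
    reference: the k-th smallest test pixel receives the k-th smallest
    reference value (both regions assumed equal in size)."""
    order = sorted(range(len(test)), key=lambda i: test[i])
    ref_sorted = sorted(ref)
    out = [0] * len(test)
    for rank, idx in enumerate(order):
        out[idx] = ref_sorted[rank]
    return out
```

Ties are broken by index here; the exact specification in the literature uses auxiliary features to order equal-valued pixels, which this sketch omits.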

Open Access Feature Paper: Improved Color Mapping Methods for Multiband Nighttime Image Fusion
J. Imaging 2017, 3(3), 36; https://doi.org/10.3390/jimaging3030036
Received: 30 June 2017 / Revised: 18 August 2017 / Accepted: 24 August 2017 / Published: 28 August 2017
Abstract
Previously, we presented two color mapping methods for the application of daytime colors to fused nighttime (e.g., intensified and longwave infrared or thermal (LWIR)) imagery. These mappings not only impart a natural daylight color appearance to multiband nighttime images but also enhance their contrast and the visibility of otherwise obscured details. As a result, it has been shown that these colorizing methods lead to an increased ease of interpretation, better discrimination and identification of materials, faster reaction times and ultimately improved situational awareness. A crucial step in the proposed coloring process is the choice of a suitable color mapping scheme. When both daytime color images and multiband sensor images of the same scene are available, the color mapping can be derived from matching image samples (i.e., by relating color values to sensor output signal intensities in a sample-based approach). When no exact matching reference images are available, the color transformation can be derived from the first-order statistical properties of the reference image and the multiband sensor image. In the current study, we investigated new color fusion schemes that combine the advantages of both methods (i.e., the efficiency and color constancy of the sample-based method with the ability of the statistical method to use the image of a different but somewhat similar scene as a reference image), using the correspondence between multiband sensor values and daytime colors (sample-based method) in a smooth transformation (statistical method). We designed and evaluated three new fusion schemes that focus on (i) a closer match with the daytime luminances; (ii) an improved saliency of hot targets; and (iii) an improved discriminability of materials. We performed both qualitative and quantitative analyses to assess the weak and strong points of all methods.

Open Access Article: Color Consistency and Local Contrast Enhancement for a Mobile Image-Based Change Detection System
J. Imaging 2017, 3(3), 35; https://doi.org/10.3390/jimaging3030035
Received: 30 June 2017 / Revised: 31 July 2017 / Accepted: 8 August 2017 / Published: 23 August 2017
Abstract
Mobile change detection systems allow for acquiring image sequences on a route of interest at different time points and display changes on a monitor. For the display of color images, a processing approach is required to enhance details, to reduce lightness/color inconsistencies along each image sequence as well as between corresponding image sequences due to the different illumination conditions, and to determine colors with natural appearance. We have developed a real-time local/global color processing approach for local contrast enhancement and lightness/color consistency, which processes images of the different sequences independently. Our approach combines the center/surround Retinex model and the Gray World hypothesis using a nonlinear color processing function. We propose an extended gain/offset scheme for Retinex to reduce the halo effect on shadow boundaries, and we employ stacked integral images (SII) for efficient Gaussian convolution. By applying the gain/offset function before the color processing function, we avoid color inversion issues, compared to the original scheme. Our combined Retinex/Gray World approach has been successfully applied to pairs of image sequences acquired on outdoor routes for change detection, and an experimental comparison with previous Retinex-based approaches has been carried out.
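The Gray World half of the combined approach can be sketched in a few lines. The toy pixel list is illustrative; the Retinex surround, the gain/offset scheme, and the SII-based Gaussian convolution of the actual system are omitted here:

```python
def gray_world_gains(pixels):
    """Gray World hypothesis: the scene average should be achromatic,
    so each channel is scaled so that its mean matches the overall mean."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    return [gray / m for m in means]

# Toy sequence frame with a reddish cast (R mean twice the B mean).
pixels = [(0.6, 0.4, 0.3), (0.8, 0.4, 0.4)]
gains = gray_world_gains(pixels)
balanced = [tuple(c * k for c, k in zip(p, gains)) for p in pixels]
```

After applying the gains, the three channel means coincide, which is exactly the consistency property the hypothesis enforces across sequences.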

Open Access Feature Paper: Image Fragile Watermarking through Quaternion Linear Transform in Secret Space
J. Imaging 2017, 3(3), 34; https://doi.org/10.3390/jimaging3030034
Received: 14 June 2017 / Revised: 4 August 2017 / Accepted: 5 August 2017 / Published: 11 August 2017
Abstract
In this paper, we apply the quaternion framework for color images to a fragile watermarking algorithm with the objective of multimedia integrity protection (Quaternion Karhunen-Loève Transform Fragile Watermarking (QKLT-FW)). The use of quaternions to represent pixels allows the color information to be considered in a holistic and integrated fashion. We stress that, by taking advantage of the host image's quaternion representation, we extract complex features that are able to improve the embedding and verification of fragile watermarks. The algorithm, based on the Quaternion Karhunen-Loève Transform (QKLT), embeds a binary watermark into some QKLT coefficients representing a host image in a secret frequency space: the QKLT basis images are computed from a secret color image used as a symmetric key. A computational intelligence technique (i.e., a genetic algorithm) is employed to modify the host image pixels in such a way that the watermark is contained in the protected image. The sensitivity to image modifications is then tested, showing very good performance.
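The quaternion representation underlying QKLT treats each pixel as a pure quaternion with the RGB values in the imaginary part. A minimal sketch of the Hamilton product on which such transforms are built (the pixel value is an illustrative assumption, and the QKLT itself is not shown):

```python
def qmul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

# A color pixel as a pure quaternion: w = 0, (x, y, z) = (R, G, B).
pixel = (0.0, 0.8, 0.4, 0.2)
```

Because the product is non-commutative, quaternion linear transforms mix the three channels jointly instead of processing them independently, which is the "holistic" treatment the abstract refers to.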

Open Access Article: Improving CNN-Based Texture Classification by Color Balancing
J. Imaging 2017, 3(3), 33; https://doi.org/10.3390/jimaging3030033
Received: 29 June 2017 / Revised: 17 July 2017 / Accepted: 21 July 2017 / Published: 27 July 2017
Abstract
Texture classification has a long history in computer vision. In the last decade, the strong affirmation of deep learning techniques in general, and of convolutional neural networks (CNNs) in particular, has allowed for a drastic improvement in the accuracy of texture recognition systems. However, their performance may be dampened by the fact that texture images are often characterized by color distributions that are unusual with respect to those seen by the networks during their training. In this paper, we show how suitable color balancing models allow for a significant improvement in texture recognition accuracy for many CNN architectures. The feasibility of our approach is demonstrated by the experimental results obtained on the RawFooT dataset, which includes texture images acquired under several different lighting conditions.

Open Access Article: Robust Parameter Design of Derivative Optimization Methods for Image Acquisition Using a Color Mixer
J. Imaging 2017, 3(3), 31; https://doi.org/10.3390/jimaging3030031
Received: 27 May 2017 / Revised: 3 July 2017 / Accepted: 15 July 2017 / Published: 21 July 2017
Abstract
A tuning method was proposed for automatic lighting (auto-lighting) algorithms derived from the steepest descent and conjugate gradient methods. The auto-lighting algorithms maximize the image quality of industrial machine vision by adjusting multiple-color light-emitting diodes (LEDs)—usually called color mixers. Searching for the driving condition that achieves maximum sharpness influences image quality. In most inspection systems, a single-color light source is used, and an equal step search (ESS) is employed to determine the maximum image quality. However, in the case of multiple-color LEDs, the number of iterations becomes large, which is time-consuming. Hence, the steepest descent (STD) and conjugate gradient (CJG) methods were applied to reduce the searching time for achieving maximum image quality. The relationship between lighting and image quality is multi-dimensional, non-linear, and difficult to describe using mathematical equations. Hence, the Taguchi method is, in practice, the only method that can determine the parameters of auto-lighting algorithms. The algorithm parameters were determined using orthogonal arrays, and the candidate parameters were selected so as to increase the sharpness and decrease the number of iterations of the algorithm, on which the searching time depends. The contribution of the parameters was investigated using ANOVA. After conducting retests using the selected parameters, the image quality was almost the same as that obtained with the best-case parameters, with a smaller number of iterations.
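A derivative-based search of the kind described (maximizing a sharpness score over LED drive levels) can be sketched with finite-difference gradients. The quadratic sharpness surrogate, the step size, and the iteration count are assumptions of this sketch, and the Taguchi-based parameter selection is not shown:

```python
def steepest_ascent(f, x, step=0.1, iters=50, eps=1e-3):
    """Maximize f over a list of drive levels x by moving along a
    forward-difference estimate of the gradient at each iteration."""
    for _ in range(iters):
        fx = f(x)
        grad = [(f(x[:i] + [xi + eps] + x[i + 1:]) - fx) / eps
                for i, xi in enumerate(x)]
        x = [xi + step * g for xi, g in zip(x, grad)]
    return x

# Surrogate "sharpness" surface peaking at drive levels (0.3, 0.7).
sharpness = lambda p: -(p[0] - 0.3) ** 2 - (p[1] - 0.7) ** 2
best = steepest_ascent(sharpness, [0.0, 0.0])
```

In the real system each evaluation of `f` costs one image capture, which is why reducing the iteration count matters so much compared with an equal step search.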

Open Access Article: Automatic Recognition of Speed Limits on Speed-Limit Signs by Using Machine Learning
J. Imaging 2017, 3(3), 25; https://doi.org/10.3390/jimaging3030025
Received: 30 May 2017 / Revised: 30 June 2017 / Accepted: 1 July 2017 / Published: 5 July 2017
Abstract
This study describes a method for using a camera to automatically recognize the speed limits on speed-limit signs. The method consists of three processes: first, (1) detecting the speed-limit signs with a machine learning method that uses local binary pattern (LBP) feature quantities as information helpful for identification; then, (2) an image processing method using the Hue, Saturation and Value (HSV) color space to extract the speed-limit numbers on the identified signs; and finally, (3) recognition of the extracted numbers using a neural network. The method of traffic sign recognition previously proposed by the author consisted of extracting geometric shapes from the sign and recognizing them based on their aspect ratios. That method cannot be used for the numbers on speed-limit signs because the numbers all have the same aspect ratios. In a study that proposed recognition of speed-limit numbers using an eigenspace method, only color information was used to detect speed-limit signs in images of scenery. Because that method used only color information for detection, precise color information settings and processing to exclude everything other than the signs are necessary in an environment where many colors similar to those of the speed-limit signs exist, so further study of the method for sign detection is needed. This study focuses on the following three points: (1) making it possible to detect only the speed-limit sign in an image of scenery using a single process focusing on the local patterns of speed-limit signs; (2) making it possible to separate and extract the two-digit numbers on a speed-limit sign in cases where they are incorrectly extracted as a single area due to the lighting environment; and (3) making it possible to identify the numbers with a neural network by focusing on three feature quantities. This study also used the proposed method with still images in order to validate it.
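Step (1) relies on LBP feature quantities. The basic 8-neighbour LBP code for a 3x3 patch can be sketched as follows; the clockwise neighbour ordering and the >= comparison convention are common choices, assumed here rather than taken from the paper:

```python
def lbp_code(patch):
    """Basic 8-neighbour local binary pattern for a 3x3 patch: each
    neighbour contributes one bit, set when it is >= the centre value."""
    center = patch[1][1]
    # Clockwise starting from the top-left neighbour.
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= center:
            code |= 1 << bit
    return code
```

Histograms of these 8-bit codes over a detection window form the feature vector that the sign classifier is trained on.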

Open Access Article: RGB Color Cube-Based Histogram Specification for Hue-Preserving Color Image Enhancement
J. Imaging 2017, 3(3), 24; https://doi.org/10.3390/jimaging3030024
Received: 1 June 2017 / Revised: 27 June 2017 / Accepted: 27 June 2017 / Published: 1 July 2017
Abstract
A large number of color image enhancement methods are based on methods for grayscale image enhancement, in which the main interest is contrast enhancement. However, since colors have three attributes (hue, saturation and intensity) rather than the single attribute of grayscale values, the naive application of methods for grayscale images to color images often yields unsatisfactory results. Conventional hue-preserving color image enhancement methods utilize histogram equalization (HE) to enhance the contrast. However, they cannot always enhance the saturation simultaneously. In this paper, we propose a histogram specification (HS) method for enhancing the saturation in hue-preserving color image enhancement. The proposed method computes the target histogram for HS on the basis of the geometry of the RGB (red, green and blue) color space, whose shape is a cube with unit side length. Therefore, the proposed method includes no parameters to be set by users. Experimental results show that the proposed method achieves higher color saturation than recent parameter-free methods for hue-preserving color image enhancement. As a result, the proposed method can be used as an alternative to HE in hue-preserving color image enhancement.
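The hue-preserving constraint amounts to scaling all three channels by a common gain whenever the intensity is changed. A minimal sketch; gamut handling, which the paper's geometric construction addresses via the unit RGB cube, is deliberately omitted here:

```python
def set_intensity(rgb, target):
    """Rescale a pixel so its intensity (the mean of R, G and B)
    becomes `target` while hue is preserved: all channels share one
    multiplicative gain, so their ratios are unchanged."""
    gain = target / (sum(rgb) / 3.0)
    return tuple(c * gain for c in rgb)
```

Because channel ratios are untouched, hue is invariant under this operation; a full method must additionally keep the scaled pixel inside the RGB cube.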

Review


Open Access Review: The Academy Color Encoding System (ACES): A Professional Color-Management Framework for Production, Post-Production and Archival of Still and Motion Pictures
J. Imaging 2017, 3(4), 40; https://doi.org/10.3390/jimaging3040040
Received: 24 July 2017 / Revised: 12 September 2017 / Accepted: 13 September 2017 / Published: 21 September 2017
Abstract
The Academy of Motion Picture Arts and Sciences has been pivotal in the inception, design and later adoption of a vendor-agnostic and open framework for color management, the Academy Color Encoding System (ACES), targeting theatrical, TV and animation features, but also still photography and image preservation at large. For this reason, the Academy gathered an interdisciplinary group of scientists, technologists, and creatives to contribute to it, so that it is scientifically sound and technically advantageous in solving practical and interoperability problems in the current film production, post-production and visual-effects (VFX) ecosystem, all while preserving and future-proofing the cinematographers’ and artists’ creative intent as its main objective. In this paper, a review of the ACES technical specifications is provided, together with the current status of the project and a recent use case, namely that of the first Italian production embracing an end-to-end ACES pipeline. In addition, new ACES components are introduced, and a discussion is started about possible uses for the long-term preservation of color imaging in video-content heritage.
