Article

Colored Texture Analysis Fuzzy Entropy Methods with a Dermoscopic Application

by Mirvana Hilal, Andreia S. Gaudêncio, Pedro G. Vaz, João Cardoso and Anne Humeau-Heurtier
1 Univ Angers, LARIS, SFR MATHSTIC, F-49000 Angers, France
2 LIBPhys, Department of Physics, University of Coimbra, P-3004-516 Coimbra, Portugal
* Author to whom correspondence should be addressed.
Entropy 2022, 24(6), 831; https://doi.org/10.3390/e24060831
Submission received: 8 May 2022 / Revised: 9 June 2022 / Accepted: 11 June 2022 / Published: 15 June 2022
(This article belongs to the Special Issue Entropy Algorithms for the Analysis of Biomedical Signals)

Abstract: Texture analysis is a subject of intensive focus in research due to its significant role in image processing. However, few studies focus on colored texture analysis, and even fewer use information theory concepts. Entropy measures have proven competent for gray scale images. However, to the best of our knowledge, there are no well-established entropy methods that deal with colored images yet. Therefore, we propose the recent colored bidimensional fuzzy entropy measure, $FuzEnC_{2D}$, and introduce its new multi-channel approaches, $FuzEnV_{2D}$ and $FuzEnM_{2D}$, for the analysis of colored images. We investigate their sensitivity to parameters and their ability to identify images with different irregularity degrees, and therefore different textures. Moreover, we study their behavior with colored Brodatz images in different color spaces. After verifying the results with test images, we employ the three methods for analyzing dermoscopic images of malignant melanoma and benign melanocytic nevi. $FuzEnC_{2D}$, $FuzEnV_{2D}$, and $FuzEnM_{2D}$ show a good ability to differentiate between the two pigmented skin lesions, which are similar in appearance. The results outperform those of a well-known texture analysis measure. Our work provides the first entropy measure studying colored images using both single and multi-channel approaches.

1. Introduction

Texture features are of the utmost importance in segmentation, classification, and synthesis of images, to cite only a few image processing tasks. However, no precise definition of texture has been adopted yet. Texture is often referred to as the visual patterns appearing in the image. Several algorithms have been proposed for texture feature extraction in recent years, and this research area is still the subject of many investigations [1,2,3,4,5,6,7,8,9,10]. Recently, seven classes were proposed to organize texture feature extraction methods [1]: statistical approaches (among which we find the co-occurrence matrices), structural approaches, transform-based approaches (Fourier transform-based approaches, among others), model-based approaches (such as the random field models), graph-based approaches (such as the local graph structures), learning-based approaches, and entropy-based approaches. The latter two classes (learning-based and entropy-based approaches) are the most recent. Several studies have shown that entropy-based measures are promising for texture analysis [11,12,13,14,15,16,17,18]. However, these studies are still in their early stages. Even though they have the great advantage of relying on reliable unidimensional (1D) entropy-based measures issued from the information theory field, most of them have the drawback of being designed for gray scale images only.
Besides texture, color is essential not only for human perception of images but also for digital image processing [19,20,21,22,23,24,25]. Unlike intensity, which is translated as scalar gray values for a gray scale image, color is a vectorial feature assigned to each pixel of a colored image [19]. In contrast to gray scale images, which can be handled in a straightforward manner, colored images can be analyzed in several possible ways. This depends on many factors, such as the need to analyze texture or color, separately or combined, directly from the image or through a transformation [19,24,25,26]. Only a few studies have been performed on colored texture analysis, and most of them were achieved by adapting gray scale texture analysis methods [13,18,27,28]. Nevertheless, color and texture are probably the most important components of visual features. Many biomedical images are color-textured: dermoscopy images, histological images, endoscopy data, and fundus and retinal images, among others.
According to the World Health Organization, one in every three diagnosed cancer cases is a skin cancer, and the incidence rate has been increasing over recent years. Dermoscopy, or epiluminescence microscopy (ELM), is a well-known non-invasive imaging technique used for skin cancer diagnosis and the one on which most research studies are conducted. However, visual diagnosis alone might be misleading and subjective, even when performed by experts. Thus, dermoscopy image analysis (DIA) using computer-aided diagnosis (CAD) systems is essential to help medical doctors. Several studies have proposed computer-extracted texture features for the diagnosis of cutaneous lesions, specifically for the most aggressive type, melanoma [29,30,31]. Melanoma is metastatic; thus, its early diagnosis and excision greatly increase the survival rate. Some DIA methods focus only on the dermoscopic image structure/patterns [32,33], others rely on colors [34,35,36], and some consider both [37]; for more details, please refer to [29,30,31]. Nevertheless, most studies propose learning-based approaches, and only a few have suggested entropy-based measures so far.
In this paper, we therefore propose novel bidimensional entropy-based measures dedicated to color images in two approaches: a single-channel approach, $FuzEnC_{2D}$, and multi-channel approaches, $FuzEnV_{2D}$ and $FuzEnM_{2D}$. First, we test the abilities of our proposed measures in colored texture analysis on different kinds of images. After that, we illustrate their application in the biomedical field by processing dermoscopic images of two different kinds of common pigmented lesions: melanoma and benign melanocytic nevi. Furthermore, our results are compared to one of the most well-known texture feature extraction methods, the co-occurrence matrices.
The rest of the paper is organized as follows: Section 2 introduces the proposed bidimensional colored fuzzy entropy measures; Section 3 presents the validation images used; Section 4 reports the experimental results and their analysis; finally, Section 5 concludes the paper.

2. Colored Bidimensional Fuzzy Entropy

We recently developed the bidimensional fuzzy entropy, $FuzEn_{2D}$, and its multiscale extension, $MSF_{2D}$ [17,18,38]. These entropy measures revealed interesting results for some dermoscopic images but were limited to gray scale images. Based on $FuzEn_{2D}$, we propose herein approaches to deal with colored images: the single-channel bidimensional fuzzy entropy, $FuzEnC_{2D}$ [28], which considers the characteristics of each channel independently, and the multi-channel bidimensional fuzzy entropy measures, $FuzEnV_{2D}$ and $FuzEnM_{2D}$, which take into consideration the inter-channel characteristics. In this paper, we limit our study to three color channels. However, the extension to a higher number of channels would be straightforward. For a colored image U of width W, height H, and K channels ($W \times H \times K$ pixels), the following initial parameters are first set: tolerance level r, fuzzy power n, and window size m (see below). The algorithms to compute $FuzEnC_{2D}$, $FuzEnV_{2D}$, and $FuzEnM_{2D}$ are presented below.

2.1. $FuzEnC_{2D}$: Single-Channel Approach

The colored image U is separated into its color channels $K_1$, $K_2$, and $K_3$, giving $U_{K_1}$, $U_{K_2}$, and $U_{K_3}$, respectively. For each channel composed of $u_K(i,j)$ elements, $X_{i,j,K}^m$ is designated as the m-length square window:

$$X_{i,j,K}^m = \begin{pmatrix} u_K(i,j) & \cdots & u_K(i,j+m-1) \\ u_K(i+1,j) & \cdots & u_K(i+1,j+m-1) \\ \vdots & \ddots & \vdots \\ u_K(i+m-1,j) & \cdots & u_K(i+m-1,j+m-1) \end{pmatrix},$$

with $K = K_1$, $K_2$, or $K_3$, and the indices defined such that $1 \leq i \leq H-m$ and $1 \leq j \leq W-m$. The $(m+1)$-length square window, $X_{i,j,K}^{m+1}$, is defined in the same way. In each of $U_{K_1}$, $U_{K_2}$, and $U_{K_3}$, the total number of defined square windows for both the m and $m+1$ sizes is $N^m = (W-m)(H-m)$.
Based on the original fuzzy entropy definition, $FuzEn_{1D}$ [39], a distance function $d_{ij,ab,K}^m$ between $X_{i,j,K}^m$ and its neighboring windows $X_{a,b,K}^m$ is defined as the maximum absolute difference of their corresponding scalar components:

$$d_{ij,ab,K}^m = d[X_{i,j,K}^m, X_{a,b,K}^m] = \max_{s,t \in (0,\,m-1)} \left( \left| u_K(i+s,\,j+t) - u_K(a+s,\,b+t) \right| \right),$$
with a ranging from 1 to $H-m$ and b ranging from 1 to $W-m$. The similarity degree $D_{ij,ab,K}^m$ of $X_{i,j,K}^m$ with its neighboring patterns $X_{a,b,K}^m$ is defined by a continuous fuzzy function $\mu(d_{ij,ab,K}^m, n, r)$:

$$D_{ij,ab,K}^m(n,r) = \mu(d_{ij,ab,K}^m, n, r) = \exp\left( -(d_{ij,ab,K}^m)^n / r \right).$$
Afterwards, the similarity degrees of each $X_{i,j,K}^m$ are averaged to obtain $\Phi_{i,j,K}^m(n,r)$, which is then used to construct:

$$\Phi_K^m(n,r) = \frac{1}{N^m} \sum_{i=1,\,j=1}^{i=H-m,\,j=W-m} \Phi_{i,j,K}^m(n,r).$$
The same is done for the $m+1$ patterns to obtain $\Phi_K^{m+1}(n,r)$. Consequently, $FuzEn_{2D}$ of each channel is calculated as:

$$FuzEnC_{K,2D}(m,n,r,U_K) = \ln \frac{\Phi_K^m(n,r)}{\Phi_K^{m+1}(n,r)}.$$
Finally, $FuzEnC_{2D}$ is defined in each channel as the natural logarithm of the conditional probability that patterns with $m \times m$ similar pixels remain similar for the next $(m+1) \times (m+1)$ pixels in that channel:

$$FuzEnC_{2D}(m,n,r,U) = \left[ FuzEnC_{K_1,2D},\ FuzEnC_{K_2,2D},\ FuzEnC_{K_3,2D} \right].$$
This single-channel approach treats each channel independently. It has the advantage of allowing us to selectively study certain channels, which is of special importance for images in different color spaces, whose channels may be of different natures (intensity, color, and texture). In our study, we used $n = 2$; the similarity degree is thus expressed by a Gaussian function, $\exp(-(d_{ij,ab,K}^m)^2/r)$. For better illustration, Figure 1 shows an example of $FuzEnC_{2D}$ for an RGB color space image with an embedding dimension of m = [2, 2], i.e., $m \times m$ pixels for each channel. The illustration uses the RGB channels as an example, but the same applies to other color spaces.
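For illustration, the following is a minimal Python sketch of $FuzEnC_{2D}$ under the definitions above, assuming a normalized floating-point image. The function names are ours and this is not the authors' MATLAB implementation; the brute-force pairwise comparison is only meant for small test images, as it is quadratic in the number of windows.

```python
import numpy as np

def fuzen2d(channel, m=2, n=2, r=0.15):
    """Bidimensional fuzzy entropy of a single channel (H x W array)."""
    H, W = channel.shape

    def phi(win):
        # All win x win windows, restricted to the (H - m)(W - m) top-left
        # positions so that the m and m + 1 sizes share the same N^m.
        view = np.lib.stride_tricks.sliding_window_view(channel, (win, win))
        patches = view[: H - m, : W - m].reshape(-1, win * win)
        N = patches.shape[0]
        # Maximum absolute difference (Chebyshev distance) between all pairs.
        d = np.abs(patches[:, None, :] - patches[None, :, :]).max(axis=2)
        sim = np.exp(-(d ** n) / r)       # fuzzy similarity, Gaussian for n = 2
        np.fill_diagonal(sim, 0.0)        # exclude self-comparisons
        return sim.sum() / (N * (N - 1))  # average similarity degree Phi

    return np.log(phi(m) / phi(m + 1))

def fuzenc2d(image, m=2, n=2, r=0.15):
    """FuzEnC_2D: one entropy value per channel of an H x W x K image."""
    return [fuzen2d(image[:, :, k], m, n, r) for k in range(image.shape[2])]
```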

2.2. $FuzEnV_{2D}$: Multi-Channel Approach

For an image U composed of $u_{i,j,k}$ pixels, $X_{i,j,k}^m$ is defined as the m-length cube. $X_{i,j,k}^m$ represents the group of pixels in the image U with indices from line i to $i+m-1$, column j to $j+m-1$, and depth k to $k+m-1$ among the K channels (k: depth index):

$$X_{i,j,k}^m = \left\{ u(i+e,\,j+f,\,k+g) : e, f, g \in (0,\,m-1) \right\}.$$
Similarly, $X_{i,j,k}^{m+1}$ is defined as the $(m+1)$-length cube. Let $N^m = (W-m)(H-m)(K-m)$ be the total number of cubes that can be generated from U for both the m and $m+1$ sizes. For $X_{i,j,k}^m$ and its neighboring cubes $X_{a,b,c}^m$, the distance function $d_{ijk,abc}^m$ between them is defined as the maximum absolute difference of their corresponding scalar components, where a, b, and c range from 1 to $H-m$, $W-m$, and $K-m$, respectively. With $(a,b,c) \neq (i,j,k)$, the distance function is given by:

$$d_{ijk,abc}^m = d[X_{i,j,k}^m, X_{a,b,c}^m] = \max_{e,f,g \in (0,\,m-1)} \left( \left| u(i+e,\,j+f,\,k+g) - u(a+e,\,b+f,\,c+g) \right| \right).$$
The similarity degree $D_{ijk,abc}^m$ of $X_{i,j,k}^m$ with its neighboring cubes $X_{a,b,c}^m$ is defined by a fuzzy function $\mu(d_{ijk,abc}^m, n, r)$:

$$D_{ijk,abc}^m(n,r) = \mu(d_{ijk,abc}^m, n, r) = \exp\left( -(d_{ijk,abc}^m)^n / r \right).$$
Afterwards, the similarity degrees of each cube are averaged to obtain $\Phi_{i,j,k}^m(n,r)$, which is then used to construct:

$$\Phi^m(n,r) = \frac{1}{N^m} \sum_{i=1,\,j=1,\,k=1}^{i=H-m,\,j=W-m,\,k=K-m} \Phi_{i,j,k}^m(n,r).$$
The same is done for the $m+1$ cubes to obtain $\Phi^{m+1}(n,r)$. Finally, the multi-channel bidimensional fuzzy entropy of the colored image U is defined as the natural logarithm of the conditional probability that cubes similar in their $m \times m \times m$ pixels remain similar for the next $(m+1) \times (m+1) \times (m+1)$ pixels:

$$FuzEnV_{2D}(m,n,r,U) = \ln \frac{\Phi^m(n,r)}{\Phi^{m+1}(n,r)}.$$
The multi-channel approach has the advantage of extracting inter-channel features. However, we limit our study herein to 3-channel colored images. Thus, the embedding dimension m can only be 1 or 2, so that the $m+1$ calculations do not exceed the maximum possible $3 \times 3 \times 3$ pixel cubes. In general, for K channels the m-value can only be chosen between 1 and $K-1$. Herein, n is taken to be 2 and r is chosen within the range suggested in previous studies. For better illustration, Figure 2 shows an example of $FuzEnV_{2D}$ for an RGB color space image with an embedding dimension of m = [2, 2, 2].
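Under the same assumptions as the previous sketch, a minimal Python sketch of $FuzEnV_{2D}$ follows; the only changes are the cubic patterns and the extra sliding dimension along the channels.

```python
import numpy as np

def fuzenv2d(image, m=2, n=2, r=0.15):
    """FuzEnV_2D of an H x W x K image using m x m x m cubes (m < K)."""
    H, W, K = image.shape
    assert m < K, "the cube size m must stay below the number of channels"

    def phi(win):
        # win x win x win cubes, kept at the same (H-m)(W-m)(K-m)
        # positions for both cube sizes.
        view = np.lib.stride_tricks.sliding_window_view(image, (win, win, win))
        cubes = view[: H - m, : W - m, : K - m].reshape(-1, win ** 3)
        N = cubes.shape[0]
        d = np.abs(cubes[:, None, :] - cubes[None, :, :]).max(axis=2)
        sim = np.exp(-(d ** n) / r)   # fuzzy similarity of cube pairs
        np.fill_diagonal(sim, 0.0)    # exclude self-comparisons
        return sim.sum() / (N * (N - 1))

    return np.log(phi(m) / phi(m + 1))
```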

2.3. $FuzEnM_{2D}$: Modified Multi-Channel Approach

Since the $FuzEnV_{2D}$ embedding dimension is limited to $m = 1$ and $m = 2$ for this trichromatic study ($K = 3$), we introduce herein a modified colored multi-channel approach that can take any m value. This method is similar to $FuzEnV_{2D}$, except that the embedding pattern is a cuboid of $m \times m \times K$ voxels for $FuzEnM_{2D}$. Since the third dimension of the template always spans all the channels, the in-plane size m is no longer constrained by the number of color channels in the study.
For an image U with $K = 3$ color channels, composed of $u_{i,j,k}$ voxels, $X_{i,j,k}^m$ is defined as the $m \times m \times 3$ cuboid. $X_{i,j,k}^m$ represents the group of voxels in the image U with indices from line i to $i+m-1$, column j to $j+m-1$, and the full depth of K channels (k: depth index). Similarly, $X_{i,j,k}^{m+1}$ is defined as the $(m+1) \times (m+1) \times 3$ cuboid. Let $N^m = (W-m)(H-m)$ be the total number of cuboids that can be generated from U for both the m and $m+1$ sizes. Here, the sizes m and $m+1$ stand for [m, m, 3] and [$m+1$, $m+1$, 3], corresponding to $m \times m \times 3$ and $(m+1) \times (m+1) \times 3$ voxels, respectively.
For $X_{i,j,k}^m$ and its neighboring cuboids $X_{a,b,c}^m$, the distance function $d_{ijk,abc}^m$ between them is defined as the maximum absolute difference of their corresponding scalar components, where a and b range from 1 to $H-m$ and $W-m$, respectively, whereas c is 1. With $(a,b,c) \neq (i,j,k)$, the distance function is given by:

$$d_{ijk,abc}^m = d[X_{i,j,k}^m, X_{a,b,c}^m] = \max_{\substack{e,f \in (0,\,m-1) \\ g \in (0,\,2)}} \left( \left| u(i+e,\,j+f,\,k+g) - u(a+e,\,b+f,\,c+g) \right| \right).$$
The similarity degree $D_{ijk,abc}^m$ of $X_{i,j,k}^m$ with its neighboring cuboids $X_{a,b,c}^m$ is defined by a fuzzy function $\mu(d_{ijk,abc}^m, n, r)$:

$$D_{ijk,abc}^m(n,r) = \mu(d_{ijk,abc}^m, n, r) = \exp\left( -(d_{ijk,abc}^m)^n / r \right).$$
Afterwards, the similarity degrees of each cuboid are averaged to obtain $\Phi_{i,j,k}^m(n,r)$, which is then used to construct:

$$\Phi^m(n,r) = \frac{1}{N^m} \sum_{i=1,\,j=1,\,k=1}^{i=H-m,\,j=W-m,\,k=K} \Phi_{i,j,k}^m(n,r).$$
The same is done for the $(m+1) \times (m+1) \times 3$ cuboids to obtain $\Phi^{m+1}(n,r)$. Finally, the modified multi-channel bidimensional fuzzy entropy of the colored image U is defined as the natural logarithm of the conditional probability that cuboids similar in their $m \times m \times 3$ voxels remain similar in their $(m+1) \times (m+1) \times 3$ voxels:

$$FuzEnM_{2D}(m,n,r,U) = \ln \frac{\Phi^m(n,r)}{\Phi^{m+1}(n,r)}.$$
$FuzEnM_{2D}$ has the advantage of extracting inter-channel features while always considering all the color channels of texture images. As mentioned previously, we restrict our study herein to 3-channel colored images, but the method could be adapted to a higher number of channels as well. Herein, n is taken to be 2 and r is chosen within the range suggested in previous studies. For better illustration, Figure 3 shows an example of $FuzEnM_{2D}$ for an RGB color space image with an embedding dimension of m = [2, 2, 3], i.e., a moving cuboid of $2 \times 2 \times 3$ voxels.
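A corresponding sketch of $FuzEnM_{2D}$ is given below. The cuboid spans the full channel depth, so only the rows and columns slide; again, this is an illustrative re-implementation under the same assumptions, not the authors' code.

```python
import numpy as np

def fuzenm2d(image, m=2, n=2, r=0.15):
    """FuzEnM_2D of an H x W x K image using m x m x K cuboids."""
    H, W, K = image.shape

    def phi(win):
        # Cuboids of win x win x K voxels; the depth axis does not slide.
        view = np.lib.stride_tricks.sliding_window_view(image, (win, win, K))
        cuboids = view[: H - m, : W - m, 0].reshape(-1, win * win * K)
        N = cuboids.shape[0]
        d = np.abs(cuboids[:, None, :] - cuboids[None, :, :]).max(axis=2)
        sim = np.exp(-(d ** n) / r)   # fuzzy similarity of cuboid pairs
        np.fill_diagonal(sim, 0.0)    # exclude self-comparisons
        return sim.sum() / (N * (N - 1))

    return np.log(phi(m) / phi(m + 1))
```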

2.4. Comparing Algorithms

The proposed entropy measures are based on the fuzzy entropy definition [17,39,40], which calculates the similarity degree between corresponding patterns using a continuous fuzzy function. The latter assigns a participation degree to all the compared patterns and quantifies the irregularity of the analyzed data. This information theory concept has been proven reliable for 1D, 2D, and 3D data [17,18,38,39,40]. However, only gray scale data have been investigated to date. It is therefore appealing to analyze colored texture images using the fuzzy entropy concept from both single-channel and multi-channel perspectives.
The major differences between the three proposed algorithms lie in the way the similarity degrees are calculated. For the single-channel approach, $FuzEnC_{2D}$, the image is analyzed channel by channel, and the result is three entropy values, one per channel (see Figure 1). This is a particular advantage when it comes to analyzing and comparing specific channels in different color spaces. On the other hand, the multi-channel approaches, $FuzEnV_{2D}$ and $FuzEnM_{2D}$, deal with all the channels at the same time, i.e., the inter-channel information is taken into account (unlike handling each color channel separately). $FuzEnV_{2D}$ transforms the 2D similarity scanning window into a 3D cubic pattern that studies similarity among the $m \times m \times m$ and $(m+1) \times (m+1) \times (m+1)$ patterns within a colored image. $FuzEnV_{2D}$ showed good results, but for applications in trichromatic color spaces its embedding dimension is limited to $m = 1$ or 2 (see Figure 2). Therefore, in order to investigate similarity degrees with larger embedding dimensions, we present the modified multi-channel approach $FuzEnM_{2D}$ (see Figure 3). $FuzEnC_{2D}$, $FuzEnV_{2D}$, and $FuzEnM_{2D}$ provide colored texture analysis from single-channel and multi-channel perspectives, and the choice of algorithm depends on the intended application. Moreover, the analysis could be extended to multi-spectral images and to color spaces other than the ones discussed in this paper.

3. Validation Tests and Medical Database

In order to validate the proposed colored bidimensional entropy measures, we studied their sensitivity to different parameter values. The algorithms were also tested using images with different degrees of randomness and the colored Brodatz dataset [41]. The images were normalized by subtracting their mean and dividing by their standard deviation, and all the tests were performed using MATLAB. In the following, we describe the elements used for the validation tests and the medical dataset.

3.1. $MIX_{2D}(p)$ Processes

$MIX_{2D}(p)$ [12] is a family of stochastic images governed by the probability of irregularity p, which varies from 0 (totally regular, periodic image) to 1 (totally irregular image). We used $MIX_{2D}(p)$ for the single-channel approach, and $MIX_{3D}(p)$, a volumetric extension of $MIX_{2D}(p)$ proposed in [40], for our multi-channel approaches; a generator sketch is given below.
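The sketch below follows the spirit of the $MIX_{2D}(p)$ definition in [12]: with probability p, each pixel of a deterministic sinusoidal image is replaced by uniform noise. The period and amplitude choices here are illustrative assumptions, not necessarily the exact values used in our tests.

```python
import numpy as np

def mix2d(p, size=256, seed=None):
    """MIX_2D(p): periodic image with a fraction p of pixels randomized."""
    rng = np.random.default_rng(seed)
    i, j = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
    periodic = np.sin(2 * np.pi * i / 12) + np.sin(2 * np.pi * j / 12)
    noise = rng.uniform(periodic.min(), periodic.max(), (size, size))
    z = rng.random((size, size)) < p      # Bernoulli(p) replacement mask
    return np.where(z, noise, periodic)

# Stacking three independent realizations gives a colored test image.
image = np.stack([mix2d(0.5) for _ in range(3)], axis=-1)
```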

3.2. Colored Brodatz Images

For texture validation tests, we used the colored Brodatz texture (CBT) [41,42] images, see Figure 4. CBT presents colored textures with different degrees of visible irregularity. We can notice that, for example, the CBT images (a), (b) and (e) show more regular and periodic repetitive patterns than (c), (f) and (i).

3.3. Color Spaces

Besides the most common trichromatic color space, red, green, blue (RGB), we extend our study by transforming the images into two other color spaces: hue, saturation, value (HSV; hue and saturation: chrominance, value: intensity) and YUV (Y: luminance, U and V: chrominance), to investigate the effect of color space transformations on the $FuzEnC_{2D}$, $FuzEnV_{2D}$, and $FuzEnM_{2D}$ outcomes. In the RGB color space, intensity and color are combined to give the final display, whereas in the HSV and YUV color spaces, intensity and color are separated.
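As an illustration, the YUV transform can be sketched with the BT.601 weights below (an assumption on our part; other YUV variants exist), while the HSV conversion is available as a standard library call.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv  # HSV conversion for arrays in [0, 1]

def rgb_to_yuv(rgb):
    """rgb: H x W x 3 array in [0, 1]; returns the Y, U, V channels."""
    weights = np.array([[ 0.299,  0.587,  0.114],   # Y: luminance
                        [-0.147, -0.289,  0.436],   # U: blue chrominance
                        [ 0.615, -0.515, -0.100]])  # V: red chrominance
    return rgb @ weights.T

# hsv = rgb_to_hsv(rgb); yuv = rgb_to_yuv(rgb)
```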

3.4. Co-Occurrence Matrices

For the application on medical images, we study the effect of different color spaces and compare our results to those obtained with gray level co-occurrence matrices [43], which probably remain the most used texture analysis technique. We employed the co-occurrence matrices of each channel (integrative way) to compare with our single-channel approach, and the extended 3D co-occurrence matrices [44] to compare with our multi-channel approaches. We thus adopted the following procedure:
  • The 2D co-occurrence matrices were created considering 4 orientations (0°, 45°, 90°, and 135°), 4 inter-pixel distances (1, 2, 4, and 8), and 8 gray levels ($N_g = 8$) to be compared with $FuzEnC_{2D}$.
  • The 3D co-occurrence matrices were created considering 13 orientations [44], 4 inter-pixel distances (1, 2, 4, and 8), and 8 gray levels to be compared with $FuzEnV_{2D}$ and $FuzEnM_{2D}$.
Then, we calculated the Haralick features for each co-occurrence matrix (for each orientation and distance), as sketched below for the 2D case. Finally, the average of the features over all matrices was calculated to be compared with the $FuzEnC_{2D}$, $FuzEnV_{2D}$, and $FuzEnM_{2D}$ values. Among the 14 features originally proposed [43], only six are commonly employed by researchers, due to the correlation of the other eight with these six; see Table 1.
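For the 2D case, the pipeline can be sketched with scikit-image's GLCM utilities, assuming an 8-bit channel quantized to 8 gray levels. Note that graycoprops covers only four of the six features in Table 1 (variance and entropy would have to be computed directly from the matrices), and the 3D variant of [44] is not provided by this library.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_2d(channel_u8):
    """Average four Haralick features over the 4 x 4 = 16 GLCMs."""
    q = (channel_u8 // 32).astype(np.uint8)  # quantize 256 levels down to 8
    glcm = graycomatrix(q, distances=[1, 2, 4, 8],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=8, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("energy", "contrast", "correlation", "homogeneity")}
```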

3.5. Medical Images

For our medical application, we used HAM10000 (“Human Against Machine with 10,000 training images”) [45,46]. The dataset is composed of dermoscopic images of pigmented lesions; see an example in Figure 5a. It contains dermoscopic images of melanocytic nevi, melanoma, dermatofibroma, actinic keratoses, basal cell carcinoma, and benign keratosis [45].
As suggested by medical doctors, the most significant comparison is that between melanoma and melanocytic nevi. The goal of the medical application in our study is to differentiate the deadliest type of skin cancer, melanoma, from the benign melanocytic nevi. These two widespread types of pigmented skin lesions are often confused in diagnosis and detection, especially in their early stages. Moreover, early diagnosis and excision can vastly increase patients' survival rates [29,30,31]. Thus, we selected forty melanoma images and forty melanocytic nevi images from the dataset to be processed and compared.

4. Results and Discussion

In this section, we present the results of the validation tests. We start by testing the algorithms' sensitivity to the choice of initial parameters, then we explore their ability to identify increasing irregularity degrees in colored textures. After that, we analyze colored Brodatz texture images in three different color spaces (RGB, YUV, and HSV). Finally, we show the results of $FuzEnC_{2D}$, $FuzEnV_{2D}$, and $FuzEnM_{2D}$ for melanoma and melanocytic nevi dermoscopic images and compare them to those obtained using single-channel and multi-channel co-occurrence matrices.

4.1. Sensitivity to Initial Parameters

To study the sensitivity of our proposed measures to different embedding dimensions m and tolerance levels r, we evaluated a 100 × 100 pixel region of a colored Brodatz image (Figure 4f) using different parameter choices.
  • For $FuzEnC_{2D}$, the embedding dimension m was taken as 1, 2, 3, 4, and 5, and the tolerance level r from 0.06 up to 0.48 (step 0.06). The results are displayed in Figure 6.
  • For $FuzEnV_{2D}$, the embedding dimension m was taken as 1 and 2, since the maximum possible cube volume for $(m+1)$-length cubes is $3 \times 3 \times 3$ pixels (given the 3 color channels). The results are displayed in Figure 7.
  • For $FuzEnM_{2D}$, the embedding dimension m was taken as 1, 2, 3, 4, and 5, and the tolerance level r from 0.06 up to 0.48 (step 0.06). The results are displayed in Figure 8.
We observe that $FuzEnC_{2D}$, $FuzEnV_{2D}$, and $FuzEnM_{2D}$ remain defined for all the chosen initial parameters. Additionally, the algorithms show low variability upon changes in r and m. This illustrates their low sensitivity to r and m, allowing a certain degree of freedom in the choice of initial parameters.

4.2. Detecting Colored Image Irregularity

We generated 256 × 256 pixel $MIX_{2D}(p)$ images in three channels and 256 × 256 × 3 pixel $MIX_{3D}(p)$ images and analyzed them with the single-channel approach ($FuzEnC_{2D}$) and the multi-channel approaches ($FuzEnV_{2D}$ and $FuzEnM_{2D}$), respectively.
  • $FuzEnC_{2D}$: we set $r = 0.15$, $m = 1, 2, 3, 4, 5$, and p = 0 to 1 with a step of 0.1, and repeated the calculation for 10 images each. The results are depicted in Figure 9.
  • $FuzEnV_{2D}$: we set $r = 0.15$, $m = 1$ and 2 (as the maximum possible cube volume for $m+1$ can only be $3 \times 3 \times 3$ pixels), and p = 0 to 1 with a step of 0.1, and repeated the calculation for 10 images each. The results are depicted in Figure 10.
  • $FuzEnM_{2D}$: we set $r = 0.15$, $m = 1, 2, 3$, and 4, and p = 0 to 1 with a step of 0.1, and repeated the calculation for 10 images each. The results are depicted in Figure 11.
The results show that both the single- and multi-channel approaches lead to increasing entropy values with increasing irregularity degree, p. This illustrates their ability to properly quantify increasing irregularity degrees and their consistency upon repetition.

4.3. Studying Texture Images

Nine CBT [41,42] images of 640 × 640 pixels (see Figure 4) were split into 144 sub-images of 50 × 50 pixels each; one possible tiling is sketched below. $FuzEnC_{2D}$, $FuzEnV_{2D}$, and $FuzEnM_{2D}$ were calculated for these sub-images and for a 300 × 300 pixel corner region of each corresponding original CBT image. The parameters r and m were set to 0.15 and 2, respectively. The results for $FuzEnC_{2D}$ and $FuzEnV_{2D}$ are depicted in Figure 12 and Figure 13; results similar to those of $FuzEnV_{2D}$ were found with $FuzEnM_{2D}$. We observe that, especially for the RGB color space, most of the $FuzEnC_{2D}$, $FuzEnV_{2D}$, and $FuzEnM_{2D}$ averages over the sub-images overlap with, or are very similar to, the value obtained for the corresponding image's 300 × 300 pixel region. Moreover, we note their ability to differentiate between CBT images. In the HSV and YUV color spaces, the multi-channel approaches outperform $FuzEnC_{2D}$ (Figure 12) in differentiating the CBT images. We can also observe that, for the RGB color space, the CBT images perceived visually to be of higher color and pattern irregularity (Figure 4c,f,g) obtained higher entropy values than the others, whereas those that appear to have periodic, well-defined, repetitive patterns (Figure 4a,b,e) resulted in lower entropy values for all three measures. This is in accordance with the literature on entropy measures and information theory concepts applied to gray level texture images [12,14,15,16,17,18,38].
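The sketch below assumes non-overlapping 50 × 50 blocks taken from the top-left 600 × 600 region of each 640 × 640 image (12 × 12 = 144 blocks); the exact tiling used in our experiments may differ.

```python
def split_subimages(image, block=50, grid=12):
    """Split an image into grid x grid non-overlapping blocks."""
    crop = image[: block * grid, : block * grid]   # 600 x 600 usable region
    return [crop[r * block:(r + 1) * block, c * block:(c + 1) * block]
            for r in range(grid) for c in range(grid)]
```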

4.4. Medical Image Analysis

We calculated $FuzEnC_{2D}$, $FuzEnV_{2D}$, and $FuzEnM_{2D}$ for 40 melanoma images and 40 melanocytic nevi images from the HAM10000 dataset [45] in the RGB, HSV, and YUV color spaces. In order to determine the region of interest (ROI) of the melanoma and melanocytic nevi images, the lesions were segmented as shown in Figure 5. Then, the central region of 128 × 128 × 3 pixels was selected, see Figure 5d (a sketch of this crop follows below). By adopting this procedure, we ensured that the same number of pixels was processed (equally sized images) and that no region outside the lesion was included. The parameters r and m were set to 0.15 and 2, respectively. The images were normalized by subtracting their mean and dividing by their standard deviation.
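The sketch assumes a binary lesion mask is available from the segmentation step; centering on the mask centroid and the absence of border clamping are simplifying assumptions.

```python
import numpy as np

def central_roi(image, mask, size=128):
    """Crop the central size x size x 3 region of a segmented lesion."""
    ci, cj = np.argwhere(mask).mean(axis=0).astype(int)  # lesion centroid
    h = size // 2
    return image[ci - h:ci + h, cj - h:cj + h, :]
```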
To validate the statistical significance of $FuzEnC_{2D}$, $FuzEnV_{2D}$, and $FuzEnM_{2D}$ in differentiating melanoma from melanocytic nevi images, we used the Mann–Whitney U test. The resulting p-values are presented in Table 2. $FuzEnC_{2D}$ shows statistical significance (for p < 0.05) in differentiating melanoma and melanocytic nevi for all the channels except S and V (of the HSV color space). In addition, using $FuzEnV_{2D}$ and $FuzEnM_{2D}$, melanoma and melanocytic nevi images are identified as statistically different in all three color spaces. Moreover, we calculated Cohen's d [47,48] to further validate the statistical results; see Table 3. Most d values reflect "large", "very large", and "huge" effect sizes, which supports the differentiation ability of our proposed measures.
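With the entropy values of the two groups stored in plain arrays, this comparison can be sketched as follows; SciPy provides the Mann–Whitney U test, and Cohen's d is computed here with a pooled standard deviation.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def compare_groups(melanoma, nevi):
    """Mann-Whitney U p-value and Cohen's d for one entropy feature."""
    p_value = mannwhitneyu(melanoma, nevi).pvalue
    pooled_sd = np.sqrt((np.var(melanoma, ddof=1) + np.var(nevi, ddof=1)) / 2)
    d = abs(np.mean(melanoma) - np.mean(nevi)) / pooled_sd
    return p_value, d
```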
Additionally, we compared the $FuzEnC_{2D}$ results with Haralick features from 2D co-occurrence matrices. $FuzEnC_{2D}$ yields lower p-values than the Haralick features for the G, H, Y, and U channels, and neither method reaches statistical significance for the S channel. We also compared the $FuzEnV_{2D}$ and $FuzEnM_{2D}$ results with Haralick features from 3D co-occurrence matrices; the summaries are shown in Figure 14 and Figure 15, respectively. $FuzEnV_{2D}$ and $FuzEnM_{2D}$ surpassed the Haralick features, as the p-values obtained for both entropy measures are mostly lower than those of the Haralick features. Moreover, with Haralick features, some results do not reach statistical significance (p > 0.05), whereas all three proposed colored entropy measures show clear statistical significance in differentiating melanoma from melanocytic nevi, except for the $FuzEnC_{2D}$ results on the S and V color channels.
In addition to the p-values, the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) can be used as criteria to measure the discrimination ability of our proposed measures. Since the best results (lowest p-values) were obtained for the RGB color space, we further establish the ROC curves for its $FuzEnC_{2D}$, $FuzEnV_{2D}$, and $FuzEnM_{2D}$ results; see Figure 16, Figure 17 and Figure 18, respectively. Moreover, the AUC, sensitivity, specificity, accuracy, and precision are reported for the RGB, HSV, and YUV color spaces in Table 4, Table 5 and Table 6, respectively. The results show that $FuzEnC_{2D}$ reaches high accuracy and AUC values for the R, G, B, H, Y, U, and V channels. In addition, the multi-channel approaches ($FuzEnV_{2D}$ and $FuzEnM_{2D}$) show high accuracy and AUC values for all three color spaces. For all three proposed entropy measures, the best accuracy and AUC values were obtained for the RGB color space.
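The ROC analysis can be sketched with scikit-learn as below, labeling melanoma as 1 and nevi as 0; whether higher entropy indicates melanoma is an assumption here, and the score sign may need flipping per channel.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def roc_analysis(melanoma, nevi):
    """ROC curve and AUC for one entropy feature over the two groups."""
    scores = np.concatenate([melanoma, nevi])
    labels = np.concatenate([np.ones(len(melanoma)), np.zeros(len(nevi))])
    fpr, tpr, thresholds = roc_curve(labels, scores)
    return fpr, tpr, roc_auc_score(labels, scores)
```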
Finally, the three entropy measures were able to differentiate the two pigmented skin lesions, as validated statistically by the p-values, especially in the RGB color space. In the latter, $FuzEnC_{2D}$ achieved accuracies of 83.7%, 88.7%, and 86.2% and AUCs of 88.4%, 94.5%, and 93% for the R, G, and B channels, respectively. $FuzEnV_{2D}$ resulted in an accuracy of 93.7% and an AUC of 96.4%, and $FuzEnM_{2D}$ showed an accuracy of 91.2% and an AUC of 95.0%.

5. Conclusions

In this paper, we presented a new concept and the first entropy method to investigate the single- and multi-channel features of colored images. To the best of our knowledge, this study is the only one that proposes entropy measures for analyzing colored images with both single- and multi-channel approaches. It was essential to perform validation tests before employing these measures on colored medical images. The study was carried out as follows:
  • Studying the sensitivity of the proposed measures to different initial parameters (tolerance level r and window size m).
  • Identifying different irregularity degrees in colored images.
  • Studying colored texture images in three color spaces.
  • Analyzing medical images in three color spaces.
The three entropy measures, $FuzEnC_{2D}$, $FuzEnV_{2D}$, and $FuzEnM_{2D}$, showed reliable behavior under different initial parameters, an ability to gradually quantify the irregularity degrees of colored textures, and consistency upon repetition. When considering the RGB, HSV, and YUV color spaces, these entropy measures showed promising results for the colored texture images.
Regarding the dermoscopic melanoma and melanocytic nevi images, the single- and multi-channel entropy measures were able to differentiate the two pigmented skin lesions. This was validated statistically by p-values, especially in the RGB color space. In the latter, $FuzEnC_{2D}$ achieved accuracies of 83.7%, 88.7%, and 86.2% and AUCs of 88.4%, 94.5%, and 93% for the R, G, and B channels, respectively. $FuzEnV_{2D}$ reached an accuracy of 93.7% and an AUC of 96.4%, and $FuzEnM_{2D}$ showed an accuracy of 91.2% and an AUC of 95.0%. Moreover, $FuzEnV_{2D}$ and $FuzEnM_{2D}$ outperformed both $FuzEnC_{2D}$ and the classical descriptors, the Haralick features, in differentiating the visually similar malignant melanoma and benign melanocytic nevi dermoscopic images. These preliminary results could be the groundwork for developing an objective computer-based tool to help medical doctors diagnose melanoma, which is often mistaken for a benign melanocytic nevus or properly diagnosed only in its late stages. We limited our investigation to three-channel colored images; future work could therefore be directed towards multi-spectral images, towards applications better adapted to each color space, and towards extending our study to a larger dataset.

Author Contributions

Conceptualization, M.H., A.H.-H. and A.S.G.; Methodology, M.H.; Software, M.H. and A.S.G.; Validation, M.H.; Formal Analysis, M.H., A.H.-H., A.S.G., P.G.V. and J.C.; Writing—Original Draft Preparation, M.H.; Writing—Review & Editing, M.H., A.H.-H., P.G.V., A.S.G. and J.C.; Visualization, M.H.; Supervision, A.H.-H. and M.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Humeau-Heurtier, A. Texture feature extraction methods: A survey. IEEE Access 2019, 7, 8975–9000.
  2. Song, T.; Feng, J.; Wang, S.; Xie, Y. Spatially weighted order binary pattern for color texture classification. Expert Syst. Appl. 2020, 147, 113167.
  3. Liu, L.; Chen, J.; Fieguth, P.; Zhao, G.; Chellappa, R.; Pietikäinen, M. From BoW to CNN: Two decades of texture representation for texture classification. Int. J. Comput. Vis. 2019, 127, 74–109.
  4. Liu, L.; Fieguth, P.; Guo, Y.; Wang, X.; Pietikäinen, M. Local binary features for texture classification: Taxonomy and experimental study. Pattern Recognit. 2017, 62, 135–160.
  5. Nguyen, T.P.; Vu, N.S.; Manzanera, A. Statistical binary patterns for rotational invariant texture classification. Neurocomputing 2016, 173, 1565–1577.
  6. Qi, X.; Zhao, G.; Shen, L.; Li, Q.; Pietikäinen, M. LOAD: Local orientation adaptive descriptor for texture and material classification. Neurocomputing 2016, 184, 28–35.
  7. Wang, S.; Wu, Q.; He, X.; Yang, J.; Wang, Y. Local N-ary pattern and its extension for texture classification. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 1495–1506.
  8. Zhang, J.; Liang, J.; Zhang, C.; Zhao, H. Scale invariant texture representation based on frequency decomposition and gradient orientation. Pattern Recognit. Lett. 2015, 51, 57–62.
  9. Backes, A.R.; Martinez, A.S.; Bruno, O.M. Texture analysis using graphs generated by deterministic partially self-avoiding walks. Pattern Recognit. 2011, 44, 1684–1689.
  10. Ghalati, M.K.; Nunes, A.; Ferreira, H.; Serranho, P.; Bernardes, R. Texture analysis and its applications in biomedical imaging: A survey. IEEE Rev. Biomed. Eng. 2021, 15, 222–246.
  11. Yeh, J.R.; Lin, C.W.; Shieh, J.S. An approach of multiscale complexity in texture analysis of lymphomas. IEEE Signal Process. Lett. 2011, 18, 239–242.
  12. Silva, L.; Senra Filho, A.; Fazan, V.P.S.; Felipe, J.C.; Junior, L.M. Two-dimensional sample entropy: Assessing image texture through irregularity. Biomed. Phys. Eng. Express 2016, 2, 045002.
  13. Dos Santos, L.F.S.; Neves, L.A.; Rozendo, G.B.; Ribeiro, M.G.; do Nascimento, M.Z.; Tosta, T.A.A. Multidimensional and fuzzy sample entropy (SampEnMF) for quantifying H&E histological images of colorectal cancer. Comput. Biol. Med. 2018, 103, 148–160.
  14. Azami, H.; Escudero, J.; Humeau-Heurtier, A. Bidimensional distribution entropy to analyze the irregularity of small-sized textures. IEEE Signal Process. Lett. 2017, 24, 1338–1342.
  15. Silva, L.E.; Duque, J.J.; Felipe, J.C.; Murta Jr., L.O.; Humeau-Heurtier, A. Two-dimensional multiscale entropy analysis: Applications to image texture evaluation. Signal Process. 2018, 147, 224–232.
  16. Humeau-Heurtier, A.; Omoto, A.C.M.; Silva, L.E. Bi-dimensional multiscale entropy: Relation with discrete Fourier transform and biomedical application. Comput. Biol. Med. 2018, 100, 36–40.
  17. Hilal, M.; Berthin, C.; Martin, L.; Azami, H.; Humeau-Heurtier, A. Bidimensional multiscale fuzzy entropy and its application to pseudoxanthoma elasticum. IEEE Trans. Biomed. Eng. 2019, 67, 2015–2022.
  18. Furlong, R.; Hilal, M.; O'Brien, V.; Humeau-Heurtier, A. Parameter analysis of multiscale two-dimensional fuzzy and dispersion entropy measures using machine learning classification. Entropy 2021, 23, 1303.
  19. Palm, C. Color texture classification by integrative co-occurrence matrices. Pattern Recognit. 2004, 37, 965–976.
  20. Backes, A.R.; Casanova, D.; Bruno, O.M. Color texture analysis based on fractal descriptors. Pattern Recognit. 2012, 45, 1984–1992.
  21. Drimbarean, A.; Whelan, P.F. Experiments in colour texture analysis. Pattern Recognit. Lett. 2001, 22, 1161–1167.
  22. Xu, Q.; Yang, J.; Ding, S. Color texture analysis using the wavelet-based hidden Markov model. Pattern Recognit. Lett. 2005, 26, 1710–1719.
  23. Arvis, V.; Debain, C.; Berducat, M.; Benassi, A. Generalization of the cooccurrence matrix for colour images: Application to colour texture classification. Image Anal. Stereol. 2004, 23, 63–72.
  24. Alata, O.; Burie, J.C.; Moussa, A.; Fernandez-Maloigne, C.; Qazi, I.-U.-H. Choice of a pertinent color space for color texture characterization using parametric spectral analysis. Pattern Recognit. 2011, 44, 16–31.
  25. Mäenpää, T.; Pietikäinen, M. Classification with color and texture: Jointly or separately? Pattern Recognit. 2004, 37, 1629–1640.
  26. Bianconi, F.; Harvey, R.W.; Southam, P.; Fernández, A. Theoretical and experimental comparison of different approaches for color texture classification. J. Electron. Imaging 2011, 20, 043006.
  27. Manjunath, B.S.; Ohm, J.R.; Vasudevan, V.V.; Yamada, A. Color and texture descriptors. IEEE Trans. Circuits Syst. Video Technol. 2001, 11, 703–715.
  28. Hilal, M.; Gaudêncio, A.S.F.; Berthin, C.; Vaz, P.G.; Cardoso, J.; Martin, L.; Humeau-Heurtier, A. Bidimensional colored fuzzy entropy measure: A cutaneous microcirculation study. In Proceedings of the Fifth International Conference on Advances in Biomedical Engineering (ICABME), Tripoli, Lebanon, 17–19 October 2019.
  29. Celebi, M.E.; Codella, N.; Halpern, A. Dermoscopy image analysis: Overview and future directions. IEEE J. Biomed. Health Inform. 2019, 23, 474–478.
  30. Talavera-Martínez, L.; Bibiloni, P.; González-Hidalgo, M. Computational texture features of dermoscopic images and their link to the descriptive terminology—A survey. Comput. Methods Programs Biomed. 2019, 182, 105049.
  31. Barata, C.; Celebi, M.E.; Marques, J.S. A survey of feature extraction in dermoscopy image analysis of skin cancer. IEEE J. Biomed. Health Inform. 2018, 23, 1096–1109.
  32. Machado, M.; Pereira, J.; Fonseca-Pinto, R. Classification of reticular pattern and streaks in dermoscopic images based on texture analysis. J. Med. Imaging 2015, 2, 044503.
  33. Garnavi, R.; Aldeen, M.; Bailey, J. Computer-aided diagnosis of melanoma using border- and wavelet-based texture analysis. IEEE Trans. Inf. Technol. Biomed. 2012, 16, 1239–1252.
  34. Sáez, A.; Acha, B.; Serrano, A.; Serrano, C. Statistical detection of colors in dermoscopic images with a texton-based estimation of probabilities. IEEE J. Biomed. Health Inform. 2018, 23, 560–569.
  35. Isasi, A.G.; Zapirain, B.G.; Zorrilla, A.M. Melanomas non-invasive diagnosis application based on the ABCD rule and pattern recognition image processing algorithms. Comput. Biol. Med. 2011, 41, 742–755.
  36. Celebi, M.E.; Zornberg, A. Automated quantification of clinically significant colors in dermoscopy images and its application to skin lesion classification. IEEE Syst. J. 2014, 8, 980–984.
  37. Celebi, M.E.; Kingravi, H.A.; Uddin, B.; Iyatomi, H.; Aslandogan, Y.A.; Stoecker, W.V.; Moss, R.H. A methodological approach to the classification of dermoscopy images. Comput. Med. Imaging Graph. 2007, 31, 362–373.
  38. Hilal, M.; Humeau-Heurtier, A. Bidimensional fuzzy entropy: Principle analysis and biomedical applications. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 4811–4814.
  39. Chen, W.; Wang, Z.; Xie, H.; Yu, W. Characterization of surface EMG signal based on fuzzy entropy. IEEE Trans. Neural Syst. Rehabil. Eng. 2007, 15, 266–272.
  40. Gaudêncio, A.S.F.; Vaz, P.G.; Hilal, M.; Cardoso, J.M.; Mahé, G.; Lederlin, M.; Humeau-Heurtier, A. Three-dimensional multiscale fuzzy entropy: Validation and application to idiopathic pulmonary fibrosis. IEEE J. Biomed. Health Inform. 2020, 25, 100–107.
  41. Abdelmounaime, S.; Dong-Chen, H. New Brodatz-based image databases for grayscale color and multiband texture analysis. ISRN Mach. Vis. 2013, 2013, 876386.
  42. Colored Brodatz Texture. Available online: http://multibandtexture.recherche.usherbrooke.ca/ (accessed on 10 June 2022).
  43. Haralick, R.M.; Shanmugam, K.; Dinstein, I.H. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621.
  44. Philips, C.; Li, D.; Raicu, D.; Furst, J. Directional invariance of co-occurrence matrices within the liver. In Proceedings of the 2008 International Conference on Biocomputation, Bioinformatics, and Biomedical Technologies, Bucharest, Romania, 29 June–5 July 2008; pp. 29–34.
  45. Tschandl, P.; Rosendahl, C.; Kittler, H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. Data 2018, 5, 180161.
  46. Tschandl, P. Replication data for: "The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions". Harvard Dataverse, V3, UNF:6:/APKSsDGVDhwPBWzsStU5A==. 2018. Available online: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/DBW86T (accessed on 10 June 2022).
  47. Sawilowsky, S.S. New effect size rules of thumb. J. Mod. Appl. Stat. Methods 2009, 8, 26.
  48. Cohen, J. Statistical Power Analysis for the Behavioral Sciences; Routledge: London, UK, 2013.
Figure 1. Illustration of $FuzEnC_{2D}$ for an RGB color space image. (a) The image U is split into its corresponding channels $U_R$, $U_G$, and $U_B$, from left to right; (b) the embedding dimension pattern of size $m \times m$ with m = [2, 2]; (c) $X_{i,j,K}^m$ and $X_{a,b,K}^m$ for K = $K_1$, $K_2$, and $K_3$ being the R, G, and B color channels, respectively.
Figure 2. Illustration of $FuzEnV_{2D}$ for an RGB color space image with m = [2, 2, 2]. (a) A portion of the colored image U with its R, G, and B channels; (b) the scanning pattern or embedding dimension with m = [2, 2, 2], i.e., a $2 \times 2 \times 2$ cube; (c) $X_{i,j,k}^m$ and $X_{a,b,c}^m$, the fixed and moving templates defined above.
Figure 3. Illustration of $FuzEnM_{2D}$ for an RGB color space image with m = [2, 2, 3]. (a) A portion of the colored image U with its R, G, and B channels; (b) the scanning pattern or embedding dimension with m = [2, 2, 3], i.e., a $2 \times 2 \times 3$ cuboid; (c) the fixed and moving templates defined above.
Figure 4. Colored Brodatz texture (CBT) images of different colored irregularity degrees [41,42]. (a–i) CBT images used for the validation test (Section 4.3) to compare the entropy values of each colored texture to those of its sub-images in three color spaces (RGB, HSV, and YUV); (f) is also used for studying the sensitivity of the proposed measures to different initial parameters (Section 4.1).
Figure 5. Dermoscopic image segmentation for choosing the region of interest (ROI). (a) An example of a dermoscopic image of a pigmented skin lesion; (b,c) the contouring and segmentation of the lesion; (d) the ROI as the central 128 × 128 × 3 pixels.
Figure 6. $FuzEnC_{2D}$ results for the red, green, and blue channels (left to right) of the colored Brodatz image of Figure 4f, with varying r and m.
Figure 7. $FuzEnV_{2D}$ results with varying r and m for the colored Brodatz image of Figure 4f.
Figure 8. $FuzEnM_{2D}$ results with varying r and m for the colored Brodatz image of Figure 4f.
Figure 9. $FuzEnC_{2D}$ mean and standard deviation for $MIX_{2D}(p)$ images with 10 repetitions.
Figure 10. $FuzEnV_{2D}$ mean and standard deviation for $MIX_{3D}(p)$ images with 10 repetitions.
Figure 11. $FuzEnM_{2D}$ mean and standard deviation for $MIX_{3D}(p)$ images.
Figure 12. $FuzEnC_{2D}$ results for the 144 sub-images and the 300 × 300 pixel regions of the CBT images in the three color spaces (RGB, HSV, and YUV), with $K_1$, $K_2$, and $K_3$ being the first, second, and third channel, respectively. The mean of the 144 sub-images is displayed as a "∘" sign and the value for the 300 × 300 pixel region as "*".
Figure 13. $FuzEnV_{2D}$ results for the 144 sub-images and the 300 × 300 pixel regions of the CBT images in the three color spaces (RGB, HSV, and YUV). The mean of the 144 sub-images is displayed as a "∘" sign and the value for the 300 × 300 pixel region as "*".
Figure 14. $FuzEnV_{2D}$ and Haralick feature p-values for the 40 melanoma and 40 melanocytic nevi dermoscopic images in the three color spaces (RGB, HSV, and YUV). d represents the inter-pixel distances for the co-occurrence matrices.
Figure 15. $FuzEnM_{2D}$ and Haralick feature p-values for the 40 melanoma and 40 melanocytic nevi dermoscopic images in the three color spaces (RGB, HSV, and YUV). d represents the inter-pixel distances for the co-occurrence matrices.
Figure 16. ROC curves for the $FuzEnC_{2D}$ results of the 40 melanoma and 40 melanocytic nevi images in the RGB color space. The curves are for $FuzEnC_{R,2D}$, $FuzEnC_{G,2D}$, and $FuzEnC_{B,2D}$, from left to right.
Figure 17. ROC curves for the $FuzEnV_{2D}$ results of the 40 melanoma and 40 melanocytic nevi images in the RGB color space.
Figure 18. ROC curves for the $FuzEnM_{2D}$ results of the 40 melanoma and 40 melanocytic nevi images in the RGB color space.
Table 1. Definition of the computed Haralick features [43].

| Haralick Feature | Annotation |
| --- | --- |
| Uniformity (Energy) | $\sum_i \sum_j P^2(i,j)$ |
| Contrast | $\sum_{n=0}^{N_g-1} n^2 \left( \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} P(i,j) \right),\ |i-j| = n$ |
| Correlation | $\left( \sum_i \sum_j (ij) P(i,j) - \mu_x \mu_y \right) / \sigma_x \sigma_y$ |
| Variance | $\sum_i \sum_j (i - \mu)^2 P(i,j)$ |
| Homogeneity | $\sum_i \sum_j P(i,j) / \left( 1 + (i-j)^2 \right)$ |
| Entropy | $-\sum_i \sum_j P(i,j) \log P(i,j)$ |

where P represents the elements of the co-occurrence matrices and $\mu_x$, $\mu_y$, $\sigma_x$, and $\sigma_y$ are the means and standard deviations of the row and column sums, respectively.

Table 2. Mann–Whitney U test p-values for $FuzEnC_{2D}$, $FuzEnV_{2D}$, and $FuzEnM_{2D}$ of 40 melanoma and 40 melanocytic nevi dermoscopic images in the three color spaces RGB, HSV, and YUV.

| Color Space | $U_{K_1}$ ($FuzEnC_{2D}$) | $U_{K_2}$ ($FuzEnC_{2D}$) | $U_{K_3}$ ($FuzEnC_{2D}$) | U ($FuzEnV_{2D}$) | U ($FuzEnM_{2D}$) |
| --- | --- | --- | --- | --- | --- |
| RGB | 3.3 × 10⁻⁹ | 7.0 × 10⁻¹² | 3.4 × 10⁻¹¹ | 9.0 × 10⁻¹³ | 4.1 × 10⁻¹² |
| HSV | 2.9 × 10⁻⁵ | 5.7 × 10⁻² | 1.5 × 10⁻¹ | 2.9 × 10⁻⁵ | 2.9 × 10⁻⁵ |
| YUV | 9.8 × 10⁻⁶ | 1.7 × 10⁻³ | 5.8 × 10⁻⁴ | 4.5 × 10⁻⁵ | 1.1 × 10⁻⁵ |

Table 3. Cohen's d values for $FuzEnC_{2D}$, $FuzEnV_{2D}$, and $FuzEnM_{2D}$ of 40 melanoma and 40 melanocytic nevi dermoscopic images in the three color spaces RGB, HSV, and YUV.

| Color Space | $U_{K_1}$ ($FuzEnC_{2D}$) | $U_{K_2}$ ($FuzEnC_{2D}$) | $U_{K_3}$ ($FuzEnC_{2D}$) | U ($FuzEnV_{2D}$) | U ($FuzEnM_{2D}$) |
| --- | --- | --- | --- | --- | --- |
| RGB | 1.50 | 1.89 | 1.97 | 2.71 | 2.19 |
| HSV | 1.14 | 0.23 | 0.27 | 1.14 | 1.14 |
| YUV | 1.10 | 0.58 | 0.70 | 1.00 | 1.09 |

Table 4. ROC analysis for the $FuzEnC_{2D}$, $FuzEnV_{2D}$, and $FuzEnM_{2D}$ results of 40 melanoma and 40 melanocytic nevi RGB images.

| Metric | $U_R$ ($FuzEnC_{2D}$) | $U_G$ ($FuzEnC_{2D}$) | $U_B$ ($FuzEnC_{2D}$) | U ($FuzEnV_{2D}$) | U ($FuzEnM_{2D}$) |
| --- | --- | --- | --- | --- | --- |
| AUC | 0.884 | 0.945 | 0.930 | 0.964 | 0.950 |
| Sensitivity | 0.825 | 0.925 | 0.900 | 0.925 | 0.925 |
| Specificity | 0.850 | 0.850 | 0.825 | 0.950 | 0.900 |
| Accuracy | 0.837 | 0.887 | 0.862 | 0.937 | 0.912 |
| Precision | 0.846 | 0.860 | 0.837 | 0.948 | 0.902 |

Table 5. ROC analysis for the $FuzEnC_{2D}$, $FuzEnV_{2D}$, and $FuzEnM_{2D}$ results of 40 melanoma and 40 melanocytic nevi HSV images.

| Metric | $U_H$ ($FuzEnC_{2D}$) | $U_S$ ($FuzEnC_{2D}$) | $U_V$ ($FuzEnC_{2D}$) | U ($FuzEnV_{2D}$) | U ($FuzEnM_{2D}$) |
| --- | --- | --- | --- | --- | --- |
| AUC | 0.771 | 0.376 | 0.406 | 0.771 | 0.771 |
| Sensitivity | 0.650 | 0.325 | 0.225 | 0.650 | 0.650 |
| Specificity | 0.850 | 0.600 | 0.850 | 0.850 | 0.850 |
| Accuracy | 0.750 | 0.462 | 0.5375 | 0.750 | 0.750 |
| Precision | 0.812 | 0.448 | 0.600 | 0.812 | 0.812 |

Table 6. ROC analysis for the $FuzEnC_{2D}$, $FuzEnV_{2D}$, and $FuzEnM_{2D}$ results of 40 melanoma and 40 melanocytic nevi images in YUV.

| Metric | $U_Y$ ($FuzEnC_{2D}$) | $U_U$ ($FuzEnC_{2D}$) | $U_V$ ($FuzEnC_{2D}$) | U ($FuzEnV_{2D}$) | U ($FuzEnM_{2D}$) |
| --- | --- | --- | --- | --- | --- |
| AUC | 0.787 | 0.703 | 0.723 | 0.765 | 0.785 |
| Sensitivity | 0.725 | 0.750 | 0.700 | 0.750 | 0.725 |
| Specificity | 0.750 | 0.650 | 0.700 | 0.725 | 0.750 |
| Accuracy | 0.737 | 0.700 | 0.700 | 0.737 | 0.737 |
| Precision | 0.743 | 0.681 | 0.700 | 0.731 | 0.743 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
