Article

Melanoma Detection in Dermoscopic Images Using a Cellular Automata Classifier

by Benjamín Luna-Benoso 1,*, José Cruz Martínez-Perales 1, Jorge Cortés-Galicia 1, Rolando Flores-Carapia 2 and Víctor Manuel Silva-García 2
1 Escuela Superior de Cómputo, Instituto Politécnico Nacional, Mexico City 07738, Mexico
2 Centro de Innovación y Desarrollo Tecnológico en Cómputo, Instituto Politécnico Nacional, Mexico City 07738, Mexico
* Author to whom correspondence should be addressed.
Computers 2022, 11(1), 8; https://doi.org/10.3390/computers11010008
Submission received: 8 December 2021 / Revised: 29 December 2021 / Accepted: 30 December 2021 / Published: 4 January 2022
(This article belongs to the Special Issue Advances of Machine and Deep Learning in the Health Domain)

Abstract

Cancer is one of the leading causes of death worldwide. Skin cancer is a condition in which malignant cells form in the tissues of the skin; melanoma is known as the most aggressive and deadly type of skin cancer. The mortality rates of melanoma are associated with its high potential for metastasis in later stages, when it spreads to other body sites such as the lungs, bones, or brain. Thus, early detection and diagnosis are closely related to survival rates. Computer-Aided Diagnosis (CAD) systems carry out a pre-diagnosis of a skin lesion based on clinical criteria or on global patterns associated with its structure. A CAD system is essentially composed of three modules: (i) lesion segmentation, (ii) feature extraction, and (iii) classification. In this work, a methodology is proposed for developing a CAD system that detects global patterns using texture descriptors based on statistical measurements, allowing melanoma detection in dermoscopic images. Image analysis was carried out using spatial domain methods, statistical measurements were used for feature extraction, and a classifier based on cellular automata (the ACA model) was used for classification. The proposed model was applied to dermoscopic images obtained from the PH2 database and compared with other models using accuracy, sensitivity, and specificity as metrics. The proposed model obtained values of 0.978, 0.944, and 0.987 for accuracy, sensitivity, and specificity, respectively. The results of the evaluated metrics show that the proposed method is more effective than other state-of-the-art methods for melanoma detection in dermoscopic images.

1. Introduction

1.1. Motivation

Skin cancer is a disease caused by the abnormal development of cancer cells in any layer of the skin. There are two types of skin cancer: non-melanoma and melanoma [1,2,3]. Basal cell and squamous cell carcinomas are non-melanoma malignancies that occur in the epidermis. They present as nodules or as non-painful ulcerated and crusted lesions that do not heal over time. Their growth is slow, and they rarely metastasize [3,4]. On the other hand, melanoma is the most aggressive type of skin cancer; if not diagnosed in time, it can invade nearby tissues and spread to other parts of the body [5]. Hence, melanoma is considered the deadliest form of skin cancer, and early detection is decisive for patient survival. Annual rates of all forms of skin cancer are increasing significantly worldwide, and melanoma is increasing more rapidly than other forms of cancer, particularly affecting the young population [6]. Diagnosis of melanoma is a challenging task due to its similarity in appearance to non-malignant nevi.
For skin cancer diagnosis, the first step is to obtain the patient's medical history; subsequently, a physical examination is performed. If the physician suspects melanoma, he or she will use a technique called dermoscopy to see the skin more clearly [7]. Dermoscopy consists of placing a lens and a light source over the lesion and obtaining an image of the affected area, which is called a dermoscopic image [8]. The analysis of the dermoscopic image is extremely important when deciding whether an invasive method such as a biopsy needs to be applied [8,9,10].

1.2. Related Work

Computer-Aided Diagnosis (CAD) systems are computer systems that interpret a medical image much as a medical specialist would. CAD systems focused on melanoma detection are a great help to medical specialists in the decision-making process that they carry out when observing a dermoscopic image, and they support a rapid pre-diagnosis [11,12,13]. The first task of a CAD system of this type is to segment the lesions present in dermoscopic images; therefore, there are works that present methodologies for lesion segmentation and provide guidelines for their subsequent classification. Some of these works propose methodologies that use a fully convolutional network (FCN) and a dual path network (DPN) [14]; other works perform segmentation with masks that are generated manually or automatically [15], and others use local binary patterns and k-means clustering [16]. In this work, segmentation is carried out using spatial domain methods and morphological operators. There are works that perform melanoma detection in dermoscopic images based on clinical diagnostic procedures, such as the ABCD rule. The ABCD rule is a clinical guide for determining when a lesion is a melanoma. It consists of looking for specific characteristics: the asymmetry (A) of a lesion; the type of border (B), whether irregular, uneven, or blurred; the color variation (C), with reddish, whitish, and bluish being the most dangerous; and the length of the diameter (D) of the lesion [17]. In [18,19,20,21], different methodologies for melanoma detection use the ABCD rule. Unfortunately, the development of this type of system is hampered by several challenges, such as the lack of data sets with detailed clinical criteria information or the subtlety of some diagnostic criteria that makes them difficult to detect. On the other hand, there are works that inspect the lesion by detecting the presence of specific patterns associated with its structure. These methods attempt to detect global patterns that are mainly divided into three categories: texture, shape, and color [22,23,24]. In this work, texture descriptors based on statistical measurements are used. In [18], a scoring system based on the ABCD rule, called the Total Dermoscopic Score (TDS), is proposed for the classification between malignant and benign lesions. Other works classify using convolutional neural networks, such as AlexNet and VGG16 [25,26], support vector machines [1,19], or k-NN [27], among others. In this work, the classification is carried out using a model based on cellular automata proposed by the authors in [28].
The objective of the present work is to develop and implement a classification algorithm based on cellular automata that allows the detection of melanoma-type lesions in dermoscopic images. Cellular automata are simple models to implement and require few computational resources compared to models that make use of, for example, deep learning, such as the AlexNet and VGG16 architectures. In addition, with this proposal, concepts from the field of cellular automata can be applied to the field of supervised learning.

2. Basic Concepts

2.1. Digital Image

A digital image is a discrete two-dimensional function $f(x, y)$, where $x$ and $y$ are discrete spatial coordinates in the plane. The amplitude of $f$ at a coordinate pair $(x, y)$ is called the gray-level intensity of the image at that point. An image is represented by a two-dimensional array $F_{ij} = (f_{ij})_{H \times W}$, where $H$ and $W$ are the height and width of the image, respectively, with $f_{ij} = f(x_i, y_j)$ (Figure 1) [29].
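As a minimal illustration of this representation (an illustrative sketch only; the file name lesion.png and the use of OpenCV are assumptions, not part of the original work), an image can be loaded as an $H \times W$ array of gray levels:

```python
import cv2  # assumed available; any image library that yields a NumPy array would do

# Hypothetical file name; replace with an actual dermoscopic image path.
f = cv2.imread("lesion.png", cv2.IMREAD_GRAYSCALE)  # H x W array of gray levels

H, W = f.shape
print(H, W)        # image height and width
print(f[10, 20])   # gray-level intensity at row 10, column 20
```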

2.2. Mathematical Morphology

The following definitions and theorems correspond to mathematical morphology [30,31].
Definition 1.
Let $A \subseteq \mathbb{Z}^2$. The reflection of $A$, denoted by $\hat{A}$, is defined by:
$\hat{A} = \{-x \mid x \in A\}$
Definition 2.
Let $A \subseteq \mathbb{Z}^2$ and $x \in \mathbb{Z}^2$. The translation of $A$ by $x$, denoted by $(A)_x$, is defined as:
$(A)_x = \{a + x \mid a \in A\}$
Definition 3.
Let $A, B \subseteq \mathbb{Z}^2$. The dilation of $A$ by $B$, denoted by $A \oplus B$, is the Minkowski sum of $A$ and $B$, that is:
$A \oplus B = \{a + b \mid a \in A \ \mathrm{and}\ b \in B\}$
The set $B$ of the above definition is called the structuring element.
Theorem 1.
Let $A, B \subseteq \mathbb{Z}^2$. The following holds:
$A \oplus B = \{x \mid (\hat{B})_x \cap A \neq \emptyset\}$
Definition 4.
Let $A, B \subseteq \mathbb{Z}^2$. The erosion of $A$ by $B$, denoted by $A \ominus B$, is the Minkowski subtraction of $A$ by $B$, that is:
$A \ominus B = \{x \in \mathbb{Z}^2 \mid x + b \in A \ \mathrm{for\ each}\ b \in B\}$
Theorem 2.
Let $A, B \subseteq \mathbb{Z}^2$. The following holds:
$A \ominus B = \{x \mid (B)_x \subseteq A\}$
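To make Definitions 3 and 4 concrete, the following Python sketch (illustrative only, not the authors' implementation) computes the dilation and erosion of a small binary coordinate set with a 3 × 3 Moore structuring element:

```python
# Minimal sketch of Definitions 3 and 4 on coordinate sets (illustrative only).

# A: foreground pixel coordinates of a binary image; B: 3x3 Moore structuring element.
A = {(2, 2), (2, 3), (3, 2), (3, 3)}
B = {(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)}

# Dilation: Minkowski sum {a + b | a in A, b in B}.
dilation = {(a[0] + b[0], a[1] + b[1]) for a in A for b in B}

# Erosion: {x | x + b in A for each b in B}.
candidates = {(a[0] - b[0], a[1] - b[1]) for a in A for b in B}
erosion = {x for x in candidates if all((x[0] + b[0], x[1] + b[1]) in A for b in B)}

print(sorted(dilation))  # the 2x2 block grown to a 4x4 block
print(sorted(erosion))   # empty: no full 3x3 neighborhood fits inside the 2x2 block
```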

2.3. Cellular Automata

The following definitions correspond to cellular automata [32].
Let $I$ be a set of indices and let $A = \{[a_i, b_i]\}_{i \in I}$ be a countable family of closed intervals in $\mathbb{R}$ satisfying the following conditions:
  1. $\bigcup_{X \in A} X = [a, b]$ for some $a, b \in \mathbb{R}$, or $\bigcup_{X \in A} X = \mathbb{R}$.
  2. If $[a_i, b_i] \in A$, then $b_i - a_i > 0$.
  3. If $[a_i, b_i]$ and $[c_j, d_j]$ are in $A$ with $b_i \leq c_j$, then $[a_i, b_i] \cap [c_j, d_j] = \emptyset$ or $[a_i, b_i] \cap [c_j, d_j] = \{b_i\} = \{c_j\}$.
Definition 5.
Let $[a, b]$ be an interval of $\mathbb{R}$ with $a \leq b$ and $A$ a family of closed intervals that satisfies 1, 2, and 3. A 1-dimensional lattice is the set $\mathcal{L} = \{x_i \times [a, b] \mid x_i \in A\}$. If $A_1, A_2, \ldots, A_n$ are families of intervals that satisfy 1, 2, and 3, then a lattice of dimension $n > 1$ is the set $\mathcal{L} = \{x_1 \times x_2 \times \cdots \times x_n \mid x_i \in A_i\}$.
Definition 6.
Let $r \in \mathbb{R}$. A 1-dimensional lattice is regular if $b_i - a_i = r$ for each $[a_i, b_i] \in A$. An n-dimensional lattice is regular if each family $A_i$ is regular, that is, if $b_i^k - a_i^k = r_i$ for every $[a_i^k, b_i^k] \in A_i$, $i = 1, 2, \ldots, n$.
Definition 7.
Let $\mathcal{L}$ be a lattice. A cell or site is an element $\ell \in \mathcal{L}$; that is, a cell is an element of the form $[a_1^k, b_1^k] \times \cdots \times [a_n^k, b_n^k]$ with $[a_i^k, b_i^k] \in A_i$ for $i = 1, 2, \ldots, n$.
Definition 8.
Let $\mathcal{L}$ be a lattice and $r$ a cell of $\mathcal{L}$. A neighborhood of size $n$ for $r$ is a set $v(r) = \{k_1, k_2, \ldots, k_n\}$, where $k_j$ is a cell of $\mathcal{L}$ for each $j$.
Definition 9.
Let $n \in \mathbb{N}$. A cellular automaton (CA) is a tuple $(\mathcal{L}, S, N, f)$ such that:
1. $\mathcal{L}$ is a regular lattice.
2. $S$ is a finite set of states.
3. $N$ is a set of neighborhoods defined as follows:
$N = \{v(r) \mid r \ \mathrm{is\ a\ cell\ of}\ \mathcal{L} \ \mathrm{and}\ v(r) \ \mathrm{is\ a\ neighborhood\ of}\ r \ \mathrm{of\ size}\ n\}$
4. $f : N \to S$ is a function called the transition function.
Definition 10.
A configuration of the cellular automaton $(\mathcal{L}, S, N, f)$ is a function $C_t : \mathcal{L} \to S$ that associates to each cell of the lattice, at time $t$, a state of $S$.
If $(\mathcal{L}, S, N, f)$ is a cellular automaton and $r \in \mathcal{L}$, then the configuration $C_t$ is related to $f$ through:
$C_{t+1}(r) = f(\{C_t(i) \mid i \in N(r)\})$
Definition 11.
Let $\mathcal{A}_1 = (\mathcal{L}, S, N, f)$ and $\mathcal{A}_2 = (\mathcal{L}, S, N, g)$ be two cellular automata. The composition of the CA $\mathcal{A}_1$ and $\mathcal{A}_2$ at time $t = t_k$ is defined as the cellular automaton $\mathcal{A}_2 \circ \mathcal{A}_1 = (\mathcal{L}, S, N, h)$, where $h$, $f$, and $g$ are related as follows:
$C_{t_k+1}(r) = f(\{C_{t_k}(i) \mid i \in N(r)\})$, $C_{t_k+2}(r) = g(\{C_{t_k+1}(i) \mid i \in N(r)\})$, $C_{t_k+2}(r) = h(\{C_{t_k}(i) \mid i \in N(r)\})$
Definition 12.
Let $(\mathcal{L}, S, N, f)$ be a cellular automaton with $\mathcal{L} = \mathbb{Z}^2$. If $A \subseteq \mathbb{Z}^2$ and $x \in S$, then $[A]_x$ denotes the number of cells in $A$ with state $x$.
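As an illustrative sketch of Definitions 9 and 10 (not code from the paper), the following fragment performs one synchronous update $C_{t+1}(r) = f(\{C_t(i) \mid i \in N(r)\})$ of a two-state CA on a small grid with Moore neighborhoods; the specific transition function is chosen only for demonstration:

```python
import numpy as np

def step(config, transition):
    """One synchronous update with 3x3 Moore neighborhoods (cells outside the lattice count as 0)."""
    H, W = config.shape
    padded = np.pad(config, 1, mode="constant")
    new = np.zeros_like(config)
    for r in range(H):
        for c in range(W):
            neighborhood = padded[r:r + 3, c:c + 3]   # Moore neighborhood of cell (r, c)
            new[r, c] = transition(neighborhood)
    return new

# Example transition function: a cell takes state 1 if any cell in its neighborhood is 1.
config_t = np.zeros((5, 5), dtype=int)
config_t[2, 2] = 1
config_t1 = step(config_t, lambda v: int(v.sum() > 0))
print(config_t1)   # the single 1 has spread to its 3x3 neighborhood
```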

3. Materials and Methods

This section presents the methodology that detects melanoma in dermoscopic images. The methodology is divided into three modules: image segmentation, feature extraction, and classification.

3.1. Image Segmentation

Segmentation of skin lesions in dermoscopic images was carried out using spatial domain methods. The steps that lead to the lesion segmentation are described below.
Step 1. To reduce the number of variations in intensity between neighboring pixels, a median filter is applied.
Step 2. The image is binarized using Otsu’s method.
After step 2 is applied, the binarized images exhibit additive noise at the corners of the image. To eliminate this noise, the next step is performed.
Step 3. Consider the circumference that passes through the periphery of the corners of the binarized image, with center $(C_x, C_y) = (H/2, W/2)$ and radius $r = \sqrt{(C_x - a_x)^2 + (C_y - a_y)^2}$, where $(a_x, a_y)$ is the coordinate on the corners' periphery of the binarized image with the shortest distance to the point $(C_x, C_y)$. Those points that lie outside the circumference are removed:
$f(x, y) = 255$ if $(C_x - x)^2 + (C_y - y)^2 \leq r^2$, and $f(x, y) = 0$ otherwise.
Step 4. Three morphological erosions are applied using Moore’s neighborhood as a structuring element.
Step 5. Three morphological dilations are applied using Moore’s neighborhood as a structuring element.
Step 6. Perform an AND operation with the color input image.
Figure 2 shows the block diagram of the procedure that obtains the lesion segmentation from dermoscopic images.
Figure 3 shows the steps’ application to a dermoscopic image in order to obtain the segmented lesion.
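The following Python sketch outlines steps 1 to 6 using OpenCV. It is a simplified illustration under stated assumptions (a 5 × 5 median filter, an inverted Otsu threshold because the lesion is assumed darker than the surrounding skin, a 3 × 3 Moore kernel, and the corner-masking step reduced to a centered circular mask); it is not the authors' exact implementation:

```python
import cv2
import numpy as np

img = cv2.imread("lesion.png")                         # hypothetical dermoscopic image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Step 1: median filter to reduce intensity variations between neighboring pixels.
smooth = cv2.medianBlur(gray, 5)

# Step 2: binarization with Otsu's method (inverted: lesion assumed darker than skin).
_, binary = cv2.threshold(smooth, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Step 3 (simplified): keep only pixels inside a circle centered at (H/2, W/2)
# to suppress the dark corners typical of dermoscopic images.
H, W = binary.shape
yy, xx = np.ogrid[:H, :W]
r = min(H, W) / 2                                      # assumption: radius taken to the nearest border
circle = ((yy - H / 2) ** 2 + (xx - W / 2) ** 2) <= r ** 2
binary[~circle] = 0

# Steps 4-5: three erosions followed by three dilations with a 3x3 Moore structuring element.
kernel = np.ones((3, 3), np.uint8)
mask = cv2.erode(binary, kernel, iterations=3)
mask = cv2.dilate(mask, kernel, iterations=3)

# Step 6: AND of the mask with the color input image to obtain the segmented lesion.
segmented = cv2.bitwise_and(img, img, mask=mask)
```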

3.2. Feature Extraction

To form the pattern of features associated with each image, texture descriptors based on the 11 statistical measurements shown in Table 1 were used [33,34]. In these expressions, $N$ is the total number of pixels, $L$ is the total number of gray levels, $I(f_{ij})$ is the gray-level value of the pixel $(i, j)$ in the image $f(x, y)$, $P(j)$ is the probability that the gray-level value $j$ occurs in the image $f(x, y)$, $T(i)$ is the number of pixels with gray value $i$ in the image $f(x, y)$, $P(I(f_{ij}))$ is the probability that the gray level $I(f_{ij})$ occurs in the image $f(x, y)$, and $P(f_{ij}) = T(I(f_{ij}))/N$.
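As a rough sketch of how the feature vector of Table 1 could be computed for a segmented grayscale lesion (normalization details are assumptions of this illustration, not the authors' exact code):

```python
import numpy as np

def texture_features(f, L=256):
    """Statistical texture descriptors in the spirit of Table 1 (illustrative sketch)."""
    f = f.astype(np.float64)
    N = f.size
    hist, _ = np.histogram(f, bins=L, range=(0, L))    # T(i): pixel count per gray level
    P = hist / N                                        # P(i): gray-level probabilities
    P_pix = P[f.astype(int)]                            # P(f_ij) for every pixel

    mu = f.mean()
    sigma = f.std()
    nz = P[P > 0]
    return {
        "mean": mu,
        "std": sigma,
        "smoothness": 1.0 - 1.0 / (1.0 + sigma**2),
        "skewness": ((f - mu) ** 3).sum() / (N * sigma**3),
        "kurtosis": ((f - mu) ** 4).sum() / ((N - 1) * sigma**4),
        "uniformity": (P**2).sum(),
        "avg_histogram": hist.mean(),
        "modified_skew": ((f - mu) ** 3 * P_pix).sum() / sigma**3,
        "modified_std": np.sqrt(((f - mu) ** 2 * P_pix).sum()),
        "entropy": -(nz * np.log2(nz)).sum(),
        "modified_entropy": -(P_pix[P_pix > 0] * np.log2(P_pix[P_pix > 0])).sum(),
    }
```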

3.3. Classification

An associative model based on cellular automata was used for classification [28]. Associative models are mathematical models whose main objective is to retrieve complete patterns from input patterns. Their operation is divided into two phases: the learning phase, in which the associative model is generated, and the recovery phase, in which the associative model is operated. During the learning phase, the associative model is constructed from a set of ordered pairs of previously known patterns called the fundamental set. Each pattern that belongs to the fundamental set is called a fundamental pattern. The fundamental set is represented as follows [35]:
$FS = \{(x^\mu, y^\mu) \mid \mu = 1, 2, \ldots, p\}$
where $(x^\mu, y^\mu) \in A^n \times A^m$ for $\mu = 1, 2, \ldots, p$, with $A = \{0, 1\}$. During the recovery phase, the associative model operates on an input pattern to obtain the corresponding output pattern. In this work, we used the associative model based on cellular automata presented in [28]. Next, we present some preliminary definitions and the classifier model in its learning and recovery phases.
Let $A, B \subseteq \mathbb{Z}^2$. Hereinafter, $D = (\mathcal{L}, S, N, f)$ is a cellular automaton with initial configuration $A$, defined as follows:
  • $\mathcal{L} = \mathbb{Z}^2$.
  • $S = \{0, 1\}$.
  • $N = \{v_x \mid x \in \mathcal{L}\}$ with $v_x = (\hat{B})_x$.
  • The transition function $f : N \to S$ is given by:
    $f(v_x) = 1$ if $[v_x]_1 > 0$, and $f(v_x) = 0$ if $[v_x]_1 = 0$.
Theorem 3.
The CA $D$ with initial configuration $A$ is equivalent to dilating the set $A$ by $B$ in the first iteration.
Proof.
If $x \in A \oplus B$, then by Theorem 1, $(\hat{B})_x \cap A \neq \emptyset$. Therefore, there exists $y$ such that $y \in (\hat{B})_x \cap A$. Since we are dealing with binary images, this means that $y$ has value 1 and $y \in v_x$; therefore $[v_x]_1 > 0$, and then $f(v_x) = 1$. If $x \notin A \oplus B$, then $(\hat{B})_x \cap A = \emptyset$. Since we are dealing with binary images, this means that the neighborhood $v_x$ is formed only by cells with value zero, so $[v_x]_1 = 0$ and therefore $f(v_x) = 0$. Conversely, if $x \in \mathcal{L}$ with $f(v_x) = 1$, then $[v_x]_1 > 0$, so there is a cell with value 1 in the neighborhood $v_x$; that is, there exists $y$ such that $y \in v_x$ and $y \in A$, since $A$ is the initial configuration of the CA. Therefore $y \in (\hat{B})_x$ and $y \in A$, so $y \in (\hat{B})_x \cap A$, and it follows by Theorem 1 that $x \in A \oplus B$. If $x \in \mathcal{L}$ with $f(v_x) = 0$, then $[v_x]_1 = 0$, which means that the neighborhood $v_x$ is formed only by cells with value zero, i.e., $v_x \cap A = \emptyset$, so $(\hat{B})_x \cap A = \emptyset$; therefore $x \notin A \oplus B$. □
The CA of the previous theorem is called Cellular Dilation (CAD). Let $A, B \subseteq \mathbb{Z}^2$. Hereinafter, $E = (\mathcal{L}, S, N, f)$ is a CA with initial configuration $A$, defined as follows:
  • $\mathcal{L} = \mathbb{Z}^2$.
  • $S = \{0, 1\}$.
  • $N = \{v_x \mid x \in \mathcal{L}\}$ with $v_x = (B)_x$.
  • The transition function $f : N \to S$ is given by:
    $f(v_x) = 1$ if $[v_x]_1 = |B|$, and $f(v_x) = 0$ if $[v_x]_1 < |B|$.
Theorem 4.
The CA $E$ with initial configuration $A$ is equivalent to eroding the set $A$ by $B$ in the first iteration.
Proof.
If $x \in A \ominus B$, then $(B)_x \subseteq A$ by Theorem 2; because $(B)_x$ is the set $B$ translated by $x$, $(B)_x$ must contain as many cells with value 1 as $B$; therefore $[v_x]_1 = |B|$ and $f(v_x) = 1$. If $x \notin A \ominus B$, then $(B)_x \not\subseteq A$, i.e., there exists $y \in (B)_x$ such that $y \notin A$; this means that there is a cell of $(B)_x$ whose value is zero in $v_x$, so $[v_x]_1 < |B|$ and therefore $f(v_x) = 0$. Conversely, if $x$ is a cell with $f(v_x) = 1$, then $[v_x]_1 = |B|$, so $(B)_x \subseteq A$ and, by Theorem 2, $x \in A \ominus B$. If $x$ is a cell with $f(v_x) = 0$, then $[v_x]_1 < |B|$, which means that there is a cell in $v_x$ with value zero that has value 1 in $(B)_x$; therefore there exists $y \in (B)_x$ such that $y \notin A$, so $(B)_x \not\subseteq A$ and $x \notin A \ominus B$. □
The CA of the previous theorem is called Cell Erosion (CAE).
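As an illustrative sketch (not the paper's code), the CAD and CAE transition rules can be realized by counting the ones inside each cell's neighborhood. With a symmetric 3 × 3 Moore structuring element, the reflection of B coincides with B, so a single neighborhood count serves both rules:

```python
import numpy as np
from scipy.ndimage import correlate

B = np.ones((3, 3), dtype=int)            # Moore neighborhood as structuring element (symmetric)

def cad_step(config):
    """Cellular Dilation: f(v_x) = 1 iff [v_x]_1 > 0 (Theorem 3)."""
    counts = correlate(config, B, mode="constant", cval=0)   # [v_x]_1 for every cell x
    return (counts > 0).astype(int)

def cae_step(config):
    """Cell Erosion: f(v_x) = 1 iff [v_x]_1 = |B| (Theorem 4)."""
    counts = correlate(config, B, mode="constant", cval=0)
    return (counts == B.sum()).astype(int)

A = np.zeros((7, 7), dtype=int)
A[2:5, 2:5] = 1                            # a 3x3 block as initial configuration
assert (cae_step(cad_step(A)) == A).all()  # for this A, one dilation followed by one erosion recovers it
```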
In what follows, consider the set $A = \{0, 1\}$ and the fundamental set $FS = \{(x^\mu, y^\mu) \mid \mu = 1, 2, \ldots, p\}$ with $x^\mu \in A^n$ and $y^\mu \in A^m$. The lattice $\mathcal{L}$ for the CA consists of the matrix of size $2m \times 2n$ with the first index at $(0, 0)$. The set $S = \{0, 1\}$ is the finite set of states. Let $I = \{i \mid i = 2k \ \mathrm{for}\ k = 0, 1, \ldots, n-1\}$ and $J = \{j \mid j = 2k + 1 \ \mathrm{for}\ k = 0, 1, \ldots, m-1\}$. Consider the partition of $\mathcal{L}$ formed by the family of subsets $I_J = \{v_{(i,j)} \mid (i, j) \in I \times J\}$ with $v_{(i,j)} = \{(i, j), (i, j-1), (i+1, j), (i+1, j-1)\}$. Since $I_J$ is a partition of $\mathcal{L}$, given $\ell \in \mathcal{L}$, there exists a unique $(i, j) \in I \times J$ such that $\ell \in v_{(i,j)}$. We denote this single element by $v_\ell$, i.e., $v_\ell = v_{(i,j)}$. For example, if $\ell = (3, 0)$, then $v_{(3,0)} = v_{(2,1)} = \{(2, 1), (2, 0), (3, 1), (3, 0)\}$.
From the above, the set of neighborhoods is defined as:
$N = \{v_\ell \mid \ell \in \mathcal{L}\}$
Definition 13.
Consider the set $A^k$. We define the projection function of the i-th component ($1 \leq i \leq k$), $Pr_i : A^k \to A$, as:
$Pr_i(z) = z_i$, with $z = (z_1, z_2, \ldots, z_k)$
Theorem 5.
If $(y_i, x_j) \in Pr_{yx} = \{(y_i, x_j) \mid y_i = Pr_i(y) \ \mathrm{and}\ x_j = Pr_j(x)\}$, then $(2j - 2 + y_i, 2i - 2 + x_j) \in v_{(2j-2,\, 2i-1)}$.
We define the set of lattice cells $FS_{\mathcal{L}} = \{(2j - 2 + y_i^\mu, 2i - 2 + x_j^\mu) \mid 1 \leq \mu \leq p, \ 1 \leq i \leq m \ \mathrm{and}\ 1 \leq j \leq n\}$.
Consider the CA $Q = (\mathcal{L}, S, N, f_Q)$ and $W = (\mathcal{L}, S, N, f_W)$ with $N = I_J$, where $f_Q : N \to S$ and $f_W : N \to S$ are defined as follows:
$f_Q(v_{(i,j)}) = 1$ if $(i, j) \in FS_{\mathcal{L}}$, and $f_Q(v_{(i,j)}) = 0$ if $(i, j) \notin FS_{\mathcal{L}}$
and
$f_W(v_{(i,j)})$ assigns 1 in position $(i+1, j)$ if cell $(i, j-1) = 1$, and 1 in position $(i, j-1)$ if cell $(i+1, j) = 1$.
We define the Associative CA (ACA) in its learning phase as:
$W \circ Q = (\mathcal{L}, S, N, f_A)$
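A rough sketch of the learning phase under the cell placement of Theorem 5 is shown below; the row/column convention and the 1-based indices are reconstructions from the text and should be treated as assumptions of this illustration:

```python
import numpy as np

def aca_learning_lattice(FS, n, m):
    """Place each fundamental pair (x, y) on the lattice:
    cell (2j-2 + y_i, 2i-2 + x_j) is set to 1 (1-based i, j as in the text).
    Here the first coordinate is treated as the row, so the array is 2n x 2m;
    the exact row/column convention is an assumption of this sketch."""
    lattice = np.zeros((2 * n, 2 * m), dtype=int)
    for x, y in FS:
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                lattice[2 * j - 2 + y[i - 1], 2 * i - 2 + x[j - 1]] = 1
    return lattice

# Tiny example: one fundamental pair with n = 2 input bits and m = 2 output bits.
FS = [((1, 0), (0, 1))]                     # x = (1, 0), y = (0, 1)
print(aca_learning_lattice(FS, n=2, m=2))
```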
The recovery phase for ACA uses the composition of the erosion and dilation CA. The algorithm that defines the recovery phase is shown in Algorithm 1.
Algorithm 1. ACA in recovery phase
Input: Fundamental set $FS = \{(x^\mu, y^\mu) \mid \mu = 1, 2, \ldots, p\}$; structuring element $B$; integer value $n_e$ (number of erosions); integer value $n_d$ (number of dilations); pattern to recover $\tilde{x} \in A^n$.
Output: Recovered pattern $\tilde{y} \in A^m$.
1. Build the learning ACA for $FS$.
2. Apply the cellular erosion $E$ with structuring element $B$, $n_e$ times, to the initial configuration of the learning ACA; that is, apply $E \circ E \circ \cdots \circ E$ ($n_e$ times) to the configuration of the ACA.
3. Apply the cellular dilation $D$ with structuring element $B$, $n_d$ times, to the configuration obtained in step 2; that is, apply $D \circ D \circ \cdots \circ D$ ($n_d$ times) to the configuration obtained in step 2.
4. For the input pattern $\tilde{x} \in A^n$, obtain the output pattern $\tilde{y} \in A^m$ by applying (where $C$ denotes the configuration obtained in step 3):
    for $i = 1$ to $m$ do
        $\tilde{y}_i = 1$
        for $j = 1$ to $n$ do
            if $\lnot(\tilde{x}_j = 0 \land C(2j-1, 2i-2) = 1)$ then
                if $\lnot(\tilde{x}_j = 1 \land (C(2j-1, 2i-1) = 1 \lor C(2j-1, 2i-2) = 1))$ then
                    $\tilde{y}_i = 0$
                    break
                end if
            end if
        end for
    end for

4. Experiments and Results

Dermoscopic images used in this work were obtained from the dermatological service of the Pedro Hispano Hospital in Matosinhos, Portugal. The images were captured under the same conditions through the Tuebinger Mole Analyzer system using 20× magnification and were stored in 8-bit RGB format with a resolution of 768 × 560 pixels. The image bank is made up of melanocytic lesions, including common nevi, atypical nevi, and melanomas. The PH2 database comprises a total of 200 dermoscopic images developed for research purposes in order to facilitate comparative studies on algorithms for the segmentation and classification of dermoscopic images. In this work, 10 images whose segmentation was not carried out properly due to lack of sharpness were discarded; therefore, a total of 190 dermoscopic images obtained from the PH2 database [36] were used. The proposed methodology was applied to each image: first, the segmentation of the binarized lesion was obtained (the contour and the color image of the lesion, as shown in Figure 3); then, a total of 11 statistical features were extracted (as shown in Table 1); and finally, the classifier model based on cellular automata was applied. Algorithm 1 shows the ACA model in its recovery phase; the Moore neighborhood was used as the structuring element. The classification of lesions was evaluated using three metrics: sensitivity (SE), specificity (SP), and accuracy (ACC). These metrics are obtained from the confusion matrix and are given by:
$SE = TP / (TP + FN)$, $SP = TN / (TN + FP)$, $ACC = (TN + TP) / (FN + FP + TN + TP)$
where TP = True Positives, FP = False Positives, TN = True Negatives, and FN = False Negatives.
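As a quick, illustrative check, the three metrics can be computed directly from the confusion matrix reported in Table 2:

```python
# Confusion matrix values reported in Table 2.
TP, FP, FN, TN = 34, 2, 2, 152

SE = TP / (TP + FN)                       # sensitivity
SP = TN / (TN + FP)                       # specificity
ACC = (TN + TP) / (FN + FP + TN + TP)     # accuracy

print(round(SE, 3), round(SP, 3), round(ACC, 3))   # 0.944 0.987 0.979 (reported in the text as 0.944, 0.987, and 0.978)
```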
Table 2 shows the values of the confusion matrix, and Table 3 shows a comparison between the proposed model and other works using the SE, SP, and ACC metrics.
Figure 4 shows the ROC plots of the models in Table 3 and the respective area under the curve (AUC).

5. Discussion

This paper proposes a methodology that allows the detection of melanoma in dermoscopic images. The methodology consists of three distinguishable stages: segmentation of the lesions, feature extraction, and classification. The segmentation of the lesions was carried out using methods in the spatial domain and morphological operators. For feature extraction, the 11 statistical features shown in Table 1 were used, and for classification, the ACA model, an associative model based on cellular automata, was used.
Table 2 shows the confusion matrix, where it is observed that, out of a total of 190 images considered, the model classified two cases as false positives and two cases as false negatives. This is possibly because these cases lie on the classification boundary between lesions that present melanoma and those that do not, or correspond to atypical situations that the model has not learned, or possibly because the segmentation process yielded image regions that, due to poor clarity, were classified erroneously.
Table 3 and Figure 4 show the values of the proposed model and its comparison with other models that solve the same problem. Table 3 shows the evaluated metrics, which are accuracy, sensitivity, and specificity, and Figure 4 shows the ROC graphs and the areas under the curve. As the results show, the sensitivity yielded by the proposed model is close to the mean: it is higher than that of the models proposed in [18,37,39,42] but lower than that of the models proposed in [14,38,40,41]. This metric refers to the ability of the model to adequately diagnose lesions that present melanoma; as such, it is a metric that must be improved in the proposed model to obtain better results. However, with respect to the specificity metric, the proposed model yielded results superior to all the models with which it was compared; this metric refers to the ability of the model to diagnose lesions that do not correspond to the presence of melanoma. On the other hand, the proposed model also showed higher accuracy than the rest of the compared models, with an accuracy of 0.978. This is also reflected in the area under the ROC curve of the proposed model shown in Figure 4, which is 0.965 and higher than the area of all the ROC curves compared. The AUC reflects how well the model discriminates lesions with the presence and absence of melanoma; the closer its value is to 1, the greater its discriminative capacity. As can be seen, the proposed model yields competitive results and can be considered for use in the detection of skin diseases that are not necessarily melanoma.
The main limitations of this work are due to the number of images in the PH2 database, in addition to the fact that the same conditions must be met for the acquisition of the images. It would be convenient to run tests with more dermoscopic images obtained under the various conditions that may occur when images are acquired by specialists who use dermoscopy for the detection of melanoma. On the part of the classifier, it is limited to structuring elements of the Moore neighborhood form; however, other types of structuring elements can be considered.

6. Conclusions

In this work, a methodology was presented that allows the detection of melanoma in dermoscopic images using a classifier based on cellular automata. The results of the metrics evaluated (sensitivity, specificity, and accuracy) show that the proposed method is more effective than other state-of-the-art methods for the detection of melanoma in dermoscopic images (Table 3).

Author Contributions

Conceptualization, B.L.-B.; methodology, B.L.-B. and J.C.M.-P.; validation, J.C.M.-P.; investigation, R.F.-C.; writing—original draft preparation, J.C.-G. and V.M.S.-G.; writing—review and editing, B.L.-B., R.F.-C., J.C.M.-P., J.C.-G. and V.M.S.-G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

The dataset used is the publicly available PH2 dataset, which was developed for research purposes.

Data Availability Statement

Publicly available datasets were analyzed in this study. This data can be found here: reference [36]; https://www.fc.up.pt/addi/ph2%20database.html (accessed on 7 December 2021).

Acknowledgments

The authors would like to thank the Instituto Politécnico Nacional (Secretaría Académica, COFAA, EDD, SIP, ESCOM and CIDETEC) for their financial support to develop this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhen, M.; Tavares, J.M. Effective features to classify skin lesions in dermoscopic images. Expert Syst. Appl. 2017, 84, 92–101.
  2. Chatterjee, S.; Dey, D.; Munshi, S. Optimal selection of features using wavelet fractal descriptors and automatic correlation bias reduction for classifying skin lesion. Biomed. Signal Process. Control 2018, 40, 252–262.
  3. Craythorme, E.; Al-Niami, F. Skin cancer. Medicine 2017, 45, 431–434.
  4. Caini, S.; Boniol, M.; Tosti, G.; Magi, S.; Medri, M.; Stanganelli, I.; Palli, D.; Assedi, M.; Del Marmol, V.; Gandini, S. Vitamin D and melanoma and non-melanoma skin cancer and prognosis: A comprehensive review and meta-analysis. Eur. J. Cancer 2014, 50, 2649–2658.
  5. Xu, H.; Berendt, R.; Jha, N.; Mandal, M. Automatic measurement of melanoma depth of invasion in skin histopathological images. Micron 2017, 97, 56–67.
  6. El Abbad, N.; Faisal, Z. Detection and analysis of skin cancer from skin lesion. Int. J. Appl. Eng. Res. 2017, 12, 9046–9052.
  7. Boespflug, A.; Perier-Muzet, M.; Phan, A.; Dhaille, F.; Assouly, P.; Thomas, L.; Petit, A. Dermatoscopia de las lesiones cutáneas no neoplásicas. EMC-Dermatología 2018, 52, 1–9.
  8. Rao, B.K.; Ahn, C.S. Dermatoscopy for melanoma and pigmented lesions. Dermatol. Clin. 2012, 30, 413–434.
  9. Gallegos-Hernández, J.F.; Ortiz-Maldondado, A.L.; Minauro-Muñoz, G.G.; Arias-Ceballos, H.; Hernández-Sanjuan, M. Dermoscopy in cutaneous melanoma. Cirugía y Cirujanos 2015, 83, 107–111.
  10. Pastar, Z.; Lipozencic, J. Significance of dermoscopy in genital dermatoses. Clin. Dermatol. 2014, 32, 315–318.
  11. Barata, C.; Celebi, M.E.; Marques, J.S. Development of a clinically oriented system for melanoma diagnosis. Pattern Recognit. 2017, 69, 270–285.
  12. Torkashvand, F.; Fartash, M. Automatic segmentation of skin lesion using Markov random field. Can. J. Basic Appl. Sci. 2015, 3, 93–107.
  13. Dalila, F.; Zohra, A.; Reda, K.; Hocine, C. Segmentation and classification of melanoma and benign skin lesions. Optik 2017, 140, 749–761.
  14. Shan, P.; Wang, Y.; Fu, C.; Song, W.; Chen, J. Automatic skin lesion segmentation based on FC-DPN. Comput. Biol. Med. 2020, 123, 103762.
  15. Mahbod, A.; Tschandl, P.; Langs, G.; Ecker, R.; Ellinger, I. The effects of skin lesion segmentation on the performance of dermoscopic image classification. Comput. Methods Programs Biomed. 2020, 197, 105725.
  16. Pereira, P.M.M.; Fonseca-Pinto, R.; Paiva, R.P.; Assuncao, P.A.A.; Tavora, L.M.N.; Thomaz, L.A.; Faria, S.M.M. Dermoscopic skin lesion image segmentation based on Local Binary Pattern Clustering: Comparative study. Biomed. Signal Process. Control 2020, 59, 101924.
  17. Rigel, D.S.; Russak, J.; Friedman, R. The evolution of melanoma diagnosis: 25 years beyond the ABCDs. CA Cancer J. Clin. 2010, 60, 301–316.
  18. Mohammed, E.; Jadhav, M. Analysis of dermoscopic images by using ABCD rule for early detection of skin cancer. Glob. Transit. Proc. 2021, 2, 1–7.
  19. Singh, L.; Ram, R.; Prakash, S. Designing a Retrieval-Based Diagnostic Aid using Effective Features to Classify Skin Lesion in Dermoscopic Images. Procedia Comput. Sci. 2020, 167, 2172–2180.
  20. Zakeri, A.; Hokmabadi, A. Improvement in the diagnosis of melanoma and dysplastic lesions by introducing ABCD-PDT features and a hybrid classifier. Biocybern. Biomed. Eng. 2018, 38, 456–466.
  21. Monisha, M.; Suresh, A.; Bapu, B.R.; Rashmi, M. Classification of malignant melanoma and benign skin lesion by using back propagation neural network and ABCD rule. Clust. Comput. 2019, 22, 12897–12907.
  22. Alfed, N.; Khelifi, F. Bagged textural and color features for melanoma skin cancer detection in dermoscopic and standard images. Expert Syst. Appl. 2017, 90, 101–110.
  23. Stoecker, W.; Wronkiewiecz, M.; Chowdhury, R.; Stanley, R.; Xu, J.; Bangert, A.; Shrestha, B.; Calcara, D.; Rabinovitz, H.; Oliviero, M.; et al. Detection of granularity in dermoscopy images of malignant melanoma using color and texture features. Comput. Med. Imaging Graph. 2011, 35, 144–147.
  24. Pathan, S.; Gopalakrishna, K.; Siddalingaswamy, P. Automated detection of melanocytes related pigmented skin lesion: A clinical framework. Biomed. Signal Process. Control 2019, 51, 59–72.
  25. Amin, J.; Sharif, A.; Gul, N.; Almas, M.; Wasif, M.; Azam, F.; Ahmad, S. Integrated design of deep features fusion for localization and classification of skin cancer. Pattern Recognit. Lett. 2020, 131, 63–70.
  26. Hosseinzadeh, S.; Hosseinzadeh, P. A comparative study of deep learning architectures on melanoma detection. Tissue Cell 2019, 58, 76–83.
  27. Ganster, H.; Pinz, P.; Rohrer, R.; Wildling, E.; Binder, M.; Kittler, H. Automated melanoma recognition. IEEE Trans. Med. Imaging 2001, 20, 233–239.
  28. Luna-Benoso, B.; Flores-Carapia, R.; Yáñez-Márquez, C. Associative Memories Based on Cellular Automata: An Application to Pattern Recognition. Appl. Math. Sci. 2013, 7, 857–866.
  29. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 2nd ed.; Prentice-Hall: Hoboken, NJ, USA, 2002.
  30. Shih, F.; Cheng, S. Adaptive mathematical morphology for edge linking. Inf. Sci. 2004, 167, 9–21.
  31. Bloch, I. On links between mathematical morphology and rough sets. Pattern Recognit. 2000, 33, 1487–1496.
  32. Luna-Benoso, B.; Yáñez-Márquez, C.; Figueroa-Nazuno, J.; López-Yañéz, I. Cellular Mathematical Morphology. In Proceedings of the IEEE Sixth Mexican International Conference on Artificial Intelligence, Aguascalientes, Mexico, 4–10 November 2008; pp. 105–112.
  33. Zhang, P.; Verma, B.; Kumar, K. Neural vs statistical classifier in conjunction with genetic algorithm based feature selection. Pattern Recognit. Lett. 2005, 26, 909–919.
  34. Subashini, T.S.; Ramalingam, V.; Palanivel, S. Automated assessment of breast tissue density in digital mammograms. Comput. Vis. Image Underst. 2010, 114, 33–43.
  35. Santiago-Moreno, R.; Sossa, H.; Gutierrez-Hernández, D.A.; Zamudio, V.; Hernández-Bautista, I.; Valadez-Godínez, S. Novel mathematical model of breast cancer diagnostics using an associative pattern classification. Diagnostics 2020, 10, 136.
  36. Mendonca, T.; Ferreira, P.M.; Marques, J.; Marcal, A.R.S.; Rozeira, J. PH2-A dermoscopic image database for research and benchmarking. In Proceedings of the 35th International Conference of the IEEE Engineering in Medicine and Biology Society, Osaka, Japan, 3–7 July 2013.
  37. Goyal, M.; Oakley, A.; Bansal, P.; Dancey, D.; Yap, M.H. Skin lesion segmentation in dermoscopic images with ensemble deep learning methods. IEEE Access 2019, 8, 4171–4181.
  38. Bi, L.; Kim, J.; Ahn, E.; Kumar, A.; Feng, D.; Fulham, M. Step-wise integration of deep class-specific learning for dermoscopic image segmentation. Pattern Recognit. 2019, 85, 78–89.
  39. Eltayef, K.; Li, Y.; Liu, X. Detection of melanoma skin cancer in dermoscopic images. J. Phys. Conf. Ser. 2017, 787, 012034.
  40. Nida, N.; Irtaza, A.; Javed, A.; Yousaf, M. Melanoma lesion detection and segmentation using deep region-based convolutional neural network and fuzzy C-means clustering. Int. J. Med. Inform. 2019, 124, 37–48.
  41. Tajeddin, N.Z.; Asl, B.M. Melanoma recognition in dermoscopic images using lesion's peripheral region information. Comput. Methods Programs Biomed. 2018, 163, 143–153.
  42. Al-Masni, M.A.; Al-Antari, M.A.; Choi, M.T.; Han, S.M.; Kim, T.S. Skin lesion segmentation in dermoscopic image via deep full resolution convolutional networks. Comput. Methods Programs Biomed. 2018, 162, 221–231.
Figure 1. Definition of an image.
Figure 2. Block diagram of lesion segmentation in dermoscopic images.
Figure 3. Lesion segmentation: (a) dermoscopic image; (b) median filter; (c) Otsu's method; (d) corner elimination; (e) erosions with the Moore neighborhood; (f) dilations with the Moore neighborhood; (g) segmented lesion.
Figure 4. ROC graph for melanoma detection of different learning methods.
Table 1. Statistical characteristics.
Statistical Feature | Expression
Mean | $\mu = \frac{1}{N}\sum_{i}\sum_{j} f_{ij}$
Standard Deviation | $\sigma = \sqrt{\frac{1}{N}\sum_{i}\sum_{j}(f_{ij} - \mu)^2}$
Smoothness | $R = 1 - \frac{1}{1 + \sigma^2}$
Skewness | $S_k = \frac{\sum_{i}\sum_{j}(f_{ij} - \mu)^3}{N\sigma^3}$
Kurtosis | $K = \frac{\sum_{i}\sum_{j}(f_{ij} - \mu)^4}{(N-1)\sigma^4}$
Uniformity | $U = \sum_{i=0}^{L-1} P(i)^2$
Average Histogram | $AHg = \frac{1}{L}\sum_{i=0}^{L-1} T(i)$
Modified Skew | $MSK = \frac{1}{\sigma^3}\sum_{i}\sum_{j}(f_{ij} - \mu)^3 P(f_{ij})$
Modified Standard Deviation | $\sigma_m = \sqrt{\sum_{i}\sum_{j}(f_{ij} - \mu)^2 P(f_{ij})}$
Entropy | $Etp = -\sum_{j=0}^{L-1} P(j)\log_2[P(j)]$
Modified Entropy | $MEtp = -\sum_{i}\sum_{j} P(f_{ij})\log_2[P(I(f_{ij}))]$
Table 2. Confusion matrix.
 | True Condition Positive | True Condition Negative
Test Result Positive | TP = 34 | FP = 2
Test Result Negative | FN = 2 | TN = 152
Table 3. Comparison of melanoma diagnosis methods through accuracy (ACC), sensitivity (SE), and specificity (SP).
Method | Classifier | ACC | SE | SP
Shan et al. [14] | FC-DPN | 0.936 | 0.947 | 0.962
Mohammed et al. [18] | TDS | 0.84 | 0.605 | 0.895
Goyal et al. [37] | Ensemble-S | 0.938 | 0.932 | 0.929
Bi et al. [38] | DCL-PSI | 0.966 | 0.971 | 0.958
Eltayef et al. [39] | FCM-MRF | 0.94 | 0.932 | 0.980
Nida et al. [40] | RCNN-FCM | 0.948 | 0.978 | 0.941
Tajeddin et al. [41] | RUSBoost | 0.950 | 0.950 | 0.950
Al-Masni et al. [42] | FrCN | 0.950 | 0.937 | 0.956
Proposed | ACA | 0.978 | 0.944 | 0.987
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
