Article

Fractional-Order Colour Image Processing

by Manuel Henriques, Duarte Valério, Paulo Gordo and Rui Melicio
1 IDMEC, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa, Portugal
2 CENTRA, Faculdade de Ciências, Universidade de Lisboa, Campo Grande, 1749-016 Lisboa, Portugal
3 ICT, Universidade de Évora, Rua Romão Ramalho 59, 7000-671 Évora, Portugal
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(5), 457; https://doi.org/10.3390/math9050457
Submission received: 17 January 2021 / Revised: 19 February 2021 / Accepted: 20 February 2021 / Published: 24 February 2021
(This article belongs to the Special Issue Fractional Calculus and Nonlinear Systems)

Abstract

Many image processing algorithms make use of derivatives. In such cases, fractional derivatives allow an extra degree of freedom, which can be used to obtain better results in applications such as edge detection. Published literature concentrates on grey-scale images; in this paper, algorithms of six fractional detectors for colour images are implemented, and their performance is illustrated. The algorithms are: Canny, Sobel, Roberts, Laplacian of Gaussian, CRONE, and fractional derivative.

1. Introduction

Conventional edge detectors are based on integer derivatives. With the introduction of fractional calculus, some new non-integer edge detectors were created [1], and existing integer ones were adapted [2,3,4].
The aforementioned detectors were developed for grey-scale images. Since the input is often a colour image, two options are available. One is to convert the image to grey-scale and then apply the detector; this may compromise the performance of edge detection. The other is to create colour-based edge detectors [5,6].
This paper presents, in sequence, three variants of six different detection algorithms: conventional grey-scale integer detectors, grey-scale fractional detectors, and colour-based fractional detectors. The first two can be found in the literature; to the best of our knowledge, the colour-based fractional detectors presented in this paper are the first of their kind, with the exception of the Canny detector, already given for integer orders in [5]. While fractional derivatives are linear operators [7], the algorithms for image processing include other steps and operations, resulting in a non-linear treatment of the input data.
This paper is organised in the following manner. Section 2 explains the theoretical formulations of the edge detectors. Section 3 presents an application of the formulated methods to satellite images, with results that illustrate their performance. Section 4 discusses the results and draws conclusions.

2. Materials and Methods

2.1. Grünwald-Letnikov Definition

The first-order derivative of a function is given by
$$D^{1}f(x)=\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}.$$
Iterating, it is possible to arrive at the nth derivative of a function. A general expression can be deduced by induction:
$$D^{n}f(x)=\lim_{h\to 0}h^{-n}\sum_{m=0}^{n}(-1)^{m}\binom{n}{m}f(x-mh).$$
Combinations of n things taken m at a time are given by
$$\binom{n}{m}=\frac{n!}{m!\,(n-m)!}.$$
The Grünwald–Letnikov (GL) derivative is a generalization of the derivative in (2). The idea behind it is that h should approach 0 as n approaches infinity. However, before doing so, the binomial coefficients must be extended to cope with real numbers. For that, the Euler gamma function is used instead of the factorials in (3):
$$\binom{a}{b}=\begin{cases}\dfrac{\Gamma(a+1)}{\Gamma(b+1)\,\Gamma(a-b+1)}, & \text{if } a,b,a-b\notin\mathbb{Z}^{-}\\[2mm]\dfrac{(-1)^{b}\,\Gamma(b-a)}{\Gamma(b+1)\,\Gamma(-a)}, & \text{if } a\in\mathbb{Z}^{-}\wedge b\in\mathbb{Z}_{0}^{+}\\[2mm]0, & \text{if } \left[b\in\mathbb{Z}^{-}\vee b-a\in\mathbb{N}\right]\wedge\left[a\notin\mathbb{Z}^{-}\vee\left(a,b\in\mathbb{Z}^{-}\wedge|a|>|b|\right)\right]\end{cases}$$
Combining (2) with (4) provides the main justification for the following extension of the integer-order derivative to any real-order α , which was proposed independently by Grünwald [8] and Letnikov [9]:
$$^{\mathrm{GL}}D^{\alpha}f(t)=\lim_{h\to 0^{+}}h^{-\alpha}\sum_{k=0}^{\infty}(-1)^{k}\binom{\alpha}{k}f(t-kh),\quad t\in\mathbb{R}.$$
For practical reasons, a truncation of the expression above was introduced, corresponding to an initial value c. This version of the derivative starting at c can be applied to functions that are not defined in the interval from −∞ to c:
$$_{c}D_{t}^{\alpha}f(t)=\lim_{h\to 0^{+}}h^{-\alpha}\sum_{k=0}^{N}(-1)^{k}\binom{\alpha}{k}f(t-kh),\quad N=\left\lfloor\frac{t-c}{h}\right\rfloor,\quad t>c.$$
Expression (6) is the formulation that was applied to the different integer edge detectors in order to adapt them to fractional orders. The corresponding code is available from a public repository [10].
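The authors' implementation is in MATLAB and available from [10]; the following Python sketch (ours, for illustration only, with hypothetical function names) shows how the truncated GL sum (6) can be evaluated on uniformly sampled data, using the usual recursion for the coefficients $(-1)^k\binom{\alpha}{k}$.

```python
import numpy as np

def gl_coefficients(alpha, n_terms):
    """Coefficients (-1)^k * C(alpha, k) of the truncated GL sum (6).
    Computed with the recursion w_0 = 1, w_k = w_{k-1} * (k - 1 - alpha) / k."""
    w = np.ones(n_terms)
    for k in range(1, n_terms):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def gl_derivative(samples, alpha, h):
    """Truncated GL derivative of uniformly sampled values, starting at the first sample."""
    f = np.asarray(samples, dtype=float)
    w = gl_coefficients(alpha, f.size)
    out = np.empty_like(f)
    for i in range(f.size):
        out[i] = np.dot(w[: i + 1], f[i::-1]) / h ** alpha
    return out

if __name__ == "__main__":
    x = np.linspace(0.0, 2.0, 201)
    # Half-derivative of x^2; the analytic result is Gamma(3)/Gamma(2.5) * x^1.5
    print(gl_derivative(x ** 2, alpha=0.5, h=x[1] - x[0])[-3:])
```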

2.2. Derivative Filters

Derivative filters measure the rate of change in pixel value of a digital image. When filters of this kind are used, the result allows for enhancement of contrast, detection of boundaries and edges, and measurement of feature orientation.
Convolution of the specimen image with derivative filters is known as a derivative filtering operation. In most cases, there is one filter for each direction; thus, the convolution is performed twice. In two dimensions, a gradient can be obtained by combining the convolutions in x and y:
$$\nabla f=\left[\frac{\partial f}{\partial x},\ \frac{\partial f}{\partial y}\right].$$
The gradient points in the direction of the largest intensity increase. The magnitude (8) and orientation (9) of this gradient are frequently used to combine the two convolutions in the processing of images with edges with different orientations:
$$\|\nabla f\|=\sqrt{\left(\frac{\partial f}{\partial x}\right)^{2}+\left(\frac{\partial f}{\partial y}\right)^{2}}$$
$$\theta=\tan^{-1}\left(\frac{\partial f}{\partial y}\Big/\frac{\partial f}{\partial x}\right).$$
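As an illustration of this generic two-convolution scheme (not the authors' MATLAB code), the sketch below convolves an image with a pair of directional kernels and combines the results with (8) and (9); the Sobel kernel used in the example is only a placeholder for whatever derivative filter is chosen.

```python
import numpy as np
from scipy.ndimage import convolve

def gradient_magnitude_orientation(image, kx, ky):
    """Convolve a grey-scale image with the x and y kernels and combine with (8)-(9)."""
    fx = convolve(image.astype(float), kx, mode="nearest")
    fy = convolve(image.astype(float), ky, mode="nearest")
    magnitude = np.hypot(fx, fy)       # equation (8)
    orientation = np.arctan2(fy, fx)   # equation (9), resolved over all four quadrants
    return magnitude, orientation

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((64, 64))
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    mag, ang = gradient_magnitude_orientation(image, sobel_x, sobel_x.T)
    print(mag.shape, float(ang.min()), float(ang.max()))
```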

2.3. Canny Edge Detector

Original grey-scale: The Canny edge detector is a very popular edge detection algorithm. It was developed in 1986 by John F. Canny [11]. The algorithm is composed of the following steps: noise reduction, gradient calculation, non-maximum suppression, and hysteresis thresholding.
The first step of the Canny algorithm is noise reduction. Since image processing is always vulnerable to noise, it is important to remove or reduce it before processing. This is done by the convolution of the image f ( x , y ) with a Gaussian filter, defined as
$$G(x,y,\sigma)=\frac{1}{2\pi\sigma^{2}}\exp\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right).$$
Then, a simple 2D, first-derivative operator (which, in the case of the algorithm used in this work, is the derivative of the Gaussian function used to smooth the image) is applied to the already smoothed image $G(x,y,\sigma)*f(x,y)$. This highlights the zones of the image where the first spatial derivatives are significant:
$$g(x,y)=\nabla\left[G(x,y,\sigma)*f(x,y)\right]=\left[\nabla G(x,y,\sigma)\right]*f(x,y).$$
Finally, the gradients in each direction E x and E y are given by
$$E_{x}=\frac{\partial G(x,y,\sigma)}{\partial x}*f(x,y),\qquad E_{y}=\frac{\partial G(x,y,\sigma)}{\partial y}*f(x,y).$$
This step of the process is called gradient calculation.
After computing gradient magnitude and orientation with (8) and (9), a full scan is performed in order to remove any unwanted pixels which may not constitute edges (non-maximum suppression).
The final step of the algorithm is hysteresis thresholding. In this phase, the algorithm decides which edges are kept in the output image. Each edge has an intensity proportional to the magnitude of the gradient. For this, two threshold values are defined, a minimum and a maximum. All gradients higher than the maximum threshold are considered “sure-edges”. In contrast, gradients lower than the minimum threshold are considered “non-edges”. For the gradients that lie between the two values, two instances may occur [12] (a short sketch of this step is given after the list):
  • If the pixels in question are connected to “sure-edge” pixels, they are considered to be part of the edges;
  • Otherwise, these pixels are also discarded and considered non-edges.
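A minimal sketch of the hysteresis step, assuming 4-connectivity and using SciPy's connected-component labelling; this is an illustration of the rule above, not the implementation in [10].

```python
import numpy as np
from scipy.ndimage import label

def hysteresis_threshold(gradient_magnitude, low, high):
    """Keep "weak" pixels (between the two thresholds) only when their connected
    component also contains at least one "sure-edge" pixel above the high threshold."""
    strong = gradient_magnitude >= high
    candidate = gradient_magnitude >= low        # strong and weak pixels together
    labels, n = label(candidate)                 # 4-connected components
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True       # components touching a strong pixel
    keep[0] = False                              # label 0 is the background
    return keep[labels]

if __name__ == "__main__":
    g = np.array([[0.1, 0.2, 0.9, 0.4],
                  [0.0, 0.3, 0.5, 0.4],
                  [0.0, 0.0, 0.1, 0.0]])
    print(hysteresis_threshold(g, low=0.3, high=0.8).astype(int))
```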
Fractional grey-scale: When adapting the grey-scale fractional Canny, the only change was that, instead of calculating the first-order gradient of the Gaussian kernel, the GL derivative was applied to the Gaussian function with the desired order α . This means that for each point of the Gaussian mask, its fractional derivative is obtained:
$$_{c}D_{t}^{\alpha}f_{G}(t)=\lim_{h\to 0^{+}}h^{-\alpha}\sum_{k=0}^{N}(-1)^{k}\binom{\alpha}{k}f_{G}(t-kh),\quad N=\left\lfloor\frac{t-c}{h}\right\rfloor,\quad t>c,$$
where $\binom{\alpha}{k}$ is given by (4).
After computing the fractional derivative, the algorithm follows the same steps of the conventional Canny, including non-maximum suppression and hysteresis thresholding.
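A sketch of how such a one-dimensional mask can be built: sample the Gaussian, then apply the truncated GL sum to the samples, point by point. Function and parameter names are hypothetical; the authors' MATLAB code is in [10].

```python
import numpy as np

def gl_weights(alpha, n):
    """(-1)^k C(alpha, k) for k = 0..n-1, computed recursively."""
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def fractional_gaussian_mask(alpha, sigma, half_size, h=1.0):
    """1D mask: truncated GL derivative of order alpha of a sampled Gaussian."""
    t = np.arange(-half_size, half_size + 1) * h
    gauss = np.exp(-t ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)
    w = gl_weights(alpha, t.size)
    mask = np.zeros_like(gauss)
    for i in range(t.size):
        # GL sum at t[i] uses the Gaussian samples t[i], t[i]-h, ... inside the mask
        mask[i] = np.dot(w[: i + 1], gauss[i::-1]) / h ** alpha
    return mask

if __name__ == "__main__":
    print(np.round(fractional_gaussian_mask(alpha=0.5, sigma=1.0, half_size=3), 4))
```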
Colour: In 1987, Kanade introduced an extension of the Canny operator [5] for colour edge detection. The operator is based on the same steps as the conventional Canny, but the computations are now vector-based. This means that the algorithm determines the first partial derivatives of the smoothed image in both x and y directions.
A three-component colour image assumes, for each of its points in the plane, a value which is a vector in the colour space. In the RGB space, which is a three-dimensional (3D) space to represent colour by a mixture of red (R), green (G), and blue (B), this corresponds to a function C = ( R , G , B ) . It is possible now to define the Jacobian matrix, which is the matrix that contains the first partial derivatives for each component of the colour vector:
$$J=\begin{bmatrix}R_{x} & R_{y}\\ G_{x} & G_{y}\\ B_{x} & B_{y}\end{bmatrix}=\begin{bmatrix}\mathbf{C}_{x}, & \mathbf{C}_{y}\end{bmatrix}.$$
Indexes x and y are used to represent partial derivatives:
$$R_{x}=\frac{\partial R}{\partial x}\quad\text{and}\quad R_{y}=\frac{\partial R}{\partial y}.$$
The direction along which the largest variation in the colour image can be found is the direction of the eigenvector of J T J that corresponds to the largest eigenvalue:
$$J^{T}J=\begin{bmatrix}J_{x} & J_{xy}\\ J_{yx} & J_{y}\end{bmatrix}$$
$$J_{x}=R_{x}^{2}+G_{x}^{2}+B_{x}^{2},\qquad J_{y}=R_{y}^{2}+G_{y}^{2}+B_{y}^{2},\qquad J_{xy}=J_{yx}=R_{x}R_{y}+G_{x}G_{y}+B_{x}B_{y}.$$
In order to calculate the magnitude, one has to solve $\det(J^{T}J-\lambda I)=0$, which yields
$$\lambda=\frac{J_{y}+J_{x}\pm\sqrt{(J_{y}+J_{x})^{2}-4\,(J_{x}J_{y}-J_{xy}^{2})}}{2}.$$
The orientation θ of a colour edge is determined in an image by
$$\tan(2\theta)=\frac{2\,\mathbf{C}_{x}\cdot\mathbf{C}_{y}}{\|\mathbf{C}_{x}\|^{2}-\|\mathbf{C}_{y}\|^{2}}.$$
After the magnitude is determined for each edge, non-maximum suppression is used. This eliminates broad edges, thanks to a threshold value.
According to the literature [13], even though colour edges and intensity edges are identical in over 90 % of the cases, the former describes object geometry in the scene better than the latter.
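The per-pixel construction of $J^{T}J$, its largest eigenvalue and the edge orientation can be vectorised as in the sketch below. SciPy's Sobel filter merely stands in for whichever smoothed derivative filters the colour Canny uses; names and choices are illustrative.

```python
import numpy as np
from scipy.ndimage import sobel   # stand-in for any pair of x/y derivative filters

def colour_gradient(image_rgb):
    """Largest eigenvalue of J^T J and edge orientation, per pixel, for an RGB image."""
    img = image_rgb.astype(float)
    cx = np.stack([sobel(img[..., c], axis=1) for c in range(3)], axis=-1)
    cy = np.stack([sobel(img[..., c], axis=0) for c in range(3)], axis=-1)
    jx = np.sum(cx ** 2, axis=-1)
    jy = np.sum(cy ** 2, axis=-1)
    jxy = np.sum(cx * cy, axis=-1)
    disc = np.sqrt((jx + jy) ** 2 - 4.0 * (jx * jy - jxy ** 2))
    lam_max = 0.5 * (jx + jy + disc)              # larger root of det(J^T J - lambda I) = 0
    theta = 0.5 * np.arctan2(2.0 * jxy, jx - jy)  # tan(2*theta) = 2 Jxy / (Jx - Jy)
    return lam_max, theta

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    lam, theta = colour_gradient(rng.random((32, 32, 3)))
    print(lam.shape, theta.shape)
```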

2.4. Sobel Edge Detector

Original grey-scale: The Sobel operator measures the spatial gradient of an image. In this way, regions where there are sudden increases of pixel intensity are highlighted. Such regions correspond to edges.
The operator consists of two masks, one for each direction of x and y ( G x and G y , respectively). Note that the mask, or kernel, for G y is nothing more than that for G x rotated 90 degrees [14]:
$$G_{x}:\ \begin{bmatrix}-1 & 0 & +1\\ -2 & 0 & +2\\ -1 & 0 & +1\end{bmatrix},\qquad G_{y}:\ \begin{bmatrix}+1 & +2 & +1\\ 0 & 0 & 0\\ -1 & -2 & -1\end{bmatrix}$$
Those edges that are vertical and horizontal in relation to the pixel grid cause a maximal response of these kernels. They can be applied to the input image separately. The resulting measurements of the gradient component in each direction ( G x and G y ) are thereafter combined, so as to find at each point both the magnitude of the gradient and its orientation, using (8) and (9), respectively.
The resulting image gradient components can be expressed as
$$G_{x}=-f(x-1,y-1)-2f(x-1,y)-f(x-1,y+1)+f(x+1,y-1)+2f(x+1,y)+f(x+1,y+1)$$
$$G_{y}=-f(x-1,y-1)-2f(x,y-1)-f(x+1,y-1)+f(x-1,y+1)+2f(x,y+1)+f(x+1,y+1).$$
Fractional grey-scale: Following the same line of thought, Yaacoub [2] presented a fractional Sobel operator.
Applying the GL definition (6) to $G_{x}$ (21), the fractional α-order derivative of $G_{x}$ yields
$$\begin{aligned}D^{\alpha}G_{x}={}&\frac{\alpha(\alpha+1)(\alpha+2)}{12}\left[f(x-4,y-1)+2f(x-4,y)+f(x-4,y+1)\right]\\&+\frac{\alpha(\alpha+1)}{4}\left[f(x-3,y-1)+2f(x-3,y)+f(x-3,y+1)\right]\\&+\left[\frac{\alpha}{2}-\frac{\alpha(\alpha+1)(\alpha+2)}{12}\right]\left[f(x-2,y-1)+2f(x-2,y)+f(x-2,y+1)\right]\\&+\left[\frac{1}{2}-\frac{\alpha(\alpha+1)}{4}\right]\left[f(x-1,y-1)+2f(x-1,y)+f(x-1,y+1)\right]\\&-\frac{\alpha}{2}\left[f(x,y-1)+2f(x,y)+f(x,y+1)\right]\\&+\frac{1}{2}\left[f(x+1,y-1)+2f(x+1,y)+f(x+1,y+1)\right].\end{aligned}$$
This gradient is obtained by convolving the image $f(x,y)$ with a filter mask:
$$\begin{bmatrix}\frac{\alpha(\alpha+1)(\alpha+2)}{12} & \frac{\alpha(\alpha+1)(\alpha+2)}{6} & \frac{\alpha(\alpha+1)(\alpha+2)}{12}\\[1mm]\frac{\alpha(\alpha+1)}{4} & \frac{\alpha(\alpha+1)}{2} & \frac{\alpha(\alpha+1)}{4}\\[1mm]\frac{\alpha}{2}-\frac{\alpha(\alpha+1)(\alpha+2)}{12} & \alpha-\frac{\alpha(\alpha+1)(\alpha+2)}{6} & \frac{\alpha}{2}-\frac{\alpha(\alpha+1)(\alpha+2)}{12}\\[1mm]\frac{1}{2}-\frac{\alpha(\alpha+1)}{4} & 1-\frac{\alpha(\alpha+1)}{2} & \frac{1}{2}-\frac{\alpha(\alpha+1)}{4}\\[1mm]-\frac{\alpha}{2} & \boldsymbol{-\alpha} & -\frac{\alpha}{2}\\[1mm]\frac{1}{2} & 1 & \frac{1}{2}\end{bmatrix}$$
Since the mask has an even number of rows, the origin is not centered. In the mask above, the origin is considered to be located on the fifth row, in the second column, shown in bold.
Similar reasoning can be applied to the y-direction. That is why, in this case, the mask in y is not the mask in x transposed.
According to the authors, this edge detector, compared with the conventional Sobel edge detector, resulted in thinner edges and reduced the number of false-edge pixels.
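For illustration only, the 6 × 3 mask (24) can be generated for any order α as the outer product of a column of fractional coefficients with the usual [1 2 1] smoothing row. The coefficients below follow the reconstruction of (24) given above, so the exact signs should be checked against [2]; this is not the authors' MATLAB code.

```python
import numpy as np

def fractional_sobel_x(alpha):
    """6x3 x-direction mask (24), origin on the fifth row, second column.
    Coefficients follow the reconstruction of (24) given in the text above."""
    c3 = alpha * (alpha + 1.0) * (alpha + 2.0) / 12.0
    c2 = alpha * (alpha + 1.0) / 4.0
    column = np.array([c3,
                       c2,
                       alpha / 2.0 - c3,
                       0.5 - c2,
                       -alpha / 2.0,
                       0.5])
    return np.outer(column, [1.0, 2.0, 1.0])   # [1 2 1] smoothing across each row

if __name__ == "__main__":
    print(np.round(fractional_sobel_x(0.8), 3))
```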
Colour: A novel, colour-based, fractional Sobel was introduced by applying the same colour-based formulation of Section 2.3, only this time with the mask in (24). A Jacobian was constructed, and the largest eigenvalue of J T J computed; this allows for discovery of the direction in the image, along which the largest variation in the chromatic image function occurs.

2.5. Roberts Edge Detector

Original grey-scale: The Roberts Cross operator [15] is a simpler, quick way to find the gradient of an image. It also finds zones with great variations in pixel intensity that correspond to edges.
The Roberts operator consists of a pair of 2 × 2 masks, again, one for each direction. Here, the mask to compute the gradient in one direction is the other mask rotated by 90° [14]:
$$\begin{bmatrix}+1 & 0\\ 0 & -1\end{bmatrix}\qquad\begin{bmatrix}0 & +1\\ -1 & 0\end{bmatrix}$$
The combination of the two gradients in order to find the magnitude and orientation is performed once more using the above-mentioned expressions.
Fractional grey-scale: The authors of [3] presented the application of the GL derivative to the integer Roberts edge detector and arrived at a kernel for a fractional-order operator. The Roberts expression for the gradient is
$$g(x,y)=\left\{[f(x,y+1)-f(x+1,y)]^{2}+[f(x+1,y+1)-f(x,y)]^{2}\right\}^{\frac{1}{2}}.$$
Combining (6) with (4), the authors arrived at expressions for the gradient’s components:
$$\frac{\partial^{\alpha}f(x,y)}{\partial x^{\alpha}}\approx f(x,y)+(-\alpha)f(x-1,y)+\frac{(-\alpha)(-\alpha+1)}{2}f(x-2,y)+\cdots+\frac{(-1)^{n-1}\,\Gamma(\alpha+1)}{(n-1)!\,\Gamma(\alpha-n+2)}f(x-n,y)$$
$$\frac{\partial^{\alpha}f(x,y)}{\partial y^{\alpha}}\approx f(x,y)+(-\alpha)f(x,y-1)+\frac{(-\alpha)(-\alpha+1)}{2}f(x,y-2)+\cdots+\frac{(-1)^{n-1}\,\Gamma(\alpha+1)}{(n-1)!\,\Gamma(\alpha-n+2)}f(x,y-n).$$
Referring to (27) and (28), the 3 × 3 fractional differential mask can be constructed in the eight central symmetric directions, viz. positive and negative x and y coordinates, and left and right downward and upward diagonals. The sum of the eight directional masks yields
$$\begin{bmatrix}\frac{\alpha^{2}-\alpha+2}{2} & \frac{\alpha^{2}-\alpha+2}{2} & \frac{\alpha^{2}-\alpha+2}{2}\\[1mm]\frac{\alpha^{2}-\alpha+2}{2} & -8\alpha & \frac{\alpha^{2}-\alpha+2}{2}\\[1mm]\frac{\alpha^{2}-\alpha+2}{2} & \frac{\alpha^{2}-\alpha+2}{2} & \frac{\alpha^{2}-\alpha+2}{2}\end{bmatrix}$$
Combining the fractional mask with the Roberts operator defined by (26), the authors arrived at a solution for edge detection in which the texture of the image is enhanced and small edges are also detected. The mathematical formulation for this combination is
$$D^{\alpha}[g(x,y)]=\frac{\partial^{\alpha}g(x,y)}{\partial x^{\alpha}}+\frac{\partial^{\alpha}g(x,y)}{\partial y^{\alpha}}$$
$$\begin{aligned}\frac{\partial^{\alpha}g(x,y)}{\partial x^{\alpha}}\approx{}& g(x,y)+(-\alpha)g(x-1,y)+\frac{(-\alpha)(-\alpha+1)}{2}g(x-2,y)+\cdots+\frac{(-1)^{n-1}\,\Gamma(\alpha+1)}{(n-1)!\,\Gamma(\alpha-n+2)}g(x-n,y)\\={}&\left\{[f(x,y+1)-f(x+1,y)]^{2}+[f(x+1,y+1)-f(x,y)]^{2}\right\}^{\frac{1}{2}}\\&+(-\alpha)\left\{[f(x-1,y+1)-f(x,y)]^{2}+[f(x,y+1)-f(x-1,y)]^{2}\right\}^{\frac{1}{2}}\\&+\frac{(-\alpha)(-\alpha+1)}{2}\left\{[f(x-2,y+1)-f(x-1,y)]^{2}+[f(x-1,y+1)-f(x-2,y)]^{2}\right\}^{\frac{1}{2}}+\cdots\\&+\frac{(-1)^{n-1}\,\Gamma(\alpha+1)}{(n-1)!\,\Gamma(\alpha-n+2)}\left\{[f(x-n,y+1)-f(x-n+1,y)]^{2}+[f(x-n+1,y+1)-f(x-n,y)]^{2}\right\}^{\frac{1}{2}}\end{aligned}$$
$$\begin{aligned}\frac{\partial^{\alpha}g(x,y)}{\partial y^{\alpha}}\approx{}& g(x,y)+(-\alpha)g(x,y-1)+\frac{(-\alpha)(-\alpha+1)}{2}g(x,y-2)+\cdots+\frac{(-1)^{n-1}\,\Gamma(\alpha+1)}{(n-1)!\,\Gamma(\alpha-n+2)}g(x,y-n)\\={}&\left\{[f(x,y+1)-f(x+1,y)]^{2}+[f(x+1,y+1)-f(x,y)]^{2}\right\}^{\frac{1}{2}}\\&+(-\alpha)\left\{[f(x,y)-f(x+1,y-1)]^{2}+[f(x+1,y)-f(x,y-1)]^{2}\right\}^{\frac{1}{2}}\\&+\frac{(-\alpha)(-\alpha+1)}{2}\left\{[f(x,y-1)-f(x+1,y-2)]^{2}+[f(x+1,y-1)-f(x,y-2)]^{2}\right\}^{\frac{1}{2}}+\cdots\\&+\frac{(-1)^{n-1}\,\Gamma(\alpha+1)}{(n-1)!\,\Gamma(\alpha-n+2)}\left\{[f(x,y-n+1)-f(x+1,y-n)]^{2}+[f(x+1,y-n+1)-f(x,y-n)]^{2}\right\}^{\frac{1}{2}},\end{aligned}$$
where f ( x , y ) is the input image, and g ( x , y ) is the output image using an integer Roberts operator.
From the experimental results in [3], and comparing this fractional algorithm with the original Roberts algorithm, it was concluded that edge detection was enhanced, while the thinner edges of the original algorithm are preserved.
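The 3 × 3 eight-direction fractional mask (29) is simple to generate for any α, as in this illustrative sketch: each outer cell collects the k = 0 and k = 2 GL coefficients of the two opposite directions that pass through it, and the centre collects the eight k = 1 coefficients.

```python
import numpy as np

def eight_direction_fractional_mask(alpha):
    """3x3 fractional differential mask (29): outer cells 1 + alpha*(alpha-1)/2,
    centre -8*alpha (the sum of the eight directional GL masks)."""
    outer = (alpha ** 2 - alpha + 2.0) / 2.0
    mask = np.full((3, 3), outer)
    mask[1, 1] = -8.0 * alpha
    return mask

if __name__ == "__main__":
    print(eight_direction_fractional_mask(0.5))
```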
Colour: The reasoning used to implement the colour-based Sobel operator was also used here. The fractional Roberts requires convolutions with two masks—first with the integer masks, and then with the fractional-derivatives operator. The colour-based vector convolution and Jacobian computations were applied only to the first one with the integer Roberts. Then, the output of this first integer colour-based edge detection serves as input to the fractional derivative operation.

2.6. Laplacian of Gaussian Detector

Original grey-scale: The Laplacian is a measure of the second spatial derivative of an image. It allows for the identification of zones where intensity changes fast. It is thus often used for edge detection. The Laplacian of a 2D image is given by
$$L(x,y)=\frac{\partial^{2}I}{\partial x^{2}}+\frac{\partial^{2}I}{\partial y^{2}}.$$
In the discrete domain, the simplest approximation of the continuous Laplacian is the numerical first-derivative of the numerical first-derivative, yielding
$$\frac{\partial^{2}f}{\partial x^{2}}=f(i,j+1)-2f(i,j)+f(i,j-1)$$
$$\frac{\partial^{2}f}{\partial y^{2}}=f(i+1,j)-2f(i,j)+f(i-1,j).$$
Substituting (34) and (35) in (33), the first kernel of (36) is obtained. The second, a non-separable eight-neighbor Laplacian defined by the gain-normalized impulse response array, was suggested by Prewitt. The third mask in (36) is a separable eight-neighbor version of the Laplacian [16].
$$\begin{bmatrix}0 & -1 & 0\\ -1 & 4 & -1\\ 0 & -1 & 0\end{bmatrix}\qquad\begin{bmatrix}-1 & -1 & -1\\ -1 & 8 & -1\\ -1 & -1 & -1\end{bmatrix}\qquad\begin{bmatrix}1 & -2 & 1\\ -2 & 4 & -2\\ 1 & -2 & 1\end{bmatrix}$$
To tackle the sensitivity of second-order derivatives to noise, the image is first smoothed with a Gaussian filter, which reduces high-frequency noise, and only then is the Laplacian filter applied. Alternatively, the smoothing filter can be convolved first with the Laplacian kernel, and the result then convolved with the input image.
The Laplacian of Gaussian (LoG) operator of a 2D image is illustrated in Figure 1 and defined by [14]
$$\mathrm{LoG}(x,y)=-\frac{1}{\pi\sigma^{4}}\left[1-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right]e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}}.$$
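For reference, a short sketch that samples (37) on a square grid; the zero-mean correction at the end is a common practical adjustment (so that flat regions give no response) and is an assumption of ours, not something stated in the text.

```python
import numpy as np

def log_kernel(sigma, half_size):
    """Sample the closed-form Laplacian of Gaussian (37) on a (2*half_size+1)^2 grid."""
    coords = np.arange(-half_size, half_size + 1)
    x, y = np.meshgrid(coords, coords)
    r2 = (x ** 2 + y ** 2) / (2.0 * sigma ** 2)
    kernel = -(1.0 / (np.pi * sigma ** 4)) * (1.0 - r2) * np.exp(-r2)
    return kernel - kernel.mean()   # zero-sum kernel: flat regions give no response

if __name__ == "__main__":
    k = log_kernel(sigma=0.75, half_size=4)
    print(k.shape, round(float(k.sum()), 8))
```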
Fractional grey-scale: In 2014, the authors of [4] presented a fractional adaptation of the first operator in (36) (the symmetric mask).
For a discrete function f, the operator corresponds to the approximation
$$G(f)=-f(x-1,y)-f(x,y-1)+4f(x,y)-f(x,y+1)-f(x+1,y).$$
Decomposing and noting that for this case, h = 1 ,
$$G(f)=\frac{\partial f(x,y)}{\partial x}+\frac{\partial f(x,y)}{\partial y}-\frac{\partial f(x,y+1)}{\partial y}-\frac{\partial f(x+1,y)}{\partial x}=-\frac{\partial^{2}f(x+1,y)}{\partial x^{2}}-\frac{\partial^{2}f(x,y+1)}{\partial y^{2}}.$$
By generalizing the order from integer to fractional, a fractional-order differential form of the Laplacian operator can be obtained:
$$G^{\alpha}(f)=-\frac{\partial^{\alpha}f(x+1,y)}{\partial x^{\alpha}}-\frac{\partial^{\alpha}f(x,y+1)}{\partial y^{\alpha}}.$$
Using the GL definition for the fractional-order derivative as it was used for the other operators, one may arrive at
$$\begin{aligned}G^{\alpha}(f)={}&-\sum_{k=0}^{K-1}(-1)^{k}C_{k}^{\alpha}f(x+1-k,y)-\sum_{k=0}^{K-1}(-1)^{k}C_{k}^{\alpha}f(x,y+1-k)\\={}&-f(x+1,y)+\alpha f(x,y)+\frac{\alpha-\alpha^{2}}{2}f(x-1,y)+\cdots+(-1)^{K}C_{K-1}^{\alpha}f(x+2-K,y)\\&-f(x,y+1)+\alpha f(x,y)+\frac{\alpha-\alpha^{2}}{2}f(x,y-1)+\cdots+(-1)^{K}C_{K-1}^{\alpha}f(x,y+2-K).\end{aligned}$$
With the definition above, the mask that performs the calculation of the fractional Laplacian may be built:
$$\begin{bmatrix}0 & \cdots & 0 & (-1)^{K}C_{K-1}^{\alpha} & 0\\ \vdots & & \vdots & \vdots & \vdots\\ 0 & \cdots & 0 & \frac{\alpha-\alpha^{2}}{2} & 0\\ (-1)^{K}C_{K-1}^{\alpha} & \cdots & \frac{\alpha-\alpha^{2}}{2} & 2\alpha & -1\\ 0 & \cdots & 0 & -1 & 0\end{bmatrix}$$
Experiments with (42) show that the larger the order of differentiation, the better image features are preserved, but also the more noise appears.
Colour: A novel, colour-based, fractional LoG operator was implemented and tested in this study. The conventional algorithm finds edges searching for zero-crossings. This means that previous formulations cannot be adapted. According to [17], a pixel of a colour image is considered as part of an edge if zero-crossings are found in any of the colour channels. Thus, the fractional grey-scale operator formulated in [4] can be applied to each colour channel of a colour image. Then, a search for zero-crossings in the convolution outputs may be performed. If a zero-crossing is found in any of the channels, the corresponding pixel is flagged as part of an edge. The output of this algorithm is a binary image with all edges found.
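A sketch of this per-channel zero-crossing test, assuming a simple 4-neighbour sign-change criterion; the kernel passed in and all names are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def zero_crossings(response):
    """True where the filter response changes sign between 4-neighbours."""
    sign = response > 0
    zc = np.zeros_like(sign)
    zc[:-1, :] |= sign[:-1, :] != sign[1:, :]
    zc[:, :-1] |= sign[:, :-1] != sign[:, 1:]
    return zc

def colour_log_edges(image_rgb, kernel):
    """Flag a pixel as an edge if any colour channel shows a zero-crossing."""
    edges = np.zeros(image_rgb.shape[:2], dtype=bool)
    for c in range(3):
        response = convolve(image_rgb[..., c].astype(float), kernel, mode="nearest")
        edges |= zero_crossings(response)
    return edges

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    laplacian = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    print(int(colour_log_edges(rng.random((16, 16, 3)), laplacian).sum()))
```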

2.7. CRONE

Original fractional grey-scale: In 2002, Benoît Mathieu set out to show that an edge detector based on fractional differentiation could improve edge detection and detection selectivity in the case of parabolic luminance transitions [1].
The first derivative of a function f ( x ) , calculated with increasing x, can be defined by
$$D_{\rightarrow}f(x)=\frac{f(x)-f(x-h)}{h},$$
with decreasing x,
$$D_{\leftarrow}f(x)=\frac{f(x)-f(x+h)}{h},$$
with h being infinitesimally small.
A shift operator q is consequently introduced, defined by
$$qf(x)=f(x+h)$$
$$q^{-1}f(x)=f(x-h).$$
Using the shift operator on the directional derivative yields
$$D_{\rightarrow}f(x)=\frac{1-q^{-1}}{h}f(x)$$
$$D_{\leftarrow}f(x)=\frac{1-q}{h}f(x).$$
From the expressions above, it is clear that
$$D_{\rightarrow}=\frac{1-q^{-1}}{h}$$
$$D_{\leftarrow}=\frac{1-q}{h}.$$
Generalizing to an order n, $D_{\rightarrow}^{n}$ and $D_{\leftarrow}^{n}$ can be defined as
$$D_{\rightarrow}^{n}=\left(\frac{1-q^{-1}}{h}\right)^{n}$$
$$D_{\leftarrow}^{n}=\left(\frac{1-q}{h}\right)^{n}.$$
As explained before, the bidirectional detector can be constructed from the two unidirectional operators using the following expression:
$$D^{n}=D_{\rightarrow}^{n}-D_{\leftarrow}^{n}=\frac{1}{h^{n}}\left[\left(1-q^{-1}\right)^{n}-\left(1-q\right)^{n}\right].$$
Expanding $(1-q^{-1})^{n}$ and $(1-q)^{n}$ using Newton's binomial formula, the expression above can be rewritten:
$$D^{n}=\frac{1}{h^{n}}\sum_{k=0}^{\infty}(-1)^{k}\frac{n(n-1)\cdots(n-k+1)}{k!}\left(q^{-k}-q^{k}\right).$$
Applying the operator to a function, such as the transition studied before,
$$D^{n}f(x)=\frac{1}{h^{n}}\sum_{k=0}^{\infty}a_{k}\left[f(x-kh)-f(x+kh)\right],$$
where
$$a_{k}=(-1)^{k}\binom{n}{k}=(-1)^{k}\frac{n(n-1)\cdots(n-k+1)}{k!}.$$
In order to detect edges on images, the formulated detector must be designed in two dimensions. Two independent vector operators for x and y, each of them a truncated CRONE detector, given respectively by
$$\begin{bmatrix}+a_{m} & \cdots & +a_{1} & 0 & -a_{1} & \cdots & -a_{m}\end{bmatrix}\qquad\begin{bmatrix}+a_{m}\\ \vdots\\ +a_{1}\\ 0\\ -a_{1}\\ \vdots\\ -a_{m}\end{bmatrix}$$
are used for this purpose. The detector was tested on artificial and real images, and its performance compared with that of Prewitt operators. In all cases, the CRONE detector showed better immunity to noise.
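A sketch of how the truncated masks can be generated from the coefficients $a_{k}$; the recursion used is equivalent to the closed form above, and the function names are ours.

```python
import numpy as np

def crone_coefficients(n, m):
    """a_k = (-1)^k C(n, k) for k = 1..m, via the recursion a_k = -a_{k-1}*(n-k+1)/k."""
    a = np.empty(m + 1)
    a[0] = 1.0
    for k in range(1, m + 1):
        a[k] = -a[k - 1] * (n - k + 1.0) / k
    return a[1:]

def crone_masks(n, m):
    """Truncated 1D CRONE detectors: [+a_m ... +a_1 0 -a_1 ... -a_m] for x, its transpose for y."""
    a = crone_coefficients(n, m)
    row = np.concatenate([a[::-1], [0.0], -a])
    return row[np.newaxis, :], row[:, np.newaxis]

if __name__ == "__main__":
    kx, ky = crone_masks(n=0.7, m=4)
    print(np.round(kx, 4))
```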
Colour: In this paper, a novel, colour-based, fractional CRONE was implemented, following the same steps as in the colour Canny already formulated. The colour channels are convolved with the masks for each direction constructing a Jacobian. Then, the maximum eigenvalue of J T J is computed in order to find the pixels where the variation in chromatic image is higher than a designated threshold.

2.8. Fractional Derivative Operator

Original fractional grey-scale: In the fractional Roberts, the fractional mask is used in addition to the conventional integer mask. In this study, the Fractional Derivatives mask in (29) was implemented individually as a fractional edge detector.
Colour: A new colour-based version for this detector was also implemented. In this case, there is only one mask to detect edges in eight different directions. Thus, the Jacobian is reduced to a vector (one dimension):
$$J=\begin{bmatrix}R & G & B\end{bmatrix}.$$
The magnitude of the gradients was computed with the 3D vector formula:
$$\|\nabla f\|=\sqrt{R^{2}+G^{2}+B^{2}}.$$
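Putting the pieces together under the assumptions above (a single 8-direction fractional mask applied to each channel, the 3D magnitude, and a fixed threshold), a possible sketch of this colour detector is the following; it mirrors the description, not the authors' MATLAB implementation [10].

```python
import numpy as np
from scipy.ndimage import convolve

def colour_fractional_edges(image_rgb, alpha, threshold):
    """Convolve each channel with the single 8-direction fractional mask and
    combine the three responses with the 3D magnitude above."""
    mask = np.full((3, 3), (alpha ** 2 - alpha + 2.0) / 2.0)
    mask[1, 1] = -8.0 * alpha
    responses = [convolve(image_rgb[..., c].astype(float), mask, mode="nearest")
                 for c in range(3)]
    magnitude = np.sqrt(sum(r ** 2 for r in responses))
    return magnitude > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    edges = colour_fractional_edges(rng.random((16, 16, 3)), alpha=0.5, threshold=1.0)
    print(int(edges.sum()))
```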

3. Results

3.1. Implementation

All edge detectors were implemented in MATLAB to perform fractional detection. A low-nebulosity (i.e., low cloud cover) image from ESA’s Sentinel-2 satellite [18] is used below to illustrate performance. The image retrieved from the website was analysed, and a ground truth was manually created using GIMP. Both the selected photograph and its corresponding ground truth are given in Figure 2. Additional results for a greater number of images from the same source are available in Excel files in our public repository [10]. Performance was checked with the usual metrics
$$J(A,B)=\frac{TP}{FP+TP+FN}$$
$$DSC=\frac{2\times TP}{2\times TP+FP+FN}$$
$$\mathrm{Sensitivity}=\frac{TP}{TP+FN}$$
$$\mathrm{Specificity}=\frac{TN}{FP+TN},$$
of which the first two are, respectively, the Jaccard coefficient and the Dice similarity coefficient. In (61)–(64), true-positives, false-positives, true-negatives, and false-negatives are defined as usual (see Table 1).
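These metrics are straightforward to compute from two binary edge maps, as in this illustrative sketch (names are ours):

```python
import numpy as np

def edge_metrics(predicted, ground_truth):
    """Jaccard, Dice, sensitivity and specificity (61)-(64) for two binary edge maps."""
    p = np.asarray(predicted, dtype=bool)
    g = np.asarray(ground_truth, dtype=bool)
    tp = np.sum(p & g)
    fp = np.sum(p & ~g)
    fn = np.sum(~p & g)
    tn = np.sum(~p & ~g)
    return {"jaccard": tp / (fp + tp + fn),
            "dice": 2 * tp / (2 * tp + fp + fn),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (fp + tn)}

if __name__ == "__main__":
    pred = np.array([[1, 0], [1, 1]])
    truth = np.array([[1, 1], [0, 1]])
    print(edge_metrics(pred, truth))
```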
All the algorithms in Section 2 were tested with parameters varying within wide ranges ($\alpha=-3:0.1:3$, threshold $=0.1:0.2:0.9$, $\sigma=0.2:0.2:2$, $k=1:1:5$). The best performances of the different colour detectors are presented as an example in Figure 3; for additional examples, see the repository [10].

4. Discussion

Fractional-order adaptations of grey-scale edge detection algorithms are known to usually have better performance than the original method [19]. This happens in the example shown, where the results for Canny serve as an illustration: the integer algorithm achieves a Jaccard coefficient of 51.80%, the fractional method increases this result to 73.56%, and the colour version was able to almost entirely close the smooth inner-land gradients, achieving a Jaccard coefficient of 96.32%. Even when the increase in performance is low in percentage, since we are dealing with images that are composed of more than 120 million pixels, a 1% increase corresponds to more than one million correctly identified pixels. On the other hand, it is true that the colour-based detector is also heavier computationally, since it usually requires more than one convolution (at least one for each colour channel).
To summarize, in this study, seven novel fractional edge-detection methods were introduced (viz. one grey-scale and six colour-based versions). Future work includes a statistical assessment of their relative performance, in different types of images and applications, to measure how much their use improves results.

Author Contributions

M.H.: methodology, software, investigation, data curation, writing—original draft preparation, visualization; D.V.: conceptualization, methodology, validation, investigation, writing—review and editing, supervision; P.G.: validation, investigation, writing—review and editing; R.M.: conceptualization, methodology, validation, investigation, data curation, writing—review and editing, supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by FCT, through IDMEC, under LAETA, project UIDB/50022/2020; by FCT under the ICT (Institute of Earth Sciences) project UIDB/04683/2020; and by FCT, through the CENTRA project UIDB/00099/2020.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

See [10].

Acknowledgments

The authors would like to acknowledge ESA Copernicus for kindly providing access to the database used in this research.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CRONE: Commande Robuste d'Ordre Non-Entier (non-integer-order robust control, in French); Section 2.7
DSC: Dice similarity coefficient; Equation (62)
ESA: European Space Agency; Section 3.1
FN: false negative; Table 1
FP: false positive; Table 1
GIMP: GNU Image Manipulation Program; Section 3.1
GL: Grünwald–Letnikov; Section 2.1
LoG: Laplacian of Gaussian; Section 2.6
TN: true negative; Table 1
TP: true positive; Table 1

References

  1. Mathieu, B.; Melchior, P.; Oustaloup, A.; Ceyral, C. Fractional differentiation for edge detection. Signal Process. 2003, 83, 2421–2432. [Google Scholar] [CrossRef]
  2. Yaacoub, C.; Zeid Daou, R.A. Fractional Order Sobel Edge Detector. In Proceedings of the 2019 Ninth International Conference on Image Processing Theory, Tools and Applications (IPTA), Istanbul, Turkey, 6–9 November 2019; pp. 1–5. [Google Scholar] [CrossRef]
  3. Chen, X.; Fei, X. Improving edge-detection algorithm based on fractional differential approach. In Proceedings of the 2012 International Conference on Image, Vision and Computing, Shanghai, China, 25–26 August 2012; Volume 50. [Google Scholar] [CrossRef]
  4. Tian, D.; Wu, J.; Yang, Y. A Fractional-Order Laplacian Operator for Image Edge Detection. Appl. Mech. Mater. 2014, 536–537, 55–58. [Google Scholar] [CrossRef]
  5. Kanade, T. Image Understanding Research at Carnegie Mellon. In Proceedings of a Workshop on Image Understanding Workshop; Morgan Kaufmann Publishers Inc.: Los Angeles, CA, USA, 1987; pp. 32–48. [Google Scholar]
  6. Cumani, A. Edge detection in multispectral images. CVGIP Graph. Model. Image Process. 1991, 53, 40–51. [Google Scholar] [CrossRef]
  7. Valério, D.; da Costa, J.S. Introduction to single-input, single-output fractional control. IET Control Theory Appl. 2011, 5, 1033–1057. [Google Scholar] [CrossRef]
  8. Grünwald, A.K. Über “begrenzte” Derivationen und deren Anwendung. Z. für Math. und Phys. 1867, 12, 441–480. [Google Scholar]
  9. Letnikov, A. Theory of Differentiation with an Arbitrary Index (Russian). Moscow Matem. Sbornik 1872, 6, 413–445. [Google Scholar]
  10. Henriques, M. Results Repository. Available online: https://drive.google.com/drive/folders/1GMeKvc3oqNWfzd4h-GyRwHT9yFJ_UDLP?usp=sharing (accessed on 15 January 2021).
  11. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 679–698. [Google Scholar] [CrossRef]
  12. CVonline. Edges: The Canny Edge Detector. Available online: http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/MARBLE/low/edges/canny.htm (accessed on 31 July 2020).
  13. Koschan, A.; Abidi, M. Detection and classification of edges in color images. IEEE Signal Process. Mag. 2005, 22, 64–73. [Google Scholar] [CrossRef]
  14. Fisher, R.; Perkins, S.; Walker, A.; Wolfart, E. Image Processing Learning Resources. 2004. Available online: http://homepages.inf.ed.ac.uk/rbf/HIPR2/ (accessed on 15 January 2021).
  15. Roberts, L. Machine Perception of 3-D Solids. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1963. [Google Scholar]
  16. Pratt, W.K. Second-Order Derivative Edge Detection. In Digital Image Processing; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2007. [Google Scholar] [CrossRef]
  17. Zhu, S.Y.; Plataniotis, K.N.; Venetsanopoulos, A.N. Comprehensive analysis of edge detection in color image processing. Opt. Eng. 1999, 38, 612–625. [Google Scholar] [CrossRef]
  18. ESA. Copernicus Open Access Hub. 2020. Available online: https://scihub.copernicus.eu/ (accessed on 31 March 2020).
  19. Bento, T.; Valério, D.; Teodoro, P.; Martins, J. Fractional order image processing of medical images. J. Appl. Nonlinear Dyn. 2017, 6, 181–191. [Google Scholar] [CrossRef]
Figure 1. 2D Laplacian of Gaussian operator (37) with σ = 3/4.
Figure 2. Photograph used to illustrate performance and its ground truth.
Figure 3. Best results for Figure 2 processed using the different algorithms for colour images; th is the threshold.
Table 1. Performance instances.

                          Processed Image
                          0        1
Ground Truth      0       TN       FP
                  1       FN       TP