Article

Comparison of Different Convolutional Neural Network Activation Functions and Methods for Building Ensembles for Small to Midsize Medical Data Sets

1 Department of Information Engineering, University of Padua, Via Gradenigo 6, 35131 Padova, Italy
2 Department of Information Technology and Cybersecurity, Missouri State University, 901 S. National Street, Springfield, MO 65804, USA
3 BioMediTech, Faculty of Medicine and Health Technology, Tampere University, Arvo Ylpön katu 34, D 219, FI-33520 Tampere, Finland
* Author to whom correspondence should be addressed.
Sensors 2022, 22(16), 6129; https://doi.org/10.3390/s22166129
Submission received: 2 June 2022 / Revised: 9 August 2022 / Accepted: 12 August 2022 / Published: 16 August 2022
(This article belongs to the Collection Medical Image Classification)

Abstract

CNNs and other deep learners are now state-of-the-art in medical imaging research. However, the small sample size of many medical data sets dampens performance and results in overfitting. In some medical areas, it is simply too labor-intensive and expensive to amass images numbering in the hundreds of thousands. Building ensembles of pre-trained deep CNNs is one powerful method for overcoming this problem. Ensembles combine the outputs of multiple classifiers to improve performance. This method relies on the introduction of diversity, which can be introduced on many levels in the classification workflow. A recent ensembling method that has shown promise is to vary the activation functions in a set of CNNs or within different layers of a single CNN. This study examines the performance of both methods using a large set of twenty activation functions, six of which are presented here for the first time: 2D Mexican ReLU, TanELU, MeLU + GaLU, Symmetric MeLU, Symmetric GaLU, and Flexible MeLU. The proposed method was tested on fifteen medical data sets representing various classification tasks. The best-performing ensemble combined two well-known CNNs (VGG16 and ResNet50) whose standard ReLU activation layers were randomly replaced with other activation functions. The results demonstrate the superiority in performance of this approach.

1. Introduction

First developed in the 1940s, artificial neural networks have had a checkered history, sometimes lauded by researchers for their unique computational powers and other times discounted for being no better than statistical methods. About a decade ago, the power of deep artificial neural networks radically changed the direction of machine learning and rapidly made significant inroads into many scientific, medical, and engineering areas [1,2,3,4,5,6,7,8]. The strength of deep learners is demonstrated by the many successes achieved by one of the most famous and robust deep learning architectures, Convolutional Neural Networks (CNNs). CNNs frequently win image recognition competitions and have consistently outperformed other classifiers on a variety of applications, including image classification [9,10], object detection [11,12], face recognition [13,14], and machine translation [15], to name a few. Not only do CNNs continue to perform better than traditional classifiers, but they also outperform human beings, including experts, in many image recognition tasks. In the medical domain, for example, CNNs have been shown to outperform human experts in recognizing skin cancer [16], skin lesions on the face and scalp [17], and the detection of esophageal cancer [18].
It is no wonder, then, that the use of CNNs and other deep learners has grown exponentially in medical imaging research [19]. CNNs have been successfully applied to a wide range of applications (as evidenced by these very recent reviews and studies): the identification and recognition of facial phenotypes of genetic disorders [20], diabetic retinopathy [21,22,23], glaucoma [24], lung cancer [25], breast cancer [26], colon cancer [27], gastric cancer [28,29], ovarian cancer [30,31,32], Alzheimer’s disease [33,34], skin cancer [16], skin lesions [17], oral cancer [35,36], esophageal cancer [18], and GI ulcers [37].
Despite these successes, the unique characteristics of medical images pose challenges for CNN classification. The first challenge concerns the image size of medical data. Typical CNN image inputs are around 200 × 200 pixels. Many medical images are gigantic. For instance, histopathology slides, once digitized, often result in gigapixel images, around 100,000 × 100,000 pixels [38]. Another problem for CNNs is the small sample size of many medical data sets. As is well known, CNNs require massive numbers of samples to prevent overfitting. It is too cumbersome, labor-intensive, and expensive to acquire collections of images numbering in the hundreds of thousands in some medical areas. There are well-known techniques for tackling the problem of overfitting when data are scarce, the two most common being transfer learning with pre-trained CNNs and data augmentation. The medical literature is replete with studies using both methods (for some literature reviews on these two methods in medical imaging, see [39,40,41]). As observed in [40], transfer learning works well combined with data augmentation. Transfer learning is typically applied in two ways: for fine-tuning with pre-trained CNNs and as feature extractors, with the features then fed into more traditional classifiers.
Building Deep CNN ensembles of pre-trained CNNs is yet another powerful technique for enhancing CNN performance on small sample sizes. Some examples of robust CNN ensembles reported in the last couple of years include [42], for classifying ER status from DCE-MRI breast volumes; [43], where a hierarchical ensemble was trained for diabetic macular edema diagnosis; [44] for whole-brain segmentation; and [45] for small lesion detection.
Ensemble learning combines outputs from multiple classifiers to improve performance. This method relies on the introduction of diversity, whether in the data each CNN is trained on, the type of CNNs used to build the ensemble, or some other changes in the architecture of the CNNs. For example, in [43], mentioned above, ensembles were built on the classifier level by combining the results of two sets of CNNs within a hierarchical schema. In [44], a novel ensemble was developed on the data level by looking at different brain areas, and in [45], multiple-depth CNNs were trained on image patches. In [46], CNNs with different activation functions were shown to be highly effective, and in [47], different activation functions were inserted into different layers within a single CNN network.
In this paper, we extend [46] by testing several activation functions with two CNNs, VGG16 [48] and ResNet50 [49], and their fusions across fifteen biomedical data sets representing different biomedical classification tasks. The set of activation functions includes the best-performing ones used with these networks and six new ones: 2D Mexican ReLU, TanELU, MeLU + GaLU, Symmetric MeLU, Symmetric GaLU, and Flexible MeLU. The best performance was obtained by randomly replacing every ReLU layer of each CNN with a different activation function.
The contributions of this study are the following:
(1) The performance of twenty individual activation functions is assessed using two CNNs (VGG16 and ResNet50) across fifteen different medical data sets.
(2) The performance of ensembles composed of the CNNs examined in (1) and four other topologies is evaluated.
(3) Six new activation functions are proposed.
The remainder of this paper is organized as follows. In Section 2, we review the literature on activation functions used with CNNs. In Section 3, we describe all the activation functions tested in this work. In Section 4, the stochastic approach for constructing CNN ensembles is detailed (some other methods are described in the experimental section). In Section 5, we present a detailed evaluation of each of the activation functions using both CNNs on the fifteen data sets, along with the results of their fusions. Finally, in Section 6, we suggest new ideas for future investigation together with some concluding remarks.
The MATLAB source code for this study will be available at https://github.com/LorisNanni.

2. Related Work with Activation Functions

Evolutions in CNN design initially focused on building better network topologies. Because activation functions impact training dynamics and performance, many researchers have also focused on developing better activation functions. For many years, the sigmoid and the hyperbolic tangent were the most popular neural network activation functions. The hyperbolic tangent’s main advantage over the sigmoid is its steeper derivative. Neither function, however, works well with deep learners, since both are subject to the vanishing gradient problem. It was soon realized that non-saturating nonlinearities work better with deep learners.
One of the first nonlinearities to demonstrate improved performance with CNNs was the Rectified Linear Unit (ReLU) activation function [50], which is equal to the identity function for positive input and zero for negative input [51]. Although ReLU is nondifferentiable at zero, it gave AlexNet the edge to win the 2012 ImageNet competition [52].
The success of ReLU in AlexNet motivated researchers to investigate other nonlinearities and the desirable properties they possess. As a consequence, variations of ReLU have proliferated. For example, Leaky ReLU [53], like ReLU, is equivalent to the identity function for positive values but applies a hyperparameter $\alpha > 0$ to the negative inputs to ensure the gradient is never zero. As a result, Leaky ReLU is less prone to getting caught in local minima and avoids ReLU’s hard zeros, which can cause units to fail to activate. The Exponential Linear Unit (ELU) [54] is an activation function similar to Leaky ReLU. The advantage offered by ELU is that it always produces a positive gradient, since it exponentially decreases to the limit point $-\alpha$ as the input goes to minus infinity. The main disadvantage of ELU is that it saturates on the left side. Another activation function designed to handle the vanishing gradient problem is the Scaled Exponential Linear Unit (SELU) [55]. SELU is identical to ELU except that it is multiplied by the constant $\lambda > 1$ to maintain the mean and the variance of the input features.
Until 2015, activation functions were fixed by design, and training modified only the weights and biases of a neural network. Parametric ReLU (PReLU) [56] gave Leaky ReLU a learnable parameter applied to the negative slope. The success of PReLU attracted more research on learnable activation functions [57,58]. A new generation of activation functions was then developed, one notable example being the Adaptive Piecewise Linear Unit (APLU) [57]. During training, APLU independently learns, for each neuron, the piecewise slopes and points of nondifferentiability using gradient descent; it can therefore imitate any piecewise linear function.
Instead of employing a learnable parameter in the definition of an activation function, as with PReLU and APLU, the construction of an activation function from a given set of functions can be learned. In [59], for instance, it was proposed to create an activation function that automatically learned the best combinations of tanh, ReLU, and the identity function. Another activation function of this type is the S-shaped Rectified Linear Activation Unit (SReLU) [63], which learns convex and nonconvex functions to imitate both the Weber–Fechner and the Stevens laws. A related line of work [60] used reinforcement learning to search over combinations of candidate functions; this process produced an activation called Swish, which the authors view as a smooth function that nonlinearly interpolates between the linear function and ReLU.
Similar to APLU is the Mexican ReLU (MeLU [61]), whose shape resembles the Mexican hat wavelet. MeLU is a piecewise linear activation function that combines PReLU with many Mexican hat functions. Like APLU, MeLU has learnable parameters and approximates piecewise linear functions that are equivalent to the identity when x is sufficiently large. MeLU differs from APLU in two main respects: first, it has a much larger number of parameters; second, the way the approximations are calculated for each function is different.

3. Activation Functions

As described in the Introduction, this paper explores classifying medical imagery using combinations of some of the best performing activation functions on two widely used high-performance CNN architectures: VGG16 [48] and ResNet50 [49], each pre-trained on ImageNet. VGG16 [48], also known as the OxfordNet, is the second-place winner in the ILSVRC 2014 competition and was one of the deepest neural networks produced at that time. The input into VGG16 passes through stacks of convolutional layers, with filters having small receptive fields. Stacking these layers is similar in effect to CNNs having larger convolutional filters, but the stacks involve fewer parameters and are thus more efficient. ResNet50 [49], winner of the ILSVRC 2015 contest and now a popular network, is a CNN with fifty layers known for its skip connections that sum the input of a block to its output, a technique that promotes gradient propagation and that propagates lower-level information to higher level layers.
The remainder of this section mathematically describes and discusses the twenty activation functions investigated in this study: ReLU [50], Leaky ReLU [62], ELU [54], SELU [55], PReLU [56], APLU [57], SReLU [63], MeLU [61], Splash [64], Mish [65], PDELU [66], Swish [60], Soft Learnable [67], SRS [67], and GaLU [68], as well as the novel activation functions proposed here: 2D Mexican ReLU, TanELU, MeLU + GaLU, Symmetric MeLU, Symmetric GaLU, and Flexible MeLU.
The main advantage of these more complex activation functions with learnable parameters is that they can better learn the abstract features through nonlinear transformations. This is a generic characteristic of learnable activation functions, well known in shallow networks [69]. The main disadvantage is that activation functions with several learnable parameters need large data sets for training.
A further rationale for our proposed activation functions is to create functions that differ substantially from each other in order to improve ensemble performance; for this reason, we have developed 2D MeLU, which departs markedly from standard activation functions.

3.1. Rectified Activation Functions

3.1.1. ReLU

ReLU [50], illustrated in Figure 1, is defined as:
$$y_i = f(x_i) = \begin{cases} 0, & x_i < 0 \\ x_i, & x_i \ge 0. \end{cases}$$
The gradient of ReLU is
$$\frac{dy_i}{dx_i} = f'(x_i) = \begin{cases} 0, & x_i < 0 \\ 1, & x_i \ge 0. \end{cases}$$

3.1.2. Leaky ReLU

In contrast to ReLU, Leaky ReLU [53] has no point with a null gradient. Leaky ReLU, illustrated in Figure 2, is defined as:
$$y_i = f(x_i) = \begin{cases} a x_i, & x_i < 0 \\ x_i, & x_i \ge 0, \end{cases}$$
where $a$ (set to 0.01 here) is a small real number.
The gradient of Leaky ReLU is:
$$\frac{dy_i}{dx_i} = f'(x_i) = \begin{cases} a, & x_i < 0 \\ 1, & x_i \ge 0. \end{cases}$$

3.1.3. PReLU

Parametric ReLU (PReLU) [56] is identical to Leaky ReLU except that the parameter $a_c$ (different for every channel of the input) is learnable. PReLU is defined as:
$$y_i = f(x_i) = \begin{cases} a_c x_i, & x_i < 0 \\ x_i, & x_i \ge 0, \end{cases}$$
where $a_c$ is a real number.
The gradients of PReLU are:
$$\frac{dy_i}{dx_i} = f'(x_i) = \begin{cases} a_c, & x_i < 0 \\ 1, & x_i \ge 0, \end{cases}$$
$$\frac{dy_i}{da_c} = \begin{cases} x_i, & x_i < 0 \\ 0, & x_i \ge 0. \end{cases}$$
Slopes on the left-hand sides are all initialized to 0.
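To make the three rectified variants above concrete, the following minimal NumPy sketch implements their forward passes. It is illustrative only (the study's own implementation is in MATLAB); the function names are ours, and the learnable PReLU slope is passed as a plain argument rather than trained.

import numpy as np

def relu(x):
    # Section 3.1.1: identity for x >= 0, zero otherwise
    return np.where(x >= 0, x, 0.0)

def leaky_relu(x, a=0.01):
    # Section 3.1.2: small fixed slope a on the negative side
    return np.where(x >= 0, x, a * x)

def prelu(x, a_c):
    # Section 3.1.3: per-channel learnable slope a_c (here a plain argument)
    return np.where(x >= 0, x, a_c * x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x), leaky_relu(x), prelu(x, a_c=0.2))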

3.2. Exponential Activation Functions

3.2.1. ELU

Exponential Linear Unit (ELU) [54] is differentiable and, as is the case with Leaky ReLU, its gradient is always positive; the function is bounded from below by $-a$. ELU, illustrated in Figure 3, is defined as:
$$y_i = f(x_i) = \begin{cases} a \left( \exp(x_i) - 1 \right), & x_i < 0 \\ x_i, & x_i \ge 0, \end{cases}$$
where $a$ (set to 1 here) is a real number.
The gradient of ELU is:
$$\frac{dy_i}{dx_i} = f'(x_i) = \begin{cases} a \exp(x_i), & x_i < 0 \\ 1, & x_i \ge 0. \end{cases}$$

3.2.2. PDELU

The Parametric Deformable Exponential Linear Unit (PDELU) [66] is designed to have zero mean, which speeds up the training process. PDELU is defined as
$$y_i = f(x_i) = \begin{cases} x_i, & x_i > 0 \\ \alpha_i \cdot \left( \left[ 1 + (1 - t)\,x_i \right]_+^{\frac{1}{1-t}} - 1 \right), & x_i \le 0, \end{cases}$$
where $[x]_+ = \max(x, 0)$. The function $f(x_i)$ takes values in the $(-\alpha_i, +\infty)$ range; its slope in the negative part is controlled by means of the $\alpha_i$ parameters ($i$ runs over the input channels), which are learned jointly with the rest of the network through the loss function. The parameter $t$ controls the degree of deformation of the exponential function. If $0 < t < 1$, then $f(x_i)$ decays to 0 faster than the exponential.
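A minimal NumPy sketch of the two exponential variants follows; the PDELU branch uses the reconstruction of the equation above, and the values of alpha and t are illustrative only.

import numpy as np

def elu(x, a=1.0):
    # Section 3.2.1: a*(exp(x) - 1) on the negative side, identity otherwise
    return np.where(x >= 0, x, a * (np.exp(x) - 1.0))

def pdelu(x, alpha=1.0, t=0.5):
    # Section 3.2.2, assuming the reading [1 + (1 - t) x]_+^{1/(1 - t)} - 1 on the negative side
    neg = alpha * (np.maximum(1.0 + (1.0 - t) * x, 0.0) ** (1.0 / (1.0 - t)) - 1.0)
    return np.where(x > 0, x, neg)

x = np.array([-3.0, -1.0, 0.0, 2.0])
print(elu(x), pdelu(x))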

3.3. Logistic Sigmoid and Tanh-Based AFs

3.3.1. Swish

Swish [60] is designed using reinforcement learning to learn to efficiently sum, multiply, and compose different functions that are used as building blocks. The best function is
$$y = f(x) = x \cdot \mathrm{sigmoid}(\beta x) = \frac{x}{1 + e^{-\beta x}},$$
where $\beta$ is either a constant or a learnable parameter estimated during training. When $\beta = 1$, as in this study, Swish is equivalent to the Sigmoid-weighted Linear Unit (SiLU), originally proposed for reinforcement learning. As $\beta \rightarrow \infty$, Swish assumes the shape of a ReLU function. Unlike ReLU, however, Swish is smooth and nonmonotonic, as demonstrated in [60]; this is a peculiar aspect of this activation function. In practice, a value of $\beta = 1$ is a good starting point, from which performance can be further improved by training this parameter.

3.3.2. Mish

Mish [65] is defined as
$$y = f(x) = x \cdot \tanh(\mathrm{softplus}(\alpha x)) = x \cdot \tanh\!\left( \ln(1 + e^{\alpha x}) \right),$$
where α is a learnable parameter.

3.3.3. TanELU (New)

TanELU is an activation function presented here that is simply the weighted sum of tanh and ReLU:
$$y_i = \mathrm{ReLU}(x_i) + a_i \tanh(x_i),$$
where a i is a learnable parameter.
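The three sigmoid/tanh-based functions above can be sketched in a few lines of NumPy; beta, alpha, and a_i are treated as fixed arguments here, whereas in the paper they may be learnable.

import numpy as np

def swish(x, beta=1.0):
    # Section 3.3.1: x * sigmoid(beta * x); beta = 1 reduces to SiLU
    return x / (1.0 + np.exp(-beta * x))

def mish(x, alpha=1.0):
    # Section 3.3.2: x * tanh(softplus(alpha * x))
    return x * np.tanh(np.log1p(np.exp(alpha * x)))

def tanelu(x, a_i=0.5):
    # Section 3.3.3 (new): ReLU(x) + a_i * tanh(x)
    return np.maximum(x, 0.0) + a_i * np.tanh(x)

x = np.linspace(-4, 4, 9)
print(swish(x), mish(x), tanelu(x))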

3.4. Learning/Adaptive Activation Functions

3.4.1. SReLU

S-shaped ReLU (SReLU) [63] is composed of three piecewise linear functions expressed by four learnable parameters ($t_l$, $t_r$, $a_l$, and $a_r$, initialized as $a_l = 0$, $t_l = 0$, and $t_r = maxInput$, a hyperparameter). This rather large set of parameters gives SReLU its high representational power. SReLU, illustrated in Figure 4, is defined as:
$$y_i = f(x_i) = \begin{cases} t_l + a_l (x_i - t_l), & x_i \le t_l \\ x_i, & t_l < x_i < t_r \\ t_r + a_r (x_i - t_r), & x_i \ge t_r. \end{cases}$$
The gradients of SReLU are:
$$\frac{dy_i}{dx_i} = f'(x_i) = \begin{cases} a_l, & x_i \le t_l \\ 1, & t_l < x_i < t_r \\ a_r, & x_i \ge t_r, \end{cases}$$
$$\frac{dy_i}{da_l} = \begin{cases} x_i - t_l, & x_i \le t_l \\ 0, & x_i > t_l, \end{cases} \qquad \frac{dy_i}{dt_l} = \begin{cases} 1 - a_l, & x_i \le t_l \\ 0, & x_i > t_l, \end{cases}$$
$$\frac{dy_i}{da_r} = \begin{cases} x_i - t_r, & x_i \ge t_r \\ 0, & x_i < t_r, \end{cases} \qquad \frac{dy_i}{dt_r} = \begin{cases} 1 - a_r, & x_i \ge t_r \\ 0, & x_i < t_r. \end{cases}$$
Here, we use $a_l = 0.5$, $a_r = 0.2$, $t_l = -2$, $t_r = 1.5$.
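A small sketch of the SReLU forward pass, using the values reported above as defaults (assuming $t_l = -2$, i.e., that $t_l$ lies to the left of $t_r$):

import numpy as np

def srelu(x, t_l=-2.0, a_l=0.5, t_r=1.5, a_r=0.2):
    # Three linear pieces joined at the (learnable) thresholds t_l and t_r
    left = t_l + a_l * (x - t_l)
    right = t_r + a_r * (x - t_r)
    return np.where(x <= t_l, left, np.where(x >= t_r, right, x))

print(srelu(np.array([-4.0, -1.0, 0.5, 3.0])))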

3.4.2. APLU

Adaptive Piecewise Linear Unit (APLU) [57] is a linear piecewise function that can approximate any continuous function on a compact set. The gradient of APLU is the sum of the gradients of ReLU and of the functions contained in the sum. APLU is defined as:
$$y_i = \mathrm{ReLU}(x_i) + \sum_{c=1}^{n} a_c \min(0,\, x_i + b_c),$$
where $a_c$ and $b_c$ are real numbers that are different for each channel of the input.
With respect to the parameters a c and b c , the gradients of APLU are:
$$\frac{\partial f(x, a)}{\partial a_c} = \begin{cases} x + b_c, & x < -b_c \\ 0, & x \ge -b_c, \end{cases} \qquad \frac{\partial f(x, a)}{\partial b_c} = \begin{cases} a_c, & x < -b_c \\ 0, & x \ge -b_c. \end{cases}$$
The values of $a_c$ are initialized here to zero, with the points $b_c$ randomly initialized. An $L_2$ penalty with weight 0.001 is added to the norm of the $a_c$ values. This addition requires that another term $L_{reg}$ be included in the loss function:
$$L_{reg} = \sum_{c=1}^{n} a_c^2.$$
Furthermore, a relative learning rate is used for the $a_c$ parameters, which is the smallest rate in the network: if $\lambda$ is the global learning rate, then the learning rate $\lambda^*$ of the parameters $a_c$ is
$$\lambda^* = \lambda / maxInput.$$
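An illustrative forward pass for APLU, following the formula as written above; the hinge parameters a_c and b_c are passed explicitly instead of being learned.

import numpy as np

def aplu(x, a, b):
    # ReLU plus a sum of hinges, each weighted by a_c and shifted by b_c
    y = np.maximum(x, 0.0)
    for a_c, b_c in zip(a, b):
        y = y + a_c * np.minimum(0.0, x + b_c)
    return y

# Two hinges with illustrative (normally learned / randomly initialized) parameters
print(aplu(np.array([-3.0, -1.0, 0.0, 2.0]), a=[0.1, -0.2], b=[0.5, 1.5]))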

3.4.3. MeLU

The mathematical basis of the Mexican ReLU (MeLU) [61] activation function can be described as follows. Given the real numbers $a$ and $\lambda$, let $\phi_{a,\lambda}(x) = \max(\lambda - |x - a|, 0)$ be a so-called Mexican hat type of function: $\phi_{a,\lambda}(x)$ is null when $|x - a| > \lambda$, increases with a derivative of 1 between $a - \lambda$ and $a$, and decreases with a derivative of $-1$ between $a$ and $a + \lambda$.
Considering the above, MeLU is defined as
$$y_i = \mathrm{MeLU}(x_i) = \mathrm{PReLU}_{c_0}(x_i) + \sum_{j=1}^{k-1} c_j\, \phi_{a_j, \lambda_j}(x_i),$$
where $k$ is the number of learnable parameters for each channel, the $c_j$ are the learnable parameters, and $c_0$ is the vector of parameters in PReLU.
The parameter $k$ ($k = 4$ or $8$ here) accounts for one value for PReLU and $k - 1$ values for the coefficients in the sum of the Mexican hat functions. The real numbers $a_j$ and $\lambda_j$ are fixed (see Table 1) and are chosen recursively. The value of $maxInput$ is set to 256. The first Mexican hat function has its maximum at $2 \cdot maxInput$ and is equal to zero at 0 and at $4 \cdot maxInput$. The next two functions are chosen to be zero outside the intervals $[0, 2 \cdot maxInput]$ and $[2 \cdot maxInput, 4 \cdot maxInput]$, with the requirement that they have their maxima at $maxInput$ and $3 \cdot maxInput$, respectively.
The Mexican hat functions on which MeLU is based are continuous and piecewise differentiable. Mexican hat functions are also a Hilbert basis on a compact set with the $L_2$ norm. As a result, MeLU can approximate every function in $L_2(0, 1024)$ as $k$ goes to infinity.
When the $c_i$ learnable parameters are set to zero, MeLU is identical to ReLU. Thus, MeLU can be substituted directly into networks pre-trained with ReLU. This is not to say, of course, that MeLU cannot replace the activation functions of networks trained with Leaky ReLU and PReLU. In this study, all $c_i$ are initialized to zero, so MeLU starts off as ReLU, with all its attendant properties.
MeLU’s hyperparameter ranges from zero to infinity, producing many desirable properties. The gradient is rarely flat, and saturation does not occur in any direction. As the size of the hyperparameter approaches infinity, it can approximate every continuous function on a compact set. Finally, the modification of any given parameter only changes the activation on a small interval and only when needed, making optimization relatively simple.

3.4.4. GaLU

Piecewise linear odd functions, composed of many linear pieces, do a better job in approximating nonlinear functions compared to ReLU [70]. For this reason, Gaussian ReLU (GaLU) [68], based on Gaussian types of functions, aims to add more linear pieces with respect to MeLU. Since GaLU extends MeLU, GaLU retains all the favorable properties discussed in Section 3.4.3.
Letting $\phi^{g}_{a,\lambda}(x) = \max(\lambda - |x - a|, 0) + \min(|x - a - 2\lambda| - \lambda, 0)$ be a Gaussian type of function, where $a$ and $\lambda$ are real numbers, GaLU is defined, similarly to MeLU, as
$$y_i = \mathrm{GaLU}(x_i) = \mathrm{PReLU}_{c_0}(x_i) + \sum_{j=1}^{k-1} c_j\, \phi^{g}_{a_j, \lambda_j}(x_i).$$
In this work, $k = 2$ parameters are used for the variant called Small GaLU in the experimental section, and $k = 4$ for GaLU proper.
Like MeLU, GaLU has the same set of fixed parameters. A comparison of values for the fixed parameters with $maxInput = 1$ is provided in Table 2.
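GaLU only changes the basis function with respect to MeLU; a corresponding sketch, again with illustrative center/width values rather than the Table 2 ones, is:

import numpy as np

def ghat(x, a, lam):
    # Gaussian-type basis of this section: a hat followed by a mirrored negative lobe
    return (np.maximum(lam - np.abs(x - a), 0.0)
            + np.minimum(np.abs(x - a - 2.0 * lam) - lam, 0.0))

def galu(x, c, centers):
    # Same structure as MeLU, with ghat replacing the Mexican hat basis
    y = np.where(x < 0, c[0] * x, x)
    for c_j, (a_j, lam_j) in zip(c[1:], centers):
        y = y + c_j * ghat(x, a_j, lam_j)
    return y

print(galu(np.linspace(-2, 4, 7), c=[0.0, 0.3], centers=[(1.0, 1.0)]))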

3.4.5. SRS

Soft Root Sign (SRS) [67] is defined as
$$y = f(x) = \frac{x}{\frac{x}{\alpha} + e^{-\frac{x}{\beta}}},$$
where $\alpha$ and $\beta$ are nonnegative learnable parameters. The output has zero mean if the input is a standard normal.

3.4.6. Soft Learnable

It is defined as
$$y = f(x) = \begin{cases} x, & x > 0 \\ \alpha \cdot \ln\!\left( \dfrac{1 + e^{\beta x}}{2} \right), & x \le 0, \end{cases}$$
where $\alpha$ and $\beta$ are nonnegative trainable parameters that enable SRS to adaptively adjust its output and provide a zero-mean property for enhanced generalization and training speed. SRS also has two more advantages over the commonly used ReLU function: (i) it has a nonzero derivative in the negative portion of the function, and (ii) its output is bounded, i.e., the function takes values in the range $\left[\frac{\alpha\beta}{\beta - \alpha e}, \alpha\right)$, which is in turn controlled by the $\alpha$ and $\beta$ parameters.
We used two different versions of this activation, depending on whether the parameter β is fixed (labeled here as Soft Learnable) or not (labeled here as Soft Learnable2).
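A sketch of SRS and Soft Learnable follows; the alpha and beta defaults are illustrative, and the Soft Learnable branch assumes the reading alpha * ln((1 + e^(beta*x)) / 2), which keeps the function continuous at zero.

import numpy as np

def srs(x, alpha=5.0, beta=3.0):
    # Soft Root Sign (Section 3.4.5): x / (x/alpha + exp(-x/beta))
    return x / (x / alpha + np.exp(-x / beta))

def soft_learnable(x, alpha=1.0, beta=1.0):
    # Section 3.4.6, under the continuity assumption stated above
    neg = alpha * np.log((1.0 + np.exp(beta * x)) / 2.0)
    return np.where(x > 0, x, neg)

x = np.array([-4.0, -1.0, 0.0, 2.0])
print(srs(x), soft_learnable(x))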

3.4.7. Splash

Splash [64] is another modification of APLU that makes the function symmetric. In the definition of APLU, let $a_i$ and $b_i$ be the learnable parameters, leading to $\mathrm{APLU}_{a_i, b_i}(x)$. Then, Splash is defined as
$$\mathrm{Splash}_{a_i^+, a_i^-, b_i}(x) = \mathrm{APLU}_{a_i^+, b_i}(x) + \mathrm{APLU}_{a_i^-, b_i}(-x).$$
The hinges of this function are symmetric with respect to the origin. The authors of [64] claim that this makes the network more robust against adversarial attacks.

3.4.8. 2D MeLU (New)

The 2D Mexican ReLU (2D MeLU) is a novel activation function presented here that is not defined component-wise; instead, every output neuron depends on two input neurons. If a layer has N neurons (or channels), its output is defined as
$$y_i = \mathrm{PReLU}_{c_0}(x_i) + \mathrm{PReLU}_{c_0}(x_{i+1}) + \sum_{u,v=1}^{k-1} c_{u,v}\, \phi_{a_{u,v}, \lambda_{\max(u,v)}}(x_i, x_{i+1}),$$
where $\phi_{a_{u,v}, \lambda}(x_i, x_{i+1}) = \max\!\left( \lambda - \|(x_i, x_{i+1}) - a_{u,v}\|, 0 \right)$.
The parameter $a_{u,v}$ is a two-dimensional vector whose entries are the same as those used in MeLU; in other words, $a_{u,v} = (a_u, a_v)$ as defined in Table 1. Likewise, $\lambda_{\max(u,v)}$ is defined as it is for MeLU in Table 1.

3.4.9. MeLU + GaLU (New)

MeLU + GaLU is an activation function presented here that is, as its name suggests, the weighted sum of MeLU and GaLU:
$$y_i = (1 - a_i)\, \mathrm{MeLU}(x_i) + a_i\, \mathrm{GaLU}(x_i),$$
where a i is a learnable parameter.

3.4.10. Symmetric MeLU (New)

Symmetric MeLU is the equivalent of MeLU, but it is symmetric like Splash. Symmetric MeLU is defined as
$$y_i = \mathrm{MeLU}(x_i) + \mathrm{MeLU}(-x_i),$$
where the coefficients of the two MeLU terms are the same; in other words, the $k$ coefficients of $\mathrm{MeLU}(x_i)$ are shared with $\mathrm{MeLU}(-x_i)$.

3.4.11. Symmetric GaLU (New)

Symmetric GaLU is the equivalent of symmetric MeLU but uses GaLU instead of MeLU. Symmetric GaLU is defined as
$$y_i = \mathrm{GaLU}(x_i) + \mathrm{GaLU}(-x_i),$$
where the coefficients of the two GaLU terms are the same; in other words, the $k$ coefficients of $\mathrm{GaLU}(x_i)$ are shared with $\mathrm{GaLU}(-x_i)$.
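The composite variants above only combine base activations, so they can be sketched generically; melu and galu below stand for any implementations of the functions defined in Sections 3.4.3 and 3.4.4, and ReLU is used as a stand-in just to make the example run.

import numpy as np

def melu_plus_galu(x, a_i, melu, galu):
    # Section 3.4.9: learnable convex combination of the two base activations
    return (1.0 - a_i) * melu(x) + a_i * galu(x)

def symmetric(x, f):
    # Sections 3.4.10 and 3.4.11: f(x) + f(-x), sharing the same coefficients
    return f(x) + f(-x)

relu = lambda x: np.maximum(x, 0.0)
x = np.linspace(-2, 2, 5)
print(melu_plus_galu(x, 0.3, relu, relu))
print(symmetric(x, relu))   # ReLU(x) + ReLU(-x) = |x|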

3.4.12. Flexible MeLU (New)

Flexible MeLU is a modification of MeLU in which the peaks of the Mexican hat functions are also learnable. This feature makes it more similar to APLU, whose points of nondifferentiability are also learnable. Compared to MeLU, APLU has more hyperparameters.

4. Building CNN Ensembles

One of the objectives of this study is to test several methods for combining the two CNNs equipped with the different activation functions discussed above. Two methods require discussion: Sequential Forward Floating Selection (SFFS) [71] and the stochastic method for combining CNNs introduced in [47].

4.1. Sequential Forward Floating Selection (SFFS)

A popular method for selecting an optimal set of descriptors, SFFS [71] has been adapted here for selecting the best performing/most independent classifiers to be added to the ensemble. In applying the SFFS method, each model to be included in the final ensemble is selected by adding, at each step, the model that provides the highest increment in performance compared to the existing subset of models. Then, a backtracking step is performed to exclude the worst model from the current ensemble.
This method for combining CNNs is labeled Selection in the experimental section. Since SFFS requires a training phase, we perform a leave-one-data-set-out selection to choose the best-suited models.
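A compact Python sketch of how SFFS can be adapted to classifier selection is given below. The scores, labels, iteration cap, and toy usage are illustrative; in the paper the selection is driven by the leave-one-data-set-out protocol described above.

import numpy as np

def sffs_select(scores, labels, max_size=10):
    # Greedy SFFS sketch: scores is a list of per-model softmax matrices
    # (n_samples x n_classes) on validation data; at each step the model that most
    # increases sum-rule accuracy is added, then backtracking tries to drop the
    # member (other than the one just added) whose removal helps most.
    def acc(subset):
        fused = np.mean([scores[i] for i in subset], axis=0)   # sum (here: mean) rule
        return np.mean(np.argmax(fused, axis=1) == labels)

    selected = []
    for _ in range(3 * max_size):                              # simple iteration cap
        if len(selected) >= max_size:
            break
        remaining = [i for i in range(len(scores)) if i not in selected]
        if not remaining:
            break
        best = max(remaining, key=lambda i: acc(selected + [i]))
        selected.append(best)
        if len(selected) > 2:                                  # backtracking step
            worst = max(selected[:-1], key=lambda i: acc([j for j in selected if j != i]))
            if acc([j for j in selected if j != worst]) > acc(selected):
                selected.remove(worst)
    return selected

# Toy usage with random scores, just to show the call signature
rng = np.random.default_rng(0)
demo_scores = [rng.random((30, 3)) for _ in range(6)]
demo_labels = rng.integers(0, 3, size=30)
print(sffs_select(demo_scores, demo_labels, max_size=3))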

4.2. Stochastic Method (Stoc)

The stochastic approach [47] involves randomly substituting the activation layers in a CNN architecture with new ones selected from a pool of potential candidates. Random selection is repeated many times to generate a set of networks that will be fused together. The candidate activation functions within a pool differ depending on the CNN architecture. Some activation functions appear to perform poorly and some quite well on a given CNN, with quite a large variance. The activation functions included in the pools for each of the CNNs tested here are provided in Table 3. The CNN ensembles randomly built from these pools varied in size, as is noted in the experimental section, which investigates the different ensembles. Ensemble decisions are combined by sum rule, where the softmax probabilities of a sample given by all the networks are averaged, and the new score is used for classification. The stochastic method of combining CNNs is labeled Stoc in the experimental section.
It should be noted that there is no risk of overfitting in the proposed ensemble: the replacement is performed randomly, and we did not tune the choice on any particular data set. Overfitting could occur if we chose the Activation Functions (AFs) ad hoc for each data set; the aim of this work is to propose an ensemble based on stochastic selection of AFs precisely to avoid that risk. The disadvantage of our approach is the increased computation time needed to generate the ensembles. As a final note, since 2D MeLU, Splash, and SRS obtain low performance when run with MI = 255 using VGG16, we ran those tests on only a few data sets; the AFs that demonstrate poor performance were cut to reduce computation time.
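The following PyTorch-style sketch illustrates the idea of stochastic activation replacement and sum-rule fusion. It is not the authors' MATLAB implementation: the pool uses built-in torch activations as stand-ins for the pools of Table 3, only three variants are built to keep the sketch light (15 are used in the experiments), and the networks would still need to be fine-tuned on the target data set.

import random
import torch
import torch.nn as nn
from torchvision import models

POOL = [nn.ReLU, nn.LeakyReLU, nn.ELU, nn.PReLU, nn.SELU]   # stand-in candidate pool

def replace_relu_layers(module, rng):
    # Recursively swap every ReLU layer with an activation drawn at random from the pool
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, rng.choice(POOL)())
        else:
            replace_relu_layers(child, rng)

rng = random.Random(0)
ensemble = []
for _ in range(3):                          # 15 stochastic variants in the experiments
    net = models.resnet50(weights=None)     # the paper uses ImageNet pre-trained weights
    replace_relu_layers(net, rng)
    ensemble.append(net)                    # each variant is then trained separately

def sum_rule(nets, x):
    # Fusion by sum rule: average the softmax outputs of all member networks
    with torch.no_grad():
        probs = [torch.softmax(net(x), dim=1) for net in nets]
    return torch.stack(probs).mean(dim=0)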

5. Experimental Results

5.1. Biomedical Data Sets

There is no fixed definition of small/midsize data sets that applies to all fields of data mining. Whether a data set is considered large or small is relative to the task and the publication date of the research. Since many deep learning algorithms require large data sets to avoid overfitting, the expectation today is for extremely large data sets. In this work, we consider a data set small if it contains fewer than 1000 images and midsize if the number of images is between 1000 and 10,000.
Each of the activation functions detailed in Section 3 is tested on the CNNs using the following fifteen publicly available biomedical data sets:
  • CH (CHO data set [72]): this is a data set containing 327 fluorescence microscopy images of Chinese hamster ovary cells divided into five classes: antigiantin, Hoechst 33258 (DNA), antilamp2, antinop4, and antitubulin.
  • HE (2D HeLa data set [72]): this is a balanced data set containing 862 fluorescence microscopy images of HeLa cells stained with various organelle-specific fluorescent dyes. The images are divided into ten classes of organelles: DNA (Nuclei); ER (Endoplasmic reticulum); Giantin (cis/medial Golgi); GPP130 (cis Golgi); Lamp2 (Lysosomes); Nucleolin (Nucleoli); Actin; TfR (Endosomes); Mitochondria; and Tubulin.
  • RN (RNAi data set [73]): this is a data set of 200 fluorescence microscopy images of fly cells (D. melanogaster) divided into ten classes. Each class contains 1024 × 1024 TIFF images of phenotypes produced from one of ten knock-down genes, the IDs of which form the class labels.
  • MA (C. elegans Muscle Age data set [73]): this data set is for classifying the age of a nematode given twenty-five images of C. elegans muscles collected at four ages representing the classes.
  • TB (Terminal Bulb Aging data set [73]): this is the companion data set to MA and contains 970 images of C. elegans terminal bulbs collected at seven ages representing the classes.
  • LY (Lymphoma data set [73]): this data set contains 375 images of malignant lymphoma representative of three types: Chronic Lymphocytic Leukemia (CLL), Follicular Lymphoma (FL), and Mantle Cell Lymphoma (MCL).
  • LG (Liver Gender Caloric Restriction (CR) data set [73]): this data set contains 265 images of liver tissue sections from six-month-old male and female mice on a CR diet; the two classes represent the gender of the mice.
  • LA (Liver Aging Ad libitum data set [73]): this data set contains 529 images of liver tissue sections from female mice on an ad libitum diet divided into four classes representing the age of the mice.
  • CO (Colorectal Cancer [74]): this is a Zenodo data set (record: 53169#.WaXjW8hJaUm) of 5000 histological images (150 × 150 pixels each) of human colorectal cancer divided into eight classes.
  • BGR (Breast Grading Carcinoma [75]): this is a Zenodo data set (record: 834910#.Wp1bQ-jOWUl) that contains 300 annotated histological images of twenty-one patients with invasive ductal carcinoma of the breast representing three classes/grades 1–3.
  • LAR (Laryngeal data set [76]): this is a Zenodo data set (record: 1003200#.WdeQcnBx0nQ) containing 1320 images of thirty-three healthy and early-stage cancerous laryngeal tissues representative of four tissue classes.
  • HP (set of immunohistochemistry images from the Human Protein Atlas [77]): this is a Zenodo data set (record: 3875786#.XthkoDozY2w) of 353 images of fourteen proteins in nine normal reproductive tissues belonging to seven subcellular locations. The data set in [77] is partitioned into two folds, one for training (177 images) and one for testing (176 images).
  • RT (2D 3T3 Randomly CD-Tagged Images: Set 3 [78]): this collection of 304 2D 3T3 randomly CD-tagged images was created by randomly generating CD-tagged cell clones and imaging them by automated microscopy. The images are divided into ten classes. As in [78], the proteins are put into ten folds so that images in the training and testing sets never come from the same protein.
  • LO (Locate Endogenous data set [79]): this fairly balanced data set contains 502 images of endogenous cells divided into ten classes: Actin-cytoskeleton, Endosomes, ER, Golgi, Lysosomes, Microtubule, Mitochondria, Nucleus, Peroxisomes, and PM. This data set is archived at https://integbio.jp/dbcatalog/en/record/nbdc00296 (accessed on 9 August 2022).
  • TR (Locate Transfected data [79]): this is a companion data set to LO. TR contains 553 images divided into the same ten classes as LO plus the additional class of Cytoplasm, for a total of eleven classes.
Data sets 1–8 can be downloaded at https://ome.grc.nia.nih.gov/iicbu2008/ (accessed on 9 August 2022), data sets 9–12 can be found on Zenodo at https://zenodo.org/record/ (accessed on 9 August 2022) by concatenating the data set’s Zenodo record number provided in the descriptions above to this URL. Data set 13 is available at http://murphylab.web.cmu.edu/data/#RandTagAL (accessed on 9 August 2022), and data sets 14 and 15 are available on request. Unless otherwise noted, the five-fold cross-validation protocol is applied (see Table 3 for details), and the Wilcoxon signed-rank test [80] is the measure used to validate experiments.

5.2. Experimental Results

Reported in Table 4 and Table 5 is the performance (accuracy) of the different activation functions on the CNN topologies VGG16 and ResNet50, each trained with a batch size (BS) of 30 and a learning rate (LR) of 0.0001 for 20 epochs (the last fully connected layer has an LR 20 times larger than the rest of the layers (i.e., 0.002)), except the stochastic architectures that are trained for 30 epochs (because of slower convergence). The reason for selecting these settings was to reduce computation time. Images were augmented with random reflections on both axes and two independent random rescales of both axes by two factors uniformly sampled in [1,2] (using MATLAB data augmentation procedures). The objective was to rescale both the vertical and horizontal proportions of the new image. For each stochastic approach, a set of 15 networks was built and combined by sum rule. We trained the models using MATLAB 2019b; however, the pre-trained architectures of newer versions perform better.
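For reference, the augmentation described above (random reflections on both axes plus two independent rescales of the axes in [1, 2]) can be sketched as follows; this is an illustrative NumPy/SciPy version, not the MATLAB procedure actually used.

import numpy as np
from scipy.ndimage import zoom

def augment(image, rng):
    fy, fx = rng.uniform(1.0, 2.0, size=2)    # two independent scale factors in [1, 2]
    out = zoom(image, (fy, fx), order=1)      # rescale rows and columns independently
    if rng.random() < 0.5:
        out = out[::-1, :]                    # random vertical reflection
    if rng.random() < 0.5:
        out = out[:, ::-1]                    # random horizontal reflection
    return out

rng = np.random.default_rng(0)
print(augment(rng.random((64, 64)), rng).shape)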
The performance (accuracy) of the following ensembles is reported in Table 4 and Table 5:
  • ENS: sum rule of {MeLU ($k = 8$), Leaky ReLU, ELU, MeLU ($k = 4$), PReLU, SReLU, APLU, ReLU} (if $maxInput = 1$) or {MeLU ($k = 8$), MeLU ($k = 4$), SReLU, APLU, ReLU} (if $maxInput = 255$);
  • eENS: sum rule of the methods that belong to ENS considering both $maxInput = 1$ and $maxInput = 255$;
  • ENS_G: as in ENS but with Small GaLU and GaLU added, in both cases $maxInput = 1$ or $maxInput = 255$;
  • eENS_G: sum rule of the methods that belong to ENS_G but considering both $maxInput = 1$ and $maxInput = 255$;
  • ALL: sum rule among all the methods reported in Table 4 with $maxInput = 1$ or $maxInput = 255$. Notice that when the methods with $maxInput = 255$ are combined, standard ReLU is also added to the fusion. Due to computation time, some activation functions are not combined with VGG16 and so are not considered;
  • eALL: sum rule among all the methods, both with $maxInput = 1$ and $maxInput = 255$. Due to computation time, some activation functions are not combined with VGG16 and thus are not considered in the ensemble;
  • 15ReLU: ensemble obtained by the fusion of 15 ReLU models. Each network is different because of the stochasticity of the training process;
  • Selection: ensemble selected using SFFS (see Section 4.1);
  • Stoc_1: MeLU ($k = 8$), Leaky ReLU, ELU, MeLU ($k = 4$), PReLU, SReLU, APLU, GaLU, sGaLU. $maxInput = 255$ has been used in the stochastic approach (see Section 4.2);
  • Stoc_2: the same nine functions as Stoc_1 plus an additional set of seven activation functions: ReLU, Soft Learnable, PDELU, learnable Mish, SRS, Swish Learnable, and Swish. $maxInput = 255$ has been used;
  • Stoc_3: same as Stoc_2 but excluding all the activation functions proposed in [46,47,61] (i.e., MeLU, GaLU, and sGaLU);
  • Stoc_4: the ensemble detailed in Section 4.
The most relevant results reported in Table 4 on ResNet50 can be summarized as follows:
  • ensemble methods outperform stand-alone networks. This result confirms previous research showing that changing activation functions is a viable method for creating ensembles of networks. Note how well 15ReLU outperforms (p-value of 0.01) the stand-alone ReLU;
  • among the stand-alone ResNet50 networks, ReLU is not the best activation function. The two activations that reach the highest performance on ResNet50 are MeLU ($k = 8$) with $maxInput = 255$ and Splash with $maxInput = 255$. According to the Wilcoxon signed rank test, MeLU ($k = 8$) with $maxInput = 255$ outperforms ReLU with a p-value of 0.1. There is no statistical difference between MeLU ($k = 8$) and Splash (with $maxInput = 255$ for both);
  • according to the Wilcoxon signed rank test, Stoc_4 and Stoc_2 are similar in performance, and both outperform the other stochastic approach with a p-value of 0.1;
  • Stoc_4 outperforms eALL, 15ReLU, and Selection with a p-value of 0.1. Selection outperforms 15ReLU with p-value of 0.01, but Selection’s performance is similar to eALL.
Examining Figure 5, which illustrates the average rank of the different methods used in Table 4, with ensembles in dark blue and stand-alone in light blue, it is clear that:
(a) there is not a clear winner among the different AFs;
(b) ensembles work better than stand-alone approaches;
(c) the methods named Stoc_x work better than the other ensembles.
The most relevant results reported in Table 5 on VGG16 can be summarized as follows:
  • again, the ensemble methods outperform the stand-alone CNNs. As was the case with ResNet50, 15ReLU strongly outperforms (p-value of 0.01) the stand-alone CNNs with ReLU;
  • among the stand-alone VGG16 networks, ReLU is not the best activation function. The two activations that reach the highest performance on VGG16 are MeLU ($k = 4$) with $maxInput = 255$ and GaLU with $maxInput = 255$. According to the Wilcoxon signed rank test, there is no statistical difference between ReLU, MeLU ($k = 4$) with MI = 255, and GaLU with MI = 255;
  • interestingly, ALL with $maxInput = 1$ outperforms eALL with a p-value of 0.05;
  • Stoc_4 outperforms 15ReLU with a p-value of 0.01, but the performance of Stoc_4 is similar to eALL, ALL ($maxInput = 1$), and Selection.
Considering both ResNet50 and VGG16, the best AF is MeLU ($k = 8$) with MI = 255. It outperforms ReLU with a p-value of 0.1 on ResNet50 and a p-value of 0.16 on VGG16. Interestingly, the best average AF is a learnable one that works even on small/midsize data sets.
Figure 6 provides a graph reporting the average rank of different AFs and ensembles for VGG16. As with ResNet50 (see Figure 5), it is clear that ensembles of AFs outperform the baseline 15ReLU and stand-alone networks. With VGG16, the performance of Stoc_4 is similar to eALL and Selection.
To further substantiate the power of varying AFs in ensembles on small to midsize data sets, Table 6 reports a further batch of tests comparing 15ReLU and the ensembles built by varying the activation functions on five different CNN topologies, each trained with a batch size (BS) of 30 and a learning rate (LR) of 0.001 for 20 epochs (the last fully connected layer has an LR 20 times larger than the rest of the layers (i.e., 0.02)) using MATLAB 2021b. Note that these parameters are slightly different from those of the previous tests. We did not run tests using VGG16 due to computational issues.
The tested CNNs are the following:
  • EfficientNetB0 [81]: this CNN does not have ReLU layers, so we only compare the stand-alone CNN with the ensemble labeled 15Reit (15 reiterations of the training).
  • MobileNetV2 [82].
  • DarkNet53 [83]: this deep network uses Leaky ReLU with no ReLU layers; the fusion of 15 standard DarkNet53 models is labeled 15Leaky.
  • DenseNet201 [84].
  • ResNet50.
As in the previous tests, training images were augmented with random reflections on both axes and two independent random rescales of both axes by two factors uniformly sampled in [1,2] (using MATLAB data augmentation procedures). The objective was to rescale both the vertical and horizontal proportions of the new image.
The most relevant results reported in Table 6 can be summarized as follows:
  • the ensembles strongly outperform (p-value 0.01) the stand-alone CNN in each topology;
  • in MobileNetV2, DenseNet201, and ResNet50, Stoc_4 outperforms 15ReLU (p-value 0.05);
  • DarkNet53 behaved differently: on this network, 15Leaky and Stoc_4 obtained similar performance.
In Table 7, we report the performance on a few data sets obtained by ResNet50 when choosing the optimal values of BS and LR for ReLU. Even with BS and LR optimized for ReLU, the performance of Stoc_4 is higher than that obtained by ReLU and 15ReLU.
In Table 8, we report some computation time tests.
Hardware improvements reduce inference time, and in several applications it is not a problem to classify 100 images in just a few seconds.
In Table 9, we report the four best AFs for each topology with both MI = 1 and MI = 255.
If we consider the two larger data sets, CO and LAR, the best AF is always a learnable one:
  • CO—ResNet: the best is Swish Learnable;
  • LAR—ResNet: the best is 2D MeLU;
  • CO—VGG16: the best is MeLU + GaLU;
  • LAR—VGG16: the best is MeLU (k = 4).
Notably, some of the best-performing AFs are among those proposed here.

6. Conclusions

The goal of this study was to evaluate some state-of-the-art deep learning techniques on medical images and data. Towards this aim, we evaluated the performance of CNN ensembles created by replacing the ReLU layers with activations from a large set of activation functions, including six new activation functions introduced here for the first time (2D Mexican ReLU, TanELU, MeLU + GaLU, Symmetric MeLU, Symmetric GaLU, and Flexible MeLU). Tests were run on two different networks: VGG16 and ResNet50, across fifteen challenging image data sets representing various tasks. Several methods for making ensembles were also explored.
Experiments demonstrate that an ensemble of multiple CNNs that differ only in their activation functions outperforms the results of single CNNs. Experiments also show that, among the single architectures, there is no clear winner.
More studies need to investigate the performance gains offered by our approach on even more data sets. It would be of value, for instance, to examine whether the boosts in performance our system achieved on the type of data tested in this work would transfer to other types of medical data, such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), as well as to image/tumor segmentation. Studies such as the one presented here are difficult, however, because investigating CNNs requires enormous computational resources. Nonetheless, such studies are necessary to increase the capacity of deep learners to classify medical images and data accurately.

Author Contributions

Investigation, S.B., M.P. and S.G.; Methodology, L.N., M.P. and S.G.; Project administration, L.N.; Resources, S.B. and M.P.; Software, M.P.; Writing—original draft, L.N., S.B., M.P. and S.G.; Writing—review & editing, S.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

The authors are grateful to NVIDIA Corporation for supporting this research with the donation of a Titan Xp GPU. The authors also wish to acknowledge the Tampere Center for Scientific Computing (TCSC) and IT Center for Science (CSC, Finland) for generous computational resources. Part of this work was supported by the Italian Minister for Education (MIUR) under the initiative “Departments of Excellence” (Law 232/2016).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sony, S.; Dunphy, K.; Sadhu, A.; Capretz, M. A systematic review of convolutional neural network-based structural condition assessment techniques. Eng. Struct. 2021, 226, 111347. [Google Scholar] [CrossRef]
  2. Christin, S.; Hervet, É.; Lecomte, N. Applications for deep learning in ecology. Methods Ecol. Evol. 2019, 10, 1632–1644. [Google Scholar] [CrossRef]
  3. Min, S.; Lee, B.; Yoon, S. Deep learning in bioinformatics. Brief. Bioinform. 2016, 18, 851–869. [Google Scholar] [CrossRef] [PubMed]
  4. Kattenborn, T.; Leitloff, J.; Schiefer, F.; Hinz, S. Review on Convolutional Neural Networks (CNN) in vegetation remote sensing. ISPRS J. Photogramm. Remote Sens. 2021, 173, 24–49. [Google Scholar] [CrossRef]
  5. Yapici, M.M.; Tekerek, A.; Topaloğlu, N. Literature review of deep learning research areas. Gazi Mühendislik Bilimleri Derg. GMBD 2019, 5, 188–215. [Google Scholar]
  6. Bakator, M.; Radosav, D. Deep Learning and Medical Diagnosis: A Review of Literature. Multimodal Technol. Interact. 2018, 2, 47. Available online: https://www.mdpi.com/2414-4088/2/3/47 (accessed on 9 August 2022). [CrossRef]
  7. Wang, F.; Casalino, L.P.; Khullar, D. Deep learning in medicine—Promise, progress, and challenges. JAMA Intern. Med. 2019, 179, 293–294. [Google Scholar] [CrossRef]
  8. Ching, T.; Himmelstein, D.S.; Beaulieu-Jones, B.K.; Kalinin, A.A.; Do, B.T.; Way, G.P.; Ferrero, E.; Agapow, P.-M.; Zietz, M.; Hoffman, M.M.; et al. Opportunities and obstacles for deep learning in biology and medicine. J. R. Soc. Interface 2018, 15, 20170387. [Google Scholar] [CrossRef] [PubMed]
  9. Cai, L.; Gao, J.; Zhao, D. A review of the application of deep learning in medical image classification and segmentation. Ann. Transl. Med. 2020, 8, 713. [Google Scholar] [CrossRef]
  10. Rawat, W.; Wang, Z. Deep Convolutional Neural Networks for Image Classification: A Comprehensive Review. Neural Comput. 2017, 29, 2352–2449. [Google Scholar] [CrossRef] [PubMed]
  11. Dhillon, A.; Verma, G.K. Convolutional neural network: A review of models, methodologies and applications to object detection. Prog. Artif. Intell. 2020, 9, 85–112. [Google Scholar] [CrossRef]
  12. Zhao, Z.-Q.; Zheng, P.; Xu, S.-T.; Wu, X. Object detection with deep learning: A review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3212–3232. [Google Scholar] [CrossRef] [PubMed]
  13. Taskiran, M.; Kahraman, N.; Erdem, C.E. Face recognition: Past, present and future (a review). Digit. Signal Processing 2020, 106, 102809. [Google Scholar] [CrossRef]
  14. Kortli, Y.; Jridi, M.; al Falou, A.; Atri, M. Face recognition systems: A survey. Sensors 2020, 20, 342. [Google Scholar] [CrossRef]
  15. Bodapati, S.; Bandarupally, H.; Shaw, R.N.; Ghosh, A. Comparison and analysis of RNN-LSTMs and CNNs for social reviews classification. In Advances in Applications of Data-Driven Computing; Springer: Berlin/Heidelberg, Germany, 2021; pp. 49–59. [Google Scholar]
  16. Haggenmüller, S.; Maron, R.C.; Hekler, A.; Utikal, J.S.; Barata, C.; Barnhill, R.L.; Beltraminelli, H.; Berking, C.; Betz-Stablein, B.; Blum, A.; et al. Skin cancer classification via convolutional neural networks: Systematic review of studies involving human experts. Eur. J. Cancer 2021, 156, 202–216. [Google Scholar] [CrossRef] [PubMed]
  17. Haenssle, H.A.; Winkler, J.K.; Fink, C.; Toberer, F.; Enk, A.; Stolz, W.; Deinlein, T.; Hofmann-Wellenhof, R.; Kittler, H.; Tschandl, P.; et al. Skin lesions of face and scalp—Classification by a market-approved convolutional neural network in comparison with 64 dermatologists. Eur. J. Cancer 2021, 144, 192–199. [Google Scholar] [CrossRef]
  18. Zhang, S.M.; Wang, Y.J.; Zhang, S.T. Accuracy of artificial intelligence-assisted detection of esophageal cancer and neoplasms on endoscopic images: A systematic review and meta-analysis. J. Dig. Dis. 2021, 22, 318–328. [Google Scholar] [CrossRef]
  19. Singh, S.P.; Wang, L.; Gupta, S.; Goli, H.; Padmanabhan, P.; Gulyás, B. 3D Deep Learning on Medical Images: A Review. Sensors 2020, 20, 5097. [Google Scholar] [CrossRef]
  20. Gurovich, Y.; Hanani, Y.; Bar, O.; Nadav, G.; Fleischer, N.; Gelbman, D.; Basel-Salmon, L.; Krawitz, P.M.; Kamphausen, S.B.; Zenker, M.; et al. Identifying facial phenotypes of genetic disorders using deep learning. Nat. Med. 2019, 25, 60–64. [Google Scholar] [CrossRef]
  21. Oltu, B.; Karaca, B.K.; Erdem, H.; Özgür, A. A systematic review of transfer learning based approaches for diabetic retinopathy detection. arXiv 2021, arXiv:2105.13793. [Google Scholar]
  22. Kadan, A.B.; Subbian, P.S. Diabetic Retinopathy Detection from Fundus Images Using Machine Learning Techniques: A Review. Wirel. Pers. Commun. 2021, 121, 2199–2212. [Google Scholar] [CrossRef]
  23. Kapoor, P.; Arora, S. Applications of Deep Learning in Diabetic Retinopathy Detection and Classification: A Critical Review. In Proceedings of Data Analytics and Management; Springer: Singapore, 2021; pp. 505–535. [Google Scholar]
  24. Mirzania, D.; Thompson, A.C.; Muir, K.W. Applications of deep learning in detection of glaucoma: A systematic review. Eur. J. Ophthalmol. 2021, 31, 1618–1642. [Google Scholar] [CrossRef] [PubMed]
  25. Gumma, L.N.; Thiruvengatanadhan, R.; Kurakula, L.; Sivaprakasam, T. A Survey on Convolutional Neural Network (Deep-Learning Technique) -Based Lung Cancer Detection. SN Comput. Sci. 2021, 3, 66. [Google Scholar] [CrossRef]
  26. Abdelrahman, L.; al Ghamdi, M.; Collado-Mesa, F.; Abdel-Mottaleb, M. Convolutional neural networks for breast cancer detection in mammography: A survey. Comput. Biol. Med. 2021, 131, 104248. [Google Scholar] [CrossRef]
  27. Leng, X. Photoacoustic Imaging of Colorectal Cancer and Ovarian Cancer. Ph.D. Dissertation, Washington University in St. Louis, St. Louis, MO, USA, 2022. [Google Scholar]
  28. Yu, C.; Helwig, E.J. Artificial intelligence in gastric cancer: A translational narrative review. Ann. Transl. Med. 2021, 9, 269. [Google Scholar] [CrossRef]
  29. Kuntz, S.; Krieghoff-Henning, E.; Kather, J.N.; Jutzi, T.; Höhn, J.; Kiehl, L.; Hekler, A.; Alwers, E.; von Kalle, C.; Fröhling, S.; et al. Gastrointestinal cancer classification and prognostication from histology using deep learning: Systematic review. Eur. J. Cancer 2021, 155, 200–215. [Google Scholar] [CrossRef]
  30. Desai, M.; Shah, M. An anatomization on breast cancer detection and diagnosis employing multi-layer perceptron neural network (MLP) and Convolutional neural network (CNN). Clin. Ehealth 2021, 4, 1–11. [Google Scholar] [CrossRef]
  31. Senthil, K. Ovarian cancer diagnosis using pretrained mask CNN-based segmentation with VGG-19 architecture. Bio-Algorithms Med-Syst. 2021. [Google Scholar] [CrossRef]
  32. Soudy, M.; Alam, A.; Ola, O. Predicting the Cancer Recurrence Using Artificial Neural Networks. In Computational Intelligence in Oncology; Springer: Singapore, 2022; pp. 177–186. [Google Scholar]
  33. AbdulAzeem, Y.; Bahgat, W.M.; Badawy, M. A CNN based framework for classification of Alzheimer’s disease. Neural Comput. Appl. 2021, 33, 10415–10428. [Google Scholar] [CrossRef]
  34. Amini, M.; Pedram, M.M.; Moradi, A.; Ouchani, M. Diagnosis of Alzheimer’s Disease Severity with fMRI Images Using Robust Multitask Feature Extraction Method and Convolutional Neural Network (CNN). Comput. Math. Methods Med. 2021, 2021, 5514839. [Google Scholar] [CrossRef]
  35. Khanagar, S.B.; Naik, S.; Al Kheraif, A.A.; Vishwanathaiah, S.; Maganur, P.C.; Alhazmi, Y.; Mushtaq, S.; Sarode, S.C.; Sarode, G.S.; Zanza, A.; et al. Application and performance of artificial intelligence technology in oral cancer diagnosis and prediction of prognosis: A systematic review. Diagnostics 2021, 11, 1004. [Google Scholar] [CrossRef] [PubMed]
  36. Ren, R.; Luo, H.; Su, C.; Yao, Y.; Liao, W. Machine learning in dental, oral and craniofacial imaging: A review of recent progress. PeerJ 2021, 9, e11451. [Google Scholar] [CrossRef] [PubMed]
  37. Mohan, B.P.; Khan, S.R.; Kassab, L.L.; Ponnada, S.; Chandan, S.; Ali, T.; Dulai, P.S.; Adler, D.G.; Kochhar, G.S. High pooled performance of convolutional neural networks in computer-aided diagnosis of GI ulcers and/or hemorrhage on wireless capsule endoscopy images: A systematic review and meta-analysis. Gastrointest. Endosc. 2021, 93, 356–364.e4. [Google Scholar] [CrossRef] [PubMed]
  38. Esteva, A.; Chou, K.; Yeung, S.; Naik, N.; Madani, A.; Mottaghi, A.; Liu, Y.; Topol, E.; Dean, J.; Socher, R. Deep learning-enabled medical computer vision. NPJ Digit. Med. 2021, 4, 5. [Google Scholar] [CrossRef] [PubMed]
  39. Gonçalves, C.B.; Souza, J.R.; Fernandes, H. Classification of static infrared images using pre-trained CNN for breast cancer detection. In Proceedings of the 2021 IEEE 34th International Symposium on Computer-Based Medical Systems (CBMS), Aveiro, Portugal, 7–9 June 2021; pp. 101–106. [Google Scholar]
  40. Morid, M.A.; Borjali, A.; del Fiol, G. A scoping review of transfer learning research on medical image analysis using ImageNet. Comput. Biol. Med. 2021, 128, 104115. [Google Scholar] [CrossRef]
  41. Chlap, P.; Min, H.; Vandenberg, N.; Dowling, J.; Holloway, L.; Haworth, A. A review of medical image data augmentation techniques for deep learning applications. J. Med. Imaging Radiat. Oncol. 2021, 65, 545–563. [Google Scholar] [CrossRef]
  42. Papanastasopoulos, Z.; Samala, R.K.; Chan, H.-P.; Hadjiiski, L.; Paramagul, C.; Helvie, M.A.; Neal, C.H. Explainable AI for medical imaging: Deep-learning CNN ensemble for classification of estrogen receptor status from breast MRI. In Medical Imaging 2020: Computer-Aided Diagnosis; International Society for Optics and Photonics: Bellingham, WA, USA, 2020; p. 113140Z. [Google Scholar]
  43. Singh, R.K.; Gorantla, R. DMENet: Diabetic macular edema diagnosis using hierarchical ensemble of CNNs. PLoS ONE 2020, 15, e0220677. [Google Scholar]
  44. Coupé, P.; Mansencal, B.; Clément, M.; Giraud, R.; de Senneville, B.D.; Ta, V.; Lepetit, V.; Manjon, J.V. AssemblyNet: A large ensemble of CNNs for 3D whole brain MRI segmentation. NeuroImage 2020, 219, 117026. [Google Scholar] [CrossRef]
  45. Savelli, B.; Bria, A.; Molinara, M.; Marrocco, C.; Tortorella, F. A multi-context CNN ensemble for small lesion detection. Artif. Intell. Med. 2020, 103, 101749. [Google Scholar] [CrossRef]
  46. Maguolo, G.; Nanni, L.; Ghidoni, S. Ensemble of Convolutional Neural Networks Trained with Different Activation Functions. Expert Syst. Appl. 2021, 166, 114048. [Google Scholar] [CrossRef]
  47. Nanni, L.; Lumini, A.; Ghidoni, S.; Maguolo, G. Stochastic Selection of Activation Layers for Convolutional Neural Networks. Sensors 2020, 20, 1626. [Google Scholar] [CrossRef] [PubMed]
  48. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  49. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  50. Glorot, X.; Bordes, A.; Bengio, Y. Deep Sparse Rectifier Neural Networks. In AISTATS; PMLR: Birmingham, UK, 2011; Available online: https://pdfs.semanticscholar.org/6710/7f78a84bdb2411053cb54e94fa226eea6d8e.pdf?_ga=2.211730323.729472771.1575613836-1202913834.1575613836 (accessed on 9 August 2022).
  51. Nair, V.; Hinton, G.E. Rectified Linear Units Improve Restricted Boltzmann Machines. In Proceedings of the 27th International Conference on Machine Learning, Haifa, Israel, 21 June 2010. [Google Scholar]
52. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems; Pereira, F., Burges, C.J.C., Bottou, L., Weinberger, K.Q., Eds.; Curran Associates, Inc.: New York, NY, USA, 2012; pp. 1097–1105. [Google Scholar]
  53. Maas, A.L. Rectifier Nonlinearities Improve Neural Network Acoustic Models. 2013. Available online: https://pdfs.semanticscholar.org/367f/2c63a6f6a10b3b64b8729d601e69337ee3cc.pdf?_ga=2.208124820.729472771.1575613836-1202913834.1575613836 (accessed on 9 August 2022).
  54. Clevert, D.-A.; Unterthiner, T.; Hochreiter, S. Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs). arXiv 2015, arXiv:1511.07289v5. [Google Scholar]
  55. Klambauer, G.; Unterthiner, T.; Mayr, A.; Hochreiter, S. Self-Normalizing Neural Networks. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  56. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1026–1034. [Google Scholar]
  57. Agostinelli, F.; Hoffman, M.D.; Sadowski, P.J.; Baldi, P. Learning Activation Functions to Improve Deep Neural Networks. arXiv 2014, arXiv:1412.6830. [Google Scholar]
  58. Scardapane, S.; Vaerenbergh, S.V.; Uncini, A. Kafnets: Kernel-based non-parametric activation functions for neural networks. Neural Netw. Off. J. Int. Neural Netw. Soc. 2017, 110, 19–32. [Google Scholar] [CrossRef] [PubMed]
  59. Manessi, F.; Rozza, A. Learning Combinations of Activation Functions. In Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 61–66. [Google Scholar]
  60. Ramachandran, P.; Zoph, B.; Le, Q.V. Searching for Activation Functions. arXiv 2017, arXiv:1710.05941. [Google Scholar]
  61. Maguolo, G.; Nanni, L.; Ghidoni, S. Ensemble of convolutional neural networks trained with different activation functions. arXiv 2019, arXiv:1905.02473. [Google Scholar] [CrossRef]
  62. Junior, B.G.; da Rocha, S.V.; Gattass, M.; Silva, A.C.; de Paiva, A.C. A mass classification using spatial diversity approaches in mammography images for false positive reduction. Expert Syst. Appl. 2013, 40, 7534–7543. [Google Scholar] [CrossRef]
  63. Jin, X.; Xu, C.; Feng, J.; Wei, Y.; Xiong, J.; Yan, S. Deep learning with S-shaped rectified linear activation units. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12 February 2016. [Google Scholar]
  64. Tavakoli, M.; Agostinelli, F.; Baldi, P. SPLASH: Learnable Activation Functions for Improving Accuracy and Adversarial Robustness. arXiv 2020, arXiv:2006.08947. [Google Scholar] [CrossRef]
  65. Misra, D. Mish: A Self Regularized Non-Monotonic Activation Function. arXiv 2020, arXiv:1908.08681. [Google Scholar]
  66. Cheng, Q.; Li, H.; Wu, Q.; Ma, L.; Ngan, K.N. Parametric Deformable Exponential Linear Units for deep neural networks. Neural Netw. 2020, 125, 281–289. [Google Scholar] [CrossRef] [PubMed]
  67. Zhou, Y.; Li, D.; Huo, S.; Kung, S. Soft-Root-Sign Activation Function. arXiv 2020, arXiv:2003.00547. [Google Scholar]
  68. Berno, F.; Nanni, L.; Maguolo, G.; Brahnam, S. Ensembles of convolutional neural networks with different activation functions for small to medium size biomedical datasets. In Machine Learning in Medicine; CRC Press Taylor & Francis Group: Boca Raton, FL, USA, 2021; In Press. [Google Scholar]
  69. Duch, W.; Jankowski, N. Survey of neural transfer functions. Neural Comput. Surv. 1999, 2, 163–212. [Google Scholar]
  70. Nicolae, A. PLU: The Piecewise Linear Unit Activation Function. arXiv 2018, arXiv:2104.03693. [Google Scholar]
71. Pudil, P.; Novovicova, J.; Kittler, J. Floating search methods in feature selection. Pattern Recognit. Lett. 1994, 15, 1119–1125. [Google Scholar] [CrossRef]
  72. Boland, M.V.; Murphy, R.F. A neural network classifier capable of recognizing the patterns of all major subcellular structures in fluorescence microscope images of HeLa cells. BioInformatics 2001, 17, 1213–1223. [Google Scholar] [CrossRef] [PubMed]
  73. Shamir, L.; Orlov, N.V.; Eckley, D.M.; Goldberg, I. IICBU 2008: A proposed benchmark suite for biological image analysis. Med. Biol. Eng. Comput. 2008, 46, 943–947. [Google Scholar] [CrossRef]
  74. Kather, J.N.; Weis, C.-A.; Bianconi, F.; Melchers, S.M.; Schad, L.R.; Gaiser, T.; Marx, A.; Zöllner, F.G. Multi-class texture analysis in colorectal cancer histology. Sci. Rep. 2016, 6, 27988. [Google Scholar] [CrossRef]
  75. Dimitropoulos, K.; Barmpoutis, P.; Zioga, C.; Kamas, A.; Patsiaoura, K.; Grammalidis, N. Grading of invasive breast carcinoma through Grassmannian VLAD encoding. PLoS ONE 2017, 12, e0185110. [Google Scholar] [CrossRef]
  76. Moccia, S.; De Momi, E.; Guarnaschelli, M.; Savazzi, M.; Laborai, A. Confident texture-based laryngeal tissue classification for early stage diagnosis support. J. Med. Imaging 2017, 4, 34502. [Google Scholar] [CrossRef]
  77. Yang, F.; Xu, Y.; Wang, S.; Shen, H. Image-based classification of protein subcellular location patterns in human reproductive tissue by ensemble learning global and local features. Neurocomputing 2014, 131, 113–123. [Google Scholar] [CrossRef]
  78. Coelho, L.P.; Kangas, J.D.; Naik, A.W.; Osuna-Highley, E.; Glory-Afshar, E.; Fuhrman, M.; Simha, R.; Berget, P.B.; Jarvik, J.W.; Murphy, R.F. Determining the subcellular location of new proteins from microscope images using local features. Bioinformatics 2013, 29, 2343–2352. [Google Scholar] [CrossRef] [PubMed]
  79. Hamilton, N.; Pantelic, R.; Hanson, K.; Teasdale, R.D. Fast automated cell phenotype classification. BMC Bioinform. 2007, 8, 110. [Google Scholar] [CrossRef] [PubMed]
  80. Demšar, J. Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res. 2006, 7, 1–30. [Google Scholar]
  81. Tan, M.; Le, Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv 2019, arXiv:1905.11946. [Google Scholar]
  82. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar] [CrossRef]
83. Redmon, J. Darknet: Open Source Neural Networks in C. Available online: https://pjreddie.com/darknet/ (accessed on 22 January 2022).
  84. Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. CVPR 2017, 1, 3. [Google Scholar]
Figure 1. ReLU.
Figure 2. Leaky ReLU.
Figure 3. ELU.
Figure 4. SReLU.
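For reference, the fixed activations plotted in Figures 1–3 follow their standard definitions [50,53,54]; the sketch below is a minimal NumPy illustration, where the negative slope and α are illustrative defaults rather than the values used in our experiments. SReLU (Figure 4) is a piecewise-linear unit with learnable thresholds and slopes [63] and is therefore not reproduced here.

```python
import numpy as np

def relu(x):
    # ReLU: max(0, x)
    return np.maximum(0.0, x)

def leaky_relu(x, negative_slope=0.01):
    # Leaky ReLU: identity for x > 0, a small linear slope otherwise
    return np.where(x > 0, x, negative_slope * x)

def elu(x, alpha=1.0):
    # ELU: identity for x > 0, alpha * (exp(x) - 1) otherwise
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))
```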
Figure 5. Average rank (lower is better) obtained by different AFs and ensembles coupled with ResNet50 (light blue represents stand-alone methods and dark blue, ensembles).
Figure 6. Average rank (lower is better) obtained by different AFs and ensembles coupled with VGG16 (light blue represents stand-alone methods and dark blue, ensembles).
Table 1. Fixed parameters of MeLU with maxInput = 256 (these are the same values as in [61]).
j | 1 | 2 | 3 | 4 | 5 | 6 | 7
a_j | 512 | 256 | 768 | 128 | 384 | 640 | 896
λ_j | 512 | 256 | 256 | 128 | 128 | 128 | 128
Table 2. Comparison of the fixed parameters of GaLU and MeLU with maxInput = 1.
j | 1 | 2 | 3 | 4 | 5 | 6 | 7
MeLU a_j | 2.00 | 1.00 | 3.00 | 0.50 | 1.50 | 2.50 | 3.50
MeLU λ_j | 2.00 | 1.00 | 1.00 | 0.50 | 0.50 | 0.50 | 0.50
GaLU a_j | 1.00 | 0.50 | 2.50 | 0.25 | 1.25 | 2.25 | 3.25
GaLU λ_j | 1.00 | 0.50 | 0.50 | 0.25 | 0.25 | 0.25 | 0.25
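To make the role of the (a_j, λ_j) pairs concrete, the sketch below assembles a MeLU response from Mexican-hat basis functions following the formulation in [61]: each pair defines a triangular bump, and the learnable coefficients c_j weight those bumps on top of a PReLU-like base. This is a minimal NumPy illustration under those assumptions (the slope and the dummy c values are not our trained parameters), not the training implementation.

```python
import numpy as np

# Fixed (a_j, lambda_j) pairs from Table 1 (maxInput = 256)
A      = np.array([512, 256, 768, 128, 384, 640, 896], dtype=float)
LAMBDA = np.array([512, 256, 256, 128, 128, 128, 128], dtype=float)

def mexican_hat(x, a, lam):
    # phi_{a,lambda}(x) = max(lambda - |x - a|, 0): a triangular bump centred at a
    return np.maximum(lam - np.abs(x - a), 0.0)

def melu(x, c, base_slope=0.01):
    # MeLU = PReLU-like base + weighted sum of Mexican-hat bumps (the c_j are the learnable part)
    base = np.where(x > 0, x, base_slope * x)
    bumps = sum(cj * mexican_hat(x, aj, lj) for cj, aj, lj in zip(c, A, LAMBDA))
    return base + bumps

# With all c_j = 0 the bumps vanish and MeLU reduces to its base activation
y = melu(np.linspace(0, 1024, 5), c=np.zeros(7))
```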
Table 3. Description of the data sets: xCV means x-fold cross-validation; Tr-Te means that the training and test sets were split by the authors of that data set.
Short Name | Full Name | #Classes | #Samples | Protocol | Image Type
CH | CHO | 5 | 327 | 5CV | hamster ovary cells
HE | 2D HeLa | 10 | 862 | 5CV | subcellular location
RN | RNAi data set | 10 | 200 | 5CV | fly cells
MA | Muscle aging | 4 | 237 | 5CV | muscles
TB | Terminal Bulb Aging | 7 | 970 | 5CV | terminal bulbs
LY | Lymphoma | 3 | 375 | 5CV | malignant lymphoma
LG | Liver Gender | 2 | 265 | 5CV | liver tissue
LA | Liver Aging | 4 | 529 | 5CV | liver tissue
CO | Colorectal Cancer | 8 | 5000 | 10CV | histological images
BGR | Breast grading carcinoma | 3 | 300 | 5CV | histological images
LAR | Laryngeal data set | 4 | 1320 | Tr-Te | laryngeal tissues
HP | Immunohistochemistry images from the human protein atlas | 7 | 353 | Tr-Te | reproductive tissues
RT | 2D 3T3 Randomly CD-Tagged Cell Clones | 10 | 304 | 10CV | CD-tagged cell clones
LO | Locate Endogenous | 10 | 502 | 5CV | subcellular location
TR | Locate Transfected | 11 | 553 | 5CV | subcellular location
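As a concrete reading of the Protocol column, a 5CV entry means the reported accuracy is the average over a five-fold split of the data, while Tr-Te uses the authors' fixed split. The following is a minimal sketch of that cross-validation loop, assuming a generic estimator with fit/score methods as a stand-in for our training pipeline.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def cross_validated_accuracy(X, y, build_model, n_splits=5):
    """Average accuracy over an n-fold stratified split (the xCV protocol)."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    scores = []
    for train_idx, test_idx in skf.split(X, y):
        model = build_model()                     # fresh model per fold
        model.fit(X[train_idx], y[train_idx])
        scores.append(model.score(X[test_idx], y[test_idx]))
    return float(np.mean(scores))
```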
Table 4. Performance (accuracy) of the activation functions obtained using ResNet50.
Activation | CH | HE | LO | TR | RN | TB | LY | MA | LG | LA | CO | BG | LAR | RT | HP | Avg
ResNet50, MaxInput = 1
MeLU (k = 8) | 92.92 | 86.40 | 91.80 | 82.91 | 25.50 | 56.29 | 67.47 | 76.25 | 91.00 | 82.48 | 94.82 | 89.67 | 88.79 | 68.36 | 48.86 | 76.23
Leaky ReLU | 89.23 | 87.09 | 92.80 | 84.18 | 34.00 | 57.11 | 70.93 | 79.17 | 93.67 | 82.48 | 95.66 | 90.33 | 87.27 | 69.72 | 45.45 | 77.27
ELU | 90.15 | 86.74 | 94.00 | 85.82 | 48.00 | 60.82 | 65.33 | 85.00 | 96.00 | 90.10 | 95.14 | 89.33 | 89.92 | 73.50 | 40.91 | 79.38
MeLU (k = 4) | 91.08 | 85.35 | 92.80 | 84.91 | 27.50 | 55.36 | 68.53 | 77.08 | 90.00 | 79.43 | 95.34 | 89.33 | 87.20 | 72.24 | 51.14 | 76.48
PReLU | 92.00 | 85.35 | 91.40 | 81.64 | 33.50 | 57.11 | 68.80 | 76.25 | 88.33 | 82.10 | 95.68 | 88.67 | 89.55 | 71.20 | 44.89 | 76.43
SReLU | 91.38 | 85.58 | 92.60 | 83.27 | 30.00 | 55.88 | 69.33 | 75.00 | 88.00 | 82.10 | 95.66 | 89.00 | 89.47 | 69.98 | 42.61 | 75.99
APLU | 92.31 | 87.09 | 93.20 | 80.91 | 25.00 | 54.12 | 67.20 | 76.67 | 93.00 | 82.67 | 95.46 | 90.33 | 88.86 | 71.65 | 48.30 | 76.45
ReLU | 93.54 | 89.88 | 95.60 | 90.00 | 55.00 | 58.45 | 77.87 | 90.00 | 93.00 | 85.14 | 94.92 | 88.67 | 87.05 | 69.77 | 48.86 | 81.18
Small GaLU | 92.31 | 87.91 | 93.20 | 91.09 | 52.00 | 60.00 | 72.53 | 90.00 | 95.33 | 87.43 | 95.38 | 87.67 | 88.79 | 67.57 | 44.32 | 80.36
GaLU | 92.92 | 88.37 | 92.20 | 90.36 | 41.50 | 57.84 | 73.60 | 89.17 | 92.67 | 88.76 | 94.90 | 90.33 | 90.00 | 72.98 | 48.86 | 80.29
Flexible MeLU | 91.69 | 88.49 | 93.00 | 91.64 | 38.50 | 60.31 | 73.33 | 88.33 | 95.67 | 87.62 | 94.72 | 89.67 | 86.67 | 67.35 | 44.32 | 79.42
TanELU | 93.54 | 86.16 | 90.60 | 90.91 | 40.00 | 58.56 | 69.60 | 86.25 | 95.33 | 83.05 | 94.80 | 87.67 | 86.89 | 73.95 | 43.18 | 78.69
2D MeLU | 91.69 | 87.67 | 93.00 | 91.64 | 48.00 | 60.41 | 72.00 | 91.67 | 96.00 | 88.38 | 95.42 | 89.00 | 87.58 | 70.53 | 42.61 | 80.37
MeLU + GaLU | 93.23 | 88.02 | 93.40 | 92.91 | 54.50 | 59.18 | 72.53 | 89.58 | 95.33 | 86.29 | 95.34 | 88.64 | 88.64 | 69.29 | 43.18 | 80.67
Splash | 93.54 | 87.56 | 93.80 | 90.00 | 47.50 | 55.98 | 72.00 | 82.92 | 94.33 | 84.19 | 95.02 | 86.00 | 87.12 | 75.70 | 42.61 | 79.21
Symmetric GaLU | 93.85 | 84.19 | 92.80 | 89.45 | 47.50 | 58.66 | 72.80 | 87.08 | 95.33 | 82.67 | 94.44 | 87.33 | 87.80 | 71.52 | 52.84 | 79.88
Symmetric MeLU | 92.62 | 86.63 | 92.40 | 89.27 | 50.00 | 60.62 | 72.27 | 85.42 | 95.00 | 85.14 | 94.72 | 90.00 | 87.58 | 66.71 | 50.57 | 79.93
Soft Learnable v2 | 93.93 | 87.33 | 93.60 | 92.55 | 46.00 | 60.31 | 69.07 | 89.58 | 94.67 | 86.10 | 95.00 | 89.67 | 87.05 | 73.72 | 54.55 | 80.87
Soft Learnable | 94.15 | 87.44 | 93.40 | 90.36 | 47.00 | 59.18 | 67.73 | 88.33 | 95.00 | 85.52 | 95.52 | 89.33 | 88.26 | 72.04 | 46.59 | 79.99
PDELU | 94.15 | 87.21 | 92.00 | 91.64 | 51.50 | 56.70 | 70.93 | 89.58 | 96.33 | 86.67 | 95.08 | 89.67 | 88.18 | 72.76 | 46.59 | 80.59
Mish | 95.08 | 87.56 | 93.20 | 91.82 | 45.00 | 58.45 | 69.07 | 86.67 | 95.33 | 86.67 | 95.48 | 90.00 | 88.41 | 53.41 | 34.09 | 78.01
SRS | 93.23 | 88.84 | 93.40 | 91.09 | 51.50 | 60.10 | 69.87 | 88.75 | 95.00 | 86.48 | 95.72 | 88.33 | 89.47 | 54.06 | 48.86 | 79.64
Swish Learnable | 93.54 | 87.91 | 94.40 | 91.64 | 48.00 | 59.28 | 69.33 | 88.75 | 95.33 | 83.24 | 96.10 | 90.00 | 89.32 | 41.15 | 39.77 | 77.85
Swish | 94.15 | 88.02 | 94.20 | 90.73 | 48.50 | 59.90 | 70.13 | 89.17 | 92.67 | 86.10 | 95.66 | 87.67 | 87.65 | 65.05 | 32.39 | 78.79
ENS | 95.38 | 89.53 | 97.00 | 89.82 | 59.00 | 62.78 | 76.53 | 86.67 | 96.00 | 91.43 | 96.60 | 91.00 | 89.92 | 74.00 | 50.00 | 83.04
ENS_G | 93.54 | 90.70 | 97.20 | 92.73 | 56.00 | 63.92 | 77.60 | 90.83 | 96.33 | 91.43 | 96.42 | 90.00 | 90.00 | 73.76 | 50.00 | 83.36
ALL | 97.23 | 91.16 | 97.20 | 95.27 | 58.00 | 65.15 | 76.80 | 92.92 | 98.00 | 90.10 | 96.58 | 90.00 | 90.38 | 74.67 | 53.98 | 84.49
ResNet50, MaxInput = 255
MeLU (k = 8) | 94.46 | 89.30 | 94.20 | 92.18 | 54.00 | 61.86 | 75.73 | 89.17 | 97.00 | 88.57 | 95.60 | 87.67 | 88.71 | 72.09 | 52.27 | 82.18
MeLU (k = 4) | 92.92 | 90.23 | 95.00 | 91.82 | 57.00 | 59.79 | 78.40 | 87.50 | 97.33 | 85.14 | 95.72 | 89.33 | 88.26 | 66.20 | 48.30 | 81.52
SReLU | 92.31 | 89.42 | 93.00 | 90.73 | 56.50 | 59.69 | 73.33 | 91.67 | 98.33 | 88.95 | 95.52 | 89.67 | 87.88 | 68.94 | 48.30 | 81.61
APLU | 95.08 | 89.19 | 93.60 | 90.73 | 47.50 | 56.91 | 75.20 | 89.17 | 97.33 | 87.05 | 95.68 | 89.67 | 89.47 | 71.44 | 51.14 | 81.27
Small GaLU | 93.54 | 87.79 | 95.60 | 89.82 | 55.00 | 63.09 | 76.00 | 90.42 | 95.00 | 85.33 | 95.08 | 89.67 | 89.77 | 72.14 | 45.45 | 81.58
GaLU | 92.92 | 87.21 | 92.00 | 91.27 | 47.50 | 60.10 | 74.13 | 87.92 | 96.00 | 86.86 | 95.56 | 89.33 | 87.73 | 70.26 | 44.32 | 80.20
Flexible MeLU | 92.62 | 87.09 | 91.60 | 91.09 | 48.50 | 57.01 | 69.60 | 86.67 | 95.00 | 87.81 | 95.26 | 89.00 | 88.11 | 70.83 | 46.59 | 79.78
2D MeLU | 95.08 | 90.23 | 93.00 | 91.45 | 54.00 | 57.42 | 69.60 | 90.42 | 96.00 | 87.43 | 91.84 | 87.67 | 90.76 | 73.44 | 54.55 | 81.52
MeLU + GaLU | 93.23 | 87.33 | 92.20 | 90.91 | 54.00 | 58.66 | 73.87 | 89.58 | 95.33 | 88.76 | 95.42 | 86.33 | 86.74 | 70.91 | 48.86 | 80.92
Splash | 96.00 | 87.67 | 92.80 | 93.82 | 50.50 | 60.62 | 78.13 | 89.58 | 96.67 | 87.81 | 95.18 | 90.33 | 91.36 | 68.81 | 51.70 | 82.06
Symmetric GaLU | 92.00 | 85.58 | 91.20 | 89.64 | 43.50 | 57.94 | 70.93 | 79.58 | 91.33 | 85.14 | 95.34 | 87.33 | 85.98 | 69.37 | 47.16 | 78.13
Symmetric MeLU | 92.92 | 88.37 | 93.40 | 92.00 | 44.00 | 58.56 | 69.60 | 91.67 | 93.33 | 84.00 | 94.94 | 87.33 | 88.79 | 70.30 | 44.89 | 79.60
ENS | 93.85 | 91.28 | 96.20 | 93.27 | 59.00 | 63.30 | 77.60 | 91.67 | 98.00 | 87.43 | 96.30 | 89.00 | 89.17 | 71.11 | 50.00 | 83.14
ENS_G | 95.08 | 91.28 | 96.20 | 94.18 | 63.00 | 64.85 | 78.67 | 92.50 | 97.67 | 87.62 | 96.54 | 89.67 | 89.77 | 71.36 | 51.14 | 83.96
ALL | 96.00 | 91.16 | 96.60 | 94.55 | 60.50 | 64.74 | 77.60 | 92.92 | 97.67 | 89.52 | 96.62 | 89.33 | 90.68 | 74.37 | 52.27 | 84.30
eENS | 94.77 | 91.40 | 97.00 | 92.91 | 60.00 | 64.74 | 77.87 | 88.75 | 98.00 | 90.10 | 96.50 | 90.00 | 89.77 | 73.23 | 50.57 | 83.70
eENS_G | 95.08 | 91.28 | 96.80 | 93.45 | 62.50 | 65.26 | 78.93 | 91.67 | 96.67 | 90.48 | 96.60 | 89.33 | 89.85 | 73.60 | 50.00 | 84.10
eALL | 96.92 | 91.28 | 97.20 | 95.45 | 60.50 | 64.64 | 77.87 | 93.75 | 97.67 | 90.10 | 96.58 | 89.67 | 90.68 | 74.37 | 52.27 | 84.59
15ReLU | 95.40 | 91.10 | 96.20 | 95.01 | 58.50 | 64.80 | 76.00 | 92.90 | 97.30 | 89.30 | 96.30 | 90.00 | 90.04 | 73.00 | 50.57 | 83.76
Selection | 96.62 | 91.40 | 97.00 | 95.09 | 60.00 | 64.85 | 77.87 | 93.75 | 98.00 | 90.29 | 96.78 | 90.00 | 90.98 | 74.04 | 54.55 | 84.74
Stoc_1 | 97.81 | 91.51 | 96.66 | 95.87 | 60.04 | 65.83 | 80.02 | 92.96 | 99.09 | 91.24 | 96.61 | 90.77 | 91.03 | 74.20 | 50.57 | 84.95
Stoc_2 | 98.82 | 93.42 | 97.87 | 96.48 | 65.58 | 66.92 | 85.65 | 92.94 | 99.77 | 94.33 | 96.63 | 91.36 | 92.34 | 76.83 | 54.55 | 86.89
Stoc_3 | 99.43 | 93.93 | 98.04 | 96.06 | 64.55 | 66.41 | 83.24 | 90.04 | 96.04 | 93.93 | 96.72 | 92.05 | 91.34 | 75.89 | 51.70 | 85.95
Stoc_4 | 98.77 | 92.09 | 97.40 | 96.55 | 63.00 | 67.01 | 81.87 | 93.33 | 100 | 93.52 | 96.72 | 93.00 | 92.27 | 76.38 | 51.70 | 86.24
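The ENS, ENS_G, and ALL rows fuse the outputs of the networks listed above them. A minimal sketch of score-level fusion is given below, assuming the class scores of the individual CNNs are combined by simple averaging (sum rule); the `score_list` input (one N_samples × N_classes matrix per network) is an illustrative interface, not our actual pipeline.

```python
import numpy as np

def fuse_by_sum_rule(score_list):
    """Average the per-class score matrices (N_samples x N_classes) of several CNNs."""
    return np.mean(np.stack(score_list, axis=0), axis=0)

def ensemble_predict(score_list):
    # Predicted class = argmax of the fused scores for each sample
    return np.argmax(fuse_by_sum_rule(score_list), axis=1)
```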
Table 5. Activation performance (accuracy) on VGG16.
Activation | CH | HE | LO | TR | RN | TB | LY | MA | LG | LA | CO | BG | LAR | RT | HP | Avg
VGG16, MaxInput = 1
MeLU (k = 8) | 99.69 | 92.09 | 98.00 | 92.91 | 59.00 | 60.93 | 78.67 | 87.92 | 86.67 | 93.14 | 95.20 | 89.67 | 90.53 | 73.73 | 42.61 | 82.71
Leaky ReLU | 99.08 | 91.98 | 98.00 | 93.45 | 66.50 | 61.13 | 80.00 | 92.08 | 86.67 | 91.81 | 95.62 | 91.33 | 88.94 | 74.86 | 38.07 | 83.30
ELU | 98.77 | 93.95 | 97.00 | 92.36 | 56.00 | 59.69 | 81.60 | 90.83 | 78.33 | 85.90 | 95.78 | 93.00 | 90.45 | 71.55 | 40.91 | 81.74
MeLU (k = 4) | 99.38 | 91.16 | 97.60 | 92.73 | 64.50 | 62.37 | 81.07 | 89.58 | 86.00 | 89.71 | 95.82 | 89.67 | 93.18 | 75.20 | 42.61 | 83.37
PReLU | 99.08 | 90.47 | 97.80 | 94.55 | 64.00 | 60.00 | 81.33 | 92.92 | 78.33 | 91.05 | 95.80 | 92.67 | 90.38 | 73.74 | 35.23 | 82.49
SReLU | 99.08 | 91.16 | 97.00 | 93.64 | 65.50 | 60.62 | 82.67 | 90.00 | 79.33 | 93.33 | 96.10 | 94.00 | 92.58 | 76.80 | 45.45 | 83.81
APLU | 99.08 | 92.33 | 97.60 | 91.82 | 63.50 | 62.27 | 77.33 | 90.00 | 82.00 | 92.38 | 96.00 | 91.33 | 90.98 | 76.58 | 34.66 | 82.52
ReLU | 99.69 | 93.60 | 98.20 | 93.27 | 69.50 | 61.44 | 80.80 | 85.00 | 85.33 | 88.57 | 95.50 | 93.00 | 91.44 | 73.68 | 40.34 | 83.29
Small GaLU | 98.46 | 91.63 | 97.80 | 91.35 | 64.50 | 59.79 | 80.53 | 89.58 | 77.33 | 92.76 | 95.70 | 91.67 | 91.97 | 72.63 | 44.32 | 82.66
GaLU | 98.46 | 94.07 | 97.40 | 92.36 | 65.00 | 59.07 | 81.07 | 92.08 | 75.67 | 93.71 | 95.68 | 88.67 | 91.74 | 75.81 | 39.20 | 82.66
Flexible MeLU | 97.54 | 94.19 | 96.60 | 94.91 | 59.00 | 62.68 | 77.07 | 90.00 | 89.00 | 91.81 | 95.94 | 92.67 | 89.92 | 72.15 | 38.64 | 82.80
TanELU | 97.85 | 93.14 | 97.00 | 92.36 | 61.00 | 61.44 | 72.80 | 89.17 | 77.33 | 91.62 | 95.28 | 89.67 | 90.23 | 72.84 | 43.75 | 81.69
2D MeLU | 97.85 | 93.72 | 97.20 | 92.73 | 61.00 | 61.34 | 81.60 | 91.25 | 92.33 | 94.48 | 95.86 | 89.67 | 92.35 | 71.91 | 38.64 | 83.46
MeLU + GaLU | 98.15 | 93.72 | 98.20 | 93.64 | 60.00 | 60.82 | 77.60 | 92.08 | 81.00 | 93.14 | 95.54 | 92.33 | 89.47 | 75.60 | 47.16 | 83.23
Splash | 97.85 | 92.79 | 97.80 | 92.18 | 58.50 | 62.06 | 75.73 | 88.33 | 83.67 | 85.90 | 95.02 | 91.67 | 90.15 | 74.29 | 42.05 | 81.86
Symmetric GaLU | 99.08 | 92.79 | 97.20 | 92.91 | 60.50 | 60.00 | 78.93 | 88.33 | 79.33 | 91.62 | 95.52 | 92.67 | 91.67 | 73.91 | 40.34 | 82.32
Symmetric MeLU | 98.46 | 92.91 | 96.60 | 92.18 | 56.50 | 59.69 | 74.93 | 90.00 | 85.00 | 87.05 | 94.76 | 90.33 | 90.68 | 72.87 | 41.48 | 81.56
Soft Learnable v2 | 95.69 | 87.91 | 94.60 | 93.45 | 34.50 | 55.57 | 50.67 | 77.50 | 64.67 | 29.71 | 94.08 | 67.67 | 92.35 | 68.96 | 35.80 | 69.54
Soft Learnable | 98.15 | 92.91 | 97.00 | 91.82 | 47.50 | 54.33 | 62.13 | 86.67 | 95.67 | 65.90 | 95.04 | 84.33 | 90.38 | 71.08 | 40.34 | 78.21
PDELU | 98.77 | 93.60 | 96.40 | 92.18 | 59.00 | 58.25 | 76.80 | 87.92 | 87.67 | 89.33 | 95.36 | 90.33 | 91.74 | 75.24 | 42.05 | 82.30
Mish | 96.31 | 90.70 | 94.60 | 93.64 | 18.50 | 46.80 | 54.13 | 66.67 | 73.67 | 56.38 | 93.88 | 80.00 | 82.73 | 73.89 | 44.32 | 71.08
SRS | 71.08 | 59.19 | 45.00 | 51.64 | 29.50 | 31.44 | 57.60 | 61.25 | 61.00 | 45.33 | 86.88 | 57.00 | 67.50 | 39.74 | 19.32 | 52.23
Swish Learnable | 97.54 | 91.86 | 97.00 | 93.64 | 43.50 | 54.64 | 66.67 | 87.08 | 81.00 | 79.43 | 94.46 | 81.00 | 85.23 | 70.02 | 35.23 | 77.22
Swish | 98.77 | 92.56 | 96.80 | 93.64 | 63.50 | 58.97 | 80.80 | 90.00 | 89.00 | 93.14 | 94.68 | 93.33 | 91.74 | 75.24 | 39.77 | 83.46
ENS | 99.38 | 93.84 | 98.40 | 95.64 | 68.00 | 65.67 | 85.07 | 92.08 | 85.00 | 96.38 | 96.74 | 94.33 | 92.65 | 75.55 | 44.89 | 85.57
ENS_G | 99.69 | 94.65 | 99.00 | 95.45 | 72.00 | 64.95 | 86.93 | 92.50 | 83.33 | 97.14 | 96.72 | 94.67 | 92.65 | 75.56 | 45.45 | 86.07
ALL | 99.69 | 95.35 | 98.80 | 95.45 | 72.00 | 66.80 | 84.00 | 94.17 | 85.67 | 97.14 | 96.66 | 95.00 | 93.18 | 75.85 | 48.30 | 86.53
VGG16, MaxInput = 255
MeLU (k = 8) | 99.69 | 92.09 | 97.40 | 93.09 | 59.50 | 60.82 | 80.53 | 88.75 | 80.33 | 88.57 | 95.94 | 90.33 | 88.33 | 73.01 | 47.73 | 82.40
MeLU (k = 4) | 99.38 | 91.98 | 98.60 | 92.55 | 66.50 | 59.59 | 84.53 | 91.67 | 88.00 | 94.86 | 95.46 | 93.00 | 93.03 | 72.21 | 38.64 | 84.00
SReLU | 98.77 | 93.14 | 97.00 | 92.18 | 65.00 | 62.47 | 77.60 | 89.58 | 76.00 | 96.00 | 95.84 | 94.33 | 89.85 | 74.04 | 42.61 | 82.96
APLU | 98.77 | 92.91 | 97.40 | 93.09 | 63.00 | 57.32 | 82.67 | 90.42 | 77.00 | 90.67 | 94.90 | 93.00 | 91.21 | 75.65 | 36.36 | 82.29
Small GaLU | 99.38 | 92.91 | 97.00 | 92.73 | 50.50 | 62.16 | 78.40 | 90.42 | 73.00 | 94.48 | 95.32 | 92.00 | 90.98 | 73.61 | 42.61 | 81.70
GaLU | 98.77 | 92.91 | 97.60 | 93.09 | 66.50 | 59.48 | 83.47 | 90.83 | 95.00 | 85.52 | 95.96 | 91.67 | 93.41 | 75.45 | 38.64 | 83.88
Flexible MeLU | 99.08 | 95.00 | 97.20 | 93.45 | 62.00 | 55.98 | 76.80 | 89.17 | 83.00 | 88.57 | 95.64 | 91.33 | 91.29 | 73.00 | 37.50 | 81.93
MeLU + GaLU | 98.46 | 94.42 | 96.80 | 92.00 | 54.50 | 60.82 | 79.73 | 90.83 | 78.67 | 93.33 | 96.26 | 89.67 | 91.14 | 74.79 | 40.34 | 82.11
Symmetric GaLU | 97.85 | 92.21 | 97.40 | 93.64 | 58.00 | 58.14 | 73.87 | 91.67 | 79.33 | 91.43 | 95.18 | 90.33 | 89.55 | 74.47 | 34.09 | 81.14
Symmetric MeLU | 98.46 | 92.33 | 96.80 | 92.18 | 56.50 | 61.24 | 75.47 | 89.17 | 82.00 | 88.00 | 95.32 | 92.67 | 88.86 | 74.27 | 38.07 | 81.42
ENS | 99.38 | 93.84 | 98.80 | 95.27 | 68.50 | 64.23 | 84.53 | 92.50 | 81.33 | 96.57 | 96.66 | 95.00 | 92.20 | 75.27 | 43.75 | 85.18
ENS_G | 99.38 | 94.88 | 98.80 | 95.64 | 70.50 | 65.88 | 85.87 | 93.75 | 81.67 | 96.38 | 96.70 | 95.67 | 92.80 | 75.26 | 44.32 | 85.83
ALL | 99.69 | 95.47 | 98.40 | 95.45 | 70.00 | 63.92 | 83.73 | 94.17 | 82.67 | 96.38 | 96.60 | 95.00 | 92.73 | 75.78 | 45.45 | 85.69
eENS | 99.38 | 94.07 | 98.80 | 95.64 | 69.00 | 65.88 | 85.87 | 93.33 | 82.67 | 96.57 | 96.88 | 95.33 | 92.50 | 74.99 | 43.18 | 85.60
eENS_G | 99.69 | 94.65 | 99.00 | 95.27 | 70.50 | 65.57 | 86.93 | 92.92 | 83.33 | 97.71 | 96.82 | 95.00 | 92.42 | 76.09 | 44.32 | 86.01
eALL | 99.69 | 95.70 | 98.80 | 95.45 | 71.50 | 65.98 | 83.73 | 94.58 | 85.67 | 96.38 | 96.70 | 95.00 | 92.50 | 75.42 | 47.16 | 86.28
15ReLU | 99.08 | 95.35 | 98.60 | 94.91 | 64.50 | 64.64 | 79.20 | 95.00 | 83.00 | 92.76 | 96.38 | 94.00 | 92.42 | 74.34 | 50.57 | 84.98
Selection | 99.69 | 95.26 | 98.60 | 94.91 | 71.00 | 64.85 | 86.67 | 94.58 | 84.67 | 95.24 | 96.72 | 94.33 | 93.56 | 75.48 | 47.16 | 86.18
Stoc_4 | 99.69 | 96.05 | 98.60 | 95.27 | 74.50 | 67.53 | 83.47 | 95.00 | 84.00 | 95.62 | 96.78 | 92.67 | 93.48 | 74.87 | 51.70 | 86.61
Table 6. Ensemble performance (accuracy) on a set of different topologies (due to the high computational time, for CO we have run only 4 Stoc_4 DenseNet201 networks).
EfficientNetB0 | CH | HE | LO | TR | RN | TB | LY | MA | LG | LA | CO | BG | LAR | RT | HP | Avg
ReLU | 94.46 | 91.28 | 94.80 | 92.18 | 68.50 | 62.58 | 88.80 | 92.50 | 97.33 | 96.76 | 95.04 | 90.67 | 87.35 | 71.21 | 52.27 | 85.05
15ReLU | 96.00 | 92.09 | 95.40 | 93.82 | 74.00 | 65.98 | 89.07 | 93.33 | 97.00 | 98.29 | 95.60 | 90.00 | 88.94 | 71.61 | 61.36 | 86.83
MobileNetV2 | CH | HE | LO | TR | RN | TB | LY | MA | LG | LA | CO | BG | LAR | RT | HP | Avg
ReLU | 98.15 | 92.91 | 97.40 | 92.91 | 69.00 | 64.54 | 76.00 | 91.67 | 96.67 | 96.76 | 94.54 | 89.00 | 90.23 | 69.53 | 50.57 | 84.65
15ReLU | 99.08 | 95.23 | 98.80 | 95.64 | 75.00 | 70.41 | 80.27 | 95.42 | 98.00 | 97.71 | 95.46 | 90.67 | 91.52 | 69.24 | 55.11 | 87.17
Stoc_4 | 99.08 | 95.35 | 99.20 | 98.36 | 84.00 | 76.91 | 87.20 | 94.58 | 100 | 99.62 | 95.50 | 94.00 | 95.08 | 77.02 | 63.64 | 90.63
DarkNet53 | CH | HE | LO | TR | RN | TB | LY | MA | LG | LA | CO | BG | LAR | RT | HP | Avg
ReLU | 98.77 | 93.60 | 98.00 | 95.82 | 71.00 | 67.84 | 81.33 | 71.25 | 98.00 | 96.95 | 92.02 | 91.67 | 91.44 | 67.12 | 53.98 | 84.58
15Leaky | 99.69 | 95.12 | 99.20 | 99.45 | 89.00 | 77.94 | 91.73 | 89.17 | 100 | 99.81 | 95.56 | 93.00 | 93.56 | 76.02 | 61.93 | 90.74
Stoc_4 | 99.69 | 95.93 | 98.80 | 98.80 | 88.00 | 77.73 | 96.00 | 88.33 | 100 | 99.81 | 95.28 | 91.00 | 92.12 | 74.33 | 67.05 | 90.86
ResNet50 | CH | HE | LO | TR | RN | TB | LY | MA | LG | LA | CO | BG | LAR | RT | HP | Avg
ReLU | 97.54 | 94.19 | 98.40 | 95.82 | 74.50 | 65.15 | 80.00 | 92.08 | 98.00 | 96.76 | 96.26 | 89.67 | 91.44 | 77.21 | 55.68 | 86.84
15ReLU | 99.08 | 95.70 | 99.20 | 97.27 | 79.00 | 69.38 | 84.27 | 95.42 | 97.33 | 98.10 | 97.00 | 91.00 | 93.79 | 77.15 | 59.66 | 88.89
Stoc_4 | 99.69 | 95.47 | 99.20 | 98.00 | 85.00 | 75.26 | 91.47 | 95.00 | 99.00 | 99.62 | 97.02 | 93.00 | 94.85 | 75.18 | 62.50 | 90.68
DenseNet201 | CH | HE | LO | TR | RN | TB | LY | MA | LG | LA | CO | BG | LAR | RT | HP | Avg
ReLU | 98.73 | 95.29 | 98.37 | 96.92 | 71.40 | 66.80 | 82.20 | 91.31 | 98.22 | 98.12 | 95.88 | 91.69 | 93.96 | 49.92 | 54.70 | 85.56
15ReLU | 99.38 | 96.40 | 98.40 | 98.55 | 79.00 | 71.24 | 86.40 | 94.58 | 99.67 | 99.24 | 97.84 | 95.33 | 96.14 | 77.57 | 61.36 | 90.07
Stoc_4 | 99.69 | 94.88 | 99.20 | 99.27 | 84.00 | 76.29 | 93.87 | 96.67 | 100 | 100 | 97.84 | 93.00 | 95.38 | 77.67 | 69.89 | 91.84
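The Stoc_N rows in Tables 4–6 are ensembles in which the ReLU activation layers of each pre-trained backbone are randomly replaced with other activation functions before fine-tuning, as summarized in the abstract and in [47]. The following is a minimal PyTorch sketch of the replacement step only; the activation pool uses a few standard torch modules as stand-ins for the full pool of twenty AFs (custom AFs such as MeLU are not available in torch), and it assumes torchvision ≥ 0.13 for the `weights` argument.

```python
import random
import torch.nn as nn
from torchvision.models import resnet50

# Stand-ins for the activation pool (the paper's pool also contains MeLU, GaLU, etc.)
AF_POOL = [nn.ReLU, nn.LeakyReLU, nn.ELU, nn.PReLU]

def replace_relu_randomly(module: nn.Module) -> nn.Module:
    """Recursively swap every nn.ReLU for an activation drawn at random from AF_POOL."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, random.choice(AF_POOL)())
        else:
            replace_relu_randomly(child)  # recurse into residual blocks, etc.
    return module

# Build an ensemble of 15 differently "activated" copies of the same pre-trained backbone;
# each copy would then be fine-tuned on the target data set before score fusion.
ensemble = [replace_relu_randomly(resnet50(weights="IMAGENET1K_V1")) for _ in range(15)]
```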
Table 7. Performance (accuracy) with optimized batch size (BS) and learning rate (LR).
ResNet50 | CH | HE | MA | LAR
ReLU | 98.15 | 95.93 | 95.83 | 94.77
15ReLU | 99.08 | 96.28 | 97.08 | 95.91
Stoc_4 | 99.69 | 96.40 | 97.50 | 96.74
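Table 7 reports results after tuning the batch size and learning rate per data set. A minimal sketch of the kind of grid search this involves is shown below; the candidate values and the `train_and_evaluate` helper are purely illustrative and are not the grid actually used in our experiments.

```python
from itertools import product

def grid_search(train_and_evaluate,
                batch_sizes=(16, 32, 64),
                learning_rates=(1e-4, 1e-3, 1e-2)):
    """Return (accuracy, batch_size, learning_rate) for the best validation run."""
    best = None
    for bs, lr in product(batch_sizes, learning_rates):
        acc = train_and_evaluate(batch_size=bs, learning_rate=lr)
        if best is None or acc > best[0]:
            best = (acc, bs, lr)
    return best
```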
Table 8. Inference time for a batch of 100 images.
GPU | Year | Single ResNet50 | Ensemble of 15 ResNet50
GTX 1080 | 2016 | 0.36 s | 5.58 s
Titan Xp | 2017 | 0.31 s | 4.12 s
Titan RTX | 2018 | 0.22 s | 2.71 s
Titan V100 | 2018 | 0.20 s | 2.42 s
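The timings in Table 8 compare one forward pass of a single network with fifteen sequential passes of the ensemble. A minimal sketch of how such a measurement can be taken on GPU is given below; the batch shape, warm-up count, and repetition count are illustrative assumptions, and `torch.cuda.synchronize()` is needed so the timer only stops once the kernels have finished.

```python
import time
import torch

def time_inference(models, batch, device="cuda", warmup=3, reps=10):
    """Average wall-clock time to push one batch through every model in `models`."""
    batch = batch.to(device)
    models = [m.to(device).eval() for m in models]
    with torch.no_grad():
        for _ in range(warmup):          # warm-up passes, not timed
            for m in models:
                m(batch)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(reps):            # timed passes
            for m in models:
                m(batch)
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / reps

# Example with a batch of 100 images, as in Table 8:
# t_single   = time_inference([model], torch.randn(100, 3, 224, 224))
# t_ensemble = time_inference(ensemble, torch.randn(100, 3, 224, 224))
```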
Table 9. The four best AFs are reported (TopXr means X-th position in the rank among the AFs).
Topology | MaxInput | Top1r | Top2r | Top3r | Top4r
ResNet50 | 1 | MeLU + GaLU | SRS | PDELU | Soft Learnable v2
ResNet50 | 255 | MeLU (k = 8) | Splash | MeLU (k = 4) | 2D MeLU
VGG16 | 1 | SReLU | MeLU + GaLU | MeLU (k = 4) | ReLU
VGG16 | 255 | GaLU | MeLU (k = 4) | SReLU | APLU
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
