Article

Enhancing Neural Network Training Through Neuroevolutionary Models: A Hybrid Approach to Classification Optimization

by Hyasseliny A. Hurtado-Mora, Luis A. Herrera-Barajas *, Luis J. González-del-Ángel *, Roberto Pichardo-Ramírez, Alejandro H. García-Ruiz and Katea E. Lira-García

Faculty of Engineering Tampico, Autonomous University of Tamaulipas, Centro Universitario Sur, Tampico 89109, Mexico

* Authors to whom correspondence should be addressed.
Mathematics 2025, 13(7), 1114; https://doi.org/10.3390/math13071114
Submission received: 19 February 2025 / Revised: 24 March 2025 / Accepted: 26 March 2025 / Published: 28 March 2025

Abstract

The optimization of Artificial Neural Networks (ANNs) remains a significant challenge in machine learning, particularly in overcoming local-optima limitations during training. Traditional classification algorithms, such as k-Nearest Neighbors (KNN), decision trees, Support Vector Machines (SVMs), and ANNs, often suffer from convergence to suboptimal solutions due to their training methods. This research proposes a hybrid neuroevolutionary approach that integrates a genetic algorithm with a NEAT-based structure to enhance ANN performance. Additionally, a Cellular Processing Algorithm (PCELL) is employed to expand the search space and improve solution quality. The methodology involves designing an initial neural network trained via backpropagation, followed by the application of genetic operators to evolve network structures. Experimental results from diverse benchmark datasets demonstrate that the proposed algorithm outperforms conventional ANN training methods and achieves performance levels comparable to evolutive solutions. The results suggest that integrating evolutionary strategies with cellular processing enhances classification accuracy and contributes to the advancement of neuroevolutionary learning techniques.

1. Introduction

Machine learning is widely used to address complex classification problems across diverse research domains [1,2,3]. Among its techniques, Artificial Neural Networks (ANNs) stand out due to their adaptability and ability to model intricate patterns [4]. However, conventional ANN training methods, such as backpropagation, often struggle with hyperparameter optimization and the risk of becoming trapped in local optima, limiting their generalization ability. Additionally, configuring ANN architectures requires extensive experimentation, making the training process computationally expensive and time-consuming [5]. To overcome these challenges, researchers have explored diverse optimization strategies, with evolutionary algorithms emerging as a promising approach [6,7]. These methods, inspired by biological evolution, effectively search for optimal solutions to high-complexity problems. Specifically, combining ANNs with evolutionary algorithms has led to neuroevolutionary networks, which integrate genetic evolution mechanisms into the training phase to enhance performance and avoid suboptimal convergence [8,9].
ANNs have been widely applied in diverse domains, including image recognition [10], natural language processing [11], and medical diagnosis [12], among others. The ability of ANNs to model complex patterns makes them essential tools in machine learning. However, traditional training methods for ANNs often face challenges with local optima, which limit the network’s ability to reach better solutions. The complexity of hyperparameter tuning further exacerbates this issue, requiring extensive experimentation to find configurations that enhance performance [13].
Evolutionary algorithms (EAs) offer an alternative approach to optimizing neural networks by mimicking biological evolution through selection, mutation, and crossover operators. Neuroevolution, which integrates EAs into ANN training, optimizes neural networks by evolving a population of candidate models through genetic operators. Each iteration generates a new population where networks gradually adapt their architectures and weights based on their performance, enabling a form of evolutionary learning. This approach has demonstrated improvements in network adaptability and convergence [14,15], as neural networks function as adaptive agents that refine their structure and parameters over time to enhance performance in a given task.
Unlike conventional methods that require predefined architectures, the NeuroEvolution of Augmenting Topologies (NEAT) allows networks to grow in complexity by introducing new neurons and connections during the training phase. Additionally, NEAT employs reinforcement mechanisms to guide the evolutionary process, enabling unsupervised learning and fostering structural innovation within evolving populations [16]. Several variants of NEAT, such as HyperNEAT [17], Hybrid Self-Attention NEAT [18], and CoDeepNEAT [19], have been proposed to enhance the efficiency and scalability of neuroevolution. However, these approaches typically involve higher computational costs and increased complexity due to the need for more sophisticated search strategies and larger solution spaces. In this work, we focus on the base NEAT approach to maintain a balance between computational efficiency and model complexity. In [20], Li et al. proposed a novel Recurrent NEAT-based MUSIC (RNEAT-MUSIC) algorithm to optimize 2D-DOA estimation by reducing computational complexity without compromising accuracy. Their approach leverages the NEAT framework, which evolves neural network architectures dynamically, allowing for adaptive learning without requiring predefined structures. To further enhance efficiency, the proposed model incorporates a recurrent structure within NEAT, minimizing the number of required neural network parameters and reducing the overall computational burden. The algorithm extracts phase components from the received signal covariance matrix, applying a reduced-dimensional neural network to restrict the scanning region before executing the computationally expensive MUSIC stage. Experimental validation demonstrates that RNEAT-MUSIC achieves a 75% reduction in computational workload compared to traditional 2D-MUSIC while preserving high-resolution DOA estimation. The results indicate that this approach effectively balances exploration and exploitation in neuroevolutionary learning, improving adaptability and convergence speed. These findings position RNEAT-MUSIC as a promising alternative for real-time satellite tracking, offering a computationally efficient solution for high-precision DOA estimation in space communication applications.
Despite NEAT’s advantages, maintaining an effective balance between exploration and exploitation remains challenging, as genetic algorithms risk premature convergence or high computational costs when optimizing complex network structures. To address these limitations, this study proposes a hybrid neuroevolutionary approach that integrates Cellular Processing Algorithms (PCELL) with NEAT-based networks. PCELL is inspired by cellular automata, where a system is divided into discrete units (cells) that evolve based on local interactions and predefined rules [21]. This decentralized processing enhances computational efficiency and facilitates parallel optimization, making it particularly useful for evolving complex neural architectures. By structuring the solution space into smaller, interacting regions, PCELL mitigates premature convergence and fosters greater diversity in evolutionary processes [22]. Integrating PCELL with NEAT expands the solution search space, refines network topologies, and improves classification accuracy by promoting structural adaptation throughout the evolutionary process.
In this paper, we propose optimizing neural network training using neuroevolutionary models and develop a Cellular Processing Algorithm to control the evolution of these networks, improving hyperparameter tuning and learning efficiency. The experimental phase involves training and evaluating the proposed model on benchmark datasets to assess its effectiveness against state-of-the-art classification techniques.
This research contributes to the field of machine learning by developing novel genetic operators for neural networks, introducing a hybrid methodology that optimizes the training process and delivers superior classification results. The study builds on advanced optimization techniques and validates the proposed models experimentally to provide more robust and efficient solutions in artificial intelligence.
The remainder of this paper is organized as follows: Section 2 presents the materials and methods; Section 3 describes the experimental results and discussion; and, finally, Section 4 presents the conclusions and future work.

2. Materials and Methods

To assess the algorithm’s robustness, we considered a set of independent instances with varying characteristics, such as the number of registers and the dataset dimensionality. The selection criterion included datasets for classification with at least two evaluation classes and no missing values. The chosen datasets were obtained from publicly available machine learning repositories, specifically Kaggle [23] and the UCI Machine Learning Repository [24], providing diverse and well-structured data for a comprehensive performance analysis. Table 1 shows the characteristics of each selected dataset, where columns show the name of the dataset, the repository from which the dataset was extracted, the data types of the corresponding attributes of each dataset, and the number of registers, attributes, and classes.

2.1. Preprocessing

Once the instances were defined, we applied a preprocessing step to transform each dataset, choosing between two techniques. Normalization rescales data to a specific range, typically between 0 and 1 or −1 and 1, ensuring a consistent scale across features. Standardization, in contrast, adjusts data by centering them around a mean of 0 and scaling them to a standard deviation of 1; this technique preserves the original distribution and is particularly effective when working with normally distributed data.
To select the method, each attribute of each dataset was reviewed to decide which transformation to apply: standardization was used when the attribute presented a normal distribution, and normalization was applied in cases where the data only needed rescaling.
Table 2 shows a sample of the results of the analysis performed for the selection of the normalization technique for all datasets, where columns show the attribute number of the dataset, the statistical information for each attribute (minimum value, maximum value, mean, and standard deviation), and the decision taken to normalize each attribute, where the norm is obtained from Equation (1), std is calculated with Equation (2), and the − symbol indicates that no normalization method was required for this attribute.
Norm(x_i) = (x_i − min_val) / (max_val − min_val)        (1)

Std(x_i) = (x_i − x̄) / σ        (2)
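A minimal sketch of this per-attribute selection logic is shown below: it applies Equation (2) when the attribute appears normally distributed and Equation (1) otherwise. The paper does not name the normality test used, so the Shapiro-Wilk check and the preprocess_attribute helper are assumptions made only for illustration.

import numpy as np
from scipy import stats

def preprocess_attribute(column, alpha=0.05):
    # Standardize (Equation (2)) when the attribute looks normally distributed,
    # otherwise rescale it to [0, 1] with min-max normalization (Equation (1)).
    column = np.asarray(column, dtype=float)
    _, p_value = stats.shapiro(column)   # normality check (an assumption of this sketch)
    if p_value > alpha:
        return (column - column.mean()) / column.std(), "Std"
    return (column - column.min()) / (column.max() - column.min()), "Norm"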
In addition, we implemented an Artificial Neural Network with the TensorFlow library in Python to obtain a reference accuracy value. The Artificial Neural Network structure for this experiment was an input layer with the number of neurons equal to the number of attributes, a hidden layer with the same number of neurons as the input layer, and an output layer with one neuron. Each dataset was divided into training and testing sets, with proportions of 70% for training and 30% for testing.

2.2. NEAT Algorithm

For the neuroevolution of neural networks, we apply the NeuroEvolution of Augmenting Topologies (NEAT) algorithm [25], which evolves Artificial Neural Networks through mutation and selection. The genetic coding scheme represents the neural network in two phases: genotype and phenotype.
To encode the genome, NEAT utilizes a list of nodes and a list of connections. The Nodes list defines neurons within the network, each uniquely identified by an ID and categorized as input, hidden, or output. The Connections list specifies synaptic links between neurons, encoding key attributes such as input and output nodes, connection weight w, and innovation number innov, which ensures the historical tracking of mutations.
Figure 1 also shows the phenotype, where the neural network structure emerges from the encoded genome. Input neurons receive data, hidden neurons process information, and output neurons generate final predictions. The structural complexity of the network increases as mutations introduce new nodes and connections, enabling adaptive evolution.
This encoding strategy ensures a direct mapping between genotype and phenotype, supporting both structural and parametric evolution while preserving innovation through historical markings. As a result, NEAT effectively optimizes neural network architectures by balancing the exploration of new topologies and the inheritance of beneficial traits.
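As an illustration of this two-list encoding, the following sketch defines node and connection genes with the attributes named above (ID, type, weight w, innovation number innov). The class layout is an assumption for illustration, not the paper’s implementation.

from dataclasses import dataclass, field

@dataclass
class NodeGene:
    id: int
    node_type: str            # "input", "hidden", or "output"

@dataclass
class ConnectionGene:
    in_node: int              # ID of the source neuron
    out_node: int             # ID of the target neuron
    w: float                  # connection weight
    enabled: bool = True      # whether the link is expressed in the phenotype
    innov: int = 0            # innovation number for historical tracking

@dataclass
class Genome:
    nodes: list[NodeGene] = field(default_factory=list)
    connections: list[ConnectionGene] = field(default_factory=list)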

2.3. Evolution Process

In this paper, we propose a modified genetic encoding scheme for a neural network within the NeuroEvolution of Augmenting Topologies (NEAT) algorithm. This adaptation incorporates structural modifications to the genotype by integrating additional connection parameters and representing the phenotype as a structured list format as shown in Figure 2.
At the genotypic level, the encoding includes a set of connection attributes, specifying details such as activation states (e.g., true/false flags), numerical weights (float values), and additional properties that refine the mutation and selection mechanisms. These modifications enhance the expressiveness of the genome, allowing for greater control over the evolutionary process.
The phenotype is represented as a list (network), where the first element corresponds to the number of neurons in the input layer, followed by elements representing the number of neurons in each hidden layer. The final element in the list denotes the number of neurons in the output layer. This structured format provides an intuitive representation of the neural network architecture, enabling the efficient manipulation and analysis of evolving topologies.
The proposed genome was used in conjunction with a genetic algorithm aimed at enhancing neural network development.
Genetic algorithms (GAs) [26] are optimization techniques inspired by the principles of natural selection and genetics. GAs operate by evolving a population of candidate solutions through iterative processes of selection, crossover, and mutation, aiming to find optimal or near-optimal solutions to complex problems. In the context of neural networks, GAs have been employed to optimize various aspects, including network architecture [27,28], weights [29,30], and hyperparameters [31,32].
The genome structures of the neural network were developed for the initial population of the genetic algorithm. The algorithm began by selecting candidates for the generation through a binary tournament.
For genetic operators, mutation and crossover methods were designed based on the NEAT genome. The mutation method introduced changes at the genotype level, specifically modifying the weight on a specific connection. Algorithm 1 modifies the structure of neural networks by adjusting the weights and innovation numbers within the hidden layers. The procedure begins by selecting a random hidden layer and a neuron within that layer. A specific weight from the selected neuron is then chosen and modified. Then, the algorithm increments the innovation numbers for the hidden layer and the neuron to reflect the changes and updates the chosen weight with a new random value.
Algorithm 1 Genotype Mutation Operator
 1: procedure GenOperator(NeuralNetwork Offsprings[])
 2:     for each i in Offs do
 3:         HL = rand(0, (Offs[i].hidL) − 1)
 4:         N = rand(0, (Offs[i].hidL[HL].neurons) − 1)
 5:         wm = rand(0, (Offs[i].hidL[HL].neurons[N].w) − 1)
 6:         Offs[i].hidL[HL].innov = Offs[i].hidL[HL].innov + 1
 7:         Offs[i].hidL[HL].neurons[N].innov = Offs[i].hidL[HL].neurons[N].innov + 1
 8:         Offs[i].hidL[HL].neurons[N].lastW = Offs[i].hidL[HL].neurons[N].w[wm]
 9:         Offs[i].hidL[HL].neurons[N].w[wm] = rand()
10:     end for
11: end procedure
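A Python sketch of Algorithm 1 is shown below, assuming genome objects whose attributes mirror the pseudocode (hidL, neurons, w, innov, lastW); it is an illustrative reimplementation rather than the authors’ code.

import random

def genotype_mutation(offsprings):
    # For each offspring: pick a random hidden layer, neuron, and weight index,
    # increment the innovation counters, and replace the weight with a random value.
    for net in offsprings:
        hl = random.randrange(len(net.hidL))
        n = random.randrange(len(net.hidL[hl].neurons))
        neuron = net.hidL[hl].neurons[n]
        wm = random.randrange(len(neuron.w))
        net.hidL[hl].innov += 1
        neuron.innov += 1
        neuron.lastW = neuron.w[wm]        # remember the replaced weight
        neuron.w[wm] = random.random()     # assign a new random value
    return offsprings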
For crossover, modifications were applied to the genome’s information regarding the connections within the neural network. The crossover operator consists of a differential evolution strategy within a neural network framework. It adjusts the network weights based on differences computed from two parent networks and introduces stochastic mutations while tracking modifications via innovation counters, as illustrated in Figure 3.
Algorithm 2 modifies the neural network structure based on evolutionary principles. It processes three neural networks, denoted as p 1 , p 2 , and p 3 , along with an integer parameter numEdges, which represents the total number of edges in the network.
Algorithm 2 Differential Crossover Operator
 1: procedure gendc_operator(p1, p2, p3, numEdges)
 2:     mods = numEdges × 0.05
 3:     for j = 0 to mods − 1 do
 4:         HL = rand(0, length(p1[0].hidL) − 1)
 5:         N = rand(0, length(p1[0].hidL[HL].neurons) − 1)
 6:         wm = rand(0, length(p1[0].hidL[HL].neurons[N].weights) − 1)
 7:         wp1 = p1[0].hidL[HL].neurons[N].weights[wm]
 8:         wp2 = p2[0].hidL[HL].neurons[N].weights[wm]
 9:         wTemp = (wp1 − wp2) × v
10:         p = rand(0, 1)
11:         if p ≤ fatherProbability then
12:             wmod = p3.hidL[HL].neurons[N].weights[wm]
13:             wTemp = wTemp + wmod
14:             p3.hidL[HL].innov = p3.hidL[HL].innov + 1
15:             p3.hidL[HL].neurons[N].innov = p3.hidL[HL].neurons[N].innov + 1
16:             p3.hidL[HL].neurons[N].lastW = wmod
17:             p3.hidL[HL].neurons[N].weights[wm] = wTemp
18:         else
19:             p3.hidL[HL].innov = p3.hidL[HL].innov + 1
20:             p3.hidL[HL].neurons[N].innov = p3.hidL[HL].neurons[N].innov + 1
21:             p3.hidL[HL].neurons[N].lastW = p3.hidL[HL].neurons[N].weights[wm]
22:             p3.hidL[HL].neurons[N].weights[wm] = wTemp
23:         end if
24:     end for
25:     return p3
26: end procedure
The number of modifications applied is computed as

mods = numEdges × 0.05

The algorithm iterates for j = 0, 1, …, mods − 1, and, in each iteration, it performs the following steps. A hidden layer index HL is selected randomly, followed by a neuron index N within that layer and a weight index wm from the neuron’s weight vector. These indices define a specific weight to modify. The corresponding weights from p1 and p2 are retrieved:

w_p1 = p1.hidL[HL].neurons[N].weights[wm]

w_p2 = p2.hidL[HL].neurons[N].weights[wm]

The new weight is computed as

wTemp = (w_p1 − w_p2) · v

where v is a predefined scalar multiplier defined by experimentation.
A probability value p is sampled from a uniform distribution in [0, 1]. If p ≤ fatherProbability, which is the probability of selecting a third parent as a candidate to improve the solution (determined using Grid Search, with a final selected value of 0.30), a crossover occurs by incorporating a weight modification from p3:

w_mod = p3.hidL[HL].neurons[N].weights[wm]

The temporary weight is then updated:

wTemp = wTemp + w_mod

In both cases, the algorithm updates the innovation counters:

p3.hidL[HL].innov = p3.hidL[HL].innov + 1

p3.hidL[HL].neurons[N].innov = p3.hidL[HL].neurons[N].innov + 1

The last stored weight is also updated accordingly:

p3.hidL[HL].neurons[N].lastW = p3.hidL[HL].neurons[N].weights[wm]

Finally, the modified weight is assigned:

p3.hidL[HL].neurons[N].weights[wm] = wTemp
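The following sketch condenses Algorithm 2 and the equations above into Python, using v = 0.7 and fatherProbability = 0.30 as reported later in the paper; the genome attribute names follow the pseudocode, and the implementation is illustrative only.

import random

def differential_crossover(p1, p2, p3, num_edges, v=0.7, father_probability=0.30):
    # Modify roughly 5% of the weights of p3 using the difference between p1 and p2.
    mods = int(num_edges * 0.05)
    for _ in range(mods):
        hl = random.randrange(len(p1.hidL))
        n = random.randrange(len(p1.hidL[hl].neurons))
        wm = random.randrange(len(p1.hidL[hl].neurons[n].weights))
        w_p1 = p1.hidL[hl].neurons[n].weights[wm]
        w_p2 = p2.hidL[hl].neurons[n].weights[wm]
        w_temp = (w_p1 - w_p2) * v
        target = p3.hidL[hl].neurons[n]
        if random.random() <= father_probability:
            w_temp += target.weights[wm]   # incorporate the third parent's weight
        target.lastW = target.weights[wm]  # store the last weight before overwriting
        p3.hidL[hl].innov += 1
        target.innov += 1
        target.weights[wm] = w_temp        # assign the new weight
    return p3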
Once the offspring are generated, the neural network’s accuracy serves as the objective function for the next phase.
The genetic algorithm (GA) used follows a structured approach to optimizing neural network configurations. An individual represents a neural network genotype, and each candidate solution consists of a structured list defining the number of neurons in each layer and their associated connection weights.
The binary tournament technique was used as a selection operator, ensuring diversity while prioritizing high-performing individuals [33]. For example, given two candidates A and B with fitness scores f ( A ) = 0.85 and f ( B ) = 0.78 , the selection of the best individual is as follows:
S = arg max ( f ( A ) , f ( B ) ) = A
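A minimal sketch of this selection step, where fitness stands in for the classification accuracy used as the objective function:

import random

def binary_tournament(population, fitness):
    # Sample two candidates at random and keep the one with the higher fitness.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b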
We applied the differential evolution strategy for crossover, where weight adjustments are computed based on differences between two parent networks. If two parent weights are w1 = 0.5 and w2 = 0.3 with a scaling factor v = 0.7, the new weight is computed as
w new = w 1 + v · ( w 2 w 1 ) = 0.5 + 0.7 · ( 0.3 0.5 ) = 0.36
With a probability of 0.3, an additional modification is introduced using a third parent’s weight w 3 = 0.6 :
w final = w new + 0.3 · ( w 3 w new ) = 0.36 + 0.3 · ( 0.6 0.36 ) = 0.43
The mutation process occurs by introducing weight perturbations in randomly selected connections. For example, if we consider a mutation rate of 0.1 applied to a weight w = 0.43, the mutation modifies it as

w′ = w + N(0, σ²), where σ = 0.05

Thus, if N(0, 0.05²) generates a value of 0.02, the new weight is

w′ = 0.43 + 0.02 = 0.45
The replacement technique adopts an elitist strategy, preserving the best solutions across generations to maintain stable convergence. If the best solution in generation t has f max = 0.92 and in generation t + 1 a candidate achieves f = 0.95 , the elite strategy ensures retaining the new best solution.
The GA parameters, mutation, and crossover probability were tuned through Grid Search, ensuring an optimal balance between maintaining diversity and refining promising network architectures.

2.4. Cellular Processing

To enhance the quality of the solution obtained by the genetic algorithm, the search space was expanded using a Cellular Processing Algorithm (PCELL). This model simulates the communication and information-processing mechanisms of biological cells.
In this work, PCELL utilized six processing cells, each containing a distinct neural network phenotype derived from the initial solution. Additionally, each cell incorporated the proposed genetic algorithm to explore local solutions effectively.
Algorithm 3 begins by initializing an initial solution S 0 , which serves as the basis for generating a set of N processing cells C = { C 1 , C 2 , , C N } . Each cell C i is assigned a unique phenotype ϕ i based on a transformation function applied to S 0 , given by ϕ i = f ( S 0 , θ i ) , where θ i represents a diversity parameter to explore different regions in the search space. Within each processing cell, a GA iteratively optimizes the assigned solution, leading to an update step defined as S i ( t + 1 ) = G A ( S i ( t ) ) . The best solution found in each cell is evaluated and stored as x * (Equation (13)), where f ( x ) is the objective function to maximize.
x* = arg max_{x ∈ S} f(x)        (13)

After processing all cells, the global best solution x*_global updates according to

x*_global = max{x*_global, x*_{C_i}},  ∀ i ∈ {1, …, 6}.        (14)

This process continues iteratively until a stopping criterion is met, which is defined when a predefined number of iterations is reached or when the improvement in x*_global falls below a threshold ε. Finally, the algorithm returns the most-optimized solution obtained.
Algorithm 3 Cellular Processing
 1: Input: Initial solution S0, number of cells N
 2: Generate N processing cells C = {C1, C2, …, CN}
 3: for each cell Ci do
 4:     φi = f(S0, θi)
 5:     Si(t + 1) = GA(Si(t))
 6:     x*_{Ci} = arg max_{x ∈ Ci} f(x)
 7: end for
 8: x*_global = max{x*_global, x*_{Ci}}
 9: Output: x*_global
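A compact sketch of Algorithm 3 follows; transform, run_ga, and fitness are placeholders for the phenotype modification f(S0, θi), the genetic algorithm of Section 2.3, and the accuracy evaluation, and are not part of the paper’s code.

def cellular_processing(initial_solution, num_cells, transform, run_ga, fitness):
    # Each cell evolves a perturbed copy of the initial solution; the best
    # solution found across all cells is returned as the global optimum.
    best_global = initial_solution
    best_fitness = fitness(initial_solution)
    for i in range(num_cells):
        phenotype = transform(initial_solution, i)   # phi_i = f(S0, theta_i)
        best_cell = run_ga(phenotype)                # local evolution inside the cell
        cell_fitness = fitness(best_cell)
        if cell_fitness > best_fitness:
            best_global, best_fitness = best_cell, cell_fitness
    return best_global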

Algorithm Complexity

The PCELL algorithm divides the search into N independent cells, where N is the number of cells, G represents the number of generations per cell, P denotes the population size in each cell, C stands for the cost of applying genetic operators (mutation, crossover, etc.) to a candidate, and E indicates the cost of evaluating the objective function for each candidate; in this case, the overall temporal complexity is expressed as
T_PCELL = O(N · G · P · (C + E))        (15)
A conventional genetic algorithm that operates on a single population has a temporal complexity of
T_GA = O(G · P · (C + E))        (16)
Comparing Equations (15) and (16) shows that the computational cost of the PCELL approach increases by a factor of N. Although this factor raises the computational load, the parallel exploration of multiple cells enhances the diversity in the search for solutions.
In backpropagation, the training process iterates over the dataset to update the network parameters, where I represents the number of iterations (epochs) and M denotes the total number of parameters (weights and biases) in the network; accordingly, the complexity approximates to
T_BP = O(I · M)        (17)
Equation (17) indicates a lower cost per iteration compared to Equation (15); however, this method risks becoming trapped in local optima and does not explore the solution space as efficiently.
The PCELL approach exhibits a temporal complexity of O(N · G · P · (C + E)), which exceeds that of a traditional genetic algorithm, O(G · P · (C + E)), due to the factor N. This extra cost is justified by the improvement in the exploration of the solution space. Additionally, the backpropagation method has a complexity of O(I · M), which generally offers higher efficiency per iteration but may not provide the global search robustness achieved by evolutionary methods.
This comparison clarifies the implications of the increased computational cost of the PCELL approach compared to traditional methods and highlights the trade-off between computational efficiency and the ability to explore diverse solutions.

3. Results and Discussion

For the development of this work, we used an HP ENVY Laptop 13-ad0xx, equipped with an Intel Core i7-7500U CPU @ 2.70GHz (2.094 GHz), 8 GB DDR3 SDRAM, and running Microsoft Windows 10 Home Single Language, version 10.0.18363. For the programming of this research, we used C# on Visual Studio Community version 2022 and Python 3.13.1 on PyCharm Community Edition 2024.3.1.
To begin, the initial structure of the Artificial Neural Network (ANN) was developed using the following parameters: learning rate of 0.3, momentum of 0.2, and 500 iterations. The basic architecture consists of an input layer with x neurons, where x represents the number of attributes in the dataset, a hidden layer with the same number of neurons as the input layer, and an output layer with y neurons, where y corresponds to the number of classes in the dataset (Figure 4).
The preprocessed datasets were used to train the ANN with a backpropagation algorithm, which optimizes the network’s weights by minimizing error through gradient descent. Additionally, we evaluate the performance of the algorithm with a 10-fold cross-validation approach to reduce the dependence on the initial data partitioning.
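A sketch of this reference network and its 10-fold evaluation is shown below. The sigmoid/softmax activations and the stratified fold split are assumptions, since the paper only specifies the layer sizes, learning rate, momentum, and iteration count; X and y are assumed to be NumPy arrays of features and integer class labels.

import tensorflow as tf
from sklearn.model_selection import StratifiedKFold

def build_initial_ann(num_attributes, num_classes):
    # Input layer, one hidden layer of the same size, and an output layer with
    # one neuron per class; trained with SGD (learning rate 0.3, momentum 0.2).
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(num_attributes,)),
        tf.keras.layers.Dense(num_attributes, activation="sigmoid"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.3, momentum=0.2),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

def cross_validated_accuracy(X, y, epochs=500, folds=10):
    # Average test accuracy over a 10-fold cross-validation.
    scores = []
    for train_idx, test_idx in StratifiedKFold(n_splits=folds, shuffle=True).split(X, y):
        model = build_initial_ann(X.shape[1], len(set(y)))
        model.fit(X[train_idx], y[train_idx], epochs=epochs, verbose=0)
        scores.append(model.evaluate(X[test_idx], y[test_idx], verbose=0)[1])
    return sum(scores) / len(scores)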
The preliminary results for the initial ANN structure are presented in Table 3, where the first column lists the tested datasets and the second column reports the accuracy achieved.
For the Artificial Neural Network evolution process, we developed a genetic algorithm with the designed operators (Genotype Mutation Operator and Differential Crossover Operator) based on the genetic encoding structure of NEAT.
The evolutive process includes three hyperparameters that modify its behavior. First, we analyzed the population size, considering that each population element represents a genotype of the initial neural network. Additionally, we determined the number of generations for the genetic algorithm’s evolution and defined the number of training iterations for ANN learning. Based on the preliminary experiment results, the value ranges shown in Table 4 were established for the hyperparameters.
To define the parameters of the genetic algorithm, we used Grid Search to explore the hyperparameter space within the ranges specified in Table 4. Table 5 presents the test results, highlighting the configurations that achieved an accuracy above 80%.
We determined other parameters through Grid Search, selecting a mutation probability from a tested range of 10% to 100% in increments of 10%. The final choice was a 30% mutation probability. For the modification of the total number of weights w in the network genotype, we used Random Search within a range of 1% to 40%, ensuring that the network structure was not significantly altered. Similarly, for the differential crossover operator, we used Random Search with a range of 0 to 1 and obtained a scale factor v of 0.7.
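The hyperparameter exploration can be sketched as an exhaustive loop over the ranges of Table 4; grid_values and train_and_score are illustrative placeholders, not the exact grid or training routine used in the paper.

from itertools import product

def grid_search(train_and_score, grid_values):
    # grid_values maps each hyperparameter name to candidate values to test,
    # e.g. {"population": [10, 100], "generations": [10, 100], "iterations": [100, 500]}.
    best_config, best_acc = None, -1.0
    names = list(grid_values)
    for combo in product(*(grid_values[name] for name in names)):
        config = dict(zip(names, combo))
        acc = train_and_score(**config)   # run the evolutionary process, return accuracy
        if acc > best_acc:
            best_config, best_acc = config, acc
    return best_config, best_acc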
To evaluate the performance of the proposed genome, we used the parameters that achieved the highest accuracy; based on Table 5, we selected the configuration from test 8.
The accuracy of the algorithms evaluated is presented in Table 6. The first column corresponds to the dataset name. The second column shows the performance of the initially proposed genome. To assess the effectiveness of the evolutionary approach, we compared our proposed model against the NEAT Python package, as shown in the third column. We developed the NEAT model using Python 3.13.1 and the following libraries: TensorFlow-aarch64 1.2, Scikit-learn 1.6.1, Keras 3.8.0, NumPy 2.2.3, Pandas 2.2.0, and Neat-python 0.92. Finally, the fourth column presents the results obtained with the proposed Cellular-NEAT.
Figure 5 presents the accuracy results to compare the performance of the tested models. The results indicate that the Cellular-NEAT model consistently outperforms the Initial ANN across all datasets. While NEAT Python and Cellular-NEAT exhibit similar performance trends, Cellular-NEAT generally achieves slightly higher accuracy.
The initial phenotype was used as the basis for the cellular processing algorithm (PCELL) to expand the solution space and improve the algorithm’s performance. At this stage, we employed six processing cells, each receiving the initial phenotype as a parameter. Independently, each cell modified the phenotype by adjusting the number of neurons or the number of layers, as illustrated in Figure 6.
Table 7 presents an example of the phenotype modification process for the diabetes dataset, showing the accuracy obtained with Cellular-NEAT using the initial genome. It also shows the results of structural modifications in the PCELL processing cells, allowing us to identify the configuration with the best performance.
Finally, Table 8 shows the accuracy obtained by integrating the PCELL algorithm into the proposed model. The results indicate that, for most of the tested datasets, performance improves when the phenotype is modified during experimentation.
We applied a Friedman test to evaluate the performance differences among the three evaluated algorithms: NEAT-Python, Cellular-NEAT, and Cellular-NEAT with PCELL. The obtained Friedman statistic of 6.6545 and a p-value of 0.0359 provide statistical evidence of a significantly different performance. Additionally, we compared NEAT-Python and Cellular-NEAT using the Wilcoxon signed-rank test. The test produced a statistic of 15.00 with a p-value of 0.0186, indicating significant differences between the two algorithms. Cellular-NEAT achieved a higher mean and median. These results suggest that Cellular-NEAT consistently outperforms NEAT-Python in terms of accuracy. Moreover, the Wilcoxon signed-rank test between NEAT-Python and Cellular-NEAT with PCELL algorithms produced a statistic of 18.00 with a p-value of 0.0303. In that case, Cellular-NEAT with PCELL achieved a higher mean and median. Finally, the statistical test of Cellular-NEAT and Cellular-NEAT with PCELL produced a statistic of 19.00 with a p-value of 0.0355, and Cellular-NEAT with PCELL achieved a higher mean. These results suggest that Cellular-NEAT with PCELL outperforms Cellular-NEAT in terms of accuracy.
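These tests can be reproduced with SciPy as sketched below, feeding in the per-dataset accuracies from the NEAT Python and Cellular-NEAT columns of Table 6 and the Cellular-NEAT with PCELL column of Table 8; the exact statistics obtained may differ slightly depending on tie handling.

from scipy.stats import friedmanchisquare, wilcoxon

neat_python    = [55.93, 98.50, 98.0, 89.17, 94.6, 82.85, 77.8, 85.10,
                  66.67, 65.37, 64.40, 51.84, 60.02, 69.0, 21.18]
cellular_neat  = [59.30, 98.12, 98.0, 89.31, 96.0, 83.76, 84.10, 89.20,
                  69.34, 67.10, 66.20, 53.30, 59.70, 68.0, 20.40]
cellular_pcell = [60.30, 96.81, 98.0, 90.76, 95.85, 84.90, 84.33, 89.10,
                  71.14, 67.30, 66.82, 57.20, 60.04, 67.72, 21.18]

# Friedman test over the three related samples (one accuracy per dataset).
f_stat, f_p = friedmanchisquare(neat_python, cellular_neat, cellular_pcell)

# Pairwise Wilcoxon signed-rank tests.
w_nc = wilcoxon(neat_python, cellular_neat)
w_np = wilcoxon(neat_python, cellular_pcell)
w_cp = wilcoxon(cellular_neat, cellular_pcell)
print(f_stat, f_p, w_nc, w_np, w_cp)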
The PCELL algorithm significantly alters the network phenotype by adjusting the number of neurons and layers in each processing cell, consequently improving the accuracy of the results. Each of the six processing cells uses the initial phenotype as a baseline to independently modify the number of neurons or layers, creating various structural configurations. These changes enable the algorithm to explore a more exhaustive solution space and optimize its performance. Table 7 shows that, for the diabetes dataset, cell 6 modified its structure to [8, 6, 2, 2] and reached an accuracy of 89.20%, surpassing the initial phenotype. Moreover, the overall results indicate that integrating the PCELL algorithm into the proposed model enhances accuracy across most tested datasets, as shown in Table 8. Statistical tests, such as the Wilcoxon test and Friedman test, confirm the significance of this improvement and demonstrate that Cellular-NEAT with PCELL outperforms the compared algorithms in average accuracy. This evidence suggests that adaptive phenotype modification through processing cells optimizes neural network performance and more efficiently explores the solution space.

4. Conclusions

This research introduced a hybrid neuroevolutionary approach that integrates a genetic algorithm with a NEAT-based structure and a Cellular Processing Algorithm (PCELL) to enhance Artificial Neural Network (ANN) training. The results demonstrated that modifying the phenotypic structure through PCELL improves classification accuracy across multiple datasets. By leveraging evolutionary strategies, the proposed method effectively overcomes the limitations of conventional backpropagation-based training, particularly in avoiding local optima and optimizing network architecture dynamically. The findings confirm that combining cellular processing with evolutionary optimization significantly enhances ANN performance.
Furthermore, the experimental analysis revealed that the PCELL algorithm contributes to structural refinement, allowing the ANN to adapt more efficiently to different classification tasks. The results indicate that Cellular-NEAT achieves performance comparable to evolutive and neuroevolutionary models while maintaining computational efficiency. The observed improvements suggest that incorporating phenotype-level modifications during training can lead to more adaptable and robust neural networks.
Future research could explore extending this approach by integrating additional evolutionary mechanisms, such as differential evolution or swarm intelligence, to optimize network architectures further. Additionally, investigating its applicability to deep learning models and real-world datasets with high-dimensional features could provide valuable insights into its scalability. Finally, enhancing the PCELL algorithm with reinforcement learning techniques may lead to more adaptive and autonomous learning strategies, further advancing the field of neuroevolutionary computation. Since this study aimed to evaluate the evolution of fixed-structure neural networks against solutions adapted for dynamic topologies, we also conducted tests using evolutionary strategies. However, these were not included in the final analysis, as the NEAT-based approaches already achieved a better average accuracy than the basic neural network (see Table 6).

Author Contributions

Conceptualization, H.A.H.-M. and L.A.H.-B.; methodology, A.H.G.-R.; software, A.H.G.-R. and H.A.H.-M.; validation, L.A.H.-B., H.A.H.-M. and A.H.G.-R.; formal analysis, R.P.-R. and K.E.L.-G.; investigation, L.J.G.-d.-Á.; resources, R.P.-R.; data curation, L.A.H.-B.; writing—original draft preparation, L.A.H.-B.; writing—review and editing, L.A.H.-B. and A.H.G.-R.; visualization, K.E.L.-G.; supervision, L.J.G.-d.-Á.; project administration, L.A.H.-B., A.H.G.-R. and L.J.G.-d.-Á.; funding acquisition, R.P.-R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding. The APC was funded by Autonomous University of Tamaulipas.

Data Availability Statement

The benchmark instances for this study are available at https://github.com/AnBarajas/Cellular-NEAT.git, accessed on 16 February 2025.

Acknowledgments

A.H.G.-R. would like to acknowledge SECIHTI Mexico for the SNII salary award.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Kabanov, A.A. Application of Support Vector Machines to the Multiclass Classification Electromyography Signal Patterns. In Proceedings of the 2021 XV International Scientific-Technical Conference on Actual Problems of Electronic Instrument Engineering (APEIE), Novosibirsk, Russia, 19–21 November 2021; pp. 92–95. [Google Scholar] [CrossRef]
  2. Zhu, M.; Wang, J.; Yang, X.; Zhang, Y.; Zhang, L.; Ren, H.; Wu, B.; Ye, L. A review of the application of machine learning in water quality evaluation. Eco-Environ. Health 2022, 1, 107–116. [Google Scholar] [CrossRef] [PubMed]
  3. Yadav, A.L.; Soni, K.; Khare, S. Heart Diseases Prediction using Machine Learning. In Proceedings of the 2023 14th International Conference on Computing Communication and Networking Technologies (ICCCNT), Delhi, India, 6–8 July 2023; pp. 1–7. [Google Scholar] [CrossRef]
  4. Nikhil, A.; Thalanki, S.K.; Rajaram, G.; Renganathan, B. Enhancing The Accuracy In Food Image Recognition Using Recurrent Neural Network Model In Comparison with Graph Neural Network Model. In Proceedings of the 2024 9th International Conference on Applying New Technology in Green Buildings (ATiGB), Da Nang City, Vietnam, 30–31 August 2024; pp. 507–511. [Google Scholar] [CrossRef]
  5. Ahmed, S.F.; Alam, M.S.B.; Hassan, M.; Rozbu, M.R.; Ishtiak, T.; Rafa, N.; Mofijur, M.; Ali, A.B.M.S.; Gandomi, A.H. Deep learning modelling techniques: Current progress, applications, advantages, and challenges. Artif. Intell. Rev. 2023, 56, 13521–13617. [Google Scholar] [CrossRef]
  6. Tang, Q.; Wang, P. Quantum Dynamics Framework with Quantum Tunneling Effect for Numerical Optimization. Entropy 2025, 27, 150. [Google Scholar] [CrossRef] [PubMed]
  7. Hao, R.; Hu, Z.; Xiong, W.; Jiang, S. A composite particle swarm optimization algorithm with future information inspired by non-equidistant grey predictive evolution for global optimization problems and engineering problems. Adv. Eng. Softw. 2025, 202, 103868. [Google Scholar] [CrossRef]
  8. Bozoğullarından, E.; Öztürk, C. Artificial Bee Colony Algorithm Based Convolutional Neural Network for Cross Site Scripting Attacks Classification. In Proceedings of the 2023 Innovations in Intelligent Systems and Applications Conference (ASYU), Sivas, Turkey, 11–13 October 2023; pp. 1–6. [Google Scholar] [CrossRef]
  9. Wang, H. Intelligent Prediction and Training Optimization of Sports using Enhanced Whale Optimized Artificial Neural Network. In Proceedings of the 2024 International Conference on Intelligent Algorithms for Computational Intelligence Systems (IACIS), Hassan, India, 23–24 August 2024; pp. 1–5. [Google Scholar] [CrossRef]
  10. Xiao, M.; Li, Y.; Yan, X.; Gao, M.; Wang, W. Convolutional neural network classification of cancer cytopathology images: Taking breast cancer as an example. In Proceedings of the 2024 7th International Conference on Machine Vision and Applications (ICMVA ’24), Singapore, 12–14 March 2024; ACM: New York, NY, USA, 2024; pp. 145–149. [Google Scholar] [CrossRef]
  11. Liu, J.; Li, K.; Zhu, A.; Hong, B.; Zhao, P.; Dai, S.; Wei, C.; Huang, W.; Su, H. Application of Deep Learning-Based Natural Language Processing in Multilingual Sentiment Analysis. Mediterr. J. Basic Appl. Sci. 2024, 8, 243–260. [Google Scholar] [CrossRef]
  12. Naeem, A.B.; Senapati, B.; Bhuva, D.; Zaidi, A.; Bhuva, A.; Sudman, M.S.I.; Ahmed, A.E.M. Heart Disease Detection Using Feature Extraction and Artificial Neural Networks: A Sensor-Based Approach. IEEE Access 2024, 12, 37349–37362. [Google Scholar] [CrossRef]
  13. Dao, F.; Zeng, Y.; Qian, J. Fault diagnosis of hydro-turbine via the incorporation of bayesian algorithm optimized CNN-LSTM neural network. Energy 2024, 290, 130326. [Google Scholar] [CrossRef]
  14. Zhao, H.; Wu, Y.; Deng, W. Fuzzy Broad Neuroevolution Networks via Multi-Objective Evolutionary Algorithms: Balancing Structural Simplification and Performance. IEEE Trans. Instrum. Meas. 2024, 74, 2505910. [Google Scholar] [CrossRef]
  15. Pei, S.; Hoang, L.; Fu, G.; Butler, D. Real-time multi-objective optimization of pump scheduling in water distribution networks using neuro-evolution. J. Water Process. Eng. 2024, 68, 106315. [Google Scholar] [CrossRef]
  16. Huang, J.; Chen, Y.; Mumtaz, J.; Zhong, L. Neuro-Evolution of Augmenting Topologies for Dynamic Scheduling of Flexible Job Shop Problem. Eng. Proc. 2024, 75, 19. [Google Scholar] [CrossRef]
  17. Tenstad, A.; Haddow, P.C. DES-HyperNEAT: Towards Multiple Substrate Deep ANNs. In Proceedings of the 2021 IEEE Congress on Evolutionary Computation (CEC), Kraków, Poland, 28 June–1 July 2021; pp. 2195–2202. [Google Scholar] [CrossRef]
  18. Khamesian, S.; Malek, H. Hybrid Self-Attention NEAT: A novel evolutionary self-attention approach to improve the NEAT algorithm in high dimensional inputs. Evol. Syst. 2024, 15, 489–503. [Google Scholar] [CrossRef]
  19. Bohrer, J.d.S.; Grisci, B.I.; Dorn, M. Neuroevolution of Neural Network Architectures Using CoDeepNEAT and Keras. arXiv 2020, arXiv:2002.04634. [Google Scholar]
  20. Li, Y.; Huang, Y.; Pedersen, G.F.; Shen, M. Recurrent NEAT Assisted 2D-DOA Estimation with Reduced Complexity for Satellite Communication Systems. IEEE Access 2022, 10, 11551–11563. [Google Scholar] [CrossRef]
  21. Chua, L.; Yang, L. Cellular neural networks: Theory. IEEE Trans. Circuits Syst. 1988, 35, 1257–1272. [Google Scholar] [CrossRef]
  22. Terán-Villanueva, J.D.; Fraire-Huacuja, H.J.; Martínez, S.I.; Cruz-Reyes, L.; Rocha, J.A.C.; Santillán, C.G.; Menchaca, J.L. Cellular processing algorithm for the vertex bisection problem: Detailed analysis and new component design. Inf. Sci. 2019, 478, 62–82. [Google Scholar] [CrossRef]
  23. Kaggle. Machine Learning Datasets. 2024. Available online: https://www.kaggle.com/datasets (accessed on 15 June 2024).
  24. UCI Machine Learning Repository. 2024. Available online: https://archive.ics.uci.edu/ml/index.php (accessed on 15 June 2024).
  25. Stanley, K.O.; Miikkulainen, R. Evolving Neural Networks through Augmenting Topologies. Evol. Comput. 2002, 10, 99–127. [Google Scholar]
  26. Holland, J.H. Adaptation in Natural and Artificial Systems; University of Michigan Press: Ann Arbor, MI, USA, 1975. [Google Scholar]
  27. Lin, C.M.; Lin, Y.S. TPTM-HANN-GA: A Novel Hyperparameter Optimization Framework Integrating the Taguchi Method, an Artificial Neural Network, and a Genetic Algorithm for the Precise Prediction of Cardiovascular Disease Risk. Mathematics 2024, 12, 1303. [Google Scholar] [CrossRef]
  28. Landa, R.; Tovias-Alanis, D.; Toscano, G. Optimization of Deep Neural Networks Using a Micro Genetic Algorithm. AI 2024, 5, 2651–2679. [Google Scholar] [CrossRef]
  29. Sarker, M.I.; Mannan, Z.I.; Kim, H. Optimizing method for Neural Network based on Genetic Random Weight Change Learning Algorithm. arXiv 2019, arXiv:1907.07254. [Google Scholar]
  30. Hebbar, A. MCTS guided Genetic Algorithm for optimization of neural network weights. arXiv 2023, arXiv:2308.04459. [Google Scholar]
  31. Lee, S.; Kim, J.; Kang, H.; Kang, D.Y.; Park, J. Genetic Algorithm Based Deep Learning Neural Network Structure and Hyperparameter Optimization. Appl. Sci. 2021, 11, 744. [Google Scholar] [CrossRef]
  32. Tiep, N.H.; Jeong, H.Y.; Kim, K.D.; Mung, N.X.; Dao, N.N.; Tran, H.N.; Hoang, V.K.; Anh, N.N.; Vu, M.T. A New Hyperparameter Tuning Framework for Regression Tasks in Deep Neural Network: Combined-Sampling Algorithm to Search the Optimized Hyperparameters. Mathematics 2024, 12, 3892. [Google Scholar] [CrossRef]
  33. Hassanat, A.; Almohammadi, K.; Alkafaween, E.; Abunawas, E.; Hammouri, A.; Prasath, V.B.S. Choosing Mutation and Crossover Ratios for Genetic Algorithms—A Review with a New Dynamic Approach. Information 2019, 10, 390. [Google Scholar] [CrossRef]
Figure 1. Genetic encoding from NEAT algorithm.
Figure 2. Proposed structure for Artificial Neural Networks based on NEAT algorithm.
Figure 3. Genotype operator.
Figure 4. Initial structure for Artificial Neural Network.
Figure 5. Accuracy comparison of initial ANN, NEAT Python, and Cellular-NEAT proposed algorithms.
Figure 6. Processing cells’ structure.
Table 1. Characteristics of the selected datasets for experimentation.

Dataset Name | Repository | Data Type | Registers | Attributes | Classes
banknote_auth | UCI | Integer/Real | 1372 | 4 | 2
divorce | UCI | Integer/Real | 170 | 54 | 2
iris | Kaggle | Integer/Real | 150 | 4 | 3
pulsar_stars | Kaggle | Integer/Real | 17,898 | 8 | 2
car_evaluation | UCI | Categorical | 1728 | 6 | 4
contact-lenses | UCI | Categorical | 24 | 4 | 3
weather.nominal | UCI | Categorical | 14 | 4 | 2
diabetes | Kaggle | Integer/Real | 768 | 8 | 2
weather.numeric | UCI | Categorical | 14 | 4 | 2
QSAR Bioconcentration | Kaggle | Integer/Real | 779 | 9 | 3
seeds | UCI | Integer/Real | 210 | 7 | 3
cryotherapy | UCI | Integer/Real | 90 | 6 | 2
Heart_disease | Kaggle | Integer/Real | 303 | 13 | 2
glass | Kaggle | Integer/Real | 214 | 9 | 7
abalone | UCI | Categorical | 4177 | 8 | 29
Table 2. Attribute information sample for preprocessing.

Attribute | Min | Max | Mean | Standard Deviation | Decision

banknote_auth
a1 | −7.0421 | 6.8248 | 0.43373526 | 2.84172641 | Std
a2 | −13.7731 | 12.9516 | 1.92235312 | 5.86690749 | Norm
a3 | −5.2861 | 17.9274 | 1.39762712 | 4.30845909 | Std
a4 | −8.5482 | 2.4495 | 1.19165652 | 2.10024732 | Std

Iris
a1 | 4.3 | 7.9 | 5.84333333 | 0.82530129 | Norm
a2 | 2 | 4.4 | 3.054 | 0.43214658 | Norm
a3 | 1 | 6.9 | 3.75866667 | 1.75852918 | Norm
a4 | 0.1 | 2.5 | 1.19866667 | 0.76061262 | Norm

pulsar_stars
a1 | 5.8125 | 192.617188 | 111.079968 | 25.6522187 | Std
a2 | 24.7720418 | 98.7789107 | 46.5495316 | 6.84299824 | Std
a3 | −1.87601118 | 8.06952205 | 0.47785726 | 1.06400999 | Std
a4 | −1.79188598 | 68.1016217 | 1.770279 | 6.16774094 | Std
a5 | 0.2132107 | 223.392141 | 12.6143997 | 29.4720738 | Std
a6 | 7.37043217 | 110.642211 | 26.3265147 | 19.4700284 | Std
a7 | −3.13926961 | 34.5398442 | 8.30355612 | 4.50596597 | Std
a8 | −1.9769756 | 1191.00084 | 104.857709 | 106.511564 | Std

Heart_disease
a1 | 29 | 77 | 54.3663366 | 9.06710164 | Norm
a2 | 0 | 1 | 0.68316832 | 0.46524119 | −
a3 | 0 | 3 | 0.9669967 | 1.03034803 | Norm
a4 | 94 | 200 | 131.623762 | 17.5091781 | Std
a5 | 126 | 564 | 246.264026 | 51.745151 | Std
a6 | 0 | 1 | 0.14851485 | 0.3556096 | −
a7 | 0 | 2 | 0.52805281 | 0.52499112 | Std
a8 | 71 | 202 | 149.646865 | 22.8673326 | Std
a9 | 0 | 1 | 0.32673267 | 0.46901859 | −
a10 | 0 | 6.2 | 1.03960396 | 1.15915747 | Std
a11 | 0 | 2 | 1.39933993 | 0.61520843 | Std
a12 | 0 | 4 | 0.72937294 | 1.0209175 | Std
a13 | 0 | 3 | 2.31353135 | 0.61126531 | Std

Abalone
a1 | 1 | 3 | 2.04452957 | 0.8277156 | −
a2 | 15 | 163 | 104.79842 | 24.0157072 | Std
a3 | 11 | 130 | 81.5762509 | 19.8455972 | Std
a4 | 0 | 226 | 27.9032799 | 8.3644099 | Std
a5 | 0.4 | 565.1 | 165.748432 | 98.0660627 | Std
a6 | 0.2 | 297.6 | 71.8734977 | 44.3872756 | Std
a7 | 0.1 | 152 | 36.1187216 | 21.9202257 | Std
a8 | 0.3 | 201 | 47.7661719 | 27.8372011 | Std
Table 3. Accuracy obtained with proposed initial ANN structure.

Dataset | Accuracy
banknote_auth | 57.20%
divorce | 98.24%
iris | 96.67%
pulsar_stars | 92.64%
car_evaluation | 91.38%
contact-lenses | 79.17%
weather.nominal | 71.43%
diabetes | 74.84%
weather.numeric | 64.29%
QSAR Bioconcentration | 61.87%
seeds | 59.52%
cryotherapy | 53.33%
Heart_disease | 49.18%
glass | 35.05%
abalone | 14.29%
Table 4. Value range for evolutive hyperparameters.

Hyperparameter | Value Range
Population size | 10–100
Generations | 10–500
Training iterations | 100–1000
Table 5. Parameter tests for genotype operator.

Test | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
Population size | 100 | 10 | 10 | 10 | 100 | 100 | 10 | 100 | 10 | 10
Generations | 10 | 10 | 10 | 100 | 10 | 100 | 100 | 10 | 100 | 100
Training iterations | 100 | 100 | 100 | 100 | 100 | 100 | 500 | 500 | 500 | 500
Accuracy | 87 | 83 | 86 | 85 | 86 | 86 | 82 | 89 | 88 | 82
Table 6. Results obtained via evolutive process.

Dataset | ANN Basic | NEAT Python | Cellular-NEAT
banknote_auth | 57.20% | 55.93% | 59.30%
divorce | 98.24% | 98.50% | 98.12%
iris | 96.67% | 98.0% | 98.0%
pulsar_stars | 92.64% | 89.17% | 89.31%
car_evaluation | 91.38% | 94.6% | 96.0%
contact-lenses | 79.17% | 82.85% | 83.76%
weather.nominal | 71.43% | 77.8% | 84.10%
diabetes | 74.84% | 85.10% | 89.20%
weather.numeric | 64.29% | 66.67% | 69.34%
QSAR Bioconcentration | 61.87% | 65.37% | 67.10%
seeds | 59.52% | 64.40% | 66.20%
cryotherapy | 53.33% | 51.84% | 53.30%
Heart_disease | 49.18% | 60.02% | 59.70%
glass | 35.05% | 69.0% | 68.0%
abalone | 14.29% | 21.18% | 20.40%
Table 7. Modified genome structure and accuracy obtained with PCELL algorithm integration.

Solution | Structure | Cellular-NEAT
initial | 8, 7, 2, 2 | 87.80%
cell 1 | 8, 7, 2 | 86.19%
cell 2 | 8, 7, 1 | 85.93%
cell 3 | 8, 7, 2, 2 | 87.63%
cell 4 | 8, 7, 7, 6, 2 | 84.30%
cell 5 | 8, 6, 2, 2 | 87.23%
cell 6 | 8, 6, 2, 2 | 89.20%
Table 8. Comparison of Cellular-NEAT accuracy.

Dataset | Cellular-NEAT | Cellular-NEAT with PCELL
banknote_auth | 59.30% | 60.30%
divorce | 98.12% | 96.81%
iris | 98.0% | 98.0%
pulsar_stars | 89.31% | 90.76%
car_evaluation | 96.0% | 95.85%
contact-lenses | 83.76% | 84.90%
weather.nominal | 84.10% | 84.33%
diabetes | 89.20% | 89.10%
weather.numeric | 69.34% | 71.14%
QSAR Bioconcentration | 67.10% | 67.30%
seeds | 66.20% | 66.82%
cryotherapy | 53.30% | 57.20%
Heart_disease | 59.70% | 60.04%
glass | 68.0% | 67.72%
abalone | 20.40% | 21.18%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
