Review

Current Trends and Challenges in Applying Metaheuristics to the Innovative Area of Weight and Structure Determination Neuronets

1 Department of Economics, Mathematics-Informatics and Statistics-Econometrics, National and Kapodistrian University of Athens, Sofokleous 1 Street, 10559 Athens, Greece
2 Faculty of Information Technology, University of Oulu, 90570 Oulu, Finland
3 School of Business, Jiangnan University, Lihu Blvd, Wuxi 214122, China
4 College of Computer Science and Engineering, Jishou University, Jishou 416000, China
* Author to whom correspondence should be addressed.
Inventions 2025, 10(4), 62; https://doi.org/10.3390/inventions10040062
Submission received: 25 June 2025 / Revised: 11 July 2025 / Accepted: 19 July 2025 / Published: 24 July 2025

Abstract

The weights and structure determination (WASD) neuronet (or neural network) is a single-hidden-layer feedforward neuronet that exhibits an excellent approximation ability, despite its simple structure. Thanks to its strong generalization, fast speed, and ease of implementation, the WASD neuronet has been the subject of many modifications, including metaheuristics, and applications in a wide range of scientific fields. As it has garnered significant attention in the last decade, the aim of this study is to provide an extensive overview of the WASD framework. Furthermore, the WASD has been effectively used in numerous real-time learning tasks like regression, multiclass classification, and binary classification due to its exceptional performance. In addition, we present WASD’s applications in social science, business, engineering, economics, and medicine. We aim to report these developments and provide some avenues for further research.

1. Introduction

Artificial neuronets (ANs), also known as artificial neural networks, have been extensively studied and used in numerous scientific, engineering, and practical domains because of their exceptional characteristics, which include distributed storage, parallel processing, and self-adaptive and self-learning capabilities. With a history spanning more than 70 years, ANs were initially proposed in the last century [1]. The fundamental architecture of a neuronet resembles the nervous system in the human brain and was inspired by biological neuronets. A typical neuronet, which is a type of machine learning technique, is made up of neurons, connections, and weights. A neuron in a neuronet, often referred to as a perceptron [2], sends output signals to subsequent neurons after receiving input signals from earlier neurons. A neuronet's neurons are frequently arranged in multiple layers. The relationship between neurons in different layers is described as a connection. A neuron may be highly connected to neurons in both its preceding and subsequent layers. Additionally, each connection carries a weight, a crucial parameter. Note that weights are crucial because they determine how a neuron interprets input signals and generates output signals.
One of the most basic AN types ever created is the feed-forward neuronet, which is a universal approximator. The information moves forward through the input layer, multiple hidden layers, and a final layer of output nodes. The process of training a neuronet involves identifying the layer weights that optimize its ability to approximate data. One of the most well-known and significant feed-forward neuronet models, with numerous theoretical examinations and practical applications, is the one based on the error back-propagation (BP) training algorithm or its variations. BP-type neuronets, which were first proposed in the mid-1980s (or even earlier, in 1974), are a sort of multilayer feed-forward neuronet [3]. BP-type neuronets and their associated training methods remain widely used in several theoretical and practical applications to this day.
By adopting a gradient-based descent path, BP algorithms are essentially gradient-based iterative methods that modify the AN weights to bring the input/output behavior into a desired mapping. BP-type neuronets do, however, seem to have the following drawbacks: (i) the potential to become stuck in local minima; (ii) the challenge of selecting suitable learning rates (i.e., training speed); and (iii) the incapacity to deterministically create the smallest or optimal neuronet structure. Numerous enhanced BP-type algorithms have been suggested and studied as a result of the aforementioned intrinsic flaws [4]. There are generally two main categories of improvements. One could argue that the standard gradient-descent method could serve as a foundation for improving the BP-type algorithms [5]. On the other hand, the neuronet model might be trained using numerical optimization approaches. It is important to note that in an effort to enhance the performance of BP-type neuronets, many researchers typically concentrate on the learning algorithm itself. Still, nearly all of the enhanced BP-type algorithms are unable to overcome the aforementioned intrinsic flaws [6].
Deep neuronets were conceptualized with the aim of fixing the aforementioned problem of the traditional single-hidden-layer feedforward neuronet (SLFN), which has been the primary focus of research since the late 1990s. It should be noted that in order to create a neuronet model for any application, three main problems must be solved: (i) selecting the activation function; (ii) determining the required number of hidden-layer neurons; and (iii) computing the linking weights between two distinct layers. The preceding analysis indicates that finding the ideal number of hidden-layer neurons and linking weights for the neuronet is beneficial and significant as it can significantly lower the computational complexity and advance hardware realization. In other words, they increase the neuronets’ efficiency [7]. Several weights and structure determination (WASD) algorithms are presented in [8] as superior solutions to address the issues raised by the BP algorithms and to describe the ideal neuronet structure for more practical applications. The weights direct determination (WDD) subalgorithm is used by the WASD algorithm, which simultaneously determines the ideal neuronet structure during training and directly defines the optimal connecting weights between the hidden layer and output layer (i.e., only in one step). In this way, the WASD algorithm for feed-forward neuronets performs more accurately, deterministically, and efficiently than other algorithmic improvements on the training process.
In the last eighteen years, a great deal of research has been carried out on WASD with three goals in mind: reduced training time, increased classification accuracy, and less manual intervention. Investigations into the generalization and global approximation capabilities of WASD have also been conducted theoretically. It is important to bear in mind that WASD is often selected over other state-of-the-art approaches for artificial intelligence-related challenges across several disciplines due to its rapid speed, robust generalization, and ease of implementation.
We attempt to present a thorough overview of the evolution of WASD in this review, covering its theoretical analysis, variations, current developments, and practical applications. The remainder of the paper is structured as follows: We shall provide a thorough theoretical examination of the fundamental WASD in Section 2. Recent developments on WASD for regression and classification are presented in Section 3. In Section 4, the performance of WASD neuronets is evaluated. It specifically compares WASD to different machine learning methods and shows how well WASD neuronets perform across different application domains. Section 5 brings the review to a close.

2. WASD Neuronets

Figure 1 depicts the basic layout of the WASD neuronet, which is an example of a typical SLFN. All adjacent layers are fully connected; however, not all connections are shown in the diagram for simplicity's sake. In this work, $m$, $n$, and $k$ represent the number of neurons in the input layer, hidden layer, and output layer, respectively, whereas $W_{\mathrm{in}}$ and $W_{\mathrm{out}}$ represent the input and output weight matrices. $B = [b_1, \ldots, b_n] \in \mathbb{R}^n$ is the bias matrix; $f(\cdot)$ is the activation function of the hidden-layer neurons; and $X \in \mathbb{R}^m$ and $Y, \bar{Y} \in \mathbb{R}^k$ are the neuronet's input, output, and desired output (target), respectively. As a result, the neuronet produces the output shown in the equation below:
$$Y = f(X \cdot W_{\mathrm{in}} - B) \cdot W_{\mathrm{out}}, \qquad (1)$$
which is the target function approximation. $W_{\mathrm{in}}$, $W_{\mathrm{out}}$, $B$, and $f(\cdot)$ in Equation (1) must be appropriately determined prior to its application. Typically, $f(\cdot)$ is a manually chosen element-wise function. The WASD procedure sets $W_{\mathrm{in}}$ and $B$ as fixed or random values. The WDD approach described below can be used to obtain $W_{\mathrm{out}}$ [9]:
$$W_{\mathrm{out}} = A^{\dagger} \cdot \bar{Y}, \qquad (2)$$
where the superscript $(\cdot)^{\dagger}$ denotes the pseudoinverse operator and $A = f(X \cdot W_{\mathrm{in}} - B)$.
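To make the WDD step concrete, the sketch below (Python/NumPy, with hypothetical variable names; it is an illustration of Equation (2), not code from the surveyed papers) computes the output weights in a single step, assuming the bias is subtracted as in Equation (1):

```python
import numpy as np

def wdd_output_weights(X, Y_target, W_in, B, f=np.tanh):
    """Weights direct determination (WDD), Eq. (2):
    W_out = pinv(A) @ Y_target with A = f(X @ W_in - B).

    X        : (samples, m) input data matrix
    Y_target : (samples, k) desired outputs
    W_in     : (m, n) fixed or random input weights
    B        : (1, n) hidden-layer biases (broadcast over samples)
    f        : element-wise activation function (tanh used here as a placeholder)
    """
    A = f(X @ W_in - B)                    # hidden-layer activation matrix
    return np.linalg.pinv(A) @ Y_target    # Moore-Penrose pseudoinverse solve
```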
The error function curve between $Y$ and $\bar{Y}$ is typically used to automatically identify the neuronet structure, i.e., the number $n$ of hidden-layer neurons, through the WASD approach. In particular, during the training phase, the number of hidden neurons is progressively increased, and the error on the validation set that corresponds to the current number is computed. The error curve is plotted concurrently with this increase. The optimal number of hidden neurons is the one that corresponds to the lowest point on the error curve, which generally shows a declining trend followed by an ascending trend. This idea is related to the fundamental theory of the structure automatic determination (SAD) subalgorithm [9,10,11]. The method used to find the lowest point of the error curve may vary in practice, but the process ceases automatically when a sufficient level of accuracy is reached.
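Building on the WDD sketch above, the structure determination loop can be illustrated as follows. This is a simplified, hedged rendering of the general idea only (actual WASD variants grow the hidden layer incrementally and differ in their stopping criteria): candidate hidden-layer sizes are tried one by one, the output weights are recomputed by WDD, the validation error is recorded, and the size at the minimum of the error curve is retained.

```python
import numpy as np

def wasd_structure_search(X_tr, Y_tr, X_val, Y_val, n_max=100, f=np.tanh, seed=0):
    """Sweep the number n of hidden-layer neurons and keep the structure
    with the lowest validation error (the minimum of the error curve)."""
    rng = np.random.default_rng(seed)
    m = X_tr.shape[1]
    best = {"err": np.inf}
    for n in range(1, n_max + 1):
        W_in = rng.standard_normal((m, n))   # fixed/random input weights
        B = rng.standard_normal((1, n))      # hidden-layer biases
        W_out = wdd_output_weights(X_tr, Y_tr, W_in, B, f)   # WDD step (sketch above)
        Y_hat = f(X_val @ W_in - B) @ W_out
        err = np.sqrt(np.mean((Y_hat - Y_val) ** 2))         # validation RMSE
        if err < best["err"]:
            best = {"err": err, "n": n, "W_in": W_in, "B": B, "W_out": W_out}
    return best
```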

3. Modifications of WASD Neuronets

The fundamental framework of the WASD neuronet is shown in Section 2. In more useful applications, some enhanced versions have been developed.
In particular, the incorporation of Chebyshev polynomials as an activation function has improved the fundamental structure of the WASD neuronet. Using the WASD theory as a basis, ref. [12] employed fixed weights between the input layer and the hidden layer, rather than randomly generated points, a sequence of Chebyshev polynomials as the activation function of the hidden-layer neurons, and the cross-validation approach to enhance the generalization. Thus, the single-output Chebyshev polynomial feedforward neuronet (SOCPNN) and the multi-output Chebyshev polynomial feedforward neuronet (MOCPNN) were presented. Additionally, the initial network model was trained using a subset technique in [13]; after that, the model was optimized, which enhanced the SOCPNN's performance.
Furthermore, the random assignment of output weights has improved the fundamental structure of the WASD neuronet. An amended weights and structure determination neuronet (MWASDNN) model was created in [14,15], where the input weights were obtained in a single step by solving the pseudoinverse problem and the output weights were assigned randomly. This means that in the MWASDNN, the input and output weight calculation techniques used in the original WASD neuronet were switched. Additionally, a model called the weights and structure determination neuronet assisted by double pseudoinversion (WASDNN-DP) was created in [16]. Prior to updating the output weight matrix, the WASDNN-DP randomly allocates the input weight matrix, solves a pseudoinverse problem to refine the input weight matrix, and then solves the pseudoinverse problem once more.
A number of polynomials and power functions have been added to the WASD neuronet's basic architecture as activation functions, along with pruning approaches that help select the neuronet's hidden-layer neurons during training. It is worth emphasizing that pruning strategies reduce the computational complexity of neuronet models by removing neurons deemed less significant. In particular, ref. [8] reintroduces the following neuronet models:
  • a neuronet model with hidden-layer neurons activated by Legendre polynomials, which is trained using one of two WASD methods with varying growth rates to ascertain the ideal weights and configuration of the specific neuronet model [17];
  • a neuronet model that uses Chebyshev orthogonal polynomials of Class 1 to activate hidden-layer neurons, where the WASD technique involves pruning the given neuronet once it has grown [18];
  • a neuronet model where the structure automatic determination (SAD) subalgorithm of the WASD algorithm is used to identify the ideal number of hidden-layer neurons, activated by a set of Chebyshev orthogonal polynomials of Class 2 [19,20];
  • a neuronet model with hidden-layer neurons activated by a power-activation function, where the WASD algorithm incorporates a growing and pruning technique for selecting the hidden-layer neurons during the training process [21];
  • a neuronet model with hidden-layer neurons activated by a group of Euler polynomials (MIEPN), where the WASD algorithm is equipped with the twice-pruning (TP) and pruning-while-growing (PWG) techniques for selecting the hidden-layer neurons during the training process [22];
  • a neuronet model with hidden-layer neurons activated by Bernoulli polynomials (MIBPN), which is trained by a WASD algorithm with TP [23];
  • a neuronet model with hidden-layer neurons activated by Hermite orthogonal polynomials (MIHPN), which does not suffer from the dimension-explosion weakness [10];
  • a neuronet model with hidden-layer neurons activated by a sine function (MISAN), which is trained by a two-stage WASD (TS-WASD) algorithm [24];
  • a neuronet model with hidden-layer neurons activated by tan-sigmoid functions, where the WASD algorithm is equipped with the PWG, TP, and Levenberg–Marquardt (LM) techniques for selecting the hidden-layer neurons during the training process [25].
It is important to mention that some simplified versions of the aforementioned neuronet models are also included in [8].
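As a loose illustration of the pruning idea, the sketch below uses magnitude-based pruning, which is only one possible criterion and not necessarily the one used in the cited works: neurons whose output weights contribute little are removed, and the output weights are recomputed via WDD on the reduced structure.

```python
import numpy as np

def prune_hidden_neurons(X_tr, Y_tr, W_in, B, W_out, f=np.tanh, keep_ratio=0.8):
    """Illustrative magnitude-based pruning: drop the hidden neurons whose
    output weights contribute least, then recompute W_out via WDD."""
    importance = np.abs(W_out).sum(axis=1)            # crude per-neuron importance score
    n_keep = max(1, int(keep_ratio * W_in.shape[1]))
    keep = np.sort(np.argsort(importance)[-n_keep:])  # indices of retained neurons
    W_in_p, B_p = W_in[:, keep], B[:, keep]
    W_out_p = wdd_output_weights(X_tr, Y_tr, W_in_p, B_p, f)  # WDD on the pruned net
    return W_in_p, B_p, W_out_p
```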
Additionally, K-Taylor polynomial-based power-activation functions have been added to the WASD neuronet's fundamental architecture to improve it. In [26,27], the weights between the input layer and the hidden layer were fixed at 1 rather than being generated at random, and the hidden-layer neurons were activated using the K-Taylor polynomial. Consequently, the power-activation feed-forward neuronet (PFN) and the speedy PFN were introduced. In [28], a multi-input WASD for time-series neuronet (MI-WASDTSN) was developed, along with three new power-activation functions based on the K-Taylor polynomial for training the initial network model. Moreover, ref. [29] used a subset approach in which a multi-function activated WASD for time series (MAWTS) was employed, along with four additional power-activation functions that can be used concurrently to train the initial network model. After that, the model was optimized, which enhanced the MAWTS's performance. In [30], a two-input WASD-based neuronet (2I-WASDBN) model was developed as an enhancement of the PFN under a power-activation function, where the WASD algorithm is equipped with a pruning technique for selecting the hidden-layer neurons of the neuronet through the training process.
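For intuition, in the single-input case such a power-activation layer can be viewed as a truncated Taylor/power basis: the j-th hidden neuron outputs x^j, and WDD fits the corresponding coefficients. The sketch below illustrates this general idea only; the exact activation families and multi-input constructions of [26,27,28,29,30] differ in their details.

```python
import numpy as np

def power_activation_features(x, K):
    """Hidden-layer outputs x^0, x^1, ..., x^(K-1) for a single input,
    mimicking a truncated K-term Taylor/power basis."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    return np.hstack([x ** j for j in range(K)])

def fit_power_neuronet(x_tr, y_tr, K):
    """WDD on the power basis: the output weights are the fitted coefficients."""
    A = power_activation_features(x_tr, K)
    return np.linalg.pinv(A) @ np.asarray(y_tr, dtype=float)
```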
Power-activation functions in the output layer have improved the fundamental architecture of the WASD neuronet. Three WASD neuronet models with hidden-layer neurons activated by sine, square wave, and signum functions, respectively, were introduced in [24,31,32]. Using three distinct steps, ref. [33] developed a WASD neuronet model called FCI-WASD that uses Fresnel cosine integrals to determine its activation functions. These steps include adding enough neurons to the hidden layer, fine-tuning the resulting structure by cutting some neurons, and saving the relevant parts of the network so that it may be successfully recreated and used in the future. An enhanced version of FCI-WASD, the power Gaussian Error Linear Unit (GELU) activated WASD (PG-WASD) neuronet, was introduced in [34]. The PG-WASD outperforms the FCI-WASD in terms of overall performance. Another enhanced version of FCI-WASD, the multi-input trigonometrically activated WASD (MTA-WASD) neuronet, was introduced in [35]. The MTA-WASD performs better overall than the FCI-WASD; it is based on the Bernstein polynomial and includes a pruning approach for choosing the neuronet's hidden-layer neurons during training. An enhanced version of PG-WASD, the power Sigmoid Linear Unit (SiLU) activated WASD (SWASD) neuronet, was introduced in [36]. The SWASD outperforms the PG-WASD in terms of overall performance. In [37], a power softplus WASD (PS-WASD) model was introduced as an enhancement of the PFN under a power softplus activation function that includes a pruning technique for selecting the hidden-layer neurons of the neuronet through the training process. An enhanced version of PS-WASD, the multi-input multi-function activated WASD neuronet (MMA-WASDN), was developed in [38]. In order to overcome bias and avoid becoming trapped in local optima, the MMA-WASDN model employs cross-validation. In [39], a multi-input WASD for a time-series neuronet (MI-WASDTSN) with hidden-layer neurons activated by a power-activation function was introduced. A heuristic power-activation function in the output layer keeps the model's predictions in a realistic domain, improving the prediction ability of the neuronet in this way. In [40], a WASD for multiclass classification (WASDMC) model with hidden-layer neurons activated by a power maxout activation function was introduced, which is not negatively impacted by dimension explosions.
The basic framework of the WASD neuronet has been enhanced with the use of metaheuristics and power-activation functions along with pruning techniques for selecting the hidden-layer neurons of the neuronet through the training procedure. In [41], a multi-input beetle antennae search WASD neuronet (MI-BASWASDN) model with hidden-layer neurons activated by a power sigmoid function was introduced. In order to determine the ideal number and powers of the hidden-layer neurons, the WASD model integrates a meta-heuristic optimization process called beetle antennae search (BAS). A bio-inspired WASD neuronet (BIWASDNN) model was introduced in [42] as an updated version of the MI-BASWASDN. This model improves the accuracy of the WDD technique and uses the cross-validation approach to increase the generalization.
The basic framework of the WASD neuronet has been enhanced with the addition of power-activation functions in the output layer along with fuzzy systems, metaheuristics, power-activation functions, and pruning techniques for selecting the hidden-layer neurons of the neuronet through the training process. A multi-input fuzzy WASD neuronet (MI-FUZWASDN) model was presented in [43] as an improvement of the PFN under a power softplus activation function for classification problems with no dimension explosion weakness. Furthermore, the WASD algorithm employs the cross-validation approach to increase the generalization and a fuzzy logic controller to map the input data into a specified interval that improves the accuracy of the WDD method. In [44], a bio-inspired WASD for multiclass classification tasks (BWASDC) neuronet model with hidden-layer neurons activated by a power maxout activation function was introduced. The BWASDC model uses the metaheuristic BAS algorithm to find the optimal number and the power activation function of the hidden-layer neurons, enhancing in this manner the WASD’s learning process. In [45], a bio-inspired WASD (BWASD) neuronet model with hidden-layer neurons activated by four power activation functions was introduced. The BWASD model uses the metaheuristic BAS algorithm to find the optimal number, powers, and activation function of the hidden-layer neurons, enhancing in this way the WASD algorithm’s learning process.
Notably, the BAS algorithm is incorporated into all of the previously stated bio-inspired WASD algorithms that train NN models [46]. Beetles typically use both antennae to find food, depending on the strength of the fragrance they detect (see Figure 2). The BAS algorithm's search for the optimal solution mimics this behavior, and this approach enables the employment of cutting-edge optimization techniques [47]. To determine the ideal number of hidden-layer neurons in a NN, their power values, and the ideal activation function for each hidden-layer neuron, the aforementioned bio-inspired WASD algorithms imitate the behavior of the beetle. Additionally, they optimize the ratio of the validation and fitting sets (also known as cross-validation auto-adjustment).
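A minimal sketch of the basic BAS update for a generic objective is given below. It is deliberately simplified relative to the bio-inspired WASD variants above, which additionally encode the number, powers, and activation functions of the hidden-layer neurons (and the validation/fitting split) in the search variable.

```python
import numpy as np

def beetle_antennae_search(objective, x0, n_iter=200, d=1.0, step=1.0,
                           eta_d=0.95, eta_step=0.95, seed=0):
    """Basic beetle antennae search (BAS) for minimization: probe the objective
    at the two 'antennae' and step toward the side with the smaller value."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    best_x, best_f = x.copy(), objective(x)
    for _ in range(n_iter):
        b = rng.standard_normal(x.shape)
        b /= np.linalg.norm(b) + 1e-12                  # random antenna direction
        f_right = objective(x + d * b)                  # smell at the right antenna
        f_left = objective(x - d * b)                   # smell at the left antenna
        x = x - step * b * np.sign(f_right - f_left)    # move toward the smaller value
        fx = objective(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
        d = eta_d * d + 0.01                            # shrink antenna length
        step = eta_step * step                          # shrink step size
    return best_x, best_f
```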
Table 1 provides a summary of the previously mentioned evolution of WASD over time. Specifically, it displays the year of introduction, the type of target function approximation, the task, the enhancement method, the application, and the largest dataset size (samples × variables) for each neuronet model. Notably, WASD neuronets have only been evaluated on relatively small datasets. Recall that by “small dataset”, we mean datasets that are small enough to fit in a personal computer’s memory. The majority of WASD neuronet studies are carried out in Matlab on personal computers with the following hardware configuration: Windows operating system, Intel CPU, and less than 24 GB of RAM. Table 1 provides some representative dataset size instances. Because WASD neuronets have not yet been tested on medium or large datasets, their ability to handle larger data sizes with high accuracy remains unverified; future studies will have to determine this.
In light of the aforementioned, the most well-known WASD modifications are as follows:
  • In most recent WASD neuronets, the weights between the input layer and the hidden layer are fixed at 1 rather than generated at random;
  • Power-activation functions based on K-Taylor polynomials seem to have the lowest time consumption during training, along with high accuracy levels;
  • Pruning techniques for selecting the hidden-layer neurons of the neuronet during the training process seem to optimize the neuronet’s structure even further;
  • Fuzzy logic controllers that map the input data into a specified interval seem to increase the accuracy of the WDD method even further;
  • The cross-validation technique is typically applied to enhance the neuronet’s generalization;
  • The use of heuristic algorithms, such as BAS, for finding the optimal number, powers, and activation function of the hidden-layer neurons seems to enhance the WASD algorithm’s learning process even further;
  • Power-activation functions in the output layer seem to be critical for the cross-domain adaptation of the WASD neuronets.
Finally, based on the different WASD neuronets discussed in this section, it is evident that certain combinations of the previous enhancements seem to improve the overall performance and accuracy of the WASD neuronets even more.

4. Performance of WASD Neuronets

Over the past eighteen years, a lot of research has been carried out on WASD analysis, particularly on how WASD compares to other cutting-edge algorithms like the Support Vector Machine (SVM), Gaussian process regression (GPR), K-nearest neighbors (KNN), Kernel Naive Bayes (KNB), tree models, and other deep learning techniques. As exemplary machine learning algorithms, these time-tested techniques reflect cutting-edge machine learning technologies of various eras. They have not only played a significant part in the history of machine learning development but are still actively used in the field today. By comparing WASD to these machine learning methods, we can more readily explore the significance of WASD in machine learning and more clearly detect its strengths in related research. In general, WASD is clearly faster than those techniques, and its variants continue to grow stronger. Even simple WASD implementations achieve good accuracy.
It is important to mention that the accuracy, precision, mean absolute error (MAE), mean absolute percentage error (MAPE), symmetric MAPE (SMAPE), root mean square error (RMSE), time consumption (TC), mean directional accuracy (MDA), $R^2$, and F-score are the performance measures presented in the tables of this section. See [48,49] for more information and a thorough study of these measures.
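For reference, a compact sketch of the regression-oriented measures among these, using common textbook definitions, is shown below (SMAPE and MDA in particular appear with slight variations across the literature, so the exact formulas used in the cited works may differ):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, MAPE, SMAPE, RMSE, R^2, and MDA under common textbook definitions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err / y_true))
    smape = 100.0 * np.mean(2.0 * np.abs(err) / (np.abs(y_true) + np.abs(y_pred)))
    rmse = np.sqrt(np.mean(err ** 2))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - np.mean(y_true)) ** 2)
    mda = np.mean(np.sign(np.diff(y_true)) == np.sign(np.diff(y_pred)))  # directional hits
    return {"MAE": mae, "MAPE": mape, "SMAPE": smape, "RMSE": rmse, "R2": r2, "MDA": mda}
```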

4.1. Performance of WASD Versus SVM

Support Vector Machines (SVMs), one of the most well-known supervised methods for classification since the turn of the century, were introduced in 1995 by Cortes and Vapnik [50]. In general, given that data points are dispersed over space, there are several planes on which we might divide them into multiple classes. These planes are referred to as hyperplanes in SVMs. Among these hyperplanes, there is one that would optimize each data point’s distance to the plane. Thus, the easiest way to identify data points would be via this hyperplane. The goal of SVMs is to identify the optimal hyperplane, which also forms the foundation of the SVM concept.
In [28,29,41], three WASD neuronet models were compared with SVM models in time-series forecasting problems. In [33,35,37,38,39,43,45], six WASD neuronet models were compared with SVM models in binary classification problems. In [30], a WASD neuronet model was compared with SVM models in regression problems. The experimental findings from all of these sources showed that WASD was faster and more accurate than SVMs. Some performance metrics comparing WASD and SVMs are shown in Table 2 and originate from the previously stated references. It follows that, in the instances covered there, the WASD neuronets perform better than the SVM.

4.2. Performance of WASD Versus GPR

In statistics and machine learning, one powerful and versatile non-parametric regression technique is Gaussian process regression (GPR) [51]. When dealing with continuous data, where the relationship between the input and output variables may be complicated or unclear, it is quite useful. Time-series forecasting, optimization, and other fields can benefit greatly from the use of GPR, a Bayesian technique that can simulate prediction certainty [52]. The foundation of GPR is the idea of a Gaussian process, which is a group of random variables with a joint Gaussian distribution for any finite number of them. A distribution of functions can be used to conceptualize a Gaussian process.
In [27,28,29,41], four WASD neuronet models were compared with GPR models in time-series forecasting problems. In [39], a WASD neuronet model was compared with GPR models in a binary classification problem. In [30,42], two WASD neuronet models were compared with GPR models in regression problems. The experimental results from all of these sources demonstrated that WASD outperformed GPR in speed and accuracy. Some performance metrics comparing WASD and GPR are shown in Table 3 and originate from the previously stated references. As a result, the WASD neuronets outperform the GPR in the cases discussed there.

4.3. Performance of WASD Versus KNN

Statistical pattern recognition frequently uses the k-nearest neighbor (KNN) decision rule as a classification technique [53]. Sample prototypes, i.e., a training set of pattern vectors from that class, are provided for each class. To classify an unknown vector, its k nearest neighbors are selected from all prototype vectors, and a majority rule determines the class label. The value of k should be odd in order to prevent ties in class overlap regions. Despite the simplicity and elegance of this rule, it achieves low error rates in practice. Theoretically, given a sufficiently high number of prototype samples, the asymptotic error rate approaches the optimal Bayes error rate as k increases [54]. Because of this, any new classifier, including neuronets, is compared against the KNN rule, which has become a standard baseline.
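The rule itself is compact; a brute-force illustration (not tied to any of the cited implementations) classifies an unknown vector by a majority vote among its k nearest prototypes:

```python
import numpy as np

def knn_classify(x, prototypes, labels, k=3):
    """Brute-force k-nearest-neighbor rule: compute all distances to the
    prototype vectors, take the k closest, and return the majority label."""
    labels = np.asarray(labels)
    dists = np.linalg.norm(prototypes - x, axis=1)   # distances to all prototypes
    nearest = np.argsort(dists)[:k]                  # indices of the k nearest
    values, counts = np.unique(labels[nearest], return_counts=True)
    return values[np.argmax(counts)]                 # majority vote (odd k avoids ties)
```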
The main issue with applying the KNN decision rule is its computational complexity, which arises from conducting a large number of distance calculations. For realistic pattern space dimensions, it is difficult to find a variant of the rule that is much lighter than the brute-force method, which computes all the distances between the unknown pattern vector and the prototype vectors [55]. As a result, numerous nearest-neighbor classifier improvements have been proposed; these are frequently based on editing or pruning approaches [53], which allow for a drop in prototype count without sacrificing accuracy.
In [34,35,36,37,38,43,45], seven WASD neuronet models were compared with KNN models in binary classification problems. In [40,44], two WASD neuronet models were compared with KNN models in multiclass classification problems. All of these sources' experimental results showed that WASD performed faster and more accurately than KNN. Some performance metrics comparing WASD and KNN are shown in Table 4 and originate from the previously stated references. Consequently, in the instances covered there, the WASD neuronets perform better than the KNN.

4.4. Performance of WASD Versus KNB

One popular probabilistic classification algorithm is Naïve Bayes (NB). This algorithm is straightforward but effective, with many practical uses. In general, a classification algorithm may be categorized as probabilistic or non-probabilistic. Distribution approximation is the cornerstone of probabilistic data classification. Probabilistic data classification techniques work well because the majority of connected feature distributions are probabilistic in nature. The fundamental question of why the probabilistic approach performs well theoretically in a variety of real-world applications is addressed by Garg and Roth in [56]. A few of the probabilistic classifiers are multilayer perceptrons, logistic regression, and NB. Non-probabilistic classifiers include KNNs and SVMs.
NB holds a significant position in data categorization among the aforementioned methods for a variety of reasons, two of the primary ones being the algorithm's accuracy and simplicity. The NB algorithm, as its name suggests, is a well-known probabilistic classification method used in data analytics and machine learning, founded on the widely used Bayes theorem. The efficacy and resilience of the algorithm contribute to NB's appeal in addition to its simplicity [57]. The literature claims that NB is among the best algorithms for data mining [58].
In [33,34,35,37,38,45], six WASD neuronet models were compared with KNB models in binary classification problems. The experimental findings from all of these sources demonstrated that WASD outperformed KNB in terms of speed and accuracy. Some performance metrics comparing WASD and KNB are shown in Table 5 and originate from the previously stated references. As a result, the WASD neuronets outperform the KNB in the cases discussed there.

4.5. Performance of WASD Versus Tree

Tree-based algorithms are an essential part of machine learning, providing human-like, intuitive decision-making processes. These algorithms build decision trees, with a prediction or classification at the end of each branch representing a decision based on features. Tree-based algorithms are widely used in many different applications because they produce transparent and interpretable models by recursively partitioning the feature space [59].
Tree-based algorithms are a type of supervised machine learning model that builds decision trees to divide the feature space into regions, allowing for the hierarchical representation of intricate relationships between input variables and output labels [60]. Classification trees refer specifically to tree models in which the target variable can take a discrete set of values; in these tree structures, leaves stand for class labels and branches for feature conjunctions that lead to those class labels. Regression trees are decision trees in which the target variable can take continuous values, usually real numbers. More broadly, any type of object having pairwise dissimilarities, including categorical sequences, can be fitted with the regression tree concept [61].
Notable examples include decision trees, random forests, and gradient boosting approaches. Recursive binary splitting is performed based on criteria such as information gain or Gini impurity. Thanks to ensemble approaches, these algorithms are robust against overfitting, handle both classification and regression tasks, and produce a diverse set of trees. Their cost-effective, exploratory feature-importance analysis capability leads to a wide range of applications in domains such as natural language processing and healthcare.
In [28,29], two WASD neuronet models were compared with tree models in time-series forecasting problems. In [33,34,36,38], four WASD neuronet models were compared with tree models in binary classification problems. In [40,44], two WASD neuronet models were compared with tree models in multiclass classification problems. All of these sources’ experimental results showed that WASD performed faster and more accurately than tree models. Some performance metrics comparing WASD and tree models are shown in Table 6 and originate from the previously stated references. Consequently, in the instances covered there, the WASD neuronets perform better than the tree models.

4.6. Performance of WASD Versus Other Deep Learning Methods

The WASD neuronet has been compared with several other deep learning methods, such as logistic regression (LR), narrow neural networks (NNNs), long short-term memory (LSTM), and transformer (TRA) models. In particular, in [33], a WASD neuronet model was compared with an LR model in binary classification problems. In [40,44], two WASD neuronet models were compared with NNN models in multiclass classification problems. In [44], a WASD neuronet model was compared with a TRA model in a multiclass classification problem. In [42], a WASD neuronet model was compared with an LSTM model in regression problems. Some performance metrics comparing WASD and other deep learning methods are shown in Table 7 and originate from the previously stated references. All of these sources' experimental results showed that the WASD neuronets outperform all of the above deep learning methods.
Additionally, we offer Table 8 to show the effectiveness and popularity of each machine learning algorithm in order to give a brief comparison between WASD and these methods.
Finally, a limitation of WASD performance is the absence of comparisons with more advanced models like lightweight networks [62], which are intended to be computationally efficient and compact, making them appropriate for resource-constrained environments like mobile and embedded devices [63]. However, this will have to be ascertained by future research.

4.7. Performance of WASD per Application Domain

Owing to its exceptional training speed, accuracy, and generalization, WASD was used in a wide range of application domains, including business, social science, engineering, economics, and medicine. We will examine a few exemplary works in these application domains in this section.

4.7.1. Medical Application

Machine learning-assisted diagnostic techniques have been used to diagnose a number of diseases in recent years, with encouraging diagnostic outcomes reported in the literature [64,65]. In machine learning-assisted medical diagnosis, the model typically learns from the dataset and then determines if a certain disease is positive or negative [66,67]. Because of its effectiveness and simplicity, the WASD method is, among many machine learning algorithms, frequently used for the detection of flat feet and breast cancer and for glomerular filtration rate prediction.
A group of disorders known as cancers have erratic cell proliferation and have the potential to infiltrate or spread to other bodily parts. Breast cancer is the most frequent type of cancer in women and has the greatest death rate. Rapidly proliferating cells in breast cancer cause breast lumps known as tumors, which are classified into two classes: benign and malignant. The benign tumors do not spread to other bodily parts, while the malignant ones do; that is to say, breast cancer is indicated by a malignant tumor located in the breast. These tumors can be identified and diagnosed by physical examination or image processing [68]. Consequently, treatment that may increase the chance of survival is made possible by early cancer stage discovery. A number of artificial intelligence approaches were proposed in [69,70,71] to improve the precision of the classification of breast cancer as benign and malignant; another artificial intelligence approach was proposed in [72] for the classification of breast tissue as breast tumor tissue or healthy breast tissue. This is because the conventional approach to cancer diagnosis mainly relies on the expertise of doctors and their visual examinations, which is restricted due to human errors. In [13,43], two WASD models were applied on several diagnostic breast cancer datasets and demonstrated the models’ excellent efficiency and precision for predicting if the breast tissue is breast tumor tissue or healthy breast tissue. Figure 3 displays the performance of the WASD model on breast cancer prediction, as obtained from [43].
The great majority of outpatient foot disease cases involve flat feet [73]. The foot condition known as "flat feet" causes the foot's arch to collapse [74]. The literature lacks a unified standard for diagnosing flat feet; however, a crucial indicator is the height of the medial longitudinal arch [75]. Between the knees and the femur, the arch creates an elastic connection that significantly lessens the stress applied to the bone [76]. Insufficient cushioning frequently leads to increased stresses on the knees, ankles, hips, and feet, which hurts these body parts [77]. Timely diagnosis of flat foot is crucial due to its high occurrence and associated pain. In [14,15,16], three WASD models were applied to several diagnostic flat foot datasets and demonstrated the models' excellent efficiency and precision for classifying whether the foot of the subject is flat or not.
Chronic kidney disease (CKD) has emerged as a major public health concern that poses a serious threat to people’s lives and health along with societal development and environmental changes [78]. Given that the glomerular filtration rate (GFR) is a frequently utilized marker for the early identification of chronic kidney disease (CKD), a reliable, efficient, and accurate approach for estimating GFR is important for clinical applications [79]. A WASD model was used on a dataset of CKD patients in [8], where it was shown to have exceptional accuracy and efficiency in predicting the patients’ GFR.

4.7.2. Engineering Application

With the exponential rise of artificial intelligence, current electronic technology, and information technology, several significant theoretical research discoveries have been proposed and evolved, including software and hardware implementations of an artificial neuronet [80]. In general, the ability of an artificial neuronet to generalize noise and fault tolerance and the capacity to predict unseen test data with cost and time savings are all significant advantages over traditional data analysis [81]. Because of its effectiveness and simplicity, the WASD method is one of the numerous machine learning algorithms used in engineering for chaos and system stabilization control.
In particular, in recent years, managing chaos has gained popularity as a research topic in the nonlinear sciences. One common strategy for managing chaos is the Ott, Grebogi, and Yorke (OGY) method [82]. By exploiting the existence of an unstable fixed point nested within the chaotic attractor, small disturbances to the system can stabilize the chaotic orbit. One prominent method for controlling chaos is the delayed feedback control (DFC) method, which is based on the OGY method [83,84]. Its advantage is that it does not require knowledge of the position of an unstable fixed point or unstable periodic orbit. Neuronet controllers, in general, offer an alternative approach to managing chaos. These include fuzzy neuronet models for fractional-order chaotic system synchronization [85], convolutional neuronets for time-series prediction [86], recurrent neuronets for breaking chaos-based cryptography [87], SVM models for wind power forecasting [88], Markov chain neuronets for energy management in hybrid electric vehicles [89], and hybrid neuronets for bus trip time prediction [90]. A WASD model was used in [30] to control chaos, and it was shown to be quite competitive with, if not superior to, some of the most advanced deep learning techniques, as well as the conventional DFC and OGY approaches.
Conversely, since their invention several decades ago, proportional–integral–derivative (PID) controllers have been effectively applied in process-controlled industries such as light industry, metallurgy, machinery, and power [91]. The feedback-based control-loop technology known as the PID controller is widely used due to its ease of design and analysis, as well as its ease of implementation. Despite their straightforward design, ease of understanding, and versatility in controlling various control systems, PID controllers come with drawbacks [92]. The performance of a PID control system is set by tuning its parameters; a system with improper tuning will perform poorly or even become unstable [93]. In [42], a WASD-based method was introduced for replacing the PID controller in control systems with a neuronet feedback controller, leading to equivalent or even higher performance of the feedback-controlled system. Because integration and derivatives are not needed during the feedback process, the primary benefit of the WASD neuronet feedback controller over a PID controller is that it requires less computational power. Figure 4 displays the performance of the WASD models on chaos and system stabilization control, as obtained from [30,42].

4.7.3. Economics, Finance and Business Application

The finest practices in planning, financial management, and public administration have been implemented through the adoption of new, sophisticated computer techniques brought about by the advancement of information technology. Numerous scholarly investigations have indicated that the use of novel information and communication technology-driven methodologies is imperative for the reformation and improvement of public and financial management strategies [94]. The application of novel machine learning techniques has led to the adoption of enhanced intelligence methodologies for time-series forecasting problems in several scientific domains [95,96,97]. Because of its simplicity and efficiency, the WASD technique is one of the numerous machine learning algorithms that is frequently used in the business, finance, and economics domains.
Particularly, in the field of economics, the WASD method is widely applied in forecasting economic indicators such as the gross domestic product [39], balance of payments, listed shares, trade in goods, and liquidity conditions [41]. It has also been applied for predicting central bank intervention for interest rate stabilization [30] and for predicting the United States public debt [8] and the Spanish public debt [98]. In the field of finance, the WASD method is widely applied in predicting investors' views on stock prices [27,29] and forecasting financial time series [28]. In the field of business, the WASD method is widely applied in predicting employee attrition [33,35], credit card attrition [37], and customer churn [34], as well as classifying loan approval [38,45] and credit approval [36]. As obtained from [27,38], Figure 5 illustrates the effectiveness of the WASD models in predicting investor views on stock prices and classifying loan approval.

4.7.4. Social Science and Demography Application

Multiclass classification tasks are common in social science research; examples include characterizing occupational mobility [99], performing case–control studies in the healthcare industry [100], investigating the correlation between changes in occupational characteristics and cancer [101], and assessing the feasibility of teleworking in certain jobs [102]. In particular, a number of classification schemes serve as the basis for the methodical monitoring of any population, ranging from the global populace to the constituents of small and medium-sized enterprises or communities [103]. For a range of work-related subjects, including administrative usage, employment statistics, social sciences, international trade, and commerce, which includes surveillance and analytical methodologies, an individual's occupation (job title) or industry can be used to reflect occupational exposure [104]. For these purposes, there exist several national and international classification schemes that are updated on a regular basis [105]. As a result, in [40,44], two WASD models were applied to multiclass classification tasks and demonstrated excellent efficiency and precision for classifying job titles into the International Standard Classification of Occupations (ISCO) and the Occupational Information Network Standard Occupational Classification (O*NET-SOC). As obtained from [40,44], Figure 6 illustrates the effectiveness of the WASD models in classifying job titles and achieving crosswalks between job classification systems.
Demography, in turn, is the statistical analysis of human populations [106]. In a number of domains, including education and policy-making, the population has grown in importance. Reliable population forecasts could aid policymakers in arriving at justifiable decisions; thus, there is continued interest in this topic [107]. Population forecasting is difficult and highly unreliable [108], since numerous factors (such as the natural environment, policy, culture, and disease) have an impact on the population. Evidently, it is challenging to predict the population with accuracy and precision [109]. Consequently, in [8,11], eight WASD models were applied to regression tasks and demonstrated excellent efficiency and precision for predicting the Russian, Chinese, European, Oceanian, Northern American, Indian, Asian, and world populations.

5. Conclusions and Future Directions

We discussed the evolution and uses of WASD over the past eighteen years in this review. The theory and principle of WASD were presented, along with a comprehensive set of techniques, including metaheuristics, to enhance WASD performance for certain learning tasks. For artificial intelligence-related problems in a range of domains, WASD is frequently chosen over other cutting-edge methodologies because of its quick speed, strong generalization, and simplicity of implementation. We discovered a number of intriguing research topics and avenues for further investigation through the survey.
In the context of big data, WASD may become increasingly significant. Big data machine learning challenges can now be resolved thanks to the development of deep learning architectures and efficient training algorithms. Although deep learning has produced amazing results, it has a high computational complexity and frequently takes a long time to train. In classification tasks, WASD performs admirably because it may achieve high accuracy in a relatively short amount of time. To learn from massive data, it will be intriguing to mix WASD and deep learning. With minimal training time, the hybrid network might be able to produce outcomes comparable to those of deep learning. Consequently, one limitation of the WASD performance is the absence of comparisons with more advanced neuronet models, especially when it comes to comparisons in scenarios involving vast amounts of data. However, this will have to be ascertained by future research.
The theoretical rationale behind WASD never stops evolving. The most important concept in WASD is the use of power-activation functions in the hidden layer, which allows for universal approximation and speeds up training dramatically. However, explaining the role of randomness remains an open issue. Comparing WASD to other randomized methods, such as random forests, could be useful. Another exciting direction links WASD development to biological learning. Researchers have discovered that the brain's neurons likewise learn through random mechanisms [110]. Given that the human brain served as the primary inspiration for artificial neuronets, it will be intriguing to look into the relationship between biological learning and WASD.

Author Contributions

All authors (S.D.M., V.N.K., S.L., X.C. and B.L.) contributed equally. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. McCulloch, W.S.; Pitts, W.H. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 5, 115–133. [Google Scholar] [CrossRef]
  2. Rosenblatt, F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychol. Rev. 1958, 65, 386–408. [Google Scholar] [CrossRef] [PubMed]
  3. Hecht-Nielsen, R. Theory of the backpropagation neural network. Neural Netw. 1988, 1, 445–448. [Google Scholar] [CrossRef]
  4. Sun, Y.; Zhang, S.; Miao, C.; Li, J. Improved BP neural network for transformer fault diagnosis. J. China Univ. Min. Technol. 2007, 17, 138–142. [Google Scholar] [CrossRef]
  5. Li, J.; Cheng, J.; Shi, J.; Huang, F. Brief introduction of back propagation (BP) neural network algorithm and its improvement. In Advances in Computer Science and Information Engineering; Advances in Intelligent and Soft Computing; Springer: Berlin/Heidelberg, Germany, 2012; Volume 169, pp. 553–558. [Google Scholar] [CrossRef]
  6. Han, T.; Lu, Y.; Zhu, S.; Wu, Y.N. Alternating Back-Propagation for Generator Network. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Singh, S.P., Markovitch, S., Eds.; AAAI Press: Washington, DC, USA, 2017; pp. 1976–1984. [Google Scholar]
  7. Zhang, Y.; Wang, Y.; Li, W.; Chou, Y.; Zhang, Z. WASD algorithm with pruning-while-growing and twice-pruning techniques for multi-input Euler polynomial neural network. Int. J. Artif. Intell. Tools 2016, 25, 1650007. [Google Scholar] [CrossRef]
  8. Zhang, Y.; Chen, D.; Ye, C. Deep Neural Networks: WASD Neuronet Models, Algorithms, and Applications; CRC Press: Boca Raton, FL, USA, 2019. [Google Scholar]
  9. Zhang, Y.; Yu, X.; Xiao, L.; Li, W.; Fan, Z.; Zhang, W. Weights and structure determination of artificial neuronets. In Self-Organization: Theories and Methods; Nova Science: New York, NY, USA, 2013. [Google Scholar]
  10. Zhang, Y.; Chen, J.; Fu, S.; Xiao, L.; Yu, X. Weights and structure determination (WASD) of multiple-input hermit orthogonal polynomials neural network (MIHOPNN). In Proceedings of the 2012 24th Chinese Control and Decision Conference (CCDC), Taiyuan, China, 23–25 May 2012. [Google Scholar] [CrossRef]
  11. Zhang, Y.; Guo, D.; Luo, Z.; Zhai, K.; Tan, H. CP-activated WASD neuronet approach to Asian population prediction with abundant experimental verification. Neurocomputing 2016, 198, 48–57. [Google Scholar] [CrossRef]
  12. Zhang, Y.; Yin, Y.; Guo, D.; Yu, X.; Xiao, L. Cross-validation based weights and structure determination of Chebyshev-polynomial neural networks for pattern classification. Pattern Recognit. 2014, 47, 3414–3428. [Google Scholar] [CrossRef]
  13. Jin, L.; Huang, Z.; Chen, L.; Liu, M.; Li, Y.; Chou, Y.; Yi, C. Modified single-output Chebyshev-polynomial feedforward neural network aided with subset method for classification of breast cancer. Neurocomputing 2019, 350, 128–135. [Google Scholar] [CrossRef]
  14. Li, H.; Huang, Z.; Fu, J.; Li, Y.; Zeng, N.; Zhang, J.; Ye, C.; Jin, L. Modified weights-and-structure-determination neural network for pattern classification of flatfoot. IEEE Access 2019, 7, 63146–63154. [Google Scholar] [CrossRef]
  15. Li, H.; Jin, L.; Zhang, R.; Huang, Z.; Liu, M.; Zhang, J.; Shi, Y. A new type of neural network for assisting diagnosis of flatfoot in juveniles. In Proceedings of the 2019 Chinese Control Conference (CCC), Guangzhou, China, 27–30 July 2019; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar] [CrossRef]
  16. Chen, L.; Huang, Z.; Li, Y.; Zeng, N.; Liu, M.; Peng, A.; Jin, L. Weight and structure determination neural network aided with double pseudoinversion for diagnosis of flat foot. IEEE Access 2019, 7, 33001–33008. [Google Scholar] [CrossRef]
  17. Zhang, Y.; Liu, W.; Yi, C.; Li, W. Weights immediate determination for feed-forward neural network with Legendre orthogonal basis function. J. Dalian Marit. Univ. 2008, 34, 32–36. [Google Scholar]
  18. Zhang, Y.; Yin, Y.; Yu, X.; Guo, D.; Xiao, L. Pruning-included weights and structure determination of 2-input neuronet using Chebyshev polynomials of Class 1. In Proceedings of the 10th World Congress on Intelligent Control and Automation, Beijing, China, 6–8 July 2012. [Google Scholar] [CrossRef]
  19. Xiao, X.; Zhang, Y.; Jiang, X.; Zou, A. Weights-direct-determination and structure-adaptive-determination of feed-forward neural network activated with the 2nd-class Chebyshev orthogonal polynomials. J. Dalian Marit. Univ. 2009, 35, 80–84. [Google Scholar]
  20. Zhang, Y.; Yu, X.; Guo, D.; Yin, Y.; Zhang, Z. Weights and structure determination of multiple-input feed-forward neural network activated by Chebyshev polynomials of Class 2 via cross-validation. Neural Comput. Appl. 2014, 25, 1761–1770. [Google Scholar] [CrossRef]
  21. Zhang, Y.; Guo, D. Optimal-structure determination of power-activation feed-forward neural network. Comput. Eng. Appl. 2011, 47, 29–31. [Google Scholar]
  22. Zhang, Y.; Li, L.; Yang, Y.; Ruan, G. Euler neural network with its weight-direct-determination and structure-automatic-determination algorithms. In Proceedings of the 2009 Ninth International Conference on Hybrid Intelligent Systems, Shenyang, China, 12–14 August 2009; pp. 319–324. [Google Scholar] [CrossRef]
  23. Zhang, Y.; Ruan, G. Bernoulli neural network with weights directly determined and with the number of hidden-layer neurons automatically determined. In Advances in Neural Networks, Proceedings of the 6th International Symposium on Neural Networks, ISNN 2009, Wuhan, China, 26–29 May 2009; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar] [CrossRef]
  24. Zhang, Y.; Qu, L.; Liu, J.; Guo, D.; Li, M. Sine neural network (SNN) with double-stage weights and structure determination (DS-WASD). Soft Comput. 2016, 20, 211–221. [Google Scholar] [CrossRef]
  25. Zhang, Y.; Ding, S.; Liu, X.; Liu, J.; Mao, M. WASP neuronet activated by bipolar-sigmoid functions and applied to glomerular-filtration-rate estimation. In Proceedings of the 26th Chinese Control and Decision Conference (2014 CCDC), Changsha, China, 31 May–2 June 2014. [Google Scholar] [CrossRef]
  26. Zhang, Y.; Lao, W.; Jin, L.; Chen, T.; Liu, J. Growing-type WASD for power-activation neuronet to model and forecast monthly time series. In Proceedings of the 2013 10th IEEE International Conference on Control and Automation (ICCA), Hangzhou, China, 12–14 June 2013; pp. 1312–1317. [Google Scholar] [CrossRef]
  27. Simos, T.E.; Mourtas, S.D.; Katsikis, V.N. Time-varying Black-Litterman portfolio optimization using a bio-inspired approach and neuronets. Appl. Soft Comput. 2021, 112, 107767. [Google Scholar] [CrossRef]
  28. Mourtas, S.D. A weights direct determination neuronet for time-series with applications in the industrial indices of the federal reserve bank of St. Louis. J. Forecast. 2022, 14, 1512–1524. [Google Scholar] [CrossRef]
  29. Mourtas, S.D.; Katsikis, V.N. Exploiting the Black-Litterman framework through error-correction neural networks. Neurocomputing 2022, 498, 43–58. [Google Scholar] [CrossRef]
  30. Mourtas, S.D.; Katsikis, V.N.; Drakonakis, E.; Kotsios, S. Stabilization of stochastic exchange rate dynamics under central bank intervention using neuronets. Int. J. Inf. Technol. Decis. 2023, 22, 855–883. [Google Scholar] [CrossRef]
  31. Zeng, T.; Zhang, Y.; Li, Z.; Qiu, B.; Ye, C. Predictions of USA presidential parties from 2021 to 2037 using historical data through square wave-activated WASD neural network. IEEE Access 2020, 8, 56630–56640. [Google Scholar] [CrossRef]
  32. Zhang, Y.; Wang, R.; Lao, W.; Deng, J. Signum-function-activated WASD neuronet and its XOR application. Acta Sci. Nat. Univ. Sunyatseni 2014, 53, 1–7. [Google Scholar]
  33. Alharbi, H.; Alshammari, O.; Jerbi, H.; Simos, T.E.; Katsikis, V.N.; Mourtas, S.D.; Sahas, R.D. A Fresnel cosine integral WASD neural network for the classification of employee attrition. Mathematics 2023, 11, 1506. [Google Scholar] [CrossRef]
  34. Mourtas, S.D. Customer churn classification through a weights and structure determination neural network. ITM Web Conf. 2024, 59, 01004. [Google Scholar] [CrossRef]
  35. Katsikis, V.N.; Mourtas, S.D.; Sahas, R.; Balios, D. A weights direct determination neural network for credit card attrition analysis. In Machine Learning Approaches in Financial Analytics; Intelligent Systems Reference Library; Springer: Cham, Switzerland, 2024; Volume 254, pp. 325–346. [Google Scholar] [CrossRef]
  36. Mourtas, S.D. Credit approval classification through a WASD neuronet. ITM Web Conf. 2025, 72, 05006. [Google Scholar] [CrossRef]
  37. Mourtas, S.D.; Katsikis, V.N.; Sahas, R. Credit card attrition classification through neuronets. In Hybrid Methods of Modeling and Optimization in Complex Systems; Stanimirović, P.S., Stupina, A.A., Semenkin, E., Kovalev, I.V., Eds.; European Proceedings of Computers and Technology; European Publisher: Brussels, Belgium, 2023; Volume 1, pp. 86–93. [Google Scholar] [CrossRef]
  38. Simos, T.E.; Katsikis, V.N.; Mourtas, S.D. A multi-input with multi-function activated weights and structure determination neuronet for classification problems and applications in firm fraud and loan approval. Appl. Soft Comput. 2022, 127, 109351. [Google Scholar] [CrossRef]
  39. Mourtas, S.D.; Drakonakis, E.; Bragoudakis, Z. Forecasting the gross domestic product using a weight direct determination neural network. AIMS Math. 2023, 8, 24254–24273. [Google Scholar] [CrossRef]
  40. Lagios, D.; Mourtas, S.D.; Zervas, P.; Tzimas, G. A weights direct determination neural network for international standard classification of occupations. Mathematics 2023, 11, 629. [Google Scholar] [CrossRef]
  41. Simos, T.E.; Katsikis, V.N.; Mourtas, S.D. Multi-input bio-inspired weights and structure determination neuronet with applications in European Central Bank publications. Math. Comput. Simul. 2022, 193, 451–465. [Google Scholar] [CrossRef]
  42. Mourtas, S.D.; Katsikis, V.N.; Kasimis, C. Feedback control systems stabilization using a bio-inspired neural network. EAI Endorsed Trans. AI Robot. 2022, 1, 1–13. [Google Scholar] [CrossRef]
  43. Simos, T.E.; Katsikis, V.N.; Mourtas, S.D. A fuzzy WASD neuronet with application in breast cancer prediction. Neural Comput. Appl. 2021, 34, 3019–3031. [Google Scholar] [CrossRef]
  44. He, Y.; Dong, X.; Simos, T.E.; Mourtas, S.D.; Katsikis, V.N.; Lagios, D.; Zervas, P.; Tzimas, G. A bio-inspired weights and structure determination neural network for multiclass classification: Applications in occupational classification systems. AIMS Math. 2024, 9, 2411–2434. [Google Scholar] [CrossRef]
  45. Mourtas, S.D.; Katsikis, V.N.; Stanimirović, P.S.; Kazakovtsev, L.A. Credit and loan approval classification using a bio-inspired neural network. Biomimetics 2024, 9, 120. [Google Scholar] [CrossRef] [PubMed]
  46. Jiang, X.; Li, S. BAS: Beetle antennae search algorithm for optimization problems. arXiv 2017, arXiv:1710.10724. [Google Scholar] [CrossRef]
  47. Xu, X.; Deng, K.; Shen, B. A beetle antennae search algorithm based on Lévy flights and adaptive strategy. Syst. Sci. Control Eng. 2020, 8, 35–47. [Google Scholar] [CrossRef]
  48. Tharwat, A. Classification assessment methods. Appl. Comput. Inform. 2020, 17, 168–192. [Google Scholar] [CrossRef]
  49. McHugh, M.L. Interrater reliability: The kappa statistic. Biochem. Medica 2012, 22, 276–282. [Google Scholar] [CrossRef]
  50. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  51. Schulz, E.; Speekenbrink, M.; Krause, A. A tutorial on Gaussian process regression: Modelling, exploring, and exploiting functions. J. Math. Psychol. 2018, 85, 1–16. [Google Scholar] [CrossRef]
  52. Gershman, S.J.; Blei, D.M. A tutorial on Bayesian nonparametric models. J. Math. Psychol. 2012, 56, 1–12. [Google Scholar] [CrossRef]
  53. Devijver, P.A.; Kittler, J. Pattern Recognition: A Statistical Approach; Prentice Hall International: London, UK, 1982. [Google Scholar]
  54. Duda, R.O.; Hart, P.E. Pattern Classification and Scene Analysis; Wiley: New York, NY, USA, 1973. [Google Scholar]
  55. Laaksonen, J.; Oja, E. Classification with learning k-nearest neighbors. In Proceedings of the International Conference on Neural Networks (ICNN’96), Washington, DC, USA, 3–6 June 1996; IEEE: Piscataway, NJ, USA, 1996; Volume 3, pp. 1480–1483. [Google Scholar] [CrossRef]
  56. Garg, A.; Roth, D. Understanding probabilistic classifiers. In Machine Learning, Proceedings of the ECML 2001, Freiburg, Germany, 5–7 September 2001; Springer: Berlin/Heidelberg, Germany, 2001; Volume 2167, pp. 179–191. [Google Scholar] [CrossRef]
  57. Arar, Ö.F.; Ayan, K. A feature dependent Naive Bayes approach and its application to the software defect prediction problem. Appl. Soft Comput. 2017, 59, 197–209. [Google Scholar] [CrossRef]
  58. Settouti, N.; Bechar, M.E.A.; Chikh, M.A. Statistical comparisons of the top 10 algorithms in data mining for classification task. Int. J. Interact. Multimed. Artif. 2016, 4, 46–51. [Google Scholar] [CrossRef]
  59. Rokach, L.; Maimon, O. Top-down induction of decision trees classifiers—A survey. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2005, 35, 476–487. [Google Scholar] [CrossRef]
  60. Quinlan, J.R. Induction of decision trees. Mach. Learn. 1986, 1, 81–106. [Google Scholar] [CrossRef]
  61. Studer, M.; Ritschard, G.; Gabadinho, A.; Müller, N.S. Discrepancy analysis of state sequences. Sociol. Methods Res. 2011, 40, 471–510. [Google Scholar] [CrossRef]
  62. Zhan, S.; Huang, L.; Luo, G.; Zheng, S.; Gao, Z.; Chao, H.C. A review on federated learning architectures for privacy-preserving AI: Lightweight and secure cloud–edge–end collaboration. Electronics 2025, 14, 2512. [Google Scholar] [CrossRef]
  63. Cărunta, C.; Cărunta, A.; Popa, C.A. Heavy and lightweight deep learning models for semantic segmentation: A survey. IEEE Access 2025, 13, 17745–17765. [Google Scholar] [CrossRef]
  64. Liu, Y.; Zhang, Q.; Zhao, G.; Qu, Z.; Liu, G.; Liu, Z.; An, Y. Detecting diseases by human-physiological-parameter-based deep learning. IEEE Access 2019, 7, 22002–22010. [Google Scholar] [CrossRef]
  65. Escudero, J.; Ifeachor, E.; Zajicek, J.P.; Green, C.; Shearer, J.; Pearson, S. Machine learning-based method for personalized and cost-effective detection of Alzheimer’s disease. IEEE Trans. Biomed. Eng. 2013, 60, 164–168. [Google Scholar] [CrossRef] [PubMed]
  66. Chen, H.L.; Yang, B.; Liu, J.; Liu, D.Y. A support vector machine classifier with rough set-based feature selection for breast cancer diagnosis. Expert Syst. Appl. 2011, 38, 9014–9022. [Google Scholar] [CrossRef]
  67. Chen, H.L.; Wang, G.; Ma, C.; Cai, Z.N.; Liu, W.B.; Wang, S.J. An efficient hybrid kernel extreme learning machine approach for early diagnosis of Parkinson’s disease. Neurocomputing 2016, 184, 131–144. [Google Scholar] [CrossRef]
  68. Nguyen, H.; Hung, W.; Thornton, B.; Thornton, E.; Lee, W. Classification of microcalcifications in mammograms using artificial neural networks. In Proceedings of the 20th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Hong Kong, China, 1 November 1998; IEEE: Piscataway, NJ, USA, 1998; Volume 2, pp. 1006–1008. [Google Scholar] [CrossRef]
  69. Asri, H.; Mousannif, H.; Moatassime, H.A.; Noël, T. Using Machine Learning Algorithms for Breast Cancer Risk Prediction and Diagnosis. Procedia Comput. Sci. 2016, 83, 1064–1069. [Google Scholar] [CrossRef]
  70. Bharat, A.; Pooja, N.; Reddy, R.A. Using Machine Learning algorithms for breast cancer risk prediction and diagnosis. In Proceedings of the 2018 3rd International Conference on Circuits, Control, Communication and Computing (I4C), Bangalore, India, 3–5 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–4. [Google Scholar] [CrossRef]
  71. Ed-daoudy, A.; Maalmi, K. Breast cancer classification with reduced feature set using association rules and support vector machine. Netw. Model. Anal. Health Inform. Bioinform. 2020, 9, 34. [Google Scholar] [CrossRef]
  72. Matamala, N.; Vargas, M.T.; González-Cámpora, R.; Miñambres, R.; Arias, J.I.; Menéndez, P.; Andrés-León, E.; Gómez-López, G.; Yanowsky, K.; Calvete-Candenas, J.; et al. Tumor MicroRNA Expression Profiling Identifies Circulating MicroRNAs for Early Breast Cancer Detection. Clin. Chem. 2015, 61, 1098–1106. [Google Scholar] [CrossRef] [PubMed]
  73. Fabry, G. Static, axial, and rotational deformities of the lower extremities in children. Eur. J. Pediatr. 2010, 169, 529–534. [Google Scholar] [CrossRef] [PubMed]
  74. Alsuhaymi, A.; Almohammadi, F.; Alharbi, O.; Alawfi, A.; Olfat, M.; Alhazmi, O.; Khoshhal, K. Flatfoot among school-age children in Almadinah Almunawwarah: Prevalence and risk factors. J. Musculoskelet. Surg. Res. 2019, 3, 204. [Google Scholar] [CrossRef]
  75. Hazzaa, H.H.; El-Meniawy, G.H.; Ahmed, S.E.; Bedier, M.B. Correlation between gender and age and flat foot in obese children. Trends Appl. Sci. Res. 2015, 10, 207–215. [Google Scholar] [CrossRef]
  76. Fung, J.P.P.; Ismiarto, Y.D.; Mayasari, W. Relationship between nutritional status and flat foot in children. Althea Med J. 2017, 4, 152–156. [Google Scholar] [CrossRef]
  77. Pfeiffer, M.; Kotz, R.; Ledl, T.; Hauser, G.; Sluga, M. Prevalence of flat foot in preschool-aged children. Pediatrics 2006, 118, 634–639. [Google Scholar] [CrossRef] [PubMed]
  78. Mian, A.N.; Schwartz, G.J. Measurement and estimation of glomerular filtration rate in children. Adv. Chronic Kidney Dis. 2017, 24, 348–356. [Google Scholar] [CrossRef] [PubMed]
  79. Chen, S. Kinetic glomerular filtration rate in routine clinical practice—Applications and possibilities. Adv. Chronic Kidney Dis. 2018, 25, 105–114. [Google Scholar] [CrossRef] [PubMed]
  80. Abu-Abed, F.; Borisov, N. Drilling rig operation mode recognition by an artificial neuronet. E3S Web Conf. 2017, 21, 01019. [Google Scholar] [CrossRef]
  81. Huang, C.; Jia, X.; Zhang, Z. A modified back propagation artificial neural network model based on genetic algorithm to predict the flow behavior of 5754 aluminum alloy. Materials 2018, 11, 855. [Google Scholar] [CrossRef] [PubMed]
  82. Yi, L.; Liu, Y.; Yu, W. Combination of Improved OGY and Guiding Orbit Method for Chaos Control. J. Adv. Comput. Intell. Intell. Inform. 2019, 23, 847–855. [Google Scholar] [CrossRef]
  83. Sieber, J. Generic stabilizability for time-delayed feedback control. Proc. R. Soc. A 2016, 473, 20150593. [Google Scholar] [CrossRef] [PubMed]
  84. Yamashita, Y. Continuous-time delayed feedback control for chaotic systems using complex number observer. IFAC-PapersOnLine 2015, 48, 199–204. [Google Scholar] [CrossRef]
  85. Wang, L. PID Control System Design and Automatic Tuning Using MATLAB/Simulink; John Wiley & Sons: Hoboken, NJ, USA, 2020. [Google Scholar] [CrossRef]
  86. Cheng, W.; Wang, Y.; Peng, Z.; Ren, X.; Shuai, Y.; Zang, S.; Liu, H.; Cheng, H.; Wu, J. High-efficiency chaotic time series prediction based on time convolution neural network. Chaos Solitons Fractals 2021, 152, 111304. [Google Scholar] [CrossRef]
  87. Antonik, P.; Gulina, M.; Pauwels, J.; Massar, S. Using a reservoir computer to learn chaotic attractors, with applications to chaos synchronisation and cryptography. Phys. Rev. E 2018, 98, 012215. [Google Scholar] [CrossRef] [PubMed]
  88. Li, L.L.; Zhao, X.; Tseng, M.L.; Tan, R. Short-term wind power forecasting based on support vector machine with improved dragonfly algorithm. J. Clean. Prod. 2020, 242, 118447. [Google Scholar] [CrossRef]
  89. Zhou, M.; Chen, J.; Stanimirovic, P.S.; Katsikis, V.N.; Ma, H. Complex varying-parameter Zhang neural networks for computing core and core-EP inverse. Neural Process. Lett. 2020, 51, 1299–1329. [Google Scholar] [CrossRef]
  90. Wu, W.; Zheng, B. Improved recurrent neural networks for solving Moore-Penrose inverse of real-time full-rank matrix. Neurocomputing 2020, 418, 221–231. [Google Scholar] [CrossRef]
  91. Shamsuzzoha, M. (Ed.) PID Control for Industrial Processes; IntechOpen: London, UK, 2018. [Google Scholar] [CrossRef]
  92. Tabish, M.; Kalam, A.; Zayegh, A. Robot DC Servo Motor Parameters Estimation in a closed loop Using BAT Optimisation Algorithm. In Proceedings of the 2019 International Conference on Electrical, Communication, and Computer Engineering (ICECCE), Swat, Pakistan, 24–25 July 2019. [Google Scholar] [CrossRef]
  93. Sharma, K.; Palwalia, D.K. A modified PID control with adaptive fuzzy controller applied to DC motor. In Proceedings of the 2017 International Conference on Information, Communication, Instrumentation and Control (ICICIC), Indore, India, 17–19 August 2017. [Google Scholar] [CrossRef]
  94. Kouziokas, G.N. A new W-SVM kernel combining PSO-neural network transformed vector and Bayesian optimized SVM in GDP forecasting. Eng. Appl. Artif. Intell. 2020, 92, 103650. [Google Scholar] [CrossRef]
  95. Cepni, O.; Güney, I.E.; Swanson, N.R. Nowcasting and forecasting GDP in emerging markets using global financial and macroeconomic diffusion indexes. Int. J. Forecast. 2019, 35, 555–572. [Google Scholar] [CrossRef]
  96. Kim, J.; Kim, H.; Kim, H.; Lee, D.; Yoon, S. A comprehensive survey of deep learning for time series forecasting: Architectural diversity and open challenges. Artif. Intell. Rev. 2025, 58, 216. [Google Scholar] [CrossRef]
  97. Kong, X.; Chen, Z.; Liu, W.; Ning, K.; Zhang, L.; Marier, S.M.; Liu, Y.; Chen, Y.; Xia, F. Deep learning for time series forecasting: A survey. Int. J. Mach. Learn. Cybern. 2025. [Google Scholar] [CrossRef]
  98. Zhang, Y.; Xue, Z.; Xiao, M.; Ling, Y.; Ye, C. Ten-quarter projection for Spanish central government debt via WASD neuronet. In International Conference on Neural Information Processing, Proceedings of the ICONIP 2017, Guangzhou, China, 14–18 November 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 893–902. [Google Scholar]
  99. Groes, F.; Kircher, P.; Manovskii, I. The U-shapes of occupational mobility. Rev. Econ. Stud. 2015, 82, 659–692. [Google Scholar] [CrossRef]
  100. Khalis, M.; Charbotel, B.; Fort, E.; Chajes, V.; Charaka, H.; Rhazi, K.E. Occupation and female breast cancer: A case-control study in Morocco. Rev. d'Épidémiologie et de Santé Publique 2018, 66, S302. [Google Scholar] [CrossRef]
  101. Heinesen, E.; Imai, S.; Maruyama, S. Employment, job skills and occupational mobility of cancer survivors. J. Health Econ. 2018, 58, 151–175. [Google Scholar] [CrossRef] [PubMed]
  102. Generalao, I.N. Measuring the telework potential of jobs: Evidence from the international standard classification of occupations. Philipp. Rev. Econ. 2021, 58, 92–127. [Google Scholar] [CrossRef]
  103. Choi, S.B.; Yoon, J.H.; Lee, W. The modified international standard classification of occupations defined by the clustering of occupational characteristics in the Korean working conditions survey. Ind. Health 2020, 58, 132–141. [Google Scholar] [CrossRef] [PubMed]
  104. Züll, C. The coding of occupations. In GESIS Survey Guidelines; GESIS—Leibniz Institute for the Social Sciences: Mannheim, Germany, 2016. [Google Scholar] [CrossRef]
  105. Marc, D.T.; Dua, P.; Fenton, S.H.; Lalani, K.; Butler-Henderson, K. Occupational Classifications in the Health Information Disciplines. In The Health Information Workforce; Health Informatics; Springer: Cham, Switzerland, 2021; pp. 71–78. [Google Scholar] [CrossRef]
  106. Hopenhayn, H.; Neira, J.; Singhania, R. From population growth to firm demographics: Implications for concentration, entrepreneurship and the labor share. Econometrica 2022, 90, 1879–1914. [Google Scholar] [CrossRef]
  107. Lassila, J.; Valkonen, T.; Alho, J.M. Demographic forecasts and fiscal policy rules. Int. J. Forecast. 2014, 30, 1098–1109. [Google Scholar] [CrossRef]
  108. Booth, H. Demographic forecasting: 1980 to 2005 in review. Int. J. Forecast. 2006, 22, 547–581. [Google Scholar] [CrossRef]
  109. Alho, J.M. Forecasting demographic forecasts. Int. J. Forecast. 2014, 30, 1128–1135. [Google Scholar] [CrossRef]
  110. Chen, S.; Tan, D. A SA-ANN-based modeling method for human cognition mechanism and the PSACO cognition algorithm. Complexity 2018, 2018, 6264124. [Google Scholar] [CrossRef]
Figure 1. General layout of the WASD neuronet.
Figure 2. The relationship between beetle searching behavior and WASD.
Figure 3. Breast cancer prediction through WASD in [43]. (a) B: benign, M: malignant. (b) BT: breast tumors, HB: healthy breast.
Figure 4. Engineering applications using WASD neuronets in [30,42]. (a) Control of chaos in [30]. (b) System stabilization in [42].
Figure 5. Finance and business applications using WASD neuronets in [27,38]. (a) Investor’s view on stock price in [27]. (b) Loan approval in [38].
Figure 6. Social science applications using WASD neuronets in [40,44]. (a) Job title classification in [40]. (b) Crosswalk between job classif. systems in [44].
Table 1. Evolution of WASD across time.

Year | Ref. | Target Function Approx. | Task | Enhancement Methods | Application | Dataset Size
2008 | [17] | Legendre polynomial | Regression | Two WASD method | Mathematical formulas approximation | 4761 × 3
2009 | [19] | Chebyshev polynomial | Regression | SAD subalgorithm | Mathematical formulas approximation | 2116 × 3
2009 | [22] | Euler polynomial | Regression | TP and PWG techniques | Mathematical formulas approximation | 6000 × 4
2009 | [23] | Bernoulli polynomial | Regression | TP | Mathematical formulas approximation | 6561 × 4
2011 | [21] | K-Taylor polynomial | Regression | Power-activation, growing, and pruning techniques | Mathematical formulas approximation | 1331 × 4
2012 | [18] | Chebyshev polynomial | Regression | Pruning technique | Mathematical formulas approximation | 2116 × 3
2012 | [10] | Hermite polynomial | Regression | Power-activation method | Mathematical formulas approximation | 676 × 3
2013 | [26] | K-Taylor polynomial | Regression | Power-activation method | Monthly time series forecast | 129 × 1
2014 | [32] | Polynomial | Binary Classif. | Power signum activation method | XOR logic prediction | 100 × 2
2014 | [20] | Chebyshev polynomial | Regression | Multi-fold cross-validation | Gray image denoising | 65,536 × 1
2014 | [25] | K-Taylor polynomial | Regression | Power tan-sigmoid activation, PWG, TP, and LM methods | Glomerular-filtration-rate estimation | 1180 × 9
2014 | [12] | Chebyshev polynomial | Binary Classif. | Power-activation method | Several real world datasets | 4900 × 12
2016 | [24] | K-Taylor polynomial | Regression | Power sine activation and two-stage WASD method | Mathematical formulas approximation | 10,961 × 3
2019 | [13] | Chebyshev polynomial | Binary Classif. | Subset method | Breast cancer | 131 × 1926
2019 | [14] | Linear | Binary Classif. | Monotonous nonlinear activation and k-fold cross-validation method | Flatfoot diagnosis | 611 × 83
2019 | [15] | Linear | Binary Classif. | Arctan activation and k-fold cross-validation method | Flatfoot diagnosis | 611 × 83
2019 | [16] | Linear | Binary Classif. | Arctan activation and double pseudoinversion method | Flatfoot diagnosis | 611 × 83
2020 | [31] | Multivalued | Binary Classif. | Periodic square wave activation | USA presidential parties predictions | 42 × 2
2021 | [27] | K-Taylor polynomial | Regression | Speedy power-activation method | Stock market forecast | 180 × 1
2021 | [43] | K-Taylor polynomial | Binary Classif. | Fuzzy controller and power softplus activation techniques | Breast Cancer classification | 683 × 9
2022 | [28] | K-Taylor polynomial | Regression | Multi-function activation method | Industrial indices forecast | 262 × 1
2022 | [29] | K-Taylor polynomial | Regression | Multi-function activation method | Stock market forecast | 200 × 1
2022 | [30] | K-Taylor polynomial | Regression | Multi-function activation and pruning techniques | Stochastic exchange rate stabilization | 1000 × 5
2022 | [38] | K-Taylor polynomial | Binary Classif. | Multi-function activation and pruning techniques | Firm fraud and loan approval classif. | 150,000 × 11
2022 | [41] | K-Taylor polynomial | Regression | BAS and power-activation method | European Central Bank publ. forecast | 262 × 50
2022 | [42] | K-Taylor polynomial | Regression | BAS and power-activation method | Feedback control systems stabilization | 2000 × 3
2023 | [33] | Bernstein polynomial | Binary Classif. | Power Fresnel cosine integral activation and pruning techniques | Employee attrition classification | 23,436 × 37
2023 | [37] | K-Taylor polynomial | Binary Classif. | Power softplus activation and pruning techniques | Credit card attrition classification | 10,127 × 34
2023 | [39] | K-Taylor polynomial | Regression | Multi-function activation and pruning techniques | Gross domestic product forecast | 109 × 1
2023 | [40] | K-Taylor polynomial | Multiclass classif. | Power maxout activation method | Occupations classification | 1308 × 437
2024 | [34] | K-Taylor polynomial | Binary Classif. | Power GELU activation and pruning techniques | Customer churn classification | 4228 × 22
2024 | [35] | Bernstein polynomial | Binary Classif. | Power activation and pruning techniques | Credit card attrition classification | 10,127 × 34
2024 | [44] | K-Taylor polynomial | Multiclass classif. | BAS and power maxout activation method | Crosswalk of occupational classif. | 3369 × 1124
2024 | [45] | K-Taylor polynomial | Binary Classif. | BAS and multi-function activation method | Credit and loan approval classification | 120,269 × 11
2025 | [36] | K-Taylor polynomial | Binary Classif. | Power SiLU activation and pruning techniques | Credit approval classification | 471 × 13
Table 2. Performance statistics: WASD versus SVM.

Time-Series Forec. in [41]:
Statistic | WASD | SVM
MAE | 0.2541 | 736.9
RMSE | 0.3183 | 887.5
R² | 1 | 0.9959
TC | 0.1 s | 0.09 s

Binary Classif. in [45]:
Statistic | WASD | SVM
MAE | 0.1353 | 0.0768
Precision | 0.5971 | 0.4281
Accuracy | 0.8646 | 0.9231
F-score | 0.6935 | 0.5764

Reg. Problem in [30]:
Statistic | WASD | SVM
MAE | 0.0118 | 0.0249
RMSE | 0.0146 | 0.0324
R² | 0.9981 | 0.9899
TC | 0.03 s | 0.06 s
Table 3. Performance statistics: WASD versus GPR.

Time-Series Forec. in [27]:
Statistic | WASD | GPR
MAE | 3.5 | 3.28
RMSE | 4.36 | 5.1
R² | 0.91 | 0.88
TC | 2.2 s | 0.11 s

Time-Series Forec. in [39]:
Statistic | WASD | GPR
MAPE | 6.63 | 11.17
SMAPE | 3.177 | 16.08
R² | 0.6754 | −4.0 × 10⁻³
MDA | 0.6842 | 0.1578

Reg. Problem in [42]:
Statistic | WASD | GPR
MAPE | 0.2115 | 0.2164
RMSE | 0.1431 | 0.1438
R² | 0.9954 | 0.9954
TC | 0.1 s | 6 s
Table 4. Performance statistics: WASD versus KNN.

Binary Classific. in [35]:
Statistic | WASD | KNN
MAE | 0.0932 | 0.1501
Precision | 0.9706 | 0.9255
Accuracy | 0.9067 | 0.8499
F-score | 0.8018 | 0.7408

Binary Classific. in [43]:
Statistic | WASD | KNN
MAE | 0.4240 | 0.0424
Precision | 0.9835 | 0.9615
Accuracy | 0.9576 | 0.9576
TC | 0.05 s | 2.1 s

Multiclass Classific. in [44]:
Statistic | WASD | KNN
MAE | 0.4002 | 13.64
Precision | 0.9327 | 0.8312
Accuracy | 0.9327 | 0.8312
F-score | 0.9240 | 0.8126
Table 5. Performance statistics: WASD versus KNB.

Binary Classific. in [33]:
Statistic | WASD | KNB
MAE | 0.0945 | 0.1442
Precision | 0.9089 | 0.8137
Accuracy | 0.9054 | 0.8557
F-score | 0.9057 | 0.8494

Binary Classific. in [37]:
Statistic | WASD | KNB
MAE | 0.0529 | 0.1465
Precision | 0.9860 | 1
Accuracy | 0.9471 | 0.8535
F-score | 0.8738 | 0.6743

Binary Classific. in [34]:
Statistic | WASD | KNB
MAE | 0.1835 | 0.1925
Precision | 0.5942 | 0.5695
Accuracy | 0.8164 | 0.8074
F-score | 0.6930 | 0.6722
Table 6. Performance statistics: WASD versus tree.

Time-Series Forec. in [29]:
Statistic | WASD | Tree
MAE | 4.9547 | 4.7250
RMSE | 5.9633 | 6.0510
R² | 0.7191 | 0.5446
TC | 2.5 s | 0.57 s

Binary Classif. in [38]:
Statistic | WASD | Tree
MAE | 0.0156 | 0.0363
Precision | 0.9586 | 0.9724
Accuracy | 0.9844 | 0.9636
F-score | 0.9789 | 0.9656

Multiclass Classific. in [40]:
Statistic | WASD | Tree
MAE | 1.2580 | 31.7133
Precision | 0.9782 | 0.4782
Accuracy | 0.9782 | 0.4782
F-score | 0.9756 | 0.4900
Table 7. Performance statistics: WASD versus other deep learning methods.

Binary Classif. in [33]:
Statistic | WASD | LR
MAE | 0.0945 | 0.2345
Precision | 0.9089 | 0.8039
Accuracy | 0.9054 | 0.7654
F-score | 0.9057 | 0.7741

Multiclass Classif. in [44]:
Statistic | WASD | NNN | TRA
MAE | 0.1055 | 109.86 | 8.54
Precision | 0.9845 | 0.5317 | 0.5570
Accuracy | 0.9845 | 0.5317 | 0.5570
F-score | 0.9863 | 0.5565 | 0.5184

Reg. Problem in [42]:
Statistic | WASD | LSTM
MAPE | 0.2115 | 7.7211
RMSE | 0.1431 | 1.4768
R² | 0.9954 | −0.1182
TC | 0.1 s | 32 s
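For readers who wish to reproduce comparisons of the kind summarized in Tables 2–7, the sketch below shows one conventional way of computing the error and classification measures that appear in them (MAE, RMSE, MAPE, R², precision, accuracy, and F-score; see [48] for a fuller treatment of classification assessment). It is a minimal NumPy illustration on hypothetical data, not the evaluation code of the cited studies; the function names and example values are our own, and the time-consumption (TC) entries in the tables are simply elapsed times, which are hardware dependent and not computed here.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Forecasting/regression measures reported in Tables 2-7: MAE, RMSE, MAPE (%), R^2."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))                    # mean absolute error
    rmse = np.sqrt(np.mean(err ** 2))             # root-mean-square error
    mape = 100.0 * np.mean(np.abs(err / y_true))  # assumes no zero targets
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return {"MAE": mae, "RMSE": rmse, "MAPE": mape, "R2": r2}

def binary_classification_metrics(y_true, y_pred):
    """Classification measures reported in Tables 2-7 for 0/1 labels."""
    y_true = np.asarray(y_true, dtype=int)
    y_pred = np.asarray(y_pred, dtype=int)
    tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
    fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
    fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives
    tn = np.sum((y_pred == 0) & (y_true == 0))  # true negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / y_true.size
    f_score = 2.0 * precision * recall / (precision + recall)
    return {"Precision": precision, "Accuracy": accuracy, "F-score": f_score}

# Hypothetical usage on toy data
print(regression_metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]))
print(binary_classification_metrics([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 0, 1]))
```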
Table 8. Comparison between WASD and other machine learning algorithms.

Algorithm | Accuracy | Time Consumption | Cross-Domain Adaptation | Popularity Nowadays
WASD | Comparatively high | Low | Comparatively high | Popular
SVM | Comparatively high | Ordinary | Comparatively high | Popular
GPR | High | Ordinary | Ordinary | Relatively low
KNN | Ordinary | Ordinary | Ordinary | Ordinary
KNB | Comparatively high | Ordinary | Comparatively high | Comparatively popular
Tree | Ordinary | Ordinary | Ordinary | Ordinary
LR | Ordinary | Ordinary | Ordinary | Relatively low
NNN | Ordinary | Ordinary | High | Popular
TRA | Ordinary | Ordinary | High | Popular
LSTM | Ordinary | Ordinary | High | Popular
