Article

Credit and Loan Approval Classification Using a Bio-Inspired Neural Network

by Spyridon D. Mourtas 1,2,*, Vasilios N. Katsikis 1, Predrag S. Stanimirović 2,3 and Lev A. Kazakovtsev 2,4

1 Department of Economics, Mathematics-Informatics and Statistics-Econometrics, National and Kapodistrian University of Athens, Sofokleous 1 Street, 10559 Athens, Greece
2 Laboratory “Hybrid Methods of Modelling and Optimization in Complex Systems”, Siberian Federal University, Prospect Svobodny 79, 660041 Krasnoyarsk, Russia
3 Faculty of Sciences and Mathematics, University of Niš, Višegradska 33, 18000 Niš, Serbia
4 Institute of Informatics and Telecommunications, Reshetnev Siberian State University of Science and Technology, Prospect Krasnoyarskiy Rabochiy 31, 660037 Krasnoyarsk, Russia
* Author to whom correspondence should be addressed.
Biomimetics 2024, 9(2), 120; https://doi.org/10.3390/biomimetics9020120
Submission received: 23 January 2024 / Revised: 12 February 2024 / Accepted: 15 February 2024 / Published: 17 February 2024
(This article belongs to the Special Issue Nature-Inspired Metaheuristic Optimization Algorithms 2024)

Abstract

Numerous people apply for bank loans as a result of the banking industry’s expansion, but because banks hold only a limited amount of assets to lend, they can extend credit to only a limited number of applicants. The banking industry is therefore very interested in ways to reduce the risk involved in selecting safe applicants and thereby save considerable bank resources. These days, machine learning greatly reduces the amount of work needed to make such selections. Taking this into account, a novel weights and structure determination (WASD) neural network has been built to meet the two challenges of credit approval and loan approval, as well as to handle the unique characteristics of each. Motivated by the observation that WASD neural networks avoid the sluggish training speed and local-minima entrapment of conventional back-propagation neural networks, we created a bio-inspired WASD algorithm for binary classification problems (BWASD) that best adapts to the credit or loan approval model by utilizing the metaheuristic beetle antennae search (BAS) algorithm to improve the learning procedure of the WASD algorithm. Theoretical and experimental studies demonstrate superior performance and problem adaptability. Furthermore, we provide a complete MATLAB package to support our experiments, together with full implementation and extensive installation instructions.

1. Introduction

Since the turn of the century, banks and other financial institutions have been granting loans. Given that credit risk emerges mostly when borrowers are unable or unwilling to pay, rigorous background screening of a customer prior to loan approval is essential for sustaining such a business [1,2]. Keep in mind that the amount of non-performing loans in the economy matters because these loans weigh on bank profits and tie up valuable resources, limiting banks’ ability to grant new loans [3,4]. Problems in the banking sector can swiftly spread to other sections of the economy, jeopardizing employment and economic growth [5,6]. As a result, there is an urgent need to develop better models for determining whether or not to grant a loan [7,8].
These days, emerging technologies like machine learning and natural language processing greatly reduce the amount of work needed to do such tasks [9,10,11]. Machine learning tasks involving classification are typically found in several fields, such as engineering [12,13], medicine [14], finance and economics [15,16]. Classification presents a significant challenge in these fields.
Neural networks (NNs), which are mostly used for classification and regression challenges, have been effectively implemented in several fields, including medicine, engineering, economics, social science research and finance. In engineering, they are widely used for alloy behavior analysis [17], solar system measurements [18], and feedback control system stabilization [19]. Additionally, NNs are frequently used in medical diagnostics to diagnose flat foot [20], diabetic retinopathy [21], and various cancers, such as breast cancer [14] and lung cancer [22]. In the fields of economics and finance, NNs are typically used for macroeconomic factor prediction [23,24], time series forecasting [25,26], and portfolio optimization [27]. Furthermore, NNs have been effectively used in social science research, typically for multiclass classification challenges such as classifying occupations [28], assessing the teleworking potential of jobs [29], and defining occupational mobility [30].
The primary goal of this work is to create a model for predicting loan acceptance utilizing novel NNs enhanced with state-of-the-art metaheuristic optimization techniques. We will use a feed-forward NN that can handle binary classification tasks in order to achieve this. A training algorithm called weights and structure determination (WASD) will be used in place of the well-known back-propagation approach for training feed-forward NNs. Unlike the back-propagation technique, which iteratively adjusts the network’s weights, the WASD approach uses the weights direct determination (WDD) procedure to compute the optimal set of weights directly. In the end, this reduces computational complexity by preventing the system from becoming trapped in local minima [31]. Taking into account the multi-input with multi-function activated WASD (MWASD) algorithm for binary classification proposed in [15], the metaheuristic beetle antennae search (BAS) algorithm is paired with the MWASD concept in this work to further improve the performance and structure of WASD-based NNs. In this way, we present a bio-inspired WASD (BWASD) algorithm for binary classification challenges to train a 3-layer feed-forward NN. It is important to note that BAS, which can perform efficient global optimization, has recently gained significant traction in several scientific domains, such as finance [32], robotics [33,34], engineering [35,36], and machine learning [19]. To better address these tasks, BAS has undergone a number of alterations, such as the binary [37] and the semi-integer [38] adaptations. Specifically, in machine learning, the WASD and BAS algorithms have been combined in [32] to improve the performance and structure of WASD-based NNs for regression-related challenges. Unlike [32], which uses the BAS-WASD combination only to determine the best structure of the NN in regression-related situations, our approach utilizes BWASD:
  • to identify the ideal structure of the NN;
  • to find the optimal activation function of each hidden layer neuron in binary classification tasks;
  • to perform cross-validation auto-adjustment (i.e., to optimize the ratio between the fitting and validation sets).
Results from four experiments demonstrate that the BWASD model outperforms several of the most advanced models of MATLAB’s classification learner in every way.
The primary ideas of this work can be summed up as follows:
  • A novel 3-layer feed-forward bio-inspired WASD NN for binary classifications, termed BWASD, is presented.
  • The BWASD algorithm merges the BAS and MWASD processes to further improve the performance and structure of the WASD based NNs.
  • Taking into account four loan approval datasets, the performance of the MWASD and BWASD models is contrasted.
  • Several of the most advanced models of MATLAB’s classification learner are compared with the BWASD model in four experiments.
The structure of the paper is described in the following sections. Section 2 provides an overview of the WDD procedure for binary classification tasks. The 3-layer feed-forward BWASD NN structure, the BWASD algorithm and the whole process for training and the procedure for testing the BWASD NN model are presented in Section 3. Section 4 shows and discusses the findings of four loan approval datasets using the BWASD, the MWASD and several of the most cutting-edge models of MATLAB’s classification learner. Final remarks are provided in Section 5.

2. A Novel Weights Direct Determination (WDD) Process for Binary Classification

The WDD process is an essential component of any WASD technique since it eliminates the need for laborious, time-consuming, and frequently erroneous repeated computations to obtain the appropriate weights matching the current hidden layer structure. The WDD approach is claimed to offer reduced computational complexity and greater speed compared to traditional weight determination methods, while also resolving certain related issues [31]. It is important to note that real numbers are the sole type of input data that the WDD accepts. Prior to being fed into the NN model, the data must additionally be normalized to the range [0.25, 0.5]; in this way the NN can manage over-fitting. If necessary, this can be achieved using the linear transformation illustrated in [26].
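For illustration, the following is a minimal MATLAB sketch of such a rescaling; it is a generic linear mapping onto [0.25, 0.5] written for this paper’s setting, not necessarily the exact transformation of [26], and the function name is ours:

% Sketch: linearly rescale each column of X to [0.25, 0.5] before it is fed
% to the NN model. (Illustrative; the transformation of [26] may differ.)
function Xn = normalizeToQuarterHalf(X)
    lo = min(X, [], 1);                    % per-column minimum
    hi = max(X, [], 1);                    % per-column maximum
    span = max(hi - lo, eps);              % guard against constant columns
    Xn = 0.25 + 0.25 * (X - lo) ./ span;   % map [lo, hi] onto [0.25, 0.5]
end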
In this section, comprehensive explanations of the important scientific and theoretical underpinnings supporting the creation of the BWASD NN are provided. Before anything else, it is important to define some of the key symbols used in this paper: transposition is indicated by $(\cdot)^T$; the factorial of $\eta$ by $\eta!$; pseudoinversion by $(\cdot)^{\dagger}$; the elementwise exponential by $e^{(\cdot)}$; and the round function by $R(\cdot)$.
The theorem of the approximation of the Taylor polynomial (ATP) from [39] is restated below.
Theorem 1.
When a target function $Q(\cdot)$ has a continuous $(\rho+1)$-order derivative on the interval $[\lambda_1, \lambda_2]$ and $\rho$ is a nonnegative integer, it holds that
$$Q(\eta) = U_\rho(\eta) + V_\rho(\eta), \quad \eta \in [\lambda_1, \lambda_2], \tag{1}$$
where $V_\rho(\eta)$ and $U_\rho(\eta)$, respectively, denote the error term and the $\rho$-order ATP of $Q(\eta)$.
Assume that $Q^{(\alpha)}(\theta)$ is the value of the $\alpha$-order derivative of $Q(\eta)$ at the point $\theta$. The approximation of $Q(\eta)$ appears below:
$$Q(\eta) \approx U_\rho(\eta) = \sum_{\alpha=0}^{\rho} \frac{Q^{(\alpha)}(\theta)}{\alpha!}\,(\eta - \theta)^{\alpha}, \quad \theta \in [\lambda_1, \lambda_2]. \tag{2}$$
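As a quick numerical illustration of (2), the following MATLAB snippet (ours, not part of the toolbox) approximates the target $Q(\eta) = e^{\eta}$ about $\theta = 0$, where every derivative of the exponential equals 1:

% Sketch: rho-order ATP of Q(eta) = exp(eta) about theta = 0, per Eq. (2).
rho = 5; eta = 0.4;
U = sum(eta .^ (0:rho) ./ factorial(0:rho));    % sum of eta^alpha / alpha!
fprintf('U_rho = %.8f, exp(eta) = %.8f, |error| = %.1e\n', ...
        U, exp(eta), abs(exp(eta) - U));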
Proposition 1.
For approximating multivariable functions, one can apply Theorem 1. Let $Q(\eta_1, \eta_2, \dots, \eta_v)$ be the target function with $v$ variables and continuous $(\rho+1)$-order partial derivatives in a neighborhood of the origin $(0, \dots, 0)$. The $\rho$-order ATP $U_\rho(\eta_1, \eta_2, \dots, \eta_v)$ about the origin appears below:
$$U_\rho(\eta_1, \eta_2, \dots, \eta_v) = \sum_{h=0}^{\rho} \sum_{\alpha_1 + \cdots + \alpha_v = h} \frac{\eta_1^{\alpha_1} \cdots \eta_v^{\alpha_v}}{\alpha_1! \cdots \alpha_v!}\, \frac{\partial^{\alpha_1 + \cdots + \alpha_v} Q(0, \dots, 0)}{\partial \eta_1^{\alpha_1} \cdots \partial \eta_v^{\alpha_v}}, \tag{3}$$
where $\alpha_1, \alpha_2, \dots, \alpha_v$ are nonnegative integers.
Consider the input $C = [C_1, C_2, \dots, C_m] \in \mathbb{R}^{1 \times m}$ and the target $J \in \mathbb{R}$. Based on the multi-input NNs described in [31], the nonlinear function shown next can be utilized to define the relationship between the input variables $C_1, C_2, \dots, C_m$ and the NN’s output target $J$:
$$Q(C_1, C_2, \dots, C_m) = J. \tag{4}$$
The map between the $\rho$-order ATP $U_\rho(C_1, C_2, \dots, C_m)$ and (4), in line with Proposition 1, appears below:
$$U_\rho(C_1, C_2, \dots, C_m) = \sum_{h=0}^{n-1} k_h w_h, \tag{5}$$
where the power activation function is denoted by $k_h = A_h(C_1, C_2, \dots, C_m) \in \mathbb{R}^{1 \times m}$; the weight vector associated with $k_h$ is denoted by $w_h \in \mathbb{R}^{m}$; $n$ denotes the number of hidden layer neurons; and $h$ acts as the power value.
When $r \in \mathbb{N}$ samples are taken, the target becomes $J \in \mathbb{R}^{r}$ and the input matrix becomes $C = [C_1, C_2, \dots, C_m] \in \mathbb{R}^{r \times m}$, where $C_j \in \mathbb{R}^{r}$ for $j = 1, \dots, m$. Then, with $k_{i,h} = A_h(C_{i,1}, C_{i,2}, \dots, C_{i,m}) \in \mathbb{R}^{1 \times m}$ for the $i$-th sample, the weight vector $W$ and the input-activation matrix $K$ appear below:
$$K = \begin{bmatrix} k_{1,0} & k_{1,1} & \cdots & k_{1,n-1} \\ k_{2,0} & k_{2,1} & \cdots & k_{2,n-1} \\ \vdots & \vdots & \ddots & \vdots \\ k_{r,0} & k_{r,1} & \cdots & k_{r,n-1} \end{bmatrix} \in \mathbb{R}^{r \times mn}, \qquad W = \begin{bmatrix} w_0 \\ w_1 \\ \vdots \\ w_{n-1} \end{bmatrix} \in \mathbb{R}^{mn}. \tag{6}$$
Afterwards, instead of employing the iterative weight training techniques of traditional NNs, the weights of the $\rho$-order ATP NN are obtained by executing the WDD methodology laid out below [39]:
$$W = K^{\dagger} J. \tag{7}$$
Furthermore, Table 1 presents the four power elementwise activation functions extracted from [26], which are suggested for use in binary classification tasks.
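To make the WDD step concrete, the following MATLAB sketch assembles an input-activation matrix K block by block from the activation functions of Table 1 and computes the weights by pseudoinversion, as in (6) and (7). The function name wddTrain, the per-block layout, and the reading of the activation formulas are ours, not the API of the published toolbox:

% Sketch of Eqs. (6)-(7): build K from the Table 1 activations and obtain
% the weights by the WDD pseudoinversion. Illustrative only.
function [W, K] = wddTrain(C, J, N, c)
    % C: r-by-m normalized input; J: r-by-1 target;
    % N: power value of each hidden neuron; c: activation numbering (Table 1)
    acts = { @(X) X, ...                       % 1: power
             @(X) exp(X) ./ (exp(X) + 1), ...  % 2: power sigmoid
             @(X) exp(-X), ...                 % 3: power inverse exponential
             @(X) log(1 + exp(X)) };           % 4: power softplus
    K = [];
    for h = 1:numel(N)
        f = acts{c(h)};
        K = [K, f(C .^ N(h))];                 % append the h-th activation block
    end
    W = pinv(K) * J;                           % Eq. (7): W = K†J
end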

3. The Bio-Inspired WASD (BWASD) Model

This section features the 3-layer feed-forward NN structure and the BWASD algorithm.

3.1. The Neural Network Structure

Figure 1 illustrates the architecture of the 3-layer feed-forward NN. Specifically, after receiving the normalized input values $C_1, C_2, \dots, C_m$ in Layer 1 (i.e., the input layer), the NN passes them to the appropriate neurons of Layer 2 with weights all equal to 1. Note that Layer 2 contains a maximum of $n$ active neurons. Moreover, the WDD process is used to acquire the weights $W_j$, $j = 1, 2, \dots, n-1$, of the connections that link Layer 2 and Layer 3 (i.e., the output layer). The predictions $\hat{J}$ are computed using the following formula:
$$\hat{J} = K W. \tag{8}$$
Finally, Layer 3 has a single active neuron that utilizes the elementwise function outlined below:
$$B(\hat{J}_i) = \begin{cases} 1, & \hat{J}_i \geq 0.375 \\ 0, & \hat{J}_i < 0.375 \end{cases}, \quad i = 1, 2, \dots, r, \tag{9}$$
where the numbers 0 and 1 stand for false and true, respectively, so that the related input $C$ of the first layer is identified as true or false. Also, notice that the number 0.375 is the midpoint of the interval [0.25, 0.5].
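In MATLAB terms, the prediction step amounts to the following two lines (K and W as returned by a WDD routine such as the wddTrain sketch above):

% Sketch of Eqs. (8)-(9): raw predictions, then thresholding at 0.375,
% the midpoint of the normalization interval [0.25, 0.5].
Jhat   = K * W;                   % Eq. (8)
labels = double(Jhat >= 0.375);   % Eq. (9): 1 = true, 0 = false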

3.2. The BWASD Algorithm

The BWASD algorithm, which incorporates the BAS algorithm [40], is responsible for training the NN model. It should be noted that beetles use both of their antennae to search for food, depending on how strong a scent each antenna detects (Figure 2). This tendency is mimicked by the optimal solution finder of the BAS algorithm, an approach that has enabled state-of-the-art optimization techniques (see [41,42,43]). BWASD mimics the behavior of the beetle to find the optimal number of hidden layer neurons in the NN, their power values, and the optimal activation function from Table 1 for each hidden layer neuron, while also optimizing the ratio between the fitting and validation sets (i.e., cross-validation auto-adjustment).
First, an objective function must be defined. Consider the training set $X_{tr} \in \mathbb{R}^{r \times m}$ with $r$ samples and their target $J_{tr} \in \mathbb{R}^{r}$. The parameter $p \in [0.3, 0.95] \subset \mathbb{R}$ determines the ratio between the fitting and the validation set. Particularly, the first $r_1 = R(p\,r)$ samples of $X_{tr}$ are used for fitting the model and the last $r_2 = r - r_1$ samples for validation. That is, $X_{fi} \in \mathbb{R}^{r_1 \times m}$ is the fitting set and $X_{va} \in \mathbb{R}^{r_2 \times m}$ is the validation set, while $J_{fi} \in \mathbb{R}^{r_1}$ and $J_{va} \in \mathbb{R}^{r_2}$ are their targets, respectively. Keep in mind that, because the validation set is separate from the fitting set, validation helps ensure that the model’s success generalizes beyond the training set. Then, the matrix $K$ is constructed according to Algorithm 1 proposed in [15], which makes use of the power activation functions in Table 1. For the fitting set $X_{fi}$, the weights $W$ of the NN are directly obtained by (7) using $K(1{:}r_1)$ and $J_{fi}$. For the validation set $X_{va}$, the NN predictions $\hat{J}_{va}$ are obtained by (8) using $K(r_1{+}1{:}r)$ and $W$, and the mean absolute error (MAE) between the target $J_{va}$ and $\hat{J}_{va}$ is gauged via the next formula:
$$E = \frac{1}{r_2} \sum_{k=1}^{r_2} \left| J_k - \hat{J}_k \right|. \tag{10}$$
It should be noted that the MAE is widely used in machine learning as a loss function for classification challenges, counting errors between paired observations that represent the same situation. Assume the vector $x = [p, c^T, N^T]^T$, where $N$ is a vector that includes the hidden layer neurons’ power values and $c$ is a vector that contains the numbering of the optimal activation function from Table 1 for each hidden layer neuron. In Algorithm 1, the previously indicated procedure is expressed as an objective function.
Algorithm 1 Objective function.
Require: The vector x, the input data X and the target J.
 1: procedure Ob_func(X, J, x)
 2:    Split x into p, c and N, and set r to the number of rows of X.
 3:    Keep only the nonnegative elements of N and, in c, only their corresponding activation function numbering.
 4:    Calculate the matrix K through Algorithm 1 proposed in [15] under the N and c.
 5:    Set $r_1 = R(p\,r)$, $r_2 = r - r_1$, $X_{fi} = X(1{:}r_1, :)$, $J_{fi} = J(1{:}r_1)$, $X_{va} = X(r_1{+}1{:}r, :)$ and $J_{va} = J(r_1{+}1{:}r)$.
 6:    Through the WDD method, calculate W utilizing $K(1{:}r_1)$ and $J_{fi}$.
 7:    Through (8), calculate $\hat{J}_{va}$ utilizing $K(r_1{+}1{:}r)$ and W.
 8:    Through (10), assign to E the MAE calculated between $\hat{J}_{va}$ and $J_{va}$.
 9: end procedure
Ensure: E, the error.
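A possible MATLAB rendering of Algorithm 1, reusing the wddTrain sketch of Section 2, is given below; the variable names are ours:

% Sketch of Algorithm 1: decode the beetle position x = [p; c; N], fit on the
% first R(p*r) samples via WDD, and return the validation MAE of Eq. (10).
function E = obFunc(X, J, x)
    nmax = (numel(x) - 1) / 2;
    p = x(1); c = x(2:nmax+1); N = x(nmax+2:end);
    keep = N >= 0;                          % step 3: keep nonnegative powers
    c = c(keep); N = N(keep);
    r  = size(X, 1);
    r1 = round(p * r);                      % step 5: fitting/validation split
    [~, K] = wddTrain(X, J, N, c);          % step 4: K over all r samples
    W = pinv(K(1:r1, :)) * J(1:r1);         % step 6: WDD on the fitting part
    Jva_hat = K(r1+1:end, :) * W;           % step 7: Eq. (8), validation part
    E = mean(abs(J(r1+1:end) - Jva_hat));   % step 8: Eq. (10), validation MAE
end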
Second, by adopting the beetle’s behavior, the objective function in Algorithm 1 is minimized. Consider the vector $x = [p, c^T, N^T]^T$, where the parameter $p \in [0.3, 0.95]$, and $c$ is a vector of variable size whose elements take the integer values 1, 2, 3 or 4; these four numbers correspond to the activation functions presented in Table 1. Also, the vector $N$ has the same size as $c$ and its entries take the integer values $0, 1, \dots, n_{\max}-1$ or $n_{\max}$. Take note that $n_{\max}$ is the maximum number of hidden layer neurons that the user has set. These $n_{\max}+1$ values represent the power of the activation function for every neuron in the hidden layer. For instance, $c = [2, 4]^T$ and $N = [9, 6]^T$ indicate the presence of two hidden layer neurons, the first of which operates under the power of 9 using the power sigmoid activation function and the second under the power of 6 using the power softplus activation function.
The beetle’s position is represented by the previously described vector $x$ in our method, and the odor concentration at position $x$ is represented by the objective function $f(x)$ in Algorithm 1. The minimal value of $f(x)$ acts as a link to the source of the odor. In addition, we use the notation $x_t$ with $t = 1, 2, 3, \dots, t_{\max}$, where $t_{\max}$ indicates the maximum number of iterations specified by the user, to denote the position of the beetle at the $t$-th moment. As a result, we set the lower boundary $\mathrm{LB} = [0.3, \mathbf{1}^T, \mathbf{0}^T]^T$, where $\mathbf{1}, \mathbf{0} \in \mathbb{R}^{n_{\max}}$ denote the all-ones and all-zeros vectors, respectively, and the upper boundary $\mathrm{UB} = [0.95, 4 \cdot \mathbf{1}^T, n_{\max} \cdot \mathbf{1}^T]^T$. In order to guarantee that $\mathrm{LB} \leq x \leq \mathrm{UB}$, the following elementwise function is used for each element $j = 1, \dots, 2 n_{\max} + 1$:
$$g(x_j) = \begin{cases} \mathrm{UB}_j, & x_j > \mathrm{UB}_j \\ x_j, & \mathrm{LB}_j \leq x_j \leq \mathrm{UB}_j \\ \mathrm{LB}_j, & x_j < \mathrm{LB}_j. \end{cases} \tag{11}$$
Thus, a model of searching behavior is defined by the beetle’s chaotic search path in the following manner:
$$h = \frac{\gamma}{\epsilon + \lVert \gamma \rVert}, \tag{12}$$
where $\gamma \in \mathbb{R}^{2 n_{\max} + 1}$ is a random vector of $2 n_{\max} + 1$ entries and $\epsilon = 2^{-52}$. The following formulas are used to create the left ($x_L$) and right ($x_R$) antennae, which simulate the searching behavior of the beetle’s antennae:
$$x_R = g\big(R(x_t + \eta_t h)\big), \qquad x_L = g\big(R(x_t - \eta_t h)\big), \tag{13}$$
where the sensing width of the antennae, $\eta_t$, corresponds to the exploitation capacity at the $t$-th moment. Furthermore, take into account the potential best solution $x_P$:
$$x_P = g\Big(R\big(x_t + \xi_t \eta_t\, h\, \mathrm{sign}(f(x_L) - f(x_R))\big)\Big), \tag{14}$$
where the notation $\xi_t$ represents a step size that accounts for the rate of convergence as $t$ increases across the search. Next, the detecting behavior is stated as follows:
$$x_{t+1} = \begin{cases} x_P, & f(x_P) \leq f(x_t) \\ x_t, & f(x_P) > f(x_t). \end{cases} \tag{15}$$
Finally, the following describes the update rules for $\eta$ and $\xi$:
$$\eta_{t+1} = 0.991\, \eta_t + 0.001, \qquad \xi_{t+1} = 0.991\, \xi_t. \tag{16}$$
It is important to remember that the initial conditions for the previously given technique are as follows:
$$x_0 = \left[ \frac{1}{q}, \frac{2}{q}, \dots, \frac{2 n_{\max} + 1}{q} \right]^T, \tag{17}$$
where $q = R\big((2 n_{\max} + 1)/2\big)$.
After that, on the complete training data set, the BWASD algorithm finds and outputs the optimal ratio $p^*$ between the fitting and validation sets, the optimal weights $W$, the optimal power values $N^*$, and the optimal activation function $c^*$ of each hidden layer neuron. The full workflow of the BWASD algorithm is illustrated in the diagram of Figure 3a.
Once the optimal structure of the BWASD NN model of Figure 1 and its optimal weights and parameters $p^*$, $N^*$, $c^*$ have been found, we feed the testing set $X_{te}$ to the model to obtain the predictions $B(\hat{J}_{te})$ via (9). The diagram presented in Figure 3b illustrates the comprehensive process for modeling and prediction with the BWASD NN model.
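Putting (11)-(17) together, a minimal MATLAB sketch of the beetle search loop might look as follows; Xtr and Jtr are hypothetically named training data, and the parameter values are those used in Section 4. This illustrates the formulas above and is not the published toolbox code:

% Sketch of the BAS loop of Eqs. (11)-(17).
nmax = 10; eta = 5; xi = 5; tmax = 21;            % eta_0 = xi_0 = 5, Section 4
f  = @(x) obFunc(Xtr, Jtr, x);                    % objective of Algorithm 1
d  = 2*nmax + 1;                                  % dimension of the position x
LB = [0.3; ones(nmax, 1); zeros(nmax, 1)];        % lower bounds on [p; c; N]
UB = [0.95; 4*ones(nmax, 1); nmax*ones(nmax, 1)]; % upper bounds on [p; c; N]
g  = @(x) min(max(x, LB), UB);                    % Eq. (11): elementwise clamp
q  = round(d / 2);
x  = (1:d)' / q;                                  % Eq. (17): initial position
for t = 1:tmax
    gam = rand(d, 1);                             % random vector gamma
    h   = gam / (eps + norm(gam));                % Eq. (12): search direction
    xR  = g(round(x + eta*h));                    % Eq. (13): right antenna
    xL  = g(round(x - eta*h));                    % Eq. (13): left antenna
    xP  = g(round(x + xi*eta*h*sign(f(xL) - f(xR))));  % Eq. (14): candidate
    if f(xP) <= f(x)                              % Eq. (15): detecting behavior
        x = xP;
    end
    eta = 0.991*eta + 0.001;                      % Eq. (16)
    xi  = 0.991*xi;
end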

4. Experiments

In this section, four datasets are used to conduct four different experiments on credit and loan approval. In these experiments, the performance of the BWASD NN is examined and compared with several top-performing models of MATLAB’s classification learner, namely kernel naive Bayes (KNB), fine tree (FTR), linear support vector machine (LSVM), and fine k-nearest neighbors (FKNN). The MWASD NN model developed in [15] is also compared because BWASD is an enhanced version of MWASD. For the BWASD model, we have used $\eta_0 = \xi_0 = 5$, $t_{\max} = 21$, and $n_{\max} = 10$; for the MWASD model, we have used $n_{\max} = 10$ and $p = 0.8$; and for the MATLAB classification models, we have used the default values. The entire development and implementation of the ideas and computational techniques discussed in Section 2, Section 3 and Section 4 is available at the following GitHub link: https://github.com/SDMourtas/BWASD (accessed on 10 January 2024). Be aware that the MATLAB toolbox includes implementation and installation guidance.

4.1. Dataset 1

Customer information entered on an online loan application form is included in the dataset used in this experiment. The dataset, which we will refer to as DA1, is available at the following link: https://www.kaggle.com/datasets/ninzaami/loan-predication?resource=download (accessed on 10 January 2024). It is important to mention that DA1 was provided by a business that wants to automate the real-time loan qualifying process using customer information. DA1 contains 471 numerical samples under 13 variables once the data preprocessing algorithm described in [15] is applied. The training set is constructed using the first 236 samples, while the testing set is constructed using the final 235 samples.
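Assuming the preprocessed DA1 is a 471-by-13 numeric array whose last column holds the 0/1 approval target (an assumption about the data layout made purely for illustration), the split reads:

% Sketch of the DA1 split: first 236 rows for training, last 235 for testing.
% DA1 is assumed (hypothetically) to carry the 0/1 target in its last column.
Xtr = DA1(1:236, 1:end-1);    Jtr = DA1(1:236, end);
Xte = DA1(237:471, 1:end-1);  Jte = DA1(237:471, end);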
The BWASD training error is shown in Figure 4a, while the NNs’ classification results for the training and testing sets are displayed in Figure 4b,c, respectively. Figure 4a shows that the validation error is higher than the fitting error, and that BWASD requires 20 iterations to optimize the NN structure. Particularly, BWASD returned $N^* = [3, 1, 3, 4]$ with $c^* = [3, 2, 2, 1]$ and $p^* = 0.3$ for the specific run, while MWASD returned $N^* = [0, 1]$ with $c^* = [1, 3]$. That is, the NN trained under BWASD has 4 hidden layer neurons, while the NN trained under MWASD has 2. Figure 4b shows that FKNN has the best ratio of correct to incorrect classifications on the training set, whereas KNB has the worst. Figure 4c shows that BWASD has the best ratio of correct to incorrect classifications on the testing set, while FKNN and KNB have the worst.

4.2. Dataset 2

The dataset used in this experiment includes results based on credit rating algorithms as well as the possibility that someone may experience financial distress in the next two years. It is important to mention that banks usually utilize credit scoring algorithms to decide whether to approve a loan based on an estimation of the likelihood of default. The dataset, which we will refer to as DA2, is available at the following link: https://www.kaggle.com/brycecf/give-me-some-credit-dataset?select=cs-training.csv (accessed on 10 January 2024). DA2 contains 120,269 numerical samples under 11 variables once the data preprocessing algorithm described in [15] is applied. The training set is constructed using the first 9179 samples, while the testing set is constructed using the final 111,090 samples.
The BWASD training error is shown in Figure 5a, while the NNs’ classification results for the training and testing sets are displayed in Figure 5b,c, respectively. Figure 5a shows that the validation error is higher than the fitting error, and that BWASD requires 10 iterations to optimize the NN structure. Particularly, BWASD returned $N^* = [0, 0, 3, 3, 4, 6]$ with $c^* = [4, 3, 3, 4, 2, 3]$ and $p^* = 0.9$ for the specific run, while MWASD returned $N^* = [0, 1, 2, 3, 4]$ with $c^* = [2, 3, 4, 3, 4]$. That is, the NN trained under BWASD has 6 hidden layer neurons, while the NN trained under MWASD has 5. Figure 5b shows that FKNN has the best ratio of correct to incorrect classifications on the training set, whereas KNB has the worst. Figure 5c shows that LSVM has the best ratio of correct to incorrect classifications on the testing set and BWASD the second best, while KNB has the worst.

4.3. Dataset 3

Commercial banks receive numerous credit card applications. Many of them are rejected for a variety of reasons, such as excessive credit record requests, low income, or large loan balances. Because time is money, manually assessing these applications is tedious, error-prone, and time-consuming. Fortunately, machine learning can be used to automate this operation, and most commercial banks already do so. The dataset used in this experiment includes the results of credit card applications. The dataset, which we will refer to as DA3, is available at the following link: https://www.kaggle.com/datasets/samuelcortinhas/credit-card-approval-clean-data (accessed on 10 January 2024). DA3 contains 689 numerical samples under 16 variables once the data preprocessing algorithm described in [15] is applied. The training set is constructed using the first 345 samples, while the testing set is constructed using the final 344 samples.
The BWASD training error is shown in Figure 6a, while the NNs’ classification results for the training and testing sets are displayed in Figure 6b,c, respectively. Figure 6a shows that the validation error is mostly higher than the fitting error, and that BWASD requires 21 iterations to optimize the NN structure. Particularly, BWASD returned $N^* = [0, 0, 2, 2, 2, 5, 5]$ with $c^* = [2, 1, 1, 4, 4, 2, 2]$ and $p^* = 0.95$ for the specific run, while MWASD returned $N^* = [0, 1, 2]$ with $c^* = [4, 3, 4]$. That is, the NN trained under BWASD has 7 hidden layer neurons, while the NN trained under MWASD has 3. Figure 6b shows that FKNN has the best ratio of correct to incorrect classifications on the training set, whereas KNB has the worst. Figure 6c shows that BWASD has the best ratio of correct to incorrect classifications on the testing set, while KNB has the worst.

4.4. Dataset 4

Banks require decision-making guidelines regarding which loans to approve or deny in order to reduce their own losses. Loan managers take into account an applicant’s socioeconomic and demographic profile before making a determination about the loan application. The dataset used in this experiment includes the results of loan applications based on applicants’ socioeconomic and demographic profiles. The dataset, which we will refer to as DA4, is available at the following link: https://www.kaggle.com/datasets/mpwolke/cusersmarildownloadsgermancsv (accessed on 10 January 2024). DA4 contains 999 numerical samples under 20 variables once the data preprocessing algorithm described in [15] is applied. The training set is constructed using the first 500 samples, while the testing set is constructed using the final 499 samples.
The BWASD training error is shown in Figure 7a, while the NNs’ classification results for the training and testing sets are displayed in Figure 7b,c, respectively. Figure 7a shows that the validation error is higher than the fitting error, and that BWASD requires 2 iterations to optimize the NN structure. Particularly, BWASD returned $N^* = [0, 2, 3, 3, 2, 4]$ with $c^* = [4, 1, 1, 1, 1, 4]$ and $p^* = 0.95$ for the specific run, while MWASD returned $N^* = [0, 1, 2]$ with $c^* = [4, 3, 3]$. That is, the NN trained under BWASD has 6 hidden layer neurons, while the NN trained under MWASD has 3. Figure 7b shows that FKNN has the best ratio of correct to incorrect classifications on the training set, whereas KNB has the worst. Figure 7c shows that BWASD has the best ratio of correct to incorrect classifications on the testing set, while FTR has the worst.

4.5. Performance Measures and Discussion

The models’ statistics for DA1-DA4 on the testing set are shown in Table 2, Table 3, Table 4 and Table 5, respectively. The MAE, true positive (TP), true negative (TN), false positive (FP), false negative (FN), precision, recall, accuracy and F-score are the performance gauges considered in this analysis. Consult [44] for further information and a detailed examination of these gauges. Additionally, the accuracy of the classification models is statistically evaluated using the mid-p-value McNemar test in Table 6, Table 7 and Table 8.
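For reference, these gauges can be computed from the testing-set labels as in the sketch below, where yhat and y are hypothetical 0/1 vectors of predicted and true labels; the sketch uses raw counts, whereas Tables 2-5 report normalized rates:

% Sketch of the performance gauges of [44] from 0/1 label vectors yhat and y.
TP = sum(yhat == 1 & y == 1);   FP = sum(yhat == 1 & y == 0);
TN = sum(yhat == 0 & y == 0);   FN = sum(yhat == 0 & y == 1);
precision = TP / (TP + FP);
recall    = TP / (TP + FN);
accuracy  = (TP + TN) / numel(y);
Fscore    = 2 * precision * recall / (precision + recall);
MAE       = mean(abs(y - yhat));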
In Table 2, BWASD has the best MAE, accuracy and F-score, and the second best TP, FP, precision and recall. FTR has the best TN, FN and recall, and the worst TP, FP, and precision. The results of MWASD and LSVM are identical, and they have the best TP, FP, and precision. Additionally, KNB has the worst MAE, TN, FN, recall, accuracy and F-score. According to the aforementioned statistics, the performance of BWASD is the best, while that of KNB is the poorest.
In Table 3, LSVM has the best MAE, TN, FN, recall and accuracy, whereas KNB has the best TP, FP and precision, and FTR has the best F-score. BWASD has the second best MAE, TN, FN, recall and accuracy, but better TP, FP, precision and F-score than LSVM. Additionally, KNB has the worst MAE, TN, FN, recall and accuracy, whereas LSVM has the worst TP, FP, precision and F-score. According to the aforementioned statistics, the performance of BWASD is the best overall, while that of KNB is the poorest.
In Table 4, BWASD has the best MAE, TN, FN, recall, accuracy and F-score, while LSVM has the best TP, FP and precision. Additionally, KNB has the worst statistical measurements, FKNN has the second worst MAE, TP, FP, precision, recall, accuracy and F-score, whereas LSVM has the second worst TN and FN. According to the aforementioned statistics, the performance of BWASD is the best, while that of KNB is the poorest.
In Table 5, BWASD has the best MAE, accuracy and F-score, the second best recall, and the third best TP, FP, TN, FN and precision. KNB has the best TP, FP and precision, FTR has the best TN and FN, and MWASD has the best recall. Additionally, FTR has the worst MAE, TP, FP, precision, accuracy and F-score, whereas KNB has the worst TN, FN and recall. According to the aforementioned statistics, the performance of BWASD is the best, while that of FTR is the poorest.
The BWASD model is compared to all other models in Table 6, Table 7 and Table 8 using the mid-p-value McNemar test to statistically evaluate the classification models’ accuracies. The McNemar test is a form of homogeneity test that applies to contingency tables and is a distribution-free statistical hypothesis test. The test determines whether the binary classification models’ accuracies differ or whether one binary classification model outperforms the other. We perform the McNemar test using the MATLAB function testcholdout, as described in [45,46]. It is important to note that the simulation experiments in [45,46,47] show that this test has good statistical power and achieves nominal coverage. The statistical analysis in this subsection follows the recommendations in [47]. According to the marginal homogeneity null hypothesis, each outcome’s two marginal probabilities are equal. In our investigation, the null hypothesis claims that the accuracies of the predicted class labels from the NN model Z and from BWASD are equal, where Z refers to MWASD, FKNN, FTR, LSVM or KNB. Additionally, we consider the following three alternative hypotheses (AH):
  • AH1: For predicting the class labels, the NN model Z and the BWASD have unequal accuracies.
  • AH2: For predicting the class labels, the NN model Z is more accurate than the BWASD.
  • AH3: For predicting the class labels, the NN model Z is less accurate than the BWASD.
In this way, we conduct three McNemar tests, one under each alternative hypothesis. Each test determines whether or not to reject the null hypothesis at the 5% significance level. Keep in mind that an outcome is considered statistically significant if it allows us to reject the null hypothesis, and that lower p-values (usually ≤ 0.05) are regarded as stronger evidence against the null hypothesis.
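Concretely, the three tests can be run with MATLAB’s testcholdout, where YhatZ, YhatB and Yte (our names) hold the predicted labels of model Z, the predicted labels of BWASD, and the true testing-set labels:

% The three mid-p McNemar tests at the 5% significance level.
[h1, p1] = testcholdout(YhatZ, YhatB, Yte, 'Test', 'midp', 'Alternative', 'unequal'); % AH1
[h2, p2] = testcholdout(YhatZ, YhatB, Yte, 'Test', 'midp', 'Alternative', 'greater'); % AH2
[h3, p3] = testcholdout(YhatZ, YhatB, Yte, 'Test', 'midp', 'Alternative', 'less');    % AH3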
Table 6 shows the McNemar test results for AH1. In DA1, DA3 and DA4, when comparing BWASD to Z = {FKNN, FTR, KNB}, a p-value of almost zero from the McNemar test indicates that there is enough evidence to reject the null hypothesis; in other words, the prediction accuracies of the Z and BWASD models are not equal. On the other hand, when comparing BWASD to Z = {MWASD, LSVM}, a p-value far from zero indicates that there is not enough evidence to reject the null hypothesis; in other words, the prediction accuracies of the Z and BWASD models are equal. In DA2, when comparing BWASD to Z = {MWASD, FKNN, FTR, LSVM, KNB}, a p-value of almost zero indicates that there is enough evidence to reject the null hypothesis; in other words, the prediction accuracies of the Z and BWASD models are not equal.
Table 7 shows the McNemar test results for AH2. In DA1, DA3 and DA4, when comparing BWASD to Z = {MWASD, FKNN, FTR, KNB, LSVM}, a p-value of one or almost one from the McNemar test indicates that there is not enough evidence to reject the null hypothesis; in other words, there is no evidence that the NN model Z is more accurate than BWASD. In DA2, when comparing BWASD to Z = {MWASD, FKNN, FTR, KNB}, a p-value of one or almost one likewise indicates that there is not enough evidence to reject the null hypothesis. However, when comparing BWASD to Z = {LSVM}, a p-value of zero indicates that there is enough evidence to reject the null hypothesis; in other words, the LSVM model is more accurate than BWASD.
Table 8 shows the McNemar test results for AH3. In DA1, DA3 and DA4, when comparing BWASD to Z = {FKNN, FTR, KNB}, a p-value of almost zero from the McNemar test indicates that there is enough evidence to reject the null hypothesis; in other words, the Z model is less accurate than BWASD. On the other hand, when comparing BWASD to Z = {MWASD, LSVM}, a p-value far from zero indicates that there is not enough evidence to reject the null hypothesis. In DA2, when comparing BWASD to Z = {MWASD, FKNN, FTR, KNB}, a p-value of almost zero indicates that there is enough evidence to reject the null hypothesis; in other words, the Z model is less accurate than BWASD. On the other hand, when comparing BWASD to Z = {LSVM}, a p-value of one indicates that there is not enough evidence to reject the null hypothesis.
Therefore, based on the statistics in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8, we conclude that BWASD is the best-performing model in DA1-DA4. In broad terms, the BWASD model consistently provided strong results in the classification of loan approval tasks, and it performs rather well when compared to traditional NN models. Therefore, the BWASD model can be beneficial for various businesses, including businesses looking to automate the evaluation of loan applications based on customer information, banks evaluating credit card applications, banks evaluating loan applications based on an estimated likelihood of default, and banks evaluating loan applications based on the socioeconomic and demographic profiles of applicants.

5. Conclusions

This work presents a bio-inspired WASD NN for binary classification challenges, named BWASD. The findings of experiments on four loan approval datasets demonstrate that the BWASD model performs better than the MWASD model and several cutting-edge models of MATLAB’s classification learner. Therefore, the BWASD model has proven to be an excellent candidate for determining whether or not to approve a loan. It is significant to note that, due to restrictions imposed by the WDD method, the BWASD NN model can only be trained and tested on real-valued numerical input data. Future research will therefore focus on properly adjusting and applying it to other binary classification challenges across multiple scientific disciplines.
In this context, the BWASD model could be modified for use in the engineering domain to analyze alloy behavior or data from solar systems, as shown in [17,18]. The BWASD model could also be adjusted for use in the medical domain to analyze diagnostic data, as demonstrated in [21]. Finally, integrating BAS variants, such as those in [37,38], may help to improve the accuracy of the BWASD model even further.

Author Contributions

All authors (S.D.M., V.N.K., P.S.S. and L.A.K.) contributed equally. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Ministry of Science and Higher Education of the Russian Federation (Grant No. 075-15-2022-1121).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. This data can be found here: https://www.kaggle.com/datasets/ninzaami/loan-predication?resource=download (accessed on 10 January 2024); https://www.kaggle.com/brycecf/give-me-some-credit-dataset?select=cs-training.csv (accessed on 10 January 2024); https://www.kaggle.com/datasets/samuelcortinhas/credit-card-approval-clean-data (accessed on 10 January 2024); https://www.kaggle.com/datasets/mpwolke/cusersmarildownloadsgermancsv (accessed on 10 January 2024).

Acknowledgments

Predrag Stanimirović is supported by the Ministry of Education, Science and Technological Development, Republic of Serbia, Grant 451-03-47/2023-01/200124, and by the Science Fund of the Republic of Serbia (No. 7750185, Quantitative Automata Models: Fundamental Problems and Applications—QUAM).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kesraoui, A.; Lachaab, M.; Omri, A. The impact of credit risk and liquidity risk on bank margins during economic fluctuations: Evidence from MENA countries with a dual banking system. Appl. Econ. 2022, 54, 4113–4130. [Google Scholar] [CrossRef]
  2. Li, Z.; Liang, S.; Pan, X.; Pang, M. Credit risk prediction based on loan profit: Evidence from Chinese SMEs. Res. Int. Bus. Financ. 2024, 67, 102155. [Google Scholar] [CrossRef]
  3. Naili, M.; Lahrichi, Y. The determinants of banks’ credit risk: Review of the literature and future research agenda. Int. J. Financ. Econ. 2022, 27, 334–360. [Google Scholar] [CrossRef]
  4. Zhang, X.; Yu, L. Consumer credit risk assessment: A review from the state-of-the-art classification algorithms, data traits, and learning methods. Expert Syst. Appl. 2024, 237, 121484. [Google Scholar] [CrossRef]
  5. Abdelaziz, H.; Rim, B.; Helmi, H. The interactional relationships between credit risk, liquidity risk and bank profitability in MENA region. Glob. Bus. Rev. 2022, 23, 561–583. [Google Scholar] [CrossRef]
  6. Huang, Y.; Li, Z.; Qiu, H.; Tao, S.; Wang, X.; Zhang, L. BigTech credit risk assessment for SMEs. China Econ. Rev. 2023, 81, 102016. [Google Scholar] [CrossRef]
  7. Bhatore, S.; Mohan, L.; Reddy, Y.R. Machine learning techniques for credit risk evaluation: A systematic literature review. J. Bank. Financ. Technol. 2020, 4, 111–138. [Google Scholar] [CrossRef]
  8. Pang, M.; Li, Z. A novel profit-based validity index approach for feature selection in credit risk prediction. AIMS Math. 2024, 9, 974–997. [Google Scholar] [CrossRef]
  9. Singh, V.; Yadav, A.; Awasthi, R.; Partheeban, G.N. Prediction of modernized loan approval system based on machine learning approach. In Proceedings of the 2021 International Conference on Intelligent Technologies (CONIT), Hubli, India, 25–27 June 2021; pp. 1–4. [Google Scholar] [CrossRef]
  10. Lohani, B.P.; Trivedi, M.; Singh, R.J.; Bibhu, V.; Ranjan, S.; Kushwaha, P.K. Machine learning based model for prediction of loan approval. In Proceedings of the 2022 3rd International Conference on Intelligent Engineering and Management (ICIEM), London, UK, 27–29 April 2022; pp. 465–470. [Google Scholar] [CrossRef]
  11. Weng, C.; Huang, C. A hybrid machine learning model for credit approval. Appl. Artif. Intell. 2021, 35, 1439–1465. [Google Scholar] [CrossRef]
  12. Bigdeli, B.; Pahlavani, P.; Amirkolaee, H.A. An ensemble deep learning method as data fusion system for remote sensing multisensor classification. Appl. Soft Comput. 2021, 110, 107563. [Google Scholar] [CrossRef]
  13. Sun, Y.; Zhang, J.; Li, G.; Wang, Y.; Sun, J.; Jiang, C. Optimized neural network using beetle antennae search for predicting the unconfined compressive strength of jet grouting coalcretes. Int. J. Numer. Anal. Methods Geomech. 2019, 43, 801–813. [Google Scholar] [CrossRef]
  14. Raj, R.J.S.; Shobana, S.J.; Pustokhina, I.V.; Pustokhin, D.A.; Gupta, D.; Shankar, K. Optimal feature selection-based medical image classification using deep learning model in internet of medical things. IEEE Access 2020, 8, 58006–58017. [Google Scholar] [CrossRef]
  15. Simos, T.E.; Katsikis, V.N.; Mourtas, S.D. A multi-input with multi-function activated weights and structure determination neuronet for classification problems and applications in firm fraud and loan approval. Appl. Soft Comput. 2022, 127, 109351. [Google Scholar] [CrossRef]
  16. Zeng, T.; Zhang, Y.; Li, Z.; Qiu, B.; Ye, C. Predictions of USA presidential parties from 2021 to 2037 using historical data through square wave-activated WASD neural network. IEEE Access 2020, 8, 56630–56640. [Google Scholar] [CrossRef]
  17. Huang, C.; Jia, X.; Zhang, Z. A modified back propagation artificial neural network model based on genetic algorithm to predict the flow behavior of 5754 aluminum alloy. Materials 2018, 11, 855. [Google Scholar] [CrossRef] [PubMed]
  18. Premalatha, N.; Arasu, A.V. Prediction of solar radiation for solar systems by using ANN models with different back propagation algorithms. J. Appl. Res. Technol. 2016, 14, 206–214. [Google Scholar] [CrossRef]
  19. Mourtas, S.D.; Katsikis, V.N.; Kasimis, C. Feedback control systems stabilization using a bio-inspired neural network. EAI Endorsed Trans. AI Robot. 2022, 1, e5. [Google Scholar] [CrossRef]
  20. Chen, L.; Huang, Z.; Li, Y.; Zeng, N.; Liu, M.; Peng, A.; Jin, L. Weight and structure determination neural network aided with double pseudoinversion for diagnosis of flat foot. IEEE Access 2019, 7, 33001–33008. [Google Scholar] [CrossRef]
  21. Gayathri, S.; Krishna, A.K.; Gopi, V.P.; Palanisamy, P. Automated binary and multiclass classification of diabetic retinopathy using Haralick and multiresolution features. IEEE Access 2020, 8, 57497–57504. [Google Scholar] [CrossRef]
  22. Daliri, M.R. A hybrid automatic system for the diagnosis of lung cancer based on genetic algorithm and fuzzy extreme learning machines. J. Med Syst. 2012, 36, 1001–1005. [Google Scholar] [CrossRef]
  23. Zhang, Y.; Guo, D.; Luo, Z.; Zhai, K.; Tan, H. CP-activated WASD neuronet approach to Asian population prediction with abundant experimental verification. Neurocomputing 2016, 198, 48–57. [Google Scholar] [CrossRef]
  24. Zhang, Y.; Xue, Z.; Xiao, M.; Ling, Y.; Ye, C. Ten-quarter projection for Spanish central government debt via WASD neuronet. In Proceedings of the International Conference on Neural Information Processing, Guangzhou, China, 14–18 November 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 893–902. [Google Scholar]
  25. Mourtas, S.D. A weights direct determination neuronet for time-series with applications in the industrial indices of the federal reserve bank of St. Louis. J. Forecast. 2022, 14, 1512–1524. [Google Scholar] [CrossRef]
  26. Mourtas, S.D.; Drakonakis, E.; Bragoudakis, Z. Forecasting the gross domestic product using a weight direct determination neural network. AIMS Math. 2023, 8, 24254–24273. [Google Scholar] [CrossRef]
  27. Leung, M.F.; Wang, J. Cardinality-constrained portfolio selection based on collaborative neurodynamic optimization. Neural Networks 2022, 145, 68–79. [Google Scholar] [CrossRef] [PubMed]
  28. Matbouli, Y.T.; Alghamdi, S.M. Statistical machine learning regression models for salary prediction featuring economy wide activities and occupations. Information 2022, 13, 495. [Google Scholar] [CrossRef]
  29. Generalao, I.N. Measuring the telework potential of jobs: Evidence from the international standard classification of occupations. Philipp. Rev. Econ. 2021, 58, 92–127. [Google Scholar] [CrossRef]
  30. Groes, F.; Kircher, P.; Manovskii, I. The U-shapes of occupational mobility. Rev. Econ. Stud. 2015, 82, 659–692. [Google Scholar] [CrossRef]
  31. Zhang, Y.; Chen, D.; Ye, C. Deep Neural Networks: WASD Neuronet Models, Algorithms, and Applications; CRC Press: Boca Raton, FL, USA, 2019. [Google Scholar]
  32. Simos, T.E.; Katsikis, V.N.; Mourtas, S.D. Multi-input bio-inspired weights and structure determination neuronet with applications in European Central Bank publications. Math. Comput. Simul. 2022, 193, 451–465. [Google Scholar] [CrossRef]
  33. Cheng, Y.; Li, C.; Li, S.; Li, Z. Motion planning of redundant manipulator with variable joint velocity limit based on beetle antennae search algorithm. IEEE Access 2020, 8, 138788–138799. [Google Scholar] [CrossRef]
  34. Fan, Y.; Shao, J.; Sun, G. Optimized PID controller based on beetle antennae search algorithm for electro-hydraulic position servo control system. Sensors 2019, 19, 2727. [Google Scholar] [CrossRef]
  35. Li, X.; Jiang, H.; Niu, M.; Wang, R. An enhanced selective ensemble deep learning method for rolling bearing fault diagnosis with beetle antennae search algorithm. Mech. Syst. Signal Process. 2020, 142, 106752. [Google Scholar] [CrossRef]
  36. Li, X.; Zang, Z.; Shen, F.; Sun, Y. Task offloading scheme based on improved contract net protocol and beetle antennae search algorithm in fog computing networks. Mobile Netw. Appl. 2020, 25, 2517–2526. [Google Scholar] [CrossRef]
  37. Mourtas, S.D.; Katsikis, V.N. V-Shaped BAS: Applications on large portfolios selection problem. Comput. Econ. 2021, 60, 1353–1373. [Google Scholar] [CrossRef]
  38. Katsikis, V.N.; Mourtas, S.D. Diversification of time-varying tangency portfolio under nonlinear constraints through semi-integer beetle antennae search algorithm. AppliedMath 2021, 1, 63–73. [Google Scholar] [CrossRef]
  39. Zhang, Y.; Yu, X.; Xiao, L.; Li, W.; Fan, Z.; Zhang, W. Weights and structure determination of artificial neuronets. In Self-Organization: Theories and Methods; Nova Science: New York, NY, USA, 2013. [Google Scholar]
  40. Jiang, X.; Li, S. BAS: Beetle antennae search algorithm for optimization problems. arXiv 2017, arXiv:1710.10724. [Google Scholar] [CrossRef]
  41. Zhu, Z.; Zhang, Z.; Man, W.; Tong, X.; Qiu, J.; Li, F. A new beetle antennae search algorithm for multi-objective energy management in microgrid. In Proceedings of the 13th IEEE Conf. Industrial Electronics and Applications (ICIEA), Wuhan, China, 31 May–2 June 2018; pp. 1599–1603. [Google Scholar]
  42. Wu, Q.; Shen, X.; Jin, Y.; Chen, Z.; Li, S.; Khan, A.H.; Chen, D. Intelligent beetle antennae search for UAV sensing and avoidance of obstacles. Sensors 2019, 19, 1758. [Google Scholar] [CrossRef] [PubMed]
  43. Xu, X.; Deng, K.; Shen, B. A beetle antennae search algorithm based on Lévy flights and adaptive strategy. Syst. Sci. Control Eng. 2020, 8, 35–47. [Google Scholar] [CrossRef]
  44. Tharwat, A. Classification assessment methods. Appl. Comput. Inform. 2020, 17, 168–192. [Google Scholar] [CrossRef]
  45. Kucer, M.; Loui, A.C.; Messinger, D.W. Leveraging expert feature knowledge for predicting image aesthetics. IEEE Trans. Image Process. 2018, 27, 5100–5112. [Google Scholar] [CrossRef]
  46. Zhou, T.; Liu, M.; Thung, K.H.; Shen, D. Latent representation learning for Alzheimer’s disease diagnosis with incomplete multi-modality neuroimaging and genetic data. IEEE Trans. Med Imaging 2019, 38, 2411–2422. [Google Scholar] [CrossRef]
  47. Fagerland, M.W.; Lydersen, S.; Laake, P. The McNemar test for binary matched-pairs data: Mid-p and asymptotic are better than exact conditional. BMC Med Res. Methodol. 2013, 13, 91. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Structure of the BWASD neural network.
Figure 2. Beetle searching behavior.
Figure 3. The BWASD algorithm and the procedure for predicting with the BWASD neural network.
Figure 4. Neural networks results on DA1.
Figure 5. Neural networks results on DA2.
Figure 6. Neural networks results on DA3.
Figure 7. Neural networks results on DA4.
Table 1. Options of power activation functions.

Name                        $A_h(X)$                     Range                 Numbering
Power                       $X^h$                        $(-\infty, \infty)$   1
Power sigmoid               $e^{X^h}/(e^{X^h}+1)$        $(1/2, 1)$            2
Power inverse exponential   $e^{-X^h}$                   $(0, 1)$              3
Power softplus              $\ln(1 + e^{X^h})$           $(0, \infty)$         4
Table 2. Neural network models’ statistics in DA1.

Statistic   BWASD    MWASD    FKNN     FTR      LSVM     KNB
MAE         0.2170   0.2297   0.2893   0.2723   0.2297   0.2893
FP          0.0304   0.0243   0.1646   0.1707   0.0243   0.0243
TP          0.9695   0.9756   0.8353   0.8292   0.9756   0.9756
FN          0.6478   0.7042   0.5774   0.5070   0.7042   0.9014
TN          0.3521   0.2957   0.4225   0.4929   0.2957   0.0985
Precision   0.9695   0.9756   0.8353   0.8292   0.9756   0.9756
Recall      0.5994   0.5807   0.5912   0.6205   0.5807   0.5197
Accuracy    0.7829   0.7702   0.7106   0.7276   0.7702   0.7106
F-score     0.7408   0.7281   0.6924   0.7098   0.7281   0.6782
Table 3. Neural network models’ statistics in DA2.

Statistic   BWASD    MWASD    FKNN     FTR      LSVM     KNB
MAE         0.1353   0.1395   0.3093   0.2073   0.0768   0.4847
FP          0.4028   0.3958   0.4669   0.2932   0.5718   0.1747
TP          0.5971   0.6041   0.5330   0.7067   0.4281   0.8252
FN          0.1248   0.1295   0.3032   0.2039   0.0575   0.4968
TN          0.8751   0.8704   0.6967   0.7960   0.9424   0.5031
Precision   0.5971   0.6041   0.5330   0.7067   0.4281   0.8252
Recall      0.8270   0.8233   0.6374   0.7760   0.8815   0.6242
Accuracy    0.8646   0.8604   0.6906   0.7926   0.9231   0.5152
F-score     0.6935   0.6969   0.5805   0.7398   0.5764   0.7107
Table 4. Neural network models’ statistics in DA3.

Statistic   BWASD    MWASD    FKNN     FTR      LSVM     KNB
MAE         0.1366   0.1424   0.2005   0.1918   0.1482   0.2994
FP          0.0915   0.0784   0.2091   0.1895   0.0718   0.3856
TP          0.9084   0.9215   0.7908   0.8104   0.9281   0.6143
FN          0.1727   0.1937   0.1937   0.1937   0.2094   0.2303
TN          0.8272   0.8062   0.8062   0.8062   0.7905   0.7696
Precision   0.9084   0.9215   0.7908   0.8104   0.9281   0.6143
Recall      0.8402   0.8263   0.8032   0.8070   0.8158   0.7272
Accuracy    0.8633   0.8575   0.7994   0.8081   0.8517   0.7005
F-score     0.8730   0.8713   0.7969   0.8087   0.8683   0.6660
Table 5. Neural network models’ statistics in DA4.

Statistic   BWASD    MWASD    FKNN     FTR      LSVM     KNB
MAE         0.2384   0.2404   0.3126   0.3406   0.2565   0.2885
FP          0.1016   0.1101   0.1949   0.2768   0.0960   0.0169
TP          0.8983   0.8898   0.8050   0.7231   0.9039   0.9830
FN          0.5724   0.5586   0.6000   0.4965   0.6482   0.9517
TN          0.4275   0.4413   0.4000   0.5034   0.3517   0.0482
Precision   0.8983   0.8898   0.8050   0.7231   0.9039   0.9830
Recall      0.6107   0.6143   0.5729   0.5928   0.5823   0.5080
Accuracy    0.7615   0.7595   0.6873   0.6593   0.7434   0.7114
F-score     0.7271   0.7268   0.6694   0.6515   0.7083   0.6699
Table 6. McNemar’s test results for AH1.

BWASD vs.   DA1: Null Hypothesis   p-Value      DA2: Null Hypothesis   p-Value
MWASD       not rejected           0.2187       rejected               0
FKNN        rejected               0.0050       rejected               0
FTR         rejected               0.0288       rejected               0
KNB         rejected               0.0009       rejected               0
LSVM        not rejected           0.2187       rejected               0

BWASD vs.   DA3: Null Hypothesis   p-Value      DA4: Null Hypothesis   p-Value
MWASD       not rejected           0.5488       not rejected           0.8600
FKNN        rejected               0.0021       rejected               0.0006
FTR         rejected               0.0090       rejected               $10^{-6}$
KNB         rejected               $10^{-7}$    rejected               0.0095
LSVM        not rejected           0.2668       not rejected           0.1741
Table 7. McNemar’s test results for AH2.

BWASD vs.   DA1: Null Hypothesis   p-Value      DA2: Null Hypothesis   p-Value
MWASD       not rejected           0.8906       not rejected           1
FKNN        not rejected           0.9975       not rejected           1
FTR         not rejected           0.9856       not rejected           1
KNB         not rejected           0.9995       not rejected           1
LSVM        not rejected           0.8906       rejected               0

BWASD vs.   DA3: Null Hypothesis   p-Value      DA4: Null Hypothesis   p-Value
MWASD       not rejected           0.7256       not rejected           0.5700
FKNN        not rejected           0.9989       not rejected           0.9997
FTR         not rejected           0.9955       not rejected           1
KNB         not rejected           1            not rejected           0.9952
LSVM        not rejected           0.8666       not rejected           0.9129
Table 8. McNemar’s test results for AH3.

BWASD vs.   DA1: Null Hypothesis   p-Value              DA2: Null Hypothesis   p-Value
MWASD       not rejected           0.1094               rejected               0
FKNN        rejected               0.0025               rejected               0
FTR         rejected               0.0144               rejected               0
KNB         rejected               0.0004               rejected               0
LSVM        not rejected           0.1094               not rejected           1

BWASD vs.   DA3: Null Hypothesis   p-Value              DA4: Null Hypothesis   p-Value
MWASD       not rejected           0.2744               not rejected           0.4300
FKNN        rejected               0.0011               rejected               0.0003
FTR         rejected               0.0045               rejected               $2 \times 10^{-6}$
KNB         rejected               $5 \times 10^{-8}$   rejected               0.0048
LSVM        not rejected           0.1334               not rejected           0.0871

