Article

Improvement of Wild Horse Optimizer Algorithm with Random Walk Strategy (IWHO), and Appointment as MLP Supervisor for Solving Energy Efficiency Problem

1 Vocational Schools of Technical Sciences, Muş Alparslan University, 49250 Muş, Turkey
2 Vocational Schools of Social Sciences, Muş Alparslan University, 49250 Muş, Turkey
3 Department of Computer Engineering, Faculty of Computer and Information Sciences, Sakarya University, 54187 Sakarya, Turkey
* Author to whom correspondence should be addressed.
Energies 2025, 18(11), 2916; https://doi.org/10.3390/en18112916
Submission received: 13 April 2025 / Revised: 27 May 2025 / Accepted: 29 May 2025 / Published: 2 June 2025
(This article belongs to the Section B: Energy and Environment)

Abstract

This paper aims to enhance the performance of the Wild Horse Optimization (WHO) algorithm by developing strategies that overcome its tendency toward stagnation and early convergence in local regions of the search space. The performance change is observed through a Multi-Layer Perceptron (MLP) example. In this context, an Improved Wild Horse Optimization (IWHO) algorithm was developed, in which a random walk strategy provides solution diversity in local regions. The challenging CEC 2019 test set was selected for the performance measurement of IWHO. Its competitiveness with alternative algorithms was measured, and its performance was found to be superior. This superiority is represented visually with convergence curves and box plots. The Wilcoxon signed-rank test was used to establish that IWHO is a distinct and powerful algorithm. The IWHO algorithm was then applied to MLP training, addressing a real-world problem. Both the WHO and IWHO algorithms were evaluated using MSE results and ROC curves. The Energy Efficiency dataset from the UCI repository was used for MLP training. This dataset evaluates the heating load (HL) and cooling load (CL) factors from the input characteristics of smart buildings. The goal is to ensure that the HL and CL factors are evaluated as efficiently as possible through the use of HVAC technology in smart buildings. WHO and IWHO were selected to train the MLP architecture, and the proposed IWHO algorithm was observed to produce better results.

1. Introduction

Optimization involves a series of tasks that employ mathematical or analytical methods to enhance the parameters of a given system or process at reduced cost. This process follows a trajectory that enables the problem to attain a global optimum [1]. To date, optimization has been applied to real-world problems represented by mathematical models to address challenges in engineering, finance, and science [2,3]. The development of optimization algorithms has garnered significant interest in addressing optimization issues across various domains, including engineering, science, economics, and business [4,5,6]. A real-world optimization problem can be resolved if it is mathematically expressed. When addressing real-world challenges, it is crucial to optimally derive the desired variables. Consequently, intelligent algorithms have been acknowledged as vital tools for solving these problems. These algorithms are categorized into deterministic and stochastic types [7]. However, deterministic algorithms present several limitations, such as the necessity for complex mathematical computations, implementation challenges, and a tendency to become trapped in local optima [2,8]. Stochastic algorithms that employ non-deterministic, derivative-free methods have been utilized to mitigate these limitations. In this regard, metaheuristic optimization techniques, classified as stochastic algorithms, provide promising solutions to fulfil this requirement.
Such algorithms are based on theorems derived from scientific laws and mathematical modelling with specific constraints, as well as theorems that mimic the behavior of swarms with shared intelligence [2]. The goal of metaheuristics is to identify search or optimization methods that are superior when applied to instances of a given class of problems. In all swarm-based optimization algorithms, the problem is optimized using a set of random solutions. Each time the algorithm is executed, this set of random solutions is evaluated and improved using a fitness function. The probability of obtaining a globally optimal solution is increased by using a sufficient number of random solutions and iterations.
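As a rough illustration of this shared loop, consider the following minimal Python sketch (our own schematic, not any specific published algorithm): a random population is repeatedly evaluated with a fitness function and perturbed, and the incumbent best solution is retained.

```python
import numpy as np

def sphere(x):
    """Example fitness function (to be minimized)."""
    return np.sum(x ** 2)

def swarm_optimize(fitness, dim=10, n_agents=40, max_it=500, lb=-100.0, ub=100.0):
    """Generic skeleton of a swarm-based metaheuristic: a random population
    is repeatedly evaluated and perturbed, keeping the best solution found."""
    rng = np.random.default_rng(42)
    pop = rng.uniform(lb, ub, size=(n_agents, dim))   # random initial solutions
    best = min(pop, key=fitness).copy()               # incumbent global best
    for it in range(max_it):
        step = 1.0 - it / max_it                      # shrink perturbations over time
        pop = pop + step * rng.normal(size=pop.shape) * (best - pop)
        pop = np.clip(pop, lb, ub)                    # respect search bounds
        cand = min(pop, key=fitness)
        if fitness(cand) < fitness(best):
            best = cand.copy()
    return best, fitness(best)

best, val = swarm_optimize(sphere)
print(f"best fitness: {val:.3e}")
```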
Metaheuristic algorithms are characterized by two primary phases: exploration and exploitation. The efficacy of the balance between these phases indicates the overall quality of the algorithm. Exploration pertains to the algorithm’s capacity to survey the entire search space, whereas exploitation involves each search agent’s ability to identify the optimal solution within its local vicinity. The effectiveness of an algorithm in addressing a specific problem set may not necessarily translate to optimization problems of a different nature or type [9]. It is acknowledged that no single optimization algorithm is universally applicable to all problems. Metaheuristic algorithms can be designed to efficiently navigate the search space and deliver optimal solutions within a reasonable time frame.
The Genetic Algorithm (GA) [10] is among the earliest population-based algorithms. The particle swarm optimization (PSO) algorithm [11] has been extensively studied, and its operational principles have informed various later algorithms, including the Grey Wolf Optimizer (GWO) [12] and the Harris Hawks Optimizer (HHO) [8]. The Hunger Games Search (HGS) algorithm [13] was inspired by the hunger drive of animals, a common theme in swarm-based algorithms. Over the past two decades, numerous algorithms have been developed and implemented in this domain [14,15,16].
The No Free Lunch (NFL) theorem underscores the importance of leveraging problem-specific knowledge to enhance performance, thereby providing an opportunity to develop various strategies for fine-tuning a metaheuristic algorithm as desired [17]. Hybridization techniques can be employed to obtain superior solutions [18,19]. Hybridization offers three significant advantages: it capitalizes on the flexibility of metaheuristics; it tackles large-scale problems that individual metaheuristics alone cannot manage by integrating different strategies or algorithms; and it improves problem-solving performance, resulting in more robust algorithms [20].
Some examples of hybridization from the literature are examined below.
The authors of one paper highlight that hybrid metaheuristic algorithms improve the effectiveness of individual algorithms by combining their unique advantages. Many hybrid algorithms have been created for feature selection, with the goal of identifying the best feature subsets from datasets. These methods avoid getting stuck in local optima and prevent early convergence while thoroughly examining the search space. The algorithms strike a balance between exploration and exploitation. The improved algorithms produce results that are nearly optimal. By integrating multiple methods, hybrid metaheuristics enhance both convergence and the quality of solutions [21].
A comprehensive study on hybridization explores hybrid meta-heuristic algorithms for optimizing the dimensions of hybrid renewable energy systems (HRES). HRES enhances energy sustainability in remote areas; however, optimization is complex due to multiple factors. Hybrid meta-heuristic algorithms yield more precise results than traditional methods for HRES optimization. The paper reviews these algorithms for both single-objective and multi-objective design optimization. In single-objective optimization, algorithms such as GA-TS, PSO-HS, and the GA-Markov model are employed to minimize costs and enhance reliability. For multi-objective optimization, algorithms like FPA-SA, MOPSO-NSGA-II, and ACO-ABC are utilized to optimize cost, emissions, and reliability simultaneously. Low-level co-evolutionary hybridization is most commonly employed to balance algorithm exploration and exploitation. The authors note an increased use of hybrid algorithms for multi-objective HRES optimization [22]. Another study evaluates an autonomous green energy system using hybrid metaheuristic methods. It presents a hybrid renewable energy system (HRES) with wind turbines, solar photovoltaic panels, a biomass gasifier, and fuel cells with hydrogen storage for an off-grid university campus in Turkey. The research examines electricity production and optimization algorithms to minimize annual system cost while ensuring reliable supply. Using a Hybrid Firefly Genetic Algorithm (HFGA), which outperforms other algorithms, the optimal sizing is determined. The standalone system proves most cost-effective, achieving a 100% renewable energy fraction [23].
Recently, metaheuristic algorithms have been widely employed to address real-world problems, with applications spanning engineering challenges [24,25], efficient use of air conditioning technology [25], and training of artificial neural networks [2]. Furthermore, a substantial body of literature exists on the integration of metaheuristics into machine learning algorithms [26,27].
For instance, in one study, the authors examined an HVAC system [28]. That paper introduces a hybrid model combining physical system modeling with symbolic regression, noted for its effectiveness with limited data and its physical expressiveness; the model was tested on air conditioning systems to forecast energy performance. In the present work, IWHO is developed as an optimization algorithm for machine learning (MLP) training. The algorithm’s optimization effectiveness and its impact on MLP training are evaluated, with applications to standard optimization tests and to MLP training on the UCI Energy Efficiency dataset. IWHO excels in optimization and training stability. Enhanced with a Random Walk strategy, IWHO addresses premature convergence and delivers superior results compared with the classical WHO and other meta-heuristic algorithms, as validated by the Wilcoxon test. As an MLP training consultant, IWHO achieves lower error rates and higher explanatory power (R²: 0.98) than the classical WHO, providing precise outcomes for energy efficiency and HVAC systems. Box plots and convergence curves demonstrate that IWHO yields more stable results near the global optimum. Although it is slower by roughly 23%, its accuracy makes this trade-off acceptable. IWHO’s adaptable nature suits various optimization problems and machine learning applications, ranging from energy efficiency to engineering design. The algorithm’s primary advantages are its novel optimization approach, its integration with machine learning, and its successful application to energy efficiency, with statistically significant results and stable performance.
In this study, the IWHO algorithm was utilized as an advisor for an artificial neural network, a machine-learning model, thereby contributing to this field. The Wild Horse Optimizer considered in this study is a swarm-based algorithm [27] inspired by the group hierarchy and common swarm behaviors of wild horse herds, such as grazing, mating, and leadership. Several studies have addressed the WHO algorithm, and it is pertinent to provide examples. The Wild Horse Optimizer (WHO) algorithm was developed in 2022 based on the swarm characteristics and hierarchy of wild horses. For instance, to augment the exploitation capacity of the WHO algorithm and prevent it from becoming trapped in local optima, one study employed a random running strategy (RRS) to balance exploration and exploitation and strengthened competition with the waterhole mechanism (CWHM); in addition, a dynamic inertia weight strategy (DIWS) was used to achieve a globally optimal result [29]. To address the limitations of the WHO algorithm, such as inadequate search accuracy and slow convergence speed, a Hybrid Multi-Strategy Improved Wild Horse Optimizer (HMSWHO) was proposed: the Halton sequence is utilized to increase swarm diversity, the adaptive parameter TDR is employed for the exploration-exploitation balance, and the simplex method is applied to improve the worst position in the swarm [30]. In another study aimed at enhancing the WHO algorithm, two strategies were implemented to increase global search efficiency and avoid local optima: a quantum acceleration strategy, which enhances the individual abilities of each horse, and a chaotic mapping strategy, which prevents entrapment in local optima [31].
Numerous studies have applied the WHO algorithm to various engineering challenges. In one study, the WHO algorithm was employed as a controller in a wind turbine system to standardize the wind speed through a specific controller and to avert potential failures [32]. Another study hybridized the WHO algorithm with the dwarf mongoose optimization (DMO) algorithm to assess its efficacy as a controller for reducing electricity costs in a small-grid system model [33]. Within the domain of energy engineering, the WHO algorithm has demonstrated its effectiveness as an optimization controller [34]. It has also been used for attribute selection [35]. In the context of scheduling problems, the WHO algorithm has been successfully applied to the planning of work processes in workshops by hybridizing it with various strategies [36]. The stability of the WHO algorithm served as a key motivation for this study. The algorithm’s quality was evaluated using the CEC 2019 [37] benchmark functions, and the algorithm was hybridized with the Random Walk (RW) strategy to enhance outcomes. The RW strategy augments the potential of metaheuristic algorithms by introducing randomness into local search processes and generating alternative solutions [38,39]. The findings of this study indicate a significant improvement in the performance of the WHO algorithm across most functions. Previously, Eker et al. conducted a similar investigation using the RW-ARO algorithm, which resulted from the hybridization of the artificial rabbit optimization (ARO) algorithm with the RW strategy, achieving successful results in control engineering [40]. In this study, the proposed IWHO hybrid algorithm was employed to train an MLP on the CL and HL of HVAC systems in buildings, with IWHO designated as the trainer and the WHO algorithm selected as the alternative. This study utilized the Energy Efficiency dataset from the UCI repository, which addresses the energy efficiency problem [41]. Previous studies have explored related topics. One study assessed the predictive capabilities of Artificial Neural Network (ANN) and XGBoost surrogate models for heating, cooling, and lighting loads and equivalent primary energy requirements, using EnergyPlus simulation data across various office cell model versions. The XGBoost models exhibited superior accuracy in predicting these loads. The mc-intersite-proj-th sampling method slightly enhanced prediction accuracy compared with maximin Latin hypercube sampling for both models. XGBoost models, which consist of collections of piecewise constant functions, perform optimally when the simulated loads show minimal variation or smaller gradients across the design space. This implies that fewer regression trees may suffice for precise predictions, or that the same number could yield more accurate results. In this case study, XGBoost models demonstrated greater accuracy for lighting loads and primary energy needs than for heating and cooling loads, which have simpler slanted shapes with higher gradients that do not approach zero [42]. In another study, motivated by the increasing prevalence of electric vehicles and the need to predict energy consumption accurately for effective power grid management, eleven machine learning models were evaluated, including Ridge Regression, Lasso Regression, K-Nearest Neighbors, Gradient Boosting, Support Vector Regression, Multi-Layer Perceptron, XGBoost, CatBoost, LightGBM, Gaussian Process Regression (GPR), and Extra Trees Regressor, using data from Colorado.
The models were assessed using metrics such as Mean Absolute Error (MAE), Mean Squared Error (MSE), R², Root Mean Squared Error (RMSE), and Normalized RMSE (NRMSE). The Extra Trees Regressor emerged as the most precise in forecasting energy consumption, and Gradient Boosting and K-Nearest Neighbors (KNN) also demonstrated strong performance, while non-linear and linear regression models faced challenges in predicting extreme energy levels. A limitation of that study is its exclusive reliance on data from Colorado. Although accurate, the computational demands of the Extra Trees Regressor may affect its scalability for real-time applications. Future research should consider exploring deep learning models, expanding datasets, and employing time-series analysis to enhance forecasting precision [43]. Khishe and Mohammadi trained a passive sonar dataset using the SSA-MLP hybrid system, achieving high classification accuracy [44]. He et al. proposed a metaheuristic-based algorithm model for designing underwater wireless sensor networks to enhance energy efficiency and extend network lifetime, suggesting a hybrid hierarchical chimpanzee optimization algorithm to efficiently manage clustering and multihop routing procedures [45].
Building on these advancements, our research introduces the IWHO-MLP framework, which utilizes the Improved Wild Horse Optimizer for training Multi-Layer Perceptrons (MLPs). Comparative analyses using boxplots and convergence curves reveal that IWHO-MLP consistently outperforms both traditional and hybrid methods, achieving lower MSE, RMSE, and higher R2 values on the energy efficiency dataset and CEC 2019 benchmark functions. These findings not only confirm but also advance the current state-of-the-art, highlighting the efficacy of metaheuristic-driven optimization in neural network training.
In the existing literature, there are studies on artificial neural networks that incorporate the WHO algorithm. A comprehensive study of face recognition systems in the banking sector employed methods such as artificial neural networks, decision trees, and adaptive neural fuzzy inference systems, with the WHO algorithm contributing to the optimization of these methods [44]. Additionally, the WHO algorithm has been utilized as an optimizer in artificial neural networks for the classification of long-term storage of medical images in the healthcare domain [45].
The proposed approach offers several key strengths and advantages, as listed below.
Enhanced performance: the IWHO algorithm demonstrated superior results compared with the original WHO algorithm and other alternatives across most benchmark functions.
Stability and consistency: boxplot analysis indicated that the IWHO algorithm consistently exhibited stable performance.
Improved exploration-exploitation balance: the Random Walk strategy mitigates early convergence and prevents entrapment in local optima.
Competitive performance: the IWHO surpasses other algorithms when addressing the challenging test sets of CEC 2019.
Statistical significance: the Wilcoxon signed-rank test confirmed the distinctiveness and efficacy of the IWHO algorithm.
Efficient MLP training: the IWHO excels in training a Multi-Layer Perceptron for the energy efficiency problem, achieving lower MSE values and more reliable outcomes.
Improved prediction accuracy: the IWHO algorithm delivers higher prediction rates for Cooling Load (CL) factors than the original WHO algorithm.
Time efficiency: despite its increased complexity, the IWHO completes the optimization process within a reasonable timeframe.
Flexibility and hybridization potential: the success of the IWHO suggests that the WHO algorithm possesses a flexible structure that is amenable to integration with other strategies.
Real-world applicability: the IWHO algorithm demonstrates potential for addressing practical issues such as optimizing HVAC systems in smart buildings for energy efficiency.
The remainder of the paper is organized as follows: Section 2 discusses the WHO algorithm, the RW search strategy, and the proposed IWHO hybridization structure. Section 3 describes the MLP architecture. Section 4 presents the testing of the IWHO algorithm on the CEC 2019 benchmark. Section 5 covers MLP training, and Section 6 presents the conclusions.
The proposed hybrid IWHO and the optimization process with alternative algorithms are given in Figure 1 below.

2. Algorithms

2.1. Wild Horse Optimizer (WHO)

The WHO algorithm is inspired by the common swarm behavior of wild animals. Figure 2 illustrates the swarm diagram of the WHO algorithm, which includes a swarm hierarchy (N), stallion (S), and foals (F), with each leader forming a group (G). As the foals mature, they leave the swarm to form their own independent families. The algorithm consists of five stages [32,46].
The algorithm starts the optimization process with a set of candidate solutions. In the initial set, the candidate solutions simulate horses arranged in a certain hierarchy, with groups of horses under different leaders. At this stage, a swarm diagram is generated, as shown in Figure 2.
The ratio of the number of leaders ($PS$) in a group is determined by Equation (1):
$$PS = \frac{G_i}{N_i}, \quad i = 1, 2, 3, \ldots$$
In the second phase, the behavior of the swarm in the grazing pasture is modeled. The stallion (S) plays a central role in this behavior.
$$\bar{X}_{G,j}^{i} = 2Z\cos\left(2\pi RZ\right)\cdot\left(Stallion_{G,j} - X_{G,j}^{i}\right) + Stallion_{G,j}$$
where $X_{G,j}^{i}$ is the current position of member $j$ of group $i$ and $Stallion_{G,j}$ is the position of the group’s stallion. The parameter $R$ is a random value in the open interval (−2, 2), and the parameter $Z$ is a self-adaptive parameter that tracks changes during the process, calculated in Equations (3) and (4) below.
$$P = \vec{R}_1 < TDR; \qquad IDX = \left(P == 0\right); \qquad TDR = 1 - \frac{it}{max\_it}$$
$$Z = R_2 \odot IDX + R_3 \odot \left(\sim IDX\right)$$
where $P$ is a binary vector of the same size as the problem dimension, and $R_1$, $R_2$, and $R_3$ are vectors of random values in the range (0, 1). The current iteration number is denoted by $it$ and the maximum number of iterations by $max\_it$.
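The grazing update and its adaptive parameter can be sketched in Python as follows; this is our reading of Equations (2)-(4), with the elementwise mixing of Equation (4) implemented as a masked selection, and all variable names illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_z(dim, it, max_it):
    """Self-adaptive Z parameter (Equations (3) and (4)): elementwise, Z
    mixes two random vectors according to whether R1 falls below the
    linearly decreasing threshold TDR."""
    tdr = 1.0 - it / max_it                   # TDR = 1 - it/max_it
    p = rng.random(dim) < tdr                 # P = R1 < TDR
    idx = (p == 0)                            # IDX marks positions where P == 0
    r2, r3 = rng.random(dim), rng.random(dim)
    return r2 * idx + r3 * (~idx)             # Z = R2 (.) IDX + R3 (.) (~IDX)

def grazing_update(x, stallion, it, max_it):
    """Grazing behavior (Equation (2)): a group member circles around its
    stallion with a random angle and an adaptive radius."""
    dim = x.shape[0]
    z = adaptive_z(dim, it, max_it)
    r = rng.uniform(-2.0, 2.0, size=dim)      # R in the open interval (-2, 2)
    return 2.0 * z * np.cos(2.0 * np.pi * r * z) * (stallion - x) + stallion

# Example: update one member of a 5-dimensional problem at iteration 100 of 500.
x_new = grazing_update(np.zeros(5), np.ones(5), it=100, max_it=500)
```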
In the third stage, Equation (5) simulates the mating behavior of horses. The swarm separates foals that have not yet reached mating age and prevents mating between parents and foals.
$$X_{G,k}^{p} = Crossover\left(X_{G,i}^{q}, X_{G,j}^{z}\right), \quad i \neq j \neq k, \; q = z = end, \; Crossover = mean$$
In Equation (5), $X_{G,k}^{p}$ is the position of horse $p$ in group $k$, obtained by crossing over the positions of horse $q$ in group $i$ and horse $z$ in group $j$. The crossover percentage (PC) is an adjustable parameter.
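Read literally, with $Crossover = mean$, the mating step reduces to averaging the two parents’ positions, as in the short sketch below (an assumption on our part; other crossover operators would slot in the same way).

```python
import numpy as np

def mean_crossover(x_q, x_z):
    """Mating behavior (Equation (5)), assuming the 'mean' crossover named in
    the equation: the positions of two parents drawn from different groups
    are averaged to produce the departing foal's new position."""
    return 0.5 * (np.asarray(x_q) + np.asarray(x_z))
```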
In the fourth phase, the competition between group leaders is simulated with Equation (6). Group leaders direct their members toward areas with a suitable waterhole (puddle), and the group that wins the competition gains priority in using it.
$$\overline{Stallion}_{G,j} = \begin{cases} 2Z\cos\left(2\pi RZ\right)\times\left(WH - Stallion_{G,j}\right) + WH, & r_3 > 0.5 \\ 2Z\cos\left(2\pi RZ\right)\times\left(WH - Stallion_{G,j}\right) - WH, & r_3 \leq 0.5 \end{cases}$$
where $WH$ is the location of the waterhole, and $\overline{Stallion}_{G,j}$ and $Stallion_{G,j}$ are the candidate and current positions of the leader, respectively.
In the final phase, leader selection is considered and simulated using Equation (7). Initially, the leader is randomly selected from the entire swarm, and then the stallion with the most favorable value becomes the leader.
$$Stallion_{G,j} = \begin{cases} X_{G,j}^{i}, & f\left(X_{G,j}^{i}\right) < f\left(Stallion_{G,j}\right) \\ Stallion_{G,j}, & f\left(X_{G,j}^{i}\right) \geq f\left(Stallion_{G,j}\right) \end{cases}$$
where $f(X_{G,j}^{i})$ is the fitness value of the foal and $f(Stallion_{G,j})$ is the fitness value of the current leader.
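For a minimization objective, Equation (7) reduces to a greedy comparison, as this short illustrative sketch shows.

```python
def select_leader(foal_pos, foal_fit, stallion_pos, stallion_fit):
    """Leader selection (Equation (7)) for a minimization problem: the foal
    replaces the stallion only if its fitness value is smaller."""
    if foal_fit < stallion_fit:
        return foal_pos, foal_fit
    return stallion_pos, stallion_fit
```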

2.2. Random Walking Strategy

Metaheuristic algorithms employ randomization to identify optimal solutions, working with a set of candidate solutions to explore all promising paths toward the best outcome. Non-deterministic randomization supports both global search and local search, and balancing the two is crucial for the performance of metaheuristic algorithms [47]. The Random Walk strategy enhances this balance by introducing a predetermined number of random steps within defined boundaries. It acts as a fine-tuning mechanism in later iterations, preventing the algorithm from becoming trapped in local optima [38,39,40]. By disrupting the regular progression of the search, random walks introduce diversity into future iterations and help avoid premature entrapment in local optima. During the exploration phase, the deviations become less pronounced in subsequent iterations, allowing the search to settle into stable outcomes. The parameter Walk, which determines the step size, allows random walks to be integrated into various search algorithms; it perturbs the population of solutions, thereby preventing entrapment in local optima. Selecting the step-size distribution is vital in the search process: a small step size encourages exploitation of the search space near the current state, focusing on refining existing solutions, whereas a sufficiently large step size promotes exploration of uncharted areas, potentially uncovering new and promising regions for further investigation [47].
In this paper, increasing the number of random steps through RW delays the commitment to an optimum point in the local search phase, thereby reducing the risk of premature convergence and of becoming stuck at a local optimum. In this strategy, the fitness values are diversified by perturbing the position of the group leader according to a varying step parameter.
$$gBest = WH.pos, \qquad x_0 = gBest$$
$$x = x_0 + Walk \times \left(0.5 - rand(1)\right)$$
where $gBest$ in Equation (8) is the best waterhole location, $x$ represents the deviated position, $x_0$ is the current best position of the leader, and $Walk$ in Equation (9) is the step-size parameter of the random walk. The flowchart of the IWHO algorithm is shown in Figure 3, and the pseudocode is shown in Algorithm 1.
Algorithm 1: Pseudo Code of IWHO
1: Initialization
2: Set parameters:
3:   N = number of search agents (horses)
4:   t_max = maximum number of iterations
5:   dim = problem dimension
6:   lb, ub = lower and upper bounds
7:   r = number of random walk steps
8: Generate Initial Population
9: for i = 1 to N do
10:   X_i = random position within [lb, ub]
11:   Evaluate fitness(X_i)
12: end for
13: Establish Hierarchy and Group Structure
14: Calculate PS = ratio of leaders using Equation (1)
15: Divide population into groups with leaders
16: Main Loop
17: for t = 1 to t_max do
18:   Update z parameter (self-adaptive):
19:     z = 2 * (1 - (t/t_max)^2)
20:   Phase 1: Grazing Behavior
21:   for each group i do
22:     for each member j in group i do
23:       Update position using Equation (2):
24:         R = random value in (-2, 2)
25:         X_ij = X_ij + R * z * (X_stallion - X_ij)
26:       Evaluate fitness(X_ij)
27:     end for
28:   end for
29:   Phase 2: Mating Behavior
30:   for each group i do
31:     for each member j in group i do
32:       if random() < PC then  // PC is the crossover percentage
33:         Apply crossover using Equation (5):
34:           Select random horse z from a different group k
35:           X_ij = crossover(X_ij, X_zk)
36:         Evaluate fitness(X_ij)
37:       end if
38:     end for
39:   end for
40:   Phase 3: Competition Among Leaders
41:   for each leader i do
42:     Update leader position using Equation (6):
43:       X_candidate = X_i + random() * (X_puddle - X_i)
44:     if fitness(X_candidate) better than fitness(X_i) then
45:       X_i = X_candidate
46:     end if
47:   end for
48:   Phase 4: Leader Selection
49:   for each group i do
50:     Select new leader using Equation (7):
51:     if any foal has better fitness than the current leader then
52:       Update leader
53:     end if
54:   end for
55:   IMPROVEMENT: Random Walk Strategy
56:   for each leader i do
57:     best_position = X_i
58:     Execute random walk steps:
59:     for step = 1 to r do
60:       Apply random walk using Equation (9):
61:         X_deviated = best_position + random_deviation()
62:       Ensure bounds:
63:         X_deviated = max(lb, min(ub, X_deviated))
64:       Evaluate new position:
65:       if fitness(X_deviated) better than fitness(best_position) then
66:         best_position = X_deviated
67:       end if
68:     end for
69:     Update leader with the best position found during the random walk:
70:     X_i = best_position
71:   end for
72:   Update Global Best Solution:
73:   Update X_best if a better solution is found
74: end for
75: return X_best
In accordance with the standard WHO phases, leaders engage in stochastic explorations, taking multiple steps to investigate the surrounding environment. This methodology assists in circumventing local optima and preventing premature convergence. The optimal position identified during these stochastic explorations is subsequently adopted. The random walk strategy constitutes a significant enhancement in IWHO compared to the original WHO algorithm, improving the exploration of the search space and facilitating the avoidance of local optima.
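A compact Python sketch of this improvement step, following Equations (8) and (9) and the bounds handling of Algorithm 1, is given below; the zero-mean deviation term and greedy acceptance are our reading of the pseudocode, and the function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_walk_refine(fitness, g_best, walk, n_steps, lb, ub):
    """IWHO improvement step: perturb the leader's best-known position
    (Equations (8) and (9)) for a fixed number of random steps and keep the
    best position visited. `walk` controls the step size."""
    x0 = g_best.copy()                                 # Equation (8): start from gBest
    best, best_f = x0.copy(), fitness(x0)
    for _ in range(n_steps):
        x = x0 + walk * (0.5 - rng.random(x0.shape))   # Equation (9)
        x = np.clip(x, lb, ub)                         # keep the deviated point in bounds
        f = fitness(x)
        if f < best_f:                                 # greedy acceptance of improvements
            best, best_f = x.copy(), f
    return best, best_f

# Usage on a simple quadratic bowl:
g_best = np.array([0.3, -0.2])
refined, f = random_walk_refine(lambda x: float(np.sum(x ** 2)),
                                g_best, walk=0.1, n_steps=300,
                                lb=-100.0, ub=100.0)
```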

3. Multilayer Perceptron Architecture

The Multi-Layer Perceptron (MLP) is an artificial neural network architecture that serves as a nonparametric estimator, applicable to classification, prediction, and regression tasks. The brain is an information processing device with extraordinary capabilities, surpassing current technological products in various fields such as vision, speech recognition, and learning. These applications have clear economic advantages when implemented on machines. By understanding how the brain accomplishes these functions, we can develop formal algorithms to solve these tasks and implement them on computers. The human brain is fundamentally different from a computer. While a computer typically has a single processor, the brain consists of an enormous number of processing units, approximately $10^{11}$ neurons, which operate in parallel. Although the precise details are not fully understood, these processing units are thought to be much simpler and slower than a computer processor. Another distinguishing feature of the brain, believed to contribute to its computational power, is its extensive connectivity: neurons in the brain have connections, called synapses, to around $10^{4}$ other neurons, all functioning in parallel. In a computer, the processor is active and the memory is separate and passive; in the brain, by contrast, processing and memory are believed to be distributed throughout the network, with processing carried out by the neurons and memory residing in the synapses between them. An information processing system modeled on the brain requires a fundamental theory, an algorithm, and hardware: data input for the desired operation, an algorithm to perform the function, and the capability to execute commands on a specific hardware implementation. The training of neural networks, which realize a mathematical function, is made possible through statistical techniques. One of the architectures designed for this training is the MLP structure [48].
The aim of the perceptron is to correctly classify the set of externally applied stimuli $x_1, x_2, \ldots, x_m$ into one of two classes. In Figure 4, the bias is denoted by $b$; $x_1, x_2, \ldots, x_m$ are the input values; $w_1, w_2, \ldots, w_m$ are the connection (synaptic) weights; $v$ is the induced local field of the neuron; and $\varphi(v)$ is the transfer function. The equation $v = 0$ can also be viewed as defining a hyperplane.
$$v = \sum_{i=1}^{m} w_i x_i + b$$
We can express the output of the perceptron as a dot product by augmenting the input vector $x = \left[1, x_1, x_2, \ldots, x_m\right]^T$ and the weight vector $w = \left[b, w_1, w_2, \ldots, w_m\right]^T$ to include the bias input and the bias weight, respectively:
$$v = w^T x$$
During testing, the output y is calculated with the given weights w for input x. To implement a given task, it is necessary to learn the weights w, which are the parameters of the system, such that the correct outputs are produced given the inputs.
The hyperplane can be utilized to partition the input space into two regions: one where the values are positive and another where the values are negative. This concept is foundational in implementing a linear separation function. In Figure 5, the detector can then classify the input into one of two classes by examining the sign of the output: if the output is positive, it belongs to one class C 1 ; if negative, it belongs to the other class C 2 [49].
$$S\left(v\right) = \begin{cases} C_1, & v > 0 \\ C_2, & v < 0 \end{cases}$$
Here, $S(v)$ acts as a threshold function that assigns $x_i$ to $C_1$ or otherwise to $C_2$. To calculate the risk, we need an activation function producing the output $y$. The transfer function is the mathematical function that defines the properties of the network and is chosen according to the problem the network has to solve. The sigmoid was chosen as the transfer function in this paper because it is continuous and differentiable, with values in the interval [0, 1]. Here, the parameter $a$ is the slope of the transfer function.
$$y = sigmoid\left(v\right) = \frac{1}{1 + e^{-av}}$$
A perceptron with a single layer of weights can only approximate linear functions of the input and cannot solve problems like XOR, where the discriminant to be estimated is nonlinear. Similarly, a perceptron cannot be used for nonlinear regression. This limitation does not apply to feedforward networks with hidden layers between the input and output layers. When used for classification, such Multi-Layer Perceptrons can implement nonlinear discriminants, and when used for regression, they can approximate nonlinear functions of the input. Typically, the network consists of a set of source nodes that constitute the input layer, one or more hidden layers of computation nodes, and an output layer of computation nodes. These neural networks are called Multi-Layer Perceptrons (MLPs), representing a generalization of the single-layer perceptron. MLPs have been applied successfully to solve difficult and diverse problems by training them in a supervised manner with the widely used algorithm known as backpropagation (“backprop”). The learning process performed with this algorithm is called backpropagation learning. As seen in Figure 6, in an MLP, function signals flow in the forward propagation direction, while the error signal flows backward and is used to correct the network so as to minimize the error. An error signal begins at an output neuron of the network and travels backward through the network. Each neuron in the network computes a local quantity that depends on the error signal in some way [49,50].
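To make the forward-propagation pass concrete, the following Python sketch implements a one-hidden-layer MLP with sigmoid activations in the spirit of Equations (10) and (13); the dimensions mirror the 8-17-1 architecture used later in Section 5, and the random weights merely stand in for trained parameters.

```python
import numpy as np

def sigmoid(v, a=1.0):
    """Transfer function of Equation (13); `a` is the slope."""
    return 1.0 / (1.0 + np.exp(-a * v))

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    """Forward propagation through one hidden layer: each layer computes the
    induced local field v = w^T x + b (Equation (10)) and passes it through
    the sigmoid transfer function."""
    h = sigmoid(w_hidden @ x + b_hidden)   # hidden-layer activations
    return sigmoid(w_out @ h + b_out)      # network output y

# Toy dimensions: 8 inputs, 17 hidden nodes, 1 output (as in Section 5).
rng = np.random.default_rng(2)
x = rng.random(8)
y = mlp_forward(x,
                rng.normal(size=(17, 8)), rng.normal(size=17),
                rng.normal(size=(1, 17)), rng.normal(size=1))
```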
Learning is a process in which the free parameters of a neural network are adapted, through a process of stimulation by the environment in which the network is embedded, as shown in Figure 7. The type of learning is determined by the manner in which the parameter changes take place [50].
Error metrics serve as a quantitative measure of the deviation between predicted outcomes and actual values. While a single result may not yield extensive insights, it facilitates the selection of the most appropriate regression model by providing a numerical basis for comparison with other model outcomes. The supervisor can provide the neural network with the desired response for a specific training vector, representing the optimal action the network should undertake. The network parameters are adjusted under the combined influence of the training vector and the error signal, defined as the difference between the desired and actual responses of the network. This tuning process is conducted iteratively and incrementally, with the objective of ensuring that the neural network eventually emulates the supervisor, which is presumed to be optimal in a statistical sense. Through this process, the environmental information available to the supervisor is transferred as comprehensively as possible to the neural network during training. Once this condition is achieved, the supervisor can be dispensed with, allowing the neural network to independently interact with the environment. Error values can be calculated using metrics such as the mean squared error (MSE), root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and the coefficient of determination ($R^2$).
$$MSE = \frac{1}{N}\sum_{i=1}^{N}\left(P_i - O_i\right)^2$$
$$RMSE = \sqrt{MSE}$$
$$MAE = \frac{1}{N}\sum_{i=1}^{N}\left|P_i - O_i\right|$$
$$MAPE = \frac{MAE}{\text{average true value}} \times 100$$
$$R^2 = 1 - \frac{\sum_{i=1}^{N}\left(P_i - O_i\right)^2}{\sum_{i=1}^{N}\left(P_i - \bar{P}\right)^2}$$
In Equations (14)-(18), $N$ is the number of training samples, $i$ indexes the outputs, $P_i$ is the desired output, and $O_i$ is the actual output.
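These metrics translate directly into code; the sketch below implements Equations (14)-(18) as written, with P taken as the desired outputs and O as the actual outputs, and the example values purely illustrative.

```python
import numpy as np

def regression_metrics(p, o):
    """Error metrics of Equations (14)-(18); `p` are desired outputs,
    `o` are actual network outputs."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    mse = np.mean((p - o) ** 2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(p - o))
    mape = mae / np.mean(p) * 100.0   # relative to the average true value
    r2 = 1.0 - np.sum((p - o) ** 2) / np.sum((p - p.mean()) ** 2)
    return {"MSE": mse, "RMSE": rmse, "MAE": mae, "MAPE": mape, "R2": r2}

# Illustrative values only:
print(regression_metrics([15.5, 21.3, 28.2], [15.1, 22.0, 27.8]))
```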

4. Experimental Results

The main goal of this paper is to solve a set of global, non-parametric, standardized functions using the WHO algorithm hybridized with the Random Walk strategy (IWHO) and to compare the results with those of alternative competitive metaheuristic algorithms. A notation table summarizing all parameters of the algorithm is provided in Table 1.
The analysis tools used include statistical tables, boxplots, and convergence curves. The CEC 2019 functions are among the most challenging non-linear function sets due to their structure, which delays reaching the global solution, and they are widely accepted for evaluating algorithm quality. The primary rationale for selecting only 10 functions from the CEC 2019 set is that these functions are internationally recognized as a challenging and diverse benchmark. The CEC 2019 functions are designed to evaluate the effectiveness of optimization algorithms, encompassing a wide range of problem types and complexities. This variety is sufficient for an objective assessment of both the algorithms’ convergence speed and their ability to identify the global optimum. As indicated in Table 2, the selected 10 functions vary in dimension, range, and problem type, providing adequate challenge and diversity to thoroughly evaluate the algorithms’ performance. These functions constitute the core and most frequently cited subset of the CEC 2019 benchmark. The experiments were run on the specified hardware (Intel Core i7-10700K CPU, 32 GB RAM) with MATLAB R2021a and parallel processing enabled. For IWHO, 40 search agents, 500 iterations, and 300 Random Walk steps were chosen. Each function was tested in 51 independent runs, a number that ensures the results are objective and reliable.
Table 2 presents the features of the CEC 2019 functions. In Table 3, the mean, best value, worst value, and standard deviation are used as statistical measures to evaluate the quality of the values obtained by the algorithms.
When examining Table 3, the proposed IWHO algorithm achieves the best value in all functions except CEC03 and CEC04. For CEC03, all algorithms yield the same result with different standard deviations. For CEC04, WHO performs very well, and the proposed IWHO algorithm ranks second.
The IWHO algorithm proposed in this paper is compared with the WHOFWA and WHOW algorithms, which were previously proposed as variants of WHO. It is also compared with the RW-DO algorithm, which employs the same random walk strategy used in IWHO. As shown in Table 4, the IWHO algorithm achieves superior performance on seven functions.
In Table 5, the nonparametric Wilcoxon test was used to evaluate whether the proposed IWHO algorithm produces significantly different results from the other algorithms, thereby assessing its distinctiveness. In this context, W (Win) represents a victory, T (Tie) a draw, and L (Lose) a defeat. The confidence level was set at p = 0.05. From the results, it can be concluded that the proposed IWHO algorithm is unique and original, as it produced significantly different results with 24 wins, 1 tie, and 5 losses out of 30 comparisons.
When Table 5 is analyzed, the Wilcoxon signed-rank test was employed to assess whether there was a statistically significant difference between the performance of two algorithms. In Table 5, W/T/L stand for ‘Win’, ‘Tie’, and ‘Lose’, respectively, reflecting the status of the IWHO algorithm in comparison with each alternative; a p-value under 0.05 was deemed to indicate statistical significance. The IWHO algorithm demonstrated notable superiority (W) over the CAPSA and HHO algorithms on almost all functions. When compared with the WHO algorithm, it was superior on some functions and either matched or fell short on others. Remarkably, against CAPSA and HHO, IWHO excelled on nine or all ten functions. The p-values were mostly very small (e.g., 5.15 × 10−10), indicating that the observed differences are unlikely to have occurred by chance and that IWHO’s superiority is statistically sound. Overall, the results of the Wilcoxon signed-rank test reveal that the IWHO algorithm is statistically significantly superior on nearly all CEC 2019 functions, especially when compared with the CAPSA and HHO algorithms. Although it surpassed the WHO algorithm on certain functions, it showed similar or lower performance on others. These findings imply that IWHO is generally a more effective and dependable optimization algorithm than its competitors.
A Bonferroni correction was also applied. Taking into account the evaluation ($p\text{-values} < 0.05/30$), all elements of the h vector were 1. This result shows that the differences are significant according to the Bonferroni-corrected test.
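As an illustration of how such a comparison can be computed, the following sketch applies SciPy’s Wilcoxon signed-rank test to two hypothetical vectors of 51 run results and checks significance against the Bonferroni-corrected threshold 0.05/30; the function and variable names are our own.

```python
import numpy as np
from scipy.stats import wilcoxon

def compare_runs(res_iwho, res_other, n_comparisons=30, alpha=0.05):
    """Paired Wilcoxon signed-rank test over the 51 independent run results
    of two algorithms on one function, with the Bonferroni-corrected
    significance threshold alpha / n_comparisons."""
    stat, p = wilcoxon(res_iwho, res_other)
    significant = p < alpha / n_comparisons          # Bonferroni correction
    win = np.median(res_iwho) < np.median(res_other)  # lower fitness is better
    return p, significant, ("W" if win else "L")

# Hypothetical run results for one CEC 2019 function:
rng = np.random.default_rng(3)
p, h, wtl = compare_runs(rng.normal(1.0, 0.1, 51), rng.normal(1.4, 0.2, 51))
print(p, h, wtl)
```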
This paper also performs a sensitivity analysis to determine suitable parameter values for the proposed algorithm. For this purpose, the CEC 2019 benchmark, consisting of ten distinct functions, was chosen. The parameter C was tested at values of 0.12, 0.13, and 0.14, while the parameter ps was held constant at 0.2. The effects of these adjustments on the IWHO algorithm were evaluated. Although the results for CEC06 did not produce the best standard deviation, the outcomes for all other functions were favorable. These results are detailed in Table 6 and Table 7.
Upon reviewing Figure 8, it becomes evident that the values produced by the proposed IWHO algorithm are more closely clustered across all functions, making it the algorithm nearest to the optimal point compared to others. The whiskers and boxes for the CEC01 function in both the WHO and IWHO algorithms are notably lower and shorter than those of HHO and CapSA. This indicates that WHO and IWHO achieve superior (lower MSE) and more consistent results. The lowest whisker point suggests that these algorithms may be closer to, or have reached, the global optimum. For the CEC02 function, WHO, IWHO, and CapSA show very short whiskers and low boxes, while HHO is significantly higher and more variable. It is likely that WHO and IWHO have reached the global optimum. Similarly, for the CEC03 function, the whiskers for WHO and IWHO are the lowest, whereas HHO is considerably higher, further indicating that WHO and IWHO are closer to the global optimum. Across functions in the range CEC04-CEC10, WHO and IWHO display the lowest whisker values, suggesting that these algorithms yield superior results (closer to the global optimum) compared to the others. Although a few outlier values are observed in the CEC01, CEC03, CEC04, and CEC10 functions, the consistency of results from fifty-one independent runs highlights the algorithm’s stable structure. Consequently, it can be concluded that the global solutions of this algorithm are within the best limits and closer to the optimal global solution.
As seen in the box plot model in Figure 9, the whiskers extend from the box to the minimum and maximum non-outlier values, revealing the spread of most of the data. The box itself represents the interquartile range (IQR), which encompasses the middle 50% of the data, from the first quartile (Q1) to the third quartile (Q3). The thick line within the box signifies the median, or 50th percentile, of the data. Any data point outside the whiskers is considered an outlier and is depicted as a red dot. The lowest value reached by the whisker, or the lowest outlier, might represent the optimal value (global optimum) identified by an optimization algorithm, provided it corresponds to the true minimum of the function.
Convergence curves show how the algorithms approach the optimal value over 500 iterations, whether they become stuck at local minimum points, and whether they are subject to premature convergence. In this context, Figure 10 shows that the proposed IWHO algorithm progresses gradually in all functions without early convergence. The observation that the IWHO algorithm does not stall over any extended range of iterations, but instead continually moves toward better solutions, can be taken as evidence that it reaches the global solution without getting stuck at local optima. As a result, the IWHO algorithm converges toward the optimal point across the entire function set.

5. MLP Training with the Proposed IWHO Algorithm

Solving the Energy Efficiency Problem with IWHO-Based MLP Training

The main aim of this chapter is to optimize the efficiency of energy load factors in heating, ventilation, and air conditioning (HVAC) systems used in smart buildings. Much of the energy used in buildings is consumed by HVAC systems. The HVAC system is designed to create suitable indoor air conditions by heating, cooling, and ventilating the air inside the building, and this process depends on the building’s heating load (HL) and cooling load (CL) factors. Accurate estimation of the HL and CL factors in buildings is important for reducing energy consumption based on occupancy, managing changes in building performance according to energy demands, reducing harmful gas emissions, and lowering costs. The necessary cooling and heating capacities are estimated from fundamental factors such as building characteristics, usage, and climate conditions. The optimal design of the HVAC system plays a critical role in ensuring energy savings. In this context, the interior design of buildings is important not only for saving energy but also for human health. To achieve energy savings, HVAC systems, which are generally preferred as active systems, are used. To maximize efficiency and energy savings, HVAC systems should be designed according to the building’s climate conditions [52,53,54,55].
The MLP (Multi-Layer Perceptron) architecture is designed according to the dimensions of the selected problem. In this article, the Energy Efficiency problem dataset has 8 features, so the input signal of the network consists of eight features: Relative Compactness, Surface Area, Wall Area, Roof Area, Overall Height, Orientation, Glazing Area, and Glazing Area Distribution. The output signals are defined as Heating Load and Cooling Load. The number of nodes in the hidden layer, located between the input and output layers, is 17, one more than twice the 8 nodes in the input layer. This choice is a heuristic selection found to be appropriate by the authors.
Figure 11 shows the simulation of the MLP architecture created for the Energy Efficiency Problem dataset.
In the architecture shown in Figure 11, MSE is used as the error measure and the sigmoid function as the transfer function. Since the cooling load and the heating load each define a separate problem domain, a single output node is used and the network is trained separately for each target. Both WHO and the proposed IWHO hybrid algorithm were selected as advisors, and MLP training was performed. The results of the MLP training were statistically evaluated. While the average running time of WHO over 10 independent runs is 0.0087, that of IWHO is 0.0107. Although the parameters added for the hybrid algorithm result in an average delay of 22.98%, the optimization processes of both algorithms remain fast given the short time spent on each iteration. The comparative results of WHO and IWHO for the cooling load and heating load are shown in Table 8.
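To clarify how a metaheuristic acts as an MLP advisor, the sketch below encodes all network weights and biases in a flat vector (the search agent’s position) and returns the training MSE as the fitness value handed to WHO/IWHO. This is a minimal illustration of the setup, assuming a flat encoding and normalized targets; the authors’ exact encoding may differ (the complexity analysis below uses d = 162).

```python
import numpy as np

N_IN, N_HID, N_OUT = 8, 17, 1   # architecture of Figure 11

def decode(theta):
    """Unpack a flat parameter vector (the search agent's position)
    into the MLP's weights and biases."""
    i = 0
    w1 = theta[i:i + N_HID * N_IN].reshape(N_HID, N_IN); i += N_HID * N_IN
    b1 = theta[i:i + N_HID]; i += N_HID
    w2 = theta[i:i + N_OUT * N_HID].reshape(N_OUT, N_HID); i += N_OUT * N_HID
    b2 = theta[i:i + N_OUT]
    return w1, b1, w2, b2

def mse_fitness(theta, X, y):
    """Fitness function handed to the optimizer: the MSE of the MLP whose
    parameters are encoded in `theta` over the training data."""
    w1, b1, w2, b2 = decode(theta)
    h = 1.0 / (1.0 + np.exp(-(X @ w1.T + b1)))    # hidden layer (sigmoid)
    out = 1.0 / (1.0 + np.exp(-(h @ w2.T + b2)))  # output layer (sigmoid)
    return float(np.mean((y - out.ravel()) ** 2))

dim = N_HID * N_IN + N_HID + N_OUT * N_HID + N_OUT  # flat-vector length
rng = np.random.default_rng(4)
theta0 = rng.normal(size=dim)
X_train = rng.random((10, N_IN))   # stand-in for normalized features
y_train = rng.random(10)           # stand-in for normalized CL or HL targets
print(mse_fitness(theta0, X_train, y_train))
```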
The IWHO algorithm exhibits strong regression performance, evidenced by low error metrics (MSE, RMSE, MAE) and significant explanatory power (R2) for both Cooling and Heating Load. MAPE values under 2% indicate that the model’s predictions are closely aligned with the actual values. An R2 value above 0.98 suggests that the model accounts for nearly all the variance in the data.
As a second analysis, when examining the convergence curves of WHO and IWHO for the average MSE, as shown in Figure 12, the MSE values of the IWHO algorithm for CL were lower than those of WHO in 6 of the runs and were also more consistent. Similarly, for HL, the MSE values of the IWHO algorithm were closer together and more stable than those of the WHO algorithm. The comparative convergence curves of WHO and IWHO for the cooling load and heating load are shown in Figure 13.
As a third analysis, when the WHO and IWHO algorithms are compared in terms of prediction rates, the average prediction value of the WHO algorithm for the CL factor is 99.6980 ± 1.4980 × 10−14; for IWHO, this value is 99.7010 ± 1.1751 × 10−14. For the HL factor, the average prediction value of the WHO algorithm is 100; for IWHO, this value is also 100.
As a fourth analysis, the computational complexity of the algorithm is discussed using Big-O notation, together with a plot of runtime against population size and iterations; the hardware (CPU) and MATLAB configuration used were specified above. The computational complexity of the IWHO algorithm introduced in this study is influenced by several parameters: the swarm size (N), the maximum number of iterations (t), the number of random walk steps (s), and the problem size (d). In each iteration, position updates, random walks, and fitness evaluations are performed for each member of the population. These operations can be expressed in Big-O notation as O(t × N × (s + d)). The complexity can be evaluated by varying t and N while holding s = 300 and d = 162 constant. Table 9 presents the results of this analysis, and Figure 14 shows the computational complexity of IWHO.
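Under this model, the operation count scales linearly in both t and N, as the toy calculation below illustrates using the constants s = 300 and d = 162 from the text.

```python
def iwho_ops(t, n, s=300, d=162):
    """Rough operation count O(t * N * (s + d)): per iteration, each of the
    N agents incurs a position update of dimension d plus s random walk
    evaluations."""
    return t * n * (s + d)

# Doubling either t or N doubles the count, i.e., linear scaling in both.
for n in (20, 40, 80):
    print(n, iwho_ops(t=500, n=n))
```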
This analysis demonstrates that the computational complexity of the IWHO algorithm remains within acceptable bounds and is comparable to that of similar meta-heuristic optimization algorithms. A notation table summarizing all parameters of the MLP architecture is provided in Table 10.

6. Conclusions

In this research, the suboptimal performance of the WHO algorithm during the exploitation phase is tackled by incorporating a Random Walk strategy to avoid stagnation. The experimental results of the proposed IWHO algorithm show notable enhancements. Analysis of both box plots and convergence curves demonstrates stable convergence, which is due to a well-balanced exploitation-exploration dynamic. The Wilcoxon signed-rank nonparametric test further supports the claim that the IWHO algorithm is both unique and robust. The success of the IWHO algorithm also highlights its inherent flexibility, making it suitable for hybridization. This adaptability implies that various strategies can be applied to the WHO algorithm, potentially overcoming its local search limitations through diverse hybridization. Moreover, the IWHO algorithm is considered applicable to optimization challenges such as engineering design problems and artificial neural network training in future studies. The IWHO-based MLP architecture is particularly relevant in the current era, where efficient energy use is crucial. When assessed in the context of MLP training, the IWHO algorithm effectively addressed certain shortcomings of the WHO algorithm through the Random Walk strategy, completing the optimization process in an optimal timeframe with a low error rate and a high prediction rate. Additionally, the successful resolution of such nonlinear problems via MLP training is expected to contribute to the scientific community’s efforts to optimize real-world problems.
The paper contains several limitations that can be addressed through a variety of studies, allowing for multiple investigations to be conducted. To effectively evaluate the Improved Wild Horse Optimizer (IWHO) against the Wild Horse Optimizer (WHO) and the limited set of other algorithms considered, it would be beneficial to include a wider array of algorithms in the comparison. The Random Walk strategy in IWHO presents a potential risk of overfitting the CEC 2019 test functions; however, the exploratory characteristics of the CEC 2019 test set help to mitigate this risk. Despite reasonable optimization times, IWHO experiences a 22.98% increase in runtime compared to WHO, which could be a concern for applications with strict time constraints. The effectiveness of the Random Walk strategy is heavily reliant on parameter selection, a challenge that can be addressed through parameter sensitivity testing. Additionally, exploring different domains and hybridizations can help reduce parameter sensitivity. Currently, the algorithm’s application is restricted to an energy efficiency problem, which does not fully demonstrate its versatility, even though the study primarily targets HVAC systems. While the Random Walk strategy is intended to avoid local optima, its performance in complex optimization landscapes remains uncertain and requires further investigation. Moreover, this paper did not perform k-fold cross-validation or conduct a thorough comparison of various optimizers due to computational constraints. Although these potential limitations do not compromise the study’s conclusions, they highlight areas where further research and analysis could enhance the proposed IWHO algorithm and its applications.
The proposed IWHO algorithm is anticipated to guide future research endeavors aimed at solving various engineering challenges.

Author Contributions

Data curation, Ş.G.; Writing—original draft, Ş.G.; Writing—review & editing, E.E.; Visualization, Ş.G.; Supervision, N.Y.; Project administration, N.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article, further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Izci, D. An Enhanced Slime Mould Algorithm for Function optimization. In Proceedings of the 2021 3rd International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), Ankara, Turkey, 11–13 June 2021; pp. 1–5. [Google Scholar]
  2. Eker, E.; Kayri, M.; Ekinci, S.; Izci, D. A New Fusion of ASO with SA Algorithm and Its Applications to MLP Training and DC Motor Speed Control. Arab. J. Sci. Eng. 2021, 46, 3889–3911. [Google Scholar] [CrossRef]
  3. Rizk-Allah, R.M. Hybridizing sine cosine algorithm with multi-orthogonal search strategy for engineering design problems. J. Comput. Des. Eng. 2017, 5, 249–273. [Google Scholar] [CrossRef]
  4. Rao, S.S.; Desai, R.C. Optimization Theory and Applications. IEEE Trans. Syst. Man Cybern. 1980, 10, 280. [Google Scholar] [CrossRef]
  5. Uryasev, S.; Pardalos, P.M. Stochastic Optimization: Algorithms and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  6. Antoniou, A.; Lu, W.-S. Practical Optimization: Algorithms and Engineering Applications; Springer: New York, NY, USA, 2021. [Google Scholar] [CrossRef]
  7. Zhao, W.; Wang, L.; Zhang, Z. Artificial ecosystem-based optimization: A novel nature-inspired meta-heuristic algorithm. Neural Comput. Appl. 2020, 32, 9383–9425. [Google Scholar] [CrossRef]
  8. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Futur. Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  9. Adam, S.P.; Alexandropoulos, S.-A.N.; Pardalos, P.M.; Vrahatis, M.N. No Free Lunch Theorem: A Review. In Approximation and Optimization; Springer: Cham, Switzerland, 2019; pp. 57–82. [Google Scholar] [CrossRef]
  10. Mirjalili, S.M. Evolutionary Algorithms and Neural Networks—Theory and Applications; Studies in Computational Intelligence Series; Springer: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
  11. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  12. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  13. Yang, Y.; Chen, H.; Heidari, A.A.; Gandomi, A.H. Hunger games search: Visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Syst. Appl. 2021, 177, 114864. [Google Scholar] [CrossRef]
  14. Agushaka, J.O.; Ezugwu, A.E.; Abualigah, L.; Alharbi, S.K.; Khalifa, H.A.E.-W. Efficient Initialization Methods for Population-Based Metaheuristic Algorithms: A Comparative Study. Arch. Comput. Methods Eng. 2022, 30, 1727–1787. [Google Scholar] [CrossRef]
  15. Lagaros, N.D.; Plevris, V.; Kallioras, N.A. The Mosaic of Metaheuristic Algorithms in Structural Optimization. Arch. Comput. Methods Eng. 2022, 29, 5457–5492. [Google Scholar] [CrossRef]
  16. Farag, A.A.; Ali, Z.M.; Zaki, A.M.; Rizk, F.H.; Eid, M.M.; EL-Kenawy, E.S.M. Exploring Optimization Algorithms: A Review of Methods and Applications. J. Artif. Intell. Metaheuristics 2024, 7, 8–17. [Google Scholar] [CrossRef]
  17. Igel, C. No Free Lunch Theorems: Limitations and Perspectives of Metaheuristics; Springer: Berlin/Heidelberg, Germany, 2014; pp. 1–23. [Google Scholar] [CrossRef]
  18. Nenavath, H.; Jatoth, R.K. Hybridizing sine cosine algorithm with differential evolution for global optimization and object tracking. Appl. Soft Comput. 2018, 62, 1019–1043. [Google Scholar] [CrossRef]
  19. Ting, T.O.; Yang, X.; Cheng, S.; Huang, K. Hybrid Metaheuristic Algorithms: Past, Present, and Future. In Recent Advances in Swarm Intelligence and Evolutionary Computation; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
  20. Fakhouri, H.N.; Hudaib, A.; Sleit, A. Hybrid Particle Swarm Optimization with Sine Cosine Algorithm and Nelder–Mead Simplex for Solving Engineering Design Problems. Arab. J. Sci. Eng. 2020, 45, 3091–3109. [Google Scholar] [CrossRef]
  21. Tomar, V.; Bansal, M.; Singh, P. Metaheuristic Algorithms for Optimization: A Brief Review. Eng. Proc. 2024, 59, 238. [Google Scholar] [CrossRef]
  22. Bouaouda, A.; Sayouti, Y. Hybrid Meta-Heuristic Algorithms for Optimal Sizing of Hybrid Renewable Energy System: A Review of the State-of-the-Art. Arch. Comput. Methods Eng. 2022, 29, 4049–4083. [Google Scholar] [CrossRef]
  23. Güven, A.F.; Samy, M.M. Performance analysis of autonomous green energy system based on multi and hybrid metaheuristic optimization approaches. Energy Convers. Manag. 2022, 269, 116058. [Google Scholar] [CrossRef]
  24. Mehta, P.; Yildiz, B.S.; Sait, S.M.; Yildiz, A.R. Hunger games search algorithm for global optimization of engineering design problems. Mater. Test. 2022, 64, 524–532. [Google Scholar] [CrossRef]
  25. Eker, E.; Atar, Ş.; Şevgin, F.; Tuğal, İ. Optimization of Non-Linear Problems Using Salp Swarm Algorithm and Solving the Energy Efficiency Problem of Buildings with Salp Swarm Algorithm-based Multi-Layer Perceptron Algorithm. Electrica 2024, 24, 436–449. [Google Scholar] [CrossRef]
  26. Karimi-Mamaghan, M.; Mohammadi, M.; Meyer, P.; Karimi-Mamaghan, A.M.; Talbi, E.-G. Machine learning at the service of meta-heuristics for solving combinatorial optimization problems: A state-of-the-art. Eur. J. Oper. Res. 2022, 296, 393–422. [Google Scholar] [CrossRef]
  27. Ponmalar, A.; Vijayakumar, K.; Lakshmipriya, C.; Karthikeyan, M.; Gracelin Sheena, B.; Beslin Pajila, P.J.; Siva Subramanian, R. Meta-Heuristics and Machine Learning Applications in Complex Systems. In Metaheuristic and Machine Learning Optimization Strategies for Complex Systems; IGI Global Scientific Publishing: Hershey, PA, USA, 2024; pp. 257–275. [Google Scholar] [CrossRef]
  28. Yousaf, S.; Bradshaw, C.R.; Kamalapurkar, R.; San, O. A gray-box model for unitary air conditioners developed with symbolic regression. Int. J. Refrig. 2024, 168, 696–707. [Google Scholar] [CrossRef]
  29. Zheng, R.; Hussien, A.G.; Jia, H.-M.; Abualigah, L.; Wang, S.; Wu, D. An Improved Wild Horse Optimizer for Solving Optimization Problems. Mathematics 2022, 10, 1311. [Google Scholar] [CrossRef]
  30. Li, Y.; Yuan, Q.; Han, M.; Cui, R. Hybrid Multi-Strategy Improved Wild Horse Optimizer. Adv. Intell. Syst. 2022, 4, 2200097. [Google Scholar] [CrossRef]
  31. Li, M.-W.; Wang, Y.-T.; Yang, Z.-Y.; Huang, H.-P.; Hong, W.-C.; Li, X.-Y. CQND-WHO: Chaotic quantum nonlinear differential wild horse optimizer. Nonlinear Dyn. 2024, 112, 4899–4927. [Google Scholar] [CrossRef]
  32. Mahmoud, M.M. Improved current control loops in wind side converter with the support of wild horse optimizer for enhancing the dynamic performance of PMSG-based wind generation system. Int. J. Model. Simul. 2022, 43, 952–966. [Google Scholar] [CrossRef]
  33. Ganesan, P.; Xavier, S.A.E. A Hybrid Wild Horses Optimization (WHO) and Dwarf Mongoose Optimization (DMO) method for optimum energy management in SG system. Optim. Control. Appl. Methods 2024, 45, 1925–1949. [Google Scholar] [CrossRef]
  34. Milovanović, M.; Klimenta, D.; Panić, M.; Klimenta, J.; Perović, B. An application of Wild Horse Optimizer to multi-objective energy management in a micro-grid. Electr. Eng. 2022, 104, 4521–4541. [Google Scholar] [CrossRef]
  35. Ewees, A.A.; Ismail, F.H.; Ghoniem, R.M. Wild Horse Optimizer-Based Spiral Updating for Feature Selection. IEEE Access 2022, 10, 106258–106274. [Google Scholar] [CrossRef]
  36. Peng, F.; Zheng, L. An Improved Multi-Objective Wild Horse Optimization for the Dual-Resource-Constrained Flexible Job Shop Scheduling Problem: A Comparative Analysis with NSGA-II and a Real Case Study. Available online: https://www.researchgate.net/publication/376330051_An_improved_multi-objective_Wild_Horse_optimization_for_the_dual-resource-constrained_flexible_job_shop_scheduling_problem_A_comparative_analysis_with_NSGA-II_and_a_real_case_study (accessed on 28 May 2025).
  37. Bujok, P.; Zamuda, A. Cooperative Model of Evolutionary Algorithms Applied to CEC 2019 Single Objective Numerical Optimization. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 10–13 June 2019; pp. 366–371. [Google Scholar]
  38. Gupta, S.; Deep, K. A novel Random Walk Grey Wolf Optimizer. Swarm Evol. Comput. 2019, 44, 101–112. [Google Scholar] [CrossRef]
  39. Yin, Y.; Tu, Q.; Chen, X. Enhanced Salp Swarm Algorithm based on random walk and its application to training feedforward neural networks. Soft Comput. 2020, 24, 14791–14807. [Google Scholar] [CrossRef]
  40. Eker, E.; Izci, D.; Ekinci, S.; Migdady, H.; Abu Zitar, R.; Abualigah, L. Efficient voltage regulation: An RW-ARO optimized cascaded controller approach. e-Prime 2024, 9, 100687. [Google Scholar] [CrossRef]
  41. Tsanas, A.; Xifara, A. Energy Efficiency. UCI Machine Learning Repository. 2012. Available online: https://doi.org/10.24432/C51307 (accessed on 28 May 2025).
  42. Stevanović, S.; Dashti, H.; Milošević, M.; Al-Yakoob, S.; Stevanović, D. Comparison of ANN and XGBoost surrogate models trained on small numbers of building energy simulations. PLoS ONE 2024, 19, e0312573. [Google Scholar] [CrossRef]
  43. Hussain, I.; Ching, K.B.; Uttraphan, C.; Tay, K.G.; Noor, A. Evaluating machine learning algorithms for energy consumption prediction in electric vehicles: A comparative study. Sci. Rep. 2025, 15, 16124. [Google Scholar] [CrossRef]
  44. Khishe, M.; Mohammadi, H. Passive sonar target classification using multi-layer perceptron trained by salp swarm algorithm. Ocean Eng. 2019, 181, 98–108. [Google Scholar] [CrossRef]
  45. He, S.; Li, Q.; Khishe, M.; Mohammed, A.S.; Mohammadi, H.; Mohammadi, M. The optimization of nodes clustering and multi-hop routing protocol using hierarchical chimp optimization for sustainable energy efficient underwater wireless sensor networks. Wirel. Netw. 2024, 30, 233–252. [Google Scholar] [CrossRef]
  46. Eker, E. Development of Random Walks Strategy based Dandelion Optimizer and Its Application to Engineering Design Problems. IEEE Access 2025, 13, 56547–56575. [Google Scholar] [CrossRef]
  47. Nosrati, L.; Bidgoli, A.M.; Javadi, H.H.S. Identifying People’s Faces in Smart Banking Systems Using Artificial Neural Networks. Int. J. Comput. Intell. Syst. 2024, 17, 9. [Google Scholar] [CrossRef]
  48. Alpaydin, E. Machine Learning Textbook: Introduction to Machine Learning (Ethem ALPAYDIN). Available online: https://www.cmpe.boun.edu.tr/~ethem/i2ml/ (accessed on 28 May 2025).
  49. Haykin, S.S. Neural Networks: A Comprehensive Foundation; Prentice Hall: Hoboken, NJ, USA, 1999. [Google Scholar]
  50. Adaptive, Learning, and Pattern Recognition Systems: Theory and Applications. Available online: https://shop.elsevier.com/books/adaptive-learning-and-pattern-recognition-systems-theory-and-applications/mendel/978-0-12-490750-8 (accessed on 28 May 2025).
  51. Rouhi, A.; Pira, E. WHOFWA: An effective hybrid metaheuristic algorithm based on wild horse optimizer and fireworks algorithm. J. Electr. Comput. Eng. Innov. JECEI 2024, 12, 319–342. [Google Scholar] [CrossRef]
  52. Yussuf, R.O.; Asfour, O.S. Applications of artificial intelligence for energy efficiency throughout the building lifecycle: An overview. Energy Build. 2024, 305, 113903. [Google Scholar] [CrossRef]
  53. Prediction of Residential Building Energy Efficiency Performance Using Deep Neural Network. Available online: https://www.researchgate.net/publication/358929639_Prediction_of_Residential_Building_Energy_Efficiency_Performance_using_Deep_Neural_Network (accessed on 28 May 2025).
  54. McQuiston, F.C.; Parker, J.D.; Spitler, J.D. Heating, Ventilating and Air Conditioning Analysis and Design, 6th ed.; John Wiley & Sons Inc.: Hoboken, NJ, USA, 2004. [Google Scholar]
  55. Bui, D.T.; Moayedi, H.; Anastasios, D.; Foong, L.K. Predicting Heating and Cooling Loads in Energy-Efficient Buildings Using Two Hybrid Intelligent Models. Appl. Sci. 2019, 9, 3543. [Google Scholar] [CrossRef]
Figure 1. The process of optimization.
Figure 2. Hierarchy of wild horse swarms.
Figure 3. Workflow of IWHO.
Figure 4. Sample of perceptron.
Figure 5. (a) A pair of linearly separable patterns; (b) a pair of non-linearly separable patterns [49].
Figure 6. Directions of function and error signals.
Figure 7. MLP architecture.
Figure 8. Boxplot view of the algorithms.
Figure 9. Demonstration of the boxplot model.
Figure 10. Convergence curves of the algorithms.
Figure 11. MLP architecture of the Energy Efficiency Problem dataset.
Figure 12. Mean MSE.
Figure 13. Average processing time.
Figure 14. Computational complexity of IWHO.
Table 1. Algorithm parameters.

Parameter | Statement | Value
N | Number of search agents | 40
t_max | Maximum number of iterations | 500
d | Problem dimension | 9, 10, 16, 18 (see CEC 2019 functions)
lb, ub | Lower and upper bounds | [−100, 100]
r | Number of random walk steps | 300
x_i | Position of the i-th horse/agent | –
x_{i,j} | Position of the j-th member in group i | –
x_stallion | Position of the group leader (stallion) | –
z | Self-adaptive parameter | z = 2 × (1 − (t/t_max)^2)
R | Random value in update equations | R ∈ (−2, 2)
P | Random vector | P ∈ (0, 1)
PC | Crossover percentage | Adjustable
Hardware | CPU used for experiments | Intel Core i7-10700K, 32 GB RAM
Software | MATLAB version | R2021a (parallel processing enabled)
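The self-adaptive parameter z in Table 1 can be checked with a few lines of Python: as the snippet below shows, z decays quadratically from 2 toward 0 over the run, shrinking step sizes as the search moves from exploration to exploitation. The printed sample points are for orientation only.

```python
import numpy as np

t_max = 500                                # maximum iterations, from Table 1
t = np.arange(1, t_max + 1)
z = 2.0 * (1.0 - (t / t_max) ** 2)         # self-adaptive parameter from Table 1

print(z[0], z[t_max // 2 - 1], z[-1])      # ~2.0 at the start, 1.5 midway, 0.0 at the end
```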
Table 2. Set of CEC 2019 benchmark functions.

Function | Dimension | Bounds | Fitting Value
CEC01: Storn's Chebyshev Polynomial Fitting Problem | 9 | [−8192, 8192] | 1
CEC02: Inverse Hilbert Matrix Problem | 16 | [−16,384, 16,384] | 1
CEC03: Lennard–Jones Minimum Energy Cluster | 18 | [−4, 4] | 1
CEC04: Rastrigin's Function | 10 | [−100, 100] | 1
CEC05: Griewangk's Function | 10 | [−100, 100] | 1
CEC06: Weierstrass Function | 10 | [−100, 100] | 1
CEC07: Modified Schwefel's Function | 10 | [−100, 100] | 1
CEC08: Expanded Schafer's F6 Function | 10 | [−100, 100] | 1
CEC09: Happy Cat Function | 10 | [−100, 100] | 1
CEC10: Ackley Function | 10 | [−100, 100] | 1
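For orientation, the snippet below implements the classical Ackley function that underlies CEC10. Note that the actual CEC 2019 suite shifts and rotates each function and offsets its optimum so that the best attainable fitness is 1; this textbook form is a simplification for readers unfamiliar with the benchmarks.

```python
import numpy as np

def ackley(x, a=20.0, b=0.2, c=2.0 * np.pi):
    """Classical Ackley function (the base of CEC10), minimized at x = 0."""
    x = np.asarray(x, dtype=float)
    d = x.size
    return (-a * np.exp(-b * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(c * x)) / d) + a + np.e)

print(ackley(np.zeros(10)))   # 0 (up to floating-point error) at the global optimum
```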
Table 3. Performance results of functions.

Func. | Metric | WHO | IWHO | CapSA | HHO
CEC01 | Mean | 5.34 × 10^4 | 4.21 × 10^4 | 5.01 × 10^4 | 5.11 × 10^4
 | Std | 4.62 × 10^4 | 4.17 × 10^4 | 5.39 × 10^4 | 6.15 × 10^3
 | Best | 3.54 × 10^4 | 3.15 × 10^4 | 3.59 × 10^4 | 4.28 × 10^4
 | Worst | 3.60 × 10^5 | 3.34 × 10^5 | 3.18 × 10^5 | 7.30 × 10^4
CEC02 | Mean | 17.3 | 17.3 | 17.3 | 17.4
 | Std | 2.15 × 10^−14 | 2.15 × 10^−14 | 8.80 × 10^−5 | 9.99 × 10^−3
 | Best | 17.3 | 17.3 | 17.3 | 17.3
 | Worst | 17.3 | 17.3 | 17.3 | 17.4
CEC03 | Mean | 12.7 | 12.7 | 12.7 | 12.7
 | Std | 3.15 × 10^−6 | 2.36 × 10^−7 | 2.21 × 10^−17 | 6.49 × 10^−6
 | Best | 12.7 | 12.7 | 12.7 | 12.7
 | Worst | 12.7 | 12.7 | 12.7 | 12.7
CEC04 | Mean | 26.2 | 28.7 | 41.2 | 180
 | Std | 15.7 | 23.6 | 21 | 62.9
 | Best | 5.97 | 8.95 | 8.96 | 86
 | Worst | 86.6 | 124 | 105 | 380
CEC05 | Mean | 1.07 | 1.05 | 1.19 | 2.45
 | Std | 0.071 | 0.029 | 0.108 | 0.515
 | Best | 1.01 | 1.00 | 1.02 | 1.27
 | Worst | 1.49 | 1.14 | 1.59 | 3.86
CEC06 | Mean | 5.60 | 3.60 | 7.64 | 9.03
 | Std | 0.928 | 0.987 | 1.23 | 1.11
 | Best | 3.53 | 1.53 | 4.66 | 5.75
 | Worst | 7.37 | 6.02 | 10.3 | 11.1
CEC07 | Mean | 122 | 60.1 | 463 | 361
 | Std | 125 | 126 | 307 | 178
 | Best | −205 | −253 | −166 | 182
 | Worst | 369 | 273 | 1000 | 939
CEC08 | Mean | 4.74 | 4.33 | 5.39 | 5.77
 | Std | 0.643 | 0.706 | 84.9 | 0.484
 | Best | 3.33 | 2.90 | 3.06 | 4.59
 | Worst | 5.88 | 5.70 | 6.84 | 6.76
CEC09 | Mean | 2.45 | 2.38 | 2.53 | 3.15
 | Std | 0.129 | 0.0319 | 0.160 | 0.449
 | Best | 2.35 | 2.34 | 2.35 | 2.48
 | Worst | 3.10 | 2.48 | 2.98 | 4.64
CEC10 | Mean | 20 | 19.6 | 20.1 | 20.2
 | Std | 0.0091 | 2.64 | 0.0995 | 0.0952
 | Best | 20 | 1.16 | 20 | 20
 | Worst | 20 | 20 | 20.4 | 20.4
Table 4. Comparison of IWHO with other hybrid algorithms.

Func. | Metric | IWHO | WHOFWA [51] | WHOW [35] | RW-DO [45]
CEC01 | Mean | 4.21 × 10^4 | 4.76 × 10^4 | 7.59 × 10^4 | 1.01 × 10^12
 | Std | 4.17 × 10^4 | 47.1 | 4.57 × 10^4 | 5.48 × 10^12
CEC02 | Mean | 17.3 | 17.3 | 17.3 | 754
 | Std | 2.15 × 10^−14 | 2.09 × 10^−5 | 8.31 × 10^−14 | 4.03 × 10^3
CEC03 | Mean | 12.7 | 12.7 | 12.7 | 12.7
 | Std | 2.36 × 10^−7 | 8.75 × 10^−10 | 3.95 × 10^−15 | 3.24 × 10^−4
CEC04 | Mean | 28.7 | 46.2 | 22.9 | 2.00 × 10^3
 | Std | 23.6 | 16.8 | 5.03 | 5.18 × 10^4
CEC05 | Mean | 1.05 | 1.31 | 1.08 | 2.22
 | Std | 0.0286 | 0.141 | 0.0393 | 1.12
CEC06 | Mean | 3.60 | 4.73 | 5.78 | 1.55
 | Std | 0.987 | 0.960 | 0.606 | 6.87
CEC07 | Mean | 60.1 | – | 154 | 585
 | Std | 126 | – | 56 | 381
CEC08 | Mean | 4.33 | 5.10 | 4.88 | 6.03
 | Std | 0.706 | 6.57 | 0.511 | 0.888
CEC09 | Mean | 2.38 | 2.57 | 2.67 | 258
 | Std | 0.0319 | 0.126 | 0.129 | 1.22 × 10^3
CEC10 | Mean | 19.6 | 19.4 | 20 | 20.5
 | Std | 2.64 | 3.23 | 9.41 × 10^−3 | 0.217
Table 5. Wilcoxon test results.

Functions | Metric | WHO | CapSA | HHO
CEC01 | p-value | 3.03 × 10^−4 | 2.80 × 10^−3 | 9.66 × 10^−5
 | W/T/L | W | W | W
CEC02 | p-value | 1.00 | 5.15 × 10^−10 | 5.15 × 10^−10
 | W/T/L | L | W | W
CEC03 | p-value | 1.00 | 0.5 | 5.15 × 10^−10
 | W/T/L | L | W | W
CEC04 | p-value | 0.7786 | 2.27 × 10^−5 | 5.47 × 10^−10
 | W/T/L | L | W | W
CEC05 | p-value | 0.107 | 1.40 × 10^−9 | 5.15 × 10^−10
 | W/T/L | T | W | W
CEC06 | p-value | 8.27 × 10^−10 | 5.15 × 10^−10 | 5.15 × 10^−10
 | W/T/L | W | W | W
CEC07 | p-value | 0.325 | 8.65 × 10^−9 | 7.80 × 10^−10
 | W/T/L | L | W | W
CEC08 | p-value | 0.593 | 8.61 × 10^−7 | 3.94 × 10^−9
 | W/T/L | L | W | W
CEC09 | p-value | 2.27 × 10^−5 | 7.32 × 10^−9 | 5.15 × 10^−10
 | W/T/L | W | W | W
CEC10 | p-value | 6.70 × 10^−8 | 5.15 × 10^−10 | 5.15 × 10^−10
 | W/T/L | W | W | W
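As a hedged illustration of how such p-values can be reproduced: given the per-run best fitness of two algorithms over 30 independent runs, SciPy's paired Wilcoxon signed-rank test yields comparable statistics. The data below are synthetic stand-ins (loosely shaped like the CEC08 scores), not the paper's actual run logs.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
# Synthetic per-run best-fitness values for two algorithms (30 runs each);
# in the paper these would come from the CEC 2019 experiments.
iwho = rng.normal(4.33, 0.70, 30)
who = rng.normal(4.74, 0.64, 30)

stat, p = wilcoxon(iwho, who)        # paired, two-sided by default
print(f"p-value = {p:.3g}")          # p < 0.05 -> significant difference
```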
Table 6. Sensitivity analysis for RW = 300 (CEC 2019 test set, ps = 0.2).

RW = 300, Population Size = 40:
Function | Metric | PC = 0.12 | PC = 0.13 | PC = 0.14
CEC01 | Mean | 6.89 × 10^8 | 6.41 × 10^8 | 7.67 × 10^8
 | Std | 1.04 × 10^9 | 6.82 × 10^8 | 1.16 × 10^9
CEC02 | Mean | 1.73 × 10^5 | 1.73 × 10^5 | 1.73 × 10^5
 | Std | 0 | 0 | 1.40 × 10^−10
CEC03 | Mean | 1.27 × 10^5 | 1.27 × 10^5 | 1.27 × 10^5
 | Std | 1.40 × 10^−2 | 2.18 × 10^−10 | 0
CEC04 | Mean | 3.25 × 10^5 | 2.46 × 10^5 | 3.62 × 10^5
 | Std | 5.34 × 10^5 | 1.21 × 10^5 | 6.28 × 10^5
CEC05 | Mean | 1.06 × 10^4 | 1.05 × 10^4 | 1.06 × 10^4
 | Std | 431 | 292 | 475
CEC06 | Mean | 5.40 × 10^4 | 5.54 × 10^4 | 5.40 × 10^4
 | Std | 1.07 × 10^4 | 8.05 × 10^3 | 8.27 × 10^3
CEC07 | Mean | 1.28 × 10^6 | 1.27 × 10^6 | 1.29 × 10^6
 | Std | 1.37 × 10^6 | 1.30 × 10^6 | 1.35 × 10^6
CEC08 | Mean | 4.78 × 10^4 | 4.50 × 10^4 | 4.69 × 10^4
 | Std | 7.51 × 10^3 | 6.54 × 10^3 | 6.96 × 10^3
CEC09 | Mean | 2.44 × 10^4 | 2.43 × 10^4 | 2.44 × 10^4
 | Std | 661 | 913 | 675
CEC10 | Mean | 1.97 × 10^5 | 1.96 × 10^5 | 2.00 × 10^5
 | Std | 2.52 × 10^4 | 82.2 | 2.64 × 10^4

RW = 300, Population Size = 50:
Function | Metric | PC = 0.12 | PC = 0.13 | PC = 0.14
CEC01 | Mean | 5.74 × 10^8 | 4.52 × 10^8 | 4.66 × 10^8
 | Std | 8.51 × 10^8 | 3.86 × 10^8 | 1.77 × 10^8
CEC02 | Mean | 1.73 × 10^5 | 1.73 × 10^5 | 1.73 × 10^5
 | Std | 0 | 0 | 0
CEC03 | Mean | 1.27 × 10^5 | 1.27 × 10^5 | 1.27 × 10^5
 | Std | 1.57 × 10^−1 | 9.62 × 10^−3 | 8.93 × 10^−3
CEC04 | Mean | 1.98 × 10^5 | 1.71 × 10^5 | 1.76 × 10^5
 | Std | 1.00 × 10^5 | 1.50 × 10^5 | 7.38 × 10^5
CEC05 | Mean | 1.04 × 10^4 | 1.03 × 10^5 | 1.05 × 10^4
 | Std | 279 | 244 | 269
CEC06 | Mean | 5.22 × 10^4 | 5.15 × 10^4 | 5.24 × 10^4
 | Std | 9.78 × 10^3 | 8.47 × 10^3 | 7.89 × 10^3
CEC07 | Mean | 7.41 × 10^5 | 7.32 × 10^5 | 1.20 × 10^6
 | Std | 1.31 × 10^6 | 1.21 × 10^6 | 1.11 × 10^6
CEC08 | Mean | 4.64 × 10^4 | 4.35 × 10^4 | 4.49 × 10^4
 | Std | 5.05 × 10^3 | 7.43 × 10^3 | 7.15 × 10^3
CEC09 | Mean | 2.39 × 10^4 | 2.30 × 10^4 | 2.39 × 10^4
 | Std | 306 | 762 | 502
CEC10 | Mean | 2.00 × 10^5 | 1.90 × 10^5 | 2.00 × 10^5
 | Std | 48.5 | 38.2 | 132
Table 7. Sensitivity analysis for RW = 500 (CEC 2019 test set, ps = 0.2).

RW = 500, Population Size = 40:
Function | Metric | PC = 0.12 | PC = 0.13 | PC = 0.14
CEC01 | Mean | 3.30 × 10^9 | 5.17 × 10^8 | 6.16 × 10^8
 | Std | 1.26 × 10^10 | 2.20 × 10^8 | 6.61 × 10^8
CEC02 | Mean | 1.73 × 10^5 | 1.73 × 10^5 | 1.73 × 10^5
 | Std | 0 | 0 | 0
CEC03 | Mean | 1.27 × 10^5 | 1.27 × 10^5 | 1.27 × 10^5
 | Std | 5.48 × 10^−11 | 5.48 × 10^−11 | 3.86 × 10^−3
CEC04 | Mean | 5.18 × 10^5 | 2.25 × 10^5 | 2.32 × 10^5
 | Std | 8.78 × 10^5 | 1.61 × 10^5 | 1.19 × 10^5
CEC05 | Mean | 1.05 × 10^4 | 1.06 × 10^4 | 1.06 × 10^4
 | Std | 3.86 × 10^2 | 3.80 × 10^2 | 3.39 × 10^2
CEC06 | Mean | 5.58 × 10^4 | 5.14 × 10^4 | 5.32 × 10^4
 | Std | 1.13 × 10^4 | 1.34 × 10^4 | 9.04 × 10^3
CEC07 | Mean | 1.29 × 10^6 | 9.15 × 10^5 | 9.26 × 10^5
 | Std | 9.83 × 10^5 | 1.29 × 10^6 | 1.16 × 10^6
CEC08 | Mean | 4.60 × 10^4 | 4.48 × 10^4 | 4.81 × 10^4
 | Std | 5.21 × 10^3 | 8.35 × 10^3 | 5.35 × 10^3
CEC09 | Mean | 2.47 × 10^4 | 2.43 × 10^4 | 2.46 × 10^4
 | Std | 1.91 × 10^3 | 7.42 × 10^2 | 1.09 × 10^3
CEC10 | Mean | 1.91 × 10^5 | 2.00 × 10^5 | 1.81 × 10^5
 | Std | 4.06 × 10^4 | 1.06 × 10^2 | 5.67 × 10^4

RW = 500, Population Size = 50:
Function | Metric | PC = 0.12 | PC = 0.13 | PC = 0.14
CEC01 | Mean | 4.27 × 10^8 | 7.56 × 10^8 | 7.06 × 10^8
 | Std | 4.01 × 10^7 | 1.44 × 10^9 | 1.23 × 10^9
CEC02 | Mean | 1.73 × 10^5 | 1.73 × 10^5 | 1.73 × 10^5
 | Std | 3.61 × 10^−11 | 3.61 × 10^−11 | 0
CEC03 | Mean | 1.27 × 10^5 | 1.27 × 10^5 | 1.27 × 10^5
 | Std | 5.48 × 10^−11 | 3.86 × 10^−3 | 5.48 × 10^−11
CEC04 | Mean | 2.45 × 10^5 | 2.31 × 10^5 | 2.38 × 10^5
 | Std | 1.08 × 10^5 | 1.18 × 10^5 | 1.43 × 10^5
CEC05 | Mean | 1.05 × 10^4 | 1.04 × 10^4 | 1.04 × 10^4
 | Std | 2.74 × 10^2 | 3.30 × 10^2 | 2.35 × 10^2
CEC06 | Mean | 5.57 × 10^4 | 5.09 × 10^4 | 5.20 × 10^4
 | Std | 6.66 × 10^3 | 1.07 × 10^4 | 9.77 × 10^3
CEC07 | Mean | 9.52 × 10^5 | 7.05 × 10^5 | 7.18 × 10^5
 | Std | 9.11 × 10^5 | 1.03 × 10^6 | 1.39 × 10^6
CEC08 | Mean | 4.76 × 10^4 | 4.36 × 10^4 | 4.43 × 10^4
 | Std | 4.68 × 10^3 | 9.39 × 10^3 | 7.79 × 10^3
CEC09 | Mean | 2.40 × 10^4 | 2.37 × 10^4 | 2.39 × 10^4
 | Std | 5.48 × 10^2 | 4.26 × 10^2 | 3.86 × 10^2
CEC10 | Mean | 2.00 × 10^5 | 1.80 × 10^5 | 1.89 × 10^5
 | Std | 62.1 | 228 | 4.59 × 10^4
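The grids in Tables 6 and 7 amount to a nested parameter sweep over the random walk step count, the population size, and the crossover percentage PC, with mean and standard deviation recorded per setting. The Python sketch below shows the shape of such a sweep; run_iwho is a hypothetical stand-in for an actual IWHO implementation and returns placeholder results.

```python
import itertools
import numpy as np

def run_iwho(func_name, rw_steps, pop_size, pc, runs=30):
    """Hypothetical driver: returns the best fitness of each independent run.
    Replace the body with a call to a real IWHO implementation."""
    rng = np.random.default_rng(hash((rw_steps, pop_size, pc)) % 2**32)
    return rng.normal(1.0, 0.1, runs)   # placeholder results

# Sweep mirroring the settings of Tables 6 and 7
grid = itertools.product([300, 500], [40, 50], [0.12, 0.13, 0.14])
for rw, n, pc in grid:
    scores = run_iwho("CEC05", rw, n, pc)
    print(f"RW={rw} N={n} PC={pc}: "
          f"mean={scores.mean():.3g} std={scores.std(ddof=1):.3g}")
```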
Table 8. Results of regression measures.

Metric | WHO (Cooling Load) | WHO (Heating Load) | IWHO (Cooling Load) | IWHO (Heating Load)
MSE | 0.265 | 0.333 | 0.257 | 0.314
RMSE | 0.515 | 0.568 | 0.508 | 0.561
MAE | 0.42 | 0.47 | 0.43 | 0.48
MAPE | 1.7% | 1.9% | 1.7% | 1.9%
R^2 | 0.981 | 0.980 | 0.985 | 0.983
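These measures are straightforward to compute from raw predictions. The helper below is a minimal sketch using their standard definitions; the toy heating-load numbers in the example are illustrative, not values from the dataset.

```python
import numpy as np

def regression_report(y_true, y_pred):
    """Compute the metrics reported in Table 8 from raw predictions."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(err))
    mape = np.mean(np.abs(err / y_true)) * 100.0      # assumes y_true != 0
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return {"MSE": mse, "RMSE": rmse, "MAE": mae, "MAPE": mape, "R2": r2}

# Toy example (values are illustrative only)
print(regression_report([15.5, 20.8, 28.1], [15.9, 20.3, 27.6]))
```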
Table 9. Big-O notation.

Swarm Size (N) | Total Operations (millions) | Iterations (t) | Total Operations (millions)
50 | 5.78 | 50 | 4.62
100 | 11.55 | 100 | 9.24
150 | 17.32 | 150 | 13.86
200 | 23.10 | 200 | 18.48
250 | 28.88 | 250 | 23.10
300 | 34.65 | 300 | 27.72
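Operation counts of this kind follow directly from the linear complexity model O(t × N × (s + d)) given in Table 10: doubling either the swarm size or the iteration budget doubles the work. The snippet below evaluates the model for a range of swarm sizes; the values of s and d plugged in are assumptions for illustration, since the paper does not spell out the constants behind Table 9, so the printed figures need not match the table exactly.

```python
# Operation-count estimate from the complexity O(t * N * (s + d)) in Table 10.
# s (random-walk cost term) and d (dimension) below are illustrative choices.
def total_ops(t, n, s=300, d=10):
    return t * n * (s + d)

for n in (50, 100, 150, 200, 250, 300):
    print(f"N={n}: {total_ops(500, n) / 1e6:.2f} M operations")   # t = 500 iterations
```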
Table 10. Summarized notation table of MLP.

Parameter | Statement | Value
HL | Heating load (output of MLP) | –
CL | Cooling load (output of MLP) | –
Input features | Features for MLP input | 8 (Relative Compactness, Surface Area, Wall Area, Roof Area, Overall Height, Orientation, Glazing Area, Glazing Area Distribution)
Hidden layer nodes | Number of nodes in MLP hidden layer | 17
Output nodes | Number of output nodes in MLP | 1 (HL or CL)
Population size | Number of agents in optimization | 200
Number of runs | Number of independent runs for experiments | 10
Processing time (WHO) | Average running time for 10 runs | 0.0087
Processing time (IWHO) | Average running time for 10 runs | 0.0107
Prediction rate (CL, WHO) | Average prediction value for CL (WHO) | 99.6980 ± 1.4980 × 10^−14
Prediction rate (CL, IWHO) | Average prediction value for CL (IWHO) | 99.7010 ± 1.1751 × 10^−14
Prediction rate (HL, WHO) | Average prediction value for HL (WHO) | 100
Prediction rate (HL, IWHO) | Average prediction value for HL (IWHO) | 100
Complexity | Computational complexity of IWHO | O(t × N × (s + d))
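To show how an MLP of this shape is exposed to a metaheuristic trainer, the sketch below flattens the 8–17–1 network's weights into a single decision vector and scores it with MSE, which is the quantity an optimizer such as WHO or IWHO would minimize. The sigmoid activation and the placeholder data are assumptions; the paper's exact activation and data split are not reproduced here.

```python
import numpy as np

def mlp_forward(w, X, n_in=8, n_hid=17):
    """Forward pass of the 8-17-1 MLP in Table 10, with all weights given
    as one flat vector so a metaheuristic can optimize them directly.
    Sigmoid hidden activation is an illustrative assumption."""
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid].reshape(n_hid, 1); i += n_hid
    b2 = w[i]
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))      # hidden layer, sigmoid
    return (h @ W2).ravel() + b2                  # linear output (HL or CL)

def mse_fitness(w, X, y):
    """Fitness evaluated by the optimizer: lower MSE is better."""
    return float(np.mean((mlp_forward(w, X) - y) ** 2))

n_weights = 8 * 17 + 17 + 17 + 1                  # 171 decision variables
rng = np.random.default_rng(0)
X, y = rng.random((768, 8)), rng.random(768)      # placeholder for the UCI data
print(mse_fitness(rng.normal(0, 0.1, n_weights), X, y))
```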
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
