Article

Al-Biruni Earth Radius Optimization Based Algorithm for Improving Prediction of Hybrid Solar Desalination System

by
Abdelhameed Ibrahim
1,
El-Sayed M. El-kenawy
2,*,
A. E. Kabeel
3,4,
Faten Khalid Karim
5,*,
Marwa M. Eid
6,
Abdelaziz A. Abdelhamid
7,8,
Sayed A. Ward
3,9,
Emad M. S. El-Said
10,
M. El-Said
11,12 and
Doaa Sami Khafaga
5
1
Computer Engineering and Control Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
2
Department of Communications and Electronics, Delta Higher Institute of Engineering and Technology, Mansoura 35111, Egypt
3
Faculty of Engineering, Delta University for Science and Technology, Gamasa 35712, Egypt
4
Mechanical Power Engineering Department, Faculty of Engineering, Tanta University, Tanta 31733, Egypt
5
Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
6
Faculty of Artificial Intelligence, Delta University for Science and Technology, Mansoura 35712, Egypt
7
Department of Computer Science, College of Computing and Information Technology, Shaqra University, Shaqra 11961, Saudi Arabia
8
Department of Computer Science, Faculty of Computer and Information Sciences, Ain Shams University, Cairo 11566, Egypt
9
Electrical Engineering Department, Shoubra Faculty of Engineering, Benha University, 108 Shoubra St., Cairo 11629, Egypt
10
Mechanical Engineering Department, Faculty of Engineering, Dameitta University, Damietta 34511, Egypt
11
Electrical Engineering Department, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
12
Delta Higher Institute of Engineering and Technology, Mansoura 35111, Egypt
*
Authors to whom correspondence should be addressed.
Energies 2023, 16(3), 1185; https://doi.org/10.3390/en16031185
Submission received: 19 October 2022 / Revised: 8 January 2023 / Accepted: 17 January 2023 / Published: 21 January 2023
(This article belongs to the Special Issue Artificial Intelligence and Smart Energy: The Future Approach)

Abstract

The performance of a hybrid solar desalination system is predicted in this work using an enhanced prediction method based on a supervised machine-learning algorithm. A humidification–dehumidification (HDH) unit and a single-stage flashing evaporation (SSF) unit make up the hybrid solar desalination system. The Al-Biruni Earth Radius (BER) and Particle Swarm Optimization (PSO) algorithms serve as the foundation for the suggested algorithm. Using experimental data, the BER–PSO algorithm is trained and evaluated. The cold fluid and injected air volume flow rates were the algorithms’ inputs, and their outputs were the hot and cold fluids’ outlet temperatures as well as the pressure drop across the heat exchanger. Both the volume mass flow rate of hot fluid and the input temperatures of hot and cold fluids are regarded as constants. The results obtained show the great ability of the proposed BER–PSO method to identify the nonlinear link between operating circumstances and process responses. In addition, compared to the other analyzed models, it offers better statistical performance measures for the prediction of the outlet temperature of hot and cold fluids and pressure drop values.

1. Introduction

Fresh water, one of the necessities of existence, is insufficiently accessible to billions of people worldwide. Although governments and humanitarian organizations have recently helped many people living in water-stressed areas gain access, the issue is expected to worsen due to rapid population growth. About 97% of the water sources on Earth are salty, and almost 800 million people lack access to clean drinking water. Furthermore, it is anticipated that 50% of the world’s water will be utilized by 2050 [1]. There are two main types of water scarcity: physical scarcity, which occurs when water is lacking due to regional ecological circumstances, and economic scarcity, which occurs when water infrastructure is insufficient. Most experts agree that the Middle East and North Africa (MENA) region experiences the most physical water stress. The MENA region receives less rainfall than other regions, yet several of its nations have rapidly expanding, highly populated urban areas that demand more water. Nevertheless, many of these nations, particularly the wealthier ones, continue to fulfil their water needs. Desalination is a practical solution to the problem of limited freshwater resources that can be scaled up in step with population growth. To address this issue successfully, numerous water desalination techniques have been employed. Due to their straightforward construction, long operational lives, and affordable clean water production, solar stills (SSs) are the most widely used technique for desalinating water. When combined with low-grade and renewable energy sources such as wind and solar, the SS, which is seen as a promising entrant to the desalination process, offers an alternative way of lowering dependence on fossil fuels while also delivering various environmental benefits. The SS, however, has a low productivity. Therefore, numerous performance improvement initiatives have been undertaken to increase the output of SSs. Hybridization with other techniques, such as HDH, is one way to enhance the SS and achieve high performance. In order to produce a more cost-effective product, provide a better match between power demand and water requirements, and achieve the best possible combination of the properties of the two processes, a hybrid desalination system combines two or more processes [2]. Three types of models have been used to predict the effectiveness of solar stills and other desalination techniques: numerical solutions [3], regression models [4], and Artificial Neural Network (ANN) and machine-learning technology, which is widely used in many energy system aspects based on actual experimental measurements, such as desalination systems [5].
ANNs are a prediction and classification strategy for simulating complex interactions between sets of cause-and-effect variables or discovering patterns in data. Methods from transient detection, pattern recognition, approximation, and time series prediction are all potentially applicable [6,7]. An ANN is an information-processing system that mimics how the human brain works. Neurons are the central components of this network, and they solve problems by collaborating. In situations where it is essential to extract structure from existing data to formulate an algorithmic solution, a neural network comes in handy [8,9]. Meta-heuristic algorithms are among the most powerful techniques available for resolving challenges encountered in a variety of application contexts [10]. Most of these algorithms derive their logic from processes present in the natural world. These optimization methods often yield acceptable solutions with minimal computational work and in a fair amount of time. Consequently, several forms of artificial intelligence (AI) have been suggested for use in solar desalination systems [11,12]. In ensemble methods, the objective is to assemble a prediction model by integrating the outputs of several independent base models. There are numerous ways in which this idea can be put into practice. Some of the more effective ones involve resampling the training set, while others use different prediction algorithms or tweak various parameters of the predictive strategy. Finally, an ensemble of methods is used to combine the results of each prediction [13,14]. When more variables are included in the optimization process, the Al-Biruni Earth Radius (BER) optimization algorithm [15] performs worse, and its convergence can be premature; a significant advantage, however, is its successful balancing of exploration and exploitation. The suggested method takes advantage of this strength by incorporating the BER algorithm. Despite its ease of use and its ability to balance exploration and exploitation, the Particle Swarm Optimization (PSO) algorithm [16] has drawbacks, including performance declines when many local optimum solutions are present and a low exploration rate. This study employs the Al-Biruni Earth Radius optimizer to exploit the advantages and overcome the drawbacks of the PSO technique.
Based on measured values of water and air temperatures and yield, Kabeel and El-Said [17] offered some of the relations that help evaluate the hybrid desalination system (HDH-SSF). With the suggested BER–PSO model, excellent prediction accuracy is achieved for parameters including water yield, GOR, cost, and thermal efficiency. The proposed BER–PSO technique is initially applied, in a binary version, to feature selection from the tested dataset. The binary BER–PSO (bBER–PSO) algorithm is first tested against PSO [16], the Grey Wolf Optimizer (GWO) [18], a hybrid of GWO and PSO (GWO-PSO) [19], the Whale Optimization Algorithm (WOA) [20], the Biogeography-Based Optimizer (BBO) [21], the Satin Bowerbird Optimizer (SBO) [22], the Firefly Algorithm (FA) [23], the Genetic Algorithm (GA) [24], and the Bat Algorithm (BA) [25]. The tested dataset is then assessed using a classifier built with the proposed BER–PSO algorithm. Comparisons are made between the Decision Tree Regressor (DTR) [26], MLP Regressor (MLP) [27], K-Neighbors Regressor (KNR) [28], Support Vector Regression (SVR) [29], and Random Forest Regressor (RFR) [30] models and the BER–PSO algorithm. Additionally, two ensembles for creating new estimators are used: the Average Ensemble (AVE) and the Ensemble utilizing KNR (EKNR).
The following is a condensed list of the most important contributions that can be drawn from this body of work:
  • Novel machine-learning techniques to predict the impact of various design and operational parameters on the thermo-fluid functionality of the HDH-SSF system.
  • An improved Al-Biruni Earth Radius (BER) optimization-based Particle Swarm Optimization (BER–PSO) algorithm is suggested.
  • The suggested algorithm’s binary variant, the binary BER–PSO algorithm, is used to select features from the dataset under test.
  • For the purpose of raising the accuracy of tested dataset prediction, a BER–PSO-based classifier is introduced.
  • The statistical significance of the BER–PSO algorithm can be determined by employing the Wilcoxon rank-sum and ANOVA tests.
  • Both the binary BER–PSO and the BER–PSO-based classification algorithms can be tested for a variety of datasets.
The remaining portions of the document are arranged as follows. A description of the system and applications of AI is covered in Section 2. The machine learning (ML) methods utilized to estimate the HDH-SSF’s thermo-fluid performance parameters are introduced in Section 3, while Section 4 presents a description of the proposed BER–PSO algorithm. Section 5 presents the performance metrics, statistical parameters, experimental results, findings, and discussion. The study’s key conclusions are presented in Section 6.

2. Background

Based on Kabeel and El-Said [17], the experimental setup of the hybrid desalination system (HDH-SSF) is under investigation in this study. The HDH-SSF system is primarily based on two concepts: the air humidification and dehumidification process (HDH) and the flash evaporation of saline water (SSF), as shown in Figure 1. The system mainly consists of a humidifier, a dehumidifier (water-cooled exchanger), an air heater (flat plate collector), a water heater (flat plate collector), and a flashing evaporation unit. The HDH system is made up of two loops, one for heating water and the other for heating air, with a packed tower as a humidifier. Hot air passes through the humidifier and carries the evaporated water to the dehumidifier, where it is cooled to extract the fresh water. The condenser and flashing chamber are the main components of the single-stage flashing evaporation system (SSF). Pump (P4) in the water loop delivers water from the mixing tank and splits it into two major lines, a feeding line and a bypass line, which are both controlled by valves (V15). The heated saline water is sprayed at the humidifier’s top. The water drips to the humidifier’s bottom, from where it is sent to a mixing tank for reheating. A centrifugal blower (placed at the humidifier’s inlet) pulls air from the atmosphere into the air loop, where it passes through the air solar heater. Hot air passes through the humidifier and transfers the evaporated water to the dehumidifier, which cools and dehumidifies the air. The condenser and flashing chamber make up the SSF system. In the closed loop of saline water flow between the flashing chamber and the helical heat exchanger (HHEx), a portion of the salty water in the mixing tank (MT) is fed to the heat exchanger as backup water, while the remainder is drained using valve (V5). Pumped to the flashing chamber under sub-atmospheric pressure, hot saline water from the heat exchanger evaporates by flashing. The condenser receives the water vapor extracted from the flashing chamber. The flashing unit condenser receives the cooled saline water, which condenses the water vapor so it can be discharged. The bottom tray of the condenser holds the desalinated water, which is drawn up and pumped to the product tank. The pressure drop affects the flashing evaporation. A vacuum pump is used to evacuate the condenser and flashing chamber. The rejected brine water from the humidifier is then combined with the saline water that exits the flashing unit condenser. Since the saline water is cold when it leaves the dehumidifier, it must be emptied. The following steps illustrate the operational procedure of the experimental apparatus shown in Figure 1 [17]:
  • Until the operational levels are reached, fill the system’s salty water loops.
  • Set and modify the test case’s flow rates and temperature settings.
  • Run the electrical heater (EH) and pump (P1) in the solar water heater loop until the tank’s water temperature (TK1) reaches the desired level.
  • Set and modify the solenoid valve’s (SV1) open and closing periods in accordance with the flow rates of the backup water.
  • Operate the vacuum pump (P3) until the desired pressure is reached inside the flashing chamber.
  • Start the feeding pump (P2) to dispense saline water throughout the flashing chamber.
  • Use the level indicator, water bleeding through valve (V4), and control valve (V8) to adjust the brine pool’s height.
  • Turn on the air blower.
  • Turn on the feeding pump (P4) to supply the salty water to the humidifier sprayers.
  • Run the cooling water pump (P6) to pump the saline water to the condenser (C1) and dehumidifier.
  • Run the circulation pump (P5) to pump the saline water to the mixing tank (C2).
The performance of the HDH-SSF system is predicted in this work using an enhanced prediction method based on a supervised machine-learning algorithm. It is essential to be aware of the intricate physical processes that occur inside the system, such as the heat and mass transfer between the water and air streams and the formation of humid air. Because of this, it is difficult to evaluate the effects of the design and operational parameters on the HDH-SSF’s thermal and economic performance [31,32,33]. Another difficult problem that needs to be taken into account is the existence of phase-change processes. Additionally, abrupt variations in the temperature and flow rate of the air and water streams would cause complicated behavior [34]. Therefore, the variations in the flow rates and temperatures of the air and water streams are believed to be crucial parameters that affect the performance of the HDH-SSF. As a consequence, more emphasis should be given to analyzing their implications for the overall performance of hybrid desalination systems such as the HDH-SSF, which involve thermo-fluid processes. The majority of evaluations of the effect of design and operational parameters on the HDH-SSF’s performance are conducted through numerical and experimental methodologies. Mathematical problems involving complicated nonlinear systems must be solved with the use of simplifying assumptions and numerical methods [35]. Conversely, in experimental approaches, expensive and time-consuming trials are carried out before the system characteristics and the considered results are correlated statistically. Unfortunately, the outcomes of both approaches may be impacted by noisy conditions [36]. Therefore, it would be ideal to create a reliable and accurate approach for simulating and forecasting the effects of design and operational parameters on the HDH-SSF’s performance. Artificial intelligence-based algorithms can learn by doing, through a training process that is inspired by the brain [37]. Once trained, these algorithms can be a potent tool for predicting the link between the HDH-SSF responses and the design and operational parameters. The ability of artificial intelligence approaches to generalize means that they can forecast behavior in situations to which they were not exposed during the training phase. By using these techniques, one can avoid issues with experimental methods, such as the use of low-accuracy statistical correlations between the HDH-SSF’s parameters derived from experimental data, or issues with numerical methods, such as those associated with solving nonlinear mathematical models [38]. The relationship between the HDH-SSF design and operating variables is poorly understood and has not been the subject of extensive experimental research.

3. Materials and Methods

3.1. Al-Biruni Earth Radius (BER) Algorithm

The optimization procedure of the BER algorithm [15] begins by separating the population into two groups for exploration and exploitation. Agents are separated into subgroups, and within each subgroup the number of agents is dynamically adjusted to enhance the balance between the exploration and exploitation processes. The exploration group comprises 70 percent of the population, whereas the exploitation group comprises 30 percent. To raise the fitness levels of individuals in each group, the individuals of the exploitation and exploration groups are updated so as to allow a more significant increase in the global average of the individuals’ fitness. Mathematically, each agent in the exploration group seeks out promising regions surrounding its current position in the search space. This is achieved by repeatedly looking for a better option, in terms of fitness value, among the surrounding viable options. The following equations are used for this purpose in the BER algorithm.
Optimization algorithms discover the optimum solution subject to the given constraints. BER represents a population member as a vector $S = (S_1, S_2, \ldots, S_d) \in \mathbb{R}^d$, where $d$ is the number of optimization problem parameters (features) and defines the size of the search space. A fitness function $F$ evaluates each agent’s performance. The optimization phases search the population for a fitness-optimal vector $S^{*}$.

3.1.1. Exploration Operation

Exploration finds intriguing portions of the search space and avoids local optimum stagnation by moving toward the optimal solution, as discussed below.
  • The group’s lone explorer will utilize this strategy to find intriguing new areas to explore nearby. One must look through the many local possibilities to choose the best one. BER study uses the following equations:
    r = h c o s ( x ) 1 c o s ( x )
    S ( t + 1 ) = S ( t ) + D ( 2 r 2 1 ) , D = r 1 ( S ( t ) 1 )
    where r 1 and r 2 are calculated by Equation (1), 0 < x 180 , and h is within [ 0 , 2 ] . The vector S ( t ) represents a solution at iteration t, and D indicates circle diameter of search area.
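For illustration, the following short Python sketch shows one way the exploration move of Equations (1) and (2) could be coded; the sampling ranges for h and x, and the lower bound on x used to avoid division by a value near zero, are illustrative assumptions rather than part of the original specification.

```python
import numpy as np

def ber_radius(h: float, x_deg: float) -> float:
    """Al-Biruni earth-radius term of Equation (1): r = h*cos(x) / (1 - cos(x))."""
    x = np.radians(x_deg)
    return h * np.cos(x) / (1.0 - np.cos(x))

def exploration_step(S: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """One exploration move around the current solution S (Equation (2)).

    h is drawn from [0, 2] and x from (0, 180] degrees as stated in the text;
    r1 and r2 are independent draws of the radius term.
    """
    r1 = ber_radius(rng.uniform(0.0, 2.0), rng.uniform(1.0, 180.0))
    r2 = ber_radius(rng.uniform(0.0, 2.0), rng.uniform(1.0, 180.0))
    D = r1 * (S - 1.0)                  # diameter of the circular search area
    return S + D * (2.0 * r2 - 1.0)     # candidate position S(t+1)

# Example: move one three-dimensional agent
rng = np.random.default_rng(0)
print(exploration_step(rng.uniform(0.0, 1.0, size=3), rng))
```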

3.1.2. Exploitation Operation

The exploitation group is responsible for improving existing solutions. In each iteration, the BER rewards the fittest individuals. The BER exploits using two distinct methods.
  • The following equation is used to move in the direction of the best solution:
    $\vec{S}(t+1) = r_2\,(\vec{S}(t) + \vec{D}), \quad \vec{D} = r_3 (\vec{L}(t) - \vec{S}(t))$    (3)
    where the vector $\vec{L}(t)$ denotes the best solution, the vector $\vec{S}(t)$ represents the solution at iteration $t$, and $\vec{D}$ is the distance vector. $r_3$ is a random vector calculated using Equation (1), and it governs the movement steps taken towards the best solution.
  • The area surrounding the best solution constitutes an interesting region to investigate. Individuals try to improve the situation by exploring possibilities close to the best solution. To do so, the BER applies the following equation:
    $\vec{S}(t+1) = r\,(\vec{S}^{*}(t) + k), \quad k = 1 + \frac{2\,t^{2}}{T_{max}^{2}}$    (4)
    where $\vec{S}^{*}(t)$ denotes the best solution found so far. After comparing the two candidate solutions produced by Equations (3) and (4), the better one is selected as $\vec{S}^{*}$. If the best fitness has not changed during the preceding two iterations, the solution is mutated in accordance with the following equation:
    $\vec{S}(t+1) = k\, z^{2}\, \frac{h \cos(x)}{1 - \cos(x)}$    (5)
    where $z$ denotes a random value within $[0, 1]$.
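The exploitation moves of Equations (3)–(5) can be sketched in the same style. In the sketch below, the leader L is passed in explicitly, the sampling ranges for h and x are again illustrative, and the mutation of Equation (5) is applied independently per dimension, which is one possible reading of the scalar formula; none of these choices should be read as the paper's exact configuration.

```python
import numpy as np

def ber_radius(h: float, x_deg: float) -> float:
    """r = h*cos(x) / (1 - cos(x)) from Equation (1)."""
    x = np.radians(x_deg)
    return h * np.cos(x) / (1.0 - np.cos(x))

def exploitation_step(S, L, S_best, t, t_max, rng):
    """Return the two exploitation candidates of Equations (3) and (4).

    S      : current agent position
    L      : leading (fittest) solution of the exploitation group
    S_best : global best solution found so far
    In the full algorithm, both candidates are evaluated with the objective
    function and the better one replaces the agent.
    """
    r2 = ber_radius(rng.uniform(0.0, 2.0), rng.uniform(1.0, 180.0))
    r3 = ber_radius(rng.uniform(0.0, 2.0), rng.uniform(1.0, 180.0))
    r = ber_radius(rng.uniform(0.0, 2.0), rng.uniform(1.0, 180.0))
    D = r3 * (L - S)                       # distance vector towards the leader
    towards_leader = r2 * (S + D)          # Equation (3)
    k = 1.0 + 2.0 * t**2 / t_max**2
    around_best = r * (S_best + k)         # Equation (4)
    return towards_leader, around_best

def mutation_step(dim, t, t_max, rng):
    """Equation (5): mutation applied when the best fitness stagnates for two iterations."""
    k = 1.0 + 2.0 * t**2 / t_max**2
    z = rng.uniform(0.0, 1.0, size=dim)
    h = rng.uniform(0.0, 2.0, size=dim)
    x = np.radians(rng.uniform(1.0, 180.0, size=dim))
    return k * z**2 * h * np.cos(x) / (1.0 - np.cos(x))
```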

3.1.3. Selection of the Best Solution

The BER chooses the best for the next cycle to ensure high-quality solutions. Mutation and exploration group members give the BER excellent exploration capabilities. BER exploration delays convergence. Algorithm 1 shows BER pseudo-code. The BER receives size of population, rate of mutation, and maximum iterations. The BER splits participants into exploratory and exploitative groups. The BER algorithm dynamically alters group sizes during iterative solution search. Each team performs tasks in two ways. To ensure variety and depth, the BER shuffles answers between iterations. An exploration group solution may progress to the exploitation group in the next iteration. BER elitism prevents leader replacement.
Algorithm 1 Al-Biruni Earth Radius (BER) algorithm
1: Initialize the BER population $S_i\,(i = 1, 2, \ldots, d)$ with population size $d$, maximum iterations $T_{max}$, objective function $F_n$, $t = 1$, and other parameters
2: Calculate the objective function $F_n$ for each agent $S_i$
3: Find the best solution and denote it as $S^{*}$
4: Divide the agents into an exploration group, $n_1$, and an exploitation group, $n_2$
5: while $t \le T_{max}$ do
6:   Update $r = \frac{h \cos(x)}{1-\cos(x)}$, $r_1 = \frac{h_1 \cos(x)}{1-\cos(x)}$, $r_2 = \frac{h_2 \cos(x)}{1-\cos(x)}$, $r_3 = \frac{h_3 \cos(x)}{1-\cos(x)}$
7:   for ($i = 1$; $i < n_1 + 1$) do
8:     Find parameter $D = r_1 (S(t) - 1)$
9:     Update the agent’s position as $S(t+1) = S(t) + D\,(2 r_2 - 1)$
10:  end for
11:  for ($i = 1$; $i < n_2 + 1$) do
12:    Find parameter $D = r_3 (L(t) - S(t))$
13:    Update the agent’s position as $S(t+1) = r_2 (S(t) + D)$
14:    Calculate parameter $k = 1 + \frac{2 t^{2}}{T_{max}^{2}}$
15:    Update the agent’s position around the best solution as $S'(t+1) = r (S^{*}(t) + k)$
16:    Compare $S(t+1)$ and $S'(t+1)$ and select the best solution as $S^{*}$
17:    if the best value of the objective function is the same for the last two rounds then
18:      Mutate the solution using $S(t+1) = k z^{2} \frac{h \cos(x)}{1-\cos(x)}$
19:    end if
20:  end for
21:  Update the objective function $F_n$ for each agent $S_i$
22:  Update the parameters, $t = t + 1$
23: end while
24: Return $S^{*}$

3.2. Particle Swarm Optimization (PSO) Algorithm

In the PSO algorithm, potential solutions (particles) move through the search space, imitating the swarm intelligence of natural bird flocks [16]. A particle’s velocity is the rate at which it changes location, and the positions of the particles are modified over time. Throughout its flight, a particle’s velocity is stochastically accelerated towards its previous best position. The position vector is denoted by $S_i\,(i = 1, 2, \ldots, n)$ with size $n$, while the velocity vector is denoted by $V_i\,(i = 1, 2, \ldots, n)$. The following equations are applied to update a particle at iteration $t+1$ towards a region close to the optimal solution:
$S(t+1) = S(t) + V(t+1)$
$V(t+1) = V(t) + C_1 g_1 (S^{*} - S(t)) + C_2 g_2 (S_{gbest} - S(t))$
where $g_1$ and $g_2$ are random variables within $[0, 1]$, and $C_1$ and $C_2$ are constants.
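As a concrete reference, a minimal vectorized version of these two update equations might look as follows; the acceleration constants C1 = C2 = 2 match Table 2, while the array shapes and the example data are illustrative assumptions.

```python
import numpy as np

def pso_step(S, V, S_pbest, S_gbest, rng, C1=2.0, C2=2.0):
    """One PSO velocity/position update for all particles at once.

    S       : current positions, shape (n, d)
    V       : current velocities, shape (n, d)
    S_pbest : personal best positions, shape (n, d)
    S_gbest : global best position, shape (d,)
    """
    g1 = rng.uniform(0.0, 1.0, size=S.shape)
    g2 = rng.uniform(0.0, 1.0, size=S.shape)
    V_new = V + C1 * g1 * (S_pbest - S) + C2 * g2 * (S_gbest - S)
    S_new = S + V_new
    return S_new, V_new

# Example: 5 particles in 3 dimensions, starting from zero velocity
rng = np.random.default_rng(1)
S = rng.uniform(-1.0, 1.0, size=(5, 3))
S_new, V_new = pso_step(S, np.zeros_like(S), S.copy(), S[0], rng)
```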

4. Proposed BER–PSO Algorithm

The proposed BER–PSO algorithm combines the advantages of both the BER and PSO algorithms. Algorithm 2 shows the steps of the proposed algorithm. The algorithm starts by initializing the population positions $S_i\,(i = 1, 2, \ldots, n)$ and velocities $V_i\,(i = 1, 2, \ldots, n)$ of $n$ agents, together with the parameters of the BER and PSO algorithms, the maximum number of iterations, and the objective function $F_n$. The objective function is then calculated for each $S_i$, and the best agent ($S^{*}$) is selected. During the iterations, the proposed algorithm alternates between the BER and PSO algorithms to update the agents. Finally, the optimal solution ($S^{*}$) is returned.
Algorithm 2 Proposed BER–PSO algorithm
1: Initialize the BER–PSO population positions $S_i\,(i = 1, 2, \ldots, d)$ with population size $d$, maximum iterations $T_{max}$, objective function $F_n$, $t = 1$, and other parameters
2: Calculate the objective function $F_n$ for each agent $S_i$
3: Find the best solution as $S^{*}$
4: Divide the agents into an exploration group, $n_1$, and an exploitation group, $n_2$
5: while $t \le T_{max}$ do
6:   if ($t \,\%\, 2$) then
7:     Update $r = \frac{h \cos(x)}{1-\cos(x)}$, $r_1 = \frac{h_1 \cos(x)}{1-\cos(x)}$, $r_2 = \frac{h_2 \cos(x)}{1-\cos(x)}$, $r_3 = \frac{h_3 \cos(x)}{1-\cos(x)}$
8:     for ($i = 1$; $i < n_1 + 1$) do
9:       Calculate parameter $D = r_1 (S(t) - 1)$
10:      Update the agent’s position as $S(t+1) = S(t) + D\,(2 r_2 - 1)$
11:    end for
12:    for ($i = 1$; $i < n_2 + 1$) do
13:      Calculate parameter $D = r_3 (L(t) - S(t))$
14:      Update the agent’s position as $S(t+1) = r_2 (S(t) + D)$
15:      Calculate $k = 1 + \frac{2 t^{2}}{T_{max}^{2}}$
16:      Update the agent’s position around the best solution as $S'(t+1) = r (S^{*}(t) + k)$
17:      Compare $S(t+1)$ and $S'(t+1)$ and select the best solution $S^{*}$
18:      if the best value of the objective function is the same for the last two rounds then
19:        Mutate the solution as $S(t+1) = k z^{2} \frac{h \cos(x)}{1-\cos(x)}$
20:      end if
21:    end for
22:  else
23:    Update velocities as $V(t+1) = V(t) + C_1 g_1 (S^{*} - S(t)) + C_2 g_2 (S_{gbest} - S(t))$
24:    Update the agents’ positions as $S(t+1) = S(t) + V(t+1)$
25:  end if
26:  Update the objective function $F_n$ for each agent $S_i$
27:  Update the BER–PSO parameters, $t = t + 1$
28: end while
29: Return $S^{*}$
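To make the alternating structure of Algorithm 2 concrete, the following self-contained Python sketch implements a simplified version of the loop: odd iterations apply BER-style moves and even iterations apply the PSO update, the exploration group takes 70% of the agents as in Section 3.1, and the leader of the exploitation group is approximated by the global best. The bound handling, the omission of the stagnation-triggered mutation, and the test objective (a sphere function) are illustrative simplifications, not the exact configuration used in the paper.

```python
import numpy as np

def ber_pso_minimize(f, dim, n_agents=10, t_max=80, c1=2.0, c2=2.0,
                     lb=0.0, ub=1.0, seed=0):
    """Minimal sketch of the alternating BER/PSO loop of Algorithm 2."""
    rng = np.random.default_rng(seed)
    S = rng.uniform(lb, ub, size=(n_agents, dim))      # positions
    V = np.zeros_like(S)                               # velocities
    fit = np.apply_along_axis(f, 1, S)
    pbest, pbest_fit = S.copy(), fit.copy()
    gbest = S[np.argmin(fit)].copy()

    def radius():
        # r = h*cos(x)/(1 - cos(x)) with h in [0, 2] and x in (0, 180] degrees
        x = np.radians(rng.uniform(1.0, 180.0))
        return rng.uniform(0.0, 2.0) * np.cos(x) / (1.0 - np.cos(x))

    n_explore = int(0.7 * n_agents)                    # 70/30 split from Section 3.1
    for t in range(1, t_max + 1):
        if t % 2:                                      # BER-style iteration
            for i in range(n_agents):
                if i < n_explore:                      # exploration, Equation (2)
                    D = radius() * (S[i] - 1.0)
                    S[i] = S[i] + D * (2.0 * radius() - 1.0)
                else:                                  # exploitation, Equations (3)-(4)
                    D = radius() * (gbest - S[i])
                    cand1 = np.clip(radius() * (S[i] + D), lb, ub)
                    k = 1.0 + 2.0 * t**2 / t_max**2
                    cand2 = np.clip(radius() * (gbest + k), lb, ub)
                    S[i] = cand1 if f(cand1) < f(cand2) else cand2
        else:                                          # PSO-style iteration
            g1 = rng.uniform(size=S.shape)
            g2 = rng.uniform(size=S.shape)
            V = V + c1 * g1 * (pbest - S) + c2 * g2 * (gbest - S)
            S = S + V
        S = np.clip(S, lb, ub)                         # basic bound handling
        fit = np.apply_along_axis(f, 1, S)
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = S[improved], fit[improved]
        gbest = pbest[np.argmin(pbest_fit)].copy()
    return gbest, pbest_fit.min()

# Example: minimize a simple sphere function in 5 dimensions
best, best_val = ber_pso_minimize(lambda v: float(np.sum((v - 0.3) ** 2)), dim=5)
print(best, best_val)
```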

4.1. Computational Complexity

The computational complexity of the BER–PSO algorithm is derived as follows for $T_{max}$ iterations and a population of $n$ agents.
  • Initialize BER–PSO parameters: O(1).
  • Calculate fitness function for each agent: O(n).
  • Get the best agent: O (n).
  • Update agents’ positions: O( T m a x × n ).
  • Update agents’ positions around best solution: O( T m a x × n ).
  • Compare positions to select best solution: O( T m a x × n ).
  • Mutate solutions: O( T m a x × n ).
  • Update velocities: O( T m a x × n ).
  • Update positions to head toward best solution: O( T m a x × n ).
  • Update fitness function: O( T m a x ).
  • Update BER–PSO parameters: O( T m a x ).
  • Get the best agent $x_{Gbest}$: O(1).
Based on the above analysis, the overall computational complexity of the BER–PSO algorithm is O($T_{max} \times n$), which becomes O($T_{max} \times n \times d$) for a problem of dimension $d$.

4.2. Proposed Binary BER–PSO Algorithm

To make the process of choosing features from the dataset easier, the continuous values produced by the proposed BER–PSO algorithm are converted to binary values {0, 1} in this section. For feature selection problems, the BER–PSO solutions can be made strictly binary (0 or 1) by utilizing the following equation, which is based on the $Sigmoid$ function [39]:
$x_{d}^{t+1} = \begin{cases} 1 & \text{if } Sigmoid(m) \ge 0.5 \\ 0 & \text{otherwise} \end{cases}, \quad Sigmoid(m) = \frac{1}{1 + e^{-10 (m - 0.5)}}$
where $x_{d}^{t+1}$ indicates the binary solution at iteration $t$ and dimension $d$. The $Sigmoid$ function scales the output solutions to binary ones: the value is set to 1 if $Sigmoid(m) \ge 0.5$, and to 0 otherwise. The $m$ parameter reflects the features chosen by the algorithm. The binary BER–PSO algorithm is detailed in Algorithm 3. The computational complexity of the binary BER–PSO method is O($T_{max} \times n$), and O($T_{max} \times n \times d$) for dimension $d$.
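A direct transcription of this thresholding rule into Python could look like the following; the example vector is only for illustration.

```python
import numpy as np

def to_binary(solution: np.ndarray) -> np.ndarray:
    """Map a continuous BER–PSO solution to a binary feature mask.

    Applies Sigmoid(m) = 1 / (1 + exp(-10*(m - 0.5))) element-wise and the
    0.5 threshold described above; a 1 means the corresponding feature is kept.
    """
    sig = 1.0 / (1.0 + np.exp(-10.0 * (solution - 0.5)))
    return (sig >= 0.5).astype(int)

# Example: a continuous agent position becomes a feature-selection mask
print(to_binary(np.array([0.12, 0.55, 0.49, 0.91])))  # -> [0 1 0 1]
```

Since the sigmoid is monotonic, thresholding $Sigmoid(m)$ at 0.5 is equivalent to thresholding $m$ itself at 0.5; the sigmoid form simply mirrors the equation above.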
Algorithm 3 Proposed Binary BER–PSO algorithm
1: Initialize the BER–PSO parameters and population
2: Convert the solutions to binary (0 or 1)
3: Calculate the objective function for each agent
4: Find the best solution
5: Divide the agents into an exploration group, $n_1$, and an exploitation group, $n_2$
6: while $t \le T_{max}$ do
7:   if ($t \,\%\, 2$) then
8:     for ($i = 1$; $i < n_1 + 1$) do
9:       Update the agent’s position
10:    end for
11:    for ($i = 1$; $i < n_2 + 1$) do
12:      Update the agent’s position
13:      Update the agent’s position around the best solution
14:      Compare the updated positions and select the best solution
15:      if the best value of the objective function is the same for the last two rounds then
16:        Mutate the solution
17:      end if
18:    end for
19:  else
20:    Update the agents’ velocities and positions
21:  end if
22:  Convert the updated solutions to binary
23:  Update the objective function for each individual
24:  Update the BER–PSO parameters
25: end while
26: Return the best solution

5. Experimental Results

The findings of this investigation are thoroughly explained in this section. The experiments cover two different scenarios: the first covers the BER–PSO algorithm’s feature-selection capabilities for the dataset under test, while the second demonstrates the algorithm’s classification capabilities. In the tested dataset, the volume flow rates, temperatures, and weather conditions were the inputs to the algorithms, and the outputs were the outlet temperatures of the air and water as well as the humidity added by the humidifier.

5.1. Feature Selection Scenario

The proposed (BER–PSO) algorithm’s binary version is employed to choose features from the dataset under test. The feature selection outcomes of the BER–PSO method provided in this work are examined in the first scenario. Table 1 displays the configuration of the BER–PSO method for each parameter utilized in the experiment, while Table 2 displays the configuration of the comparison algorithms. PSO [16], GWO [18], hybrid GWO-PSO [19], WOA [20], BBO [21], SBO [22], FA [23], GA [24], and BA [25] are tested against the binary BER–PSO (bBER–PSO) algorithm.
In the binary BER–PSO method, the quality of a solution is assessed using the objective function $f_n$. The following objective function includes the total number of features, $S$, the number of selected features, $s$, and the classifier’s error rate, $Err$:
$f_n = \alpha\, Err + \beta\, \frac{|s|}{|S|}$
where $\beta = 1 - \alpha$ and $\alpha \in [0, 1]$ controls the relative importance of the classification error with respect to the size of the selected feature subset. A method is considered successful if it can offer a subset of features with a low classification error rate. The k-nearest neighbor (k-NN) algorithm is a popular and simple classification method, and it is used here as the classifier to assess the validity of the selected features. Classification is based solely on the shortest distances between the query instance and the training instances, so no separate training model needs to be fitted in this experiment.
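A possible implementation of this objective function, using a k-NN classifier with illustrative settings (5 neighbors, 5-fold cross-validation) and the $\alpha = 0.99$, $\beta = 0.01$ weights from Table 1, is sketched below; the synthetic dataset is only for demonstration and is not the HDH-SSF data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def feature_selection_fitness(mask, X, y, alpha=0.99, beta=0.01):
    """Fitness f_n = alpha * Err + beta * |s|/|S| for a binary feature mask.

    Err is the k-NN classification error estimated by cross-validation,
    |s| is the number of selected features, and |S| the total number of
    features.  The 5-fold CV and k = 5 neighbors are illustrative choices.
    """
    selected = np.flatnonzero(mask)
    if selected.size == 0:          # an empty mask is the worst possible solution
        return 1.0
    knn = KNeighborsClassifier(n_neighbors=5)
    acc = cross_val_score(knn, X[:, selected], y, cv=5).mean()
    err = 1.0 - acc
    return alpha * err + beta * selected.size / X.shape[1]

# Example with a synthetic classification problem
from sklearn.datasets import make_classification
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
mask = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])
print(feature_selection_fitness(mask, X, y))
```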
The performance metrics used in this experiment are the average error, average select size, average fitness, best fitness, worst fitness, and standard deviation, defined mathematically in Table 3. Table 4 shows the results for the presented bBER–PSO algorithm and the compared algorithms. The bBER–PSO algorithm gives an average error of 0.5623, which is better than the other algorithms. The second-best average error (0.5795) is achieved by bGWO, while the worst average error (0.6229) is achieved by bBA. A box plot analysis of the presented bBER–PSO and compared algorithms is also presented in Figure 2. The bBER–PSO algorithm shows better results over 15 runs in terms of the average error metric.
Statistical analysis is performed to confirm the algorithm’s performance. Table 5 shows the ANOVA test results of the presented bBER–PSO algorithm versus compared algorithms. Table 6 presents the Wilcoxon Signed Rank test results of the presented bBER–PSO algorithm and compared algorithms over 15 runs. The ANOVA and Wilcoxon Signed Rank tests results confirm the performance of the bBER–PSO algorithm.

5.2. Classification Scenario

The HDH and SSF test data classification results using the suggested BER–PSO algorithm are shown in this section. Table 7 shows the performance evaluation metrics applied in this scenario, where $N$ is the total number of observations in the dataset, $\hat{V}_n$ and $V_n$ are the $n$th estimated and observed values, and $\bar{\hat{V}}_n$ and $\bar{V}_n$ are the arithmetic means of the estimated and observed values, respectively.
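For reference, the metrics of Table 7 can be computed directly from two NumPy arrays, as in the sketch below. Note that RRMSE here divides by the sum of the estimated values and NSE uses the mean of the estimated series in its denominator, following the formulas listed in Table 7; other conventions exist in the literature. The example arrays are illustrative.

```python
import numpy as np

def regression_metrics(v_obs: np.ndarray, v_est: np.ndarray) -> dict:
    """Compute the Table 7 error measures for observed (V_n) and estimated (V_n hat) values."""
    diff = v_est - v_obs
    rmse = np.sqrt(np.mean(diff**2))
    return {
        "RMSE": rmse,
        "RRMSE%": rmse / np.sum(v_est) * 100.0,
        "MAE": np.mean(np.abs(diff)),
        "MBE": np.mean(diff),
        "NSE": 1.0 - np.sum(diff**2) / np.sum((v_obs - v_est.mean())**2),
        "R2": 1.0 - np.sum(diff**2) / np.sum((v_obs - v_obs.mean())**2),
        "WI": 1.0 - np.sum(np.abs(diff))
              / np.sum(np.abs(v_obs - v_obs.mean()) + np.abs(v_est - v_est.mean())),
        "r": np.corrcoef(v_est, v_obs)[0, 1],
    }

# Example on a small synthetic pair of series
obs = np.array([1.0, 1.2, 0.9, 1.1])
est = np.array([1.05, 1.15, 0.95, 1.08])
print(regression_metrics(obs, est))
```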

5.2.1. Results and Discussion Using HDH Data

The findings from the suggested BER–PSO method for the HDH test data are presented in this section. First, tests are conducted on the fundamental Decision Tree Regressor (DTR), MLP Regressor (MLP), K-Neighbors Regressor (KNR), Support Vector Regression (SVR), and Random Forest Regressor (RFR) models. Additionally, two ensembles for creating new estimators are Average Ensemble (AVE) and Ensemble utilizing KNR (EKNR). Table 8 compares the experimental findings of the BER–PSO-based model with those of the basic and ensemble models. The BER–PSO-based model achieved an RMSE of (0.0053), better than the compared models. The EKNR model achieved the second-best RMSE of (0.0130).
The presented BER–PSO algorithm is also compared with other optimization techniques, namely the PSO [16], GWO [18], WOA [20], and GA [24] algorithms. Figure 3 illustrates the box plot of the presented BER–PSO and compared algorithms for the HDH-tested data; the BER–PSO algorithm shows better results over 16 runs. Figure 4 shows the histogram of the presented BER–PSO and compared algorithms for the HDH-tested data, where the BER–PSO algorithm again shows better performance.
Figure 5 displays the quantile-quantile (QQ) plots as well as the residual plots for the presented BER–PSO method and the algorithms that were compared for the HDH-tested data. The ROC curve of the proposed BER–PSO algorithm compared to the ROC curve of the PSO algorithm for the HDH-tested data is displayed in Figure 6. The performance of the suggested BER–PSO algorithm for the HDH-tested data is demonstrated by these statistics, which prove its effectiveness.
In this section, statistical analysis is carried out in order to verify the effectiveness of the BER–PSO algorithm. The descriptive statistics of the provided BER–PSO method and the algorithms that are being compared for the HDH data are presented in Table 9. The description includes the minimum, median, maximum, mean, and standard (Std.) deviation of the RMSE for the tested data over 16 runs. The results of the ANOVA test for the suggested BER–PSO algorithm in comparison to the other algorithms are presented in Table 10. The results of the Wilcoxon Signed Rank Test for the presented BER–PSO algorithm and the algorithms that were compared may be found in Table 11.

5.2.2. Results and Discussion Using SSF Data

The results based on the proposed BER–PSO algorithm for the SSF test data are discussed in this section. The basic models of DTR, MLP, KNR, SVR, and RFR are tested first. In addition, AVE and EKNR are also tested. Table 12 shows the experimental results of the BER–PSO-based model versus basic and ensemble models for the SSF test data. The presented method achieved an RMSE of (0.00490), which is better than the compared models. The RFR achieved the worst RMSE of (0.1040).
The presented BER–PSO algorithm is also compared with other optimization techniques, namely the PSO, GWO, WOA, and GA algorithms. Figure 7 illustrates the box plot of the presented BER–PSO and compared algorithms for the SSF-tested data, while Figure 8 shows the corresponding histogram. The QQ and residual plots of the presented BER–PSO and compared algorithms for the SSF-tested data are shown in Figure 9, and Figure 10 shows the ROC curve of the proposed BER–PSO algorithm versus the PSO algorithm for the SSF-tested data.
Statistical analysis is also performed here to confirm the BER–PSO algorithm’s performance for the SSF data. Table 13 presents the descriptive statistics of the presented BER–PSO algorithm and the compared algorithms for the SSF data. The description includes the minimum, median, maximum, mean, and standard (Std.) deviation of the RMSE for the tested data over 16 runs. Table 14 shows the ANOVA test results of the proposed BER–PSO algorithm versus the compared algorithms. Finally, Table 15 presents the Wilcoxon Signed Rank Test results of the presented BER–PSO algorithm and the compared algorithms.

6. Conclusions and Future Work

This work employed an improved prediction method based on a supervised machine-learning algorithm to forecast the performance of a hybrid solar desalination system. The proposed algorithm is based on the Al-Biruni Earth Radius (BER) and Particle Swarm Optimization (PSO) algorithms. The BER–PSO algorithm was trained and assessed using experimental data. The outcomes demonstrated the BER–PSO method’s excellent capacity to pinpoint the nonlinear relationship between process responses and operating conditions. Additionally, in comparison to the other models under consideration, it provided higher statistical performance measures for the forecasting of pressure drop values and outlet temperatures for hot and cold fluids. The findings of many algorithms were compared to decide which was the most accurate.

Author Contributions

Conceptualization, A.E.K. and E.M.S.E.-S.; Methodology, A.I. and E.-S.M.E.-k.; Software, E.-S.M.E.-k. and M.M.E.; Validation, M.E.-S.; Formal analysis, A.E.K., M.M.E., A.A.A., S.A.W., E.M.S.E.-S. and M.E.-S.; Investigation, M.M.E. and M.E.-S.; Resources, S.A.W.; Data curation, E.-S.M.E.-k. and E.M.S.E.-S.; Writing—original draft, A.I. and A.A.A.; Writing—review & editing, A.I.; Visualization, A.A.A.; Supervision, A.E.K. and S.A.W.; Project administration, F.K.K. and D.S.K.; Funding acquisition, F.K.K. and D.S.K. All authors have read and agreed to the published version of the manuscript.

Funding

Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R300), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Acknowledgments

Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R300), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Abujazar, M.S.S.; Fatihah, S.; Ibrahim, I.A.; Kabeel, A.; Sharil, S. Productivity modelling of a developed inclined stepped solar still system based on actual performance and using a cascaded forward neural network model. J. Clean. Prod. 2018, 170, 147–159. [Google Scholar] [CrossRef]
  2. Al-Khudhiri, A.I. Optimal Design of Hybrid MSF/RO Desalination Plant. Master’s Thesis, Department of Chemical Engineering, College of Engineering, Cairo, Egypt, 2006. [Google Scholar]
  3. Bacha, H.B.; Damak, T.; Bouzguenda, M.; Maalej, A.Y.; Dhia, H.B. A methodology to design and predict operation of a solar collector for a solar-powered desalination unit using the SMCEC principle. Desalination 2003, 156, 305–313. [Google Scholar] [CrossRef]
  4. Wang, X.; Ng, K.C. Experimental investigation of an adsorption desalination plant using low-temperature waste heat. Appl. Therm. Eng. 2005, 25, 2780–2789. [Google Scholar] [CrossRef]
  5. Elsheikh, A.H.; Sharshir, S.W.; Elaziz, M.A.; Kabeel, A.; Guilan, W.; Haiou, Z. Modeling of solar energy systems using artificial neural network: A comprehensive review. Sol. Energy 2019, 180, 622–639. [Google Scholar] [CrossRef]
  6. Abiodun, O.I.; Jantan, A.; Omolara, A.E.; Dada, K.V.; Mohamed, N.A.; Arshad, H. State-of-the-art in artificial neural network applications: A survey. Heliyon 2018, 4, e00938. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. Linardatos, P.; Papastefanopoulos, V.; Kotsiantis, S. Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy 2020, 23, 18. [Google Scholar] [CrossRef] [PubMed]
  8. El-Kenawy, E.S.M.; Mirjalili, S.; Alassery, F.; Zhang, Y.D.; Eid, M.M.; El-Mashad, S.Y.; Aloyaydi, B.A.; Ibrahim, A.; Abdelhamid, A.A. Novel Meta-Heuristic Algorithm for Feature Selection, Unconstrained Functions and Engineering Problems. IEEE Access 2022, 10, 40536–40555. [Google Scholar] [CrossRef]
  9. El-kenawy, E.S.M.; Albalawi, F.; Ward, S.A.; Ghoneim, S.S.M.; Eid, M.M.; Abdelhamid, A.A.; Bailek, N.; Ibrahim, A. Feature Selection and Classification of Transformer Faults Based on Novel Meta-Heuristic Algorithm. Mathematics 2022, 10, 3144. [Google Scholar] [CrossRef]
  10. Akinola, O.O.; Ezugwu, A.E.; Agushaka, J.O.; Zitar, R.A.; Abualigah, L. Multiclass feature selection with metaheuristic optimization algorithms: A review. Neural Comput. Appl. 2022, 34, 19751–19790. [Google Scholar] [CrossRef] [PubMed]
  11. Trojovská, E.; Dehghani, M. A new human-based metahurestic optimization method based on mimicking cooking training. Sci. Rep. 2022, 12, 14861. [Google Scholar] [CrossRef]
  12. El-Kenawy, E.S.M.; Mirjalili, S.; Abdelhamid, A.A.; Ibrahim, A.; Khodadadi, N.; Eid, M.M. Meta-Heuristic Optimization and Keystroke Dynamics for Authentication of Smartphone Users. Mathematics 2022, 10, 2912. [Google Scholar] [CrossRef]
  13. Alhussan, A.A.; Khafaga, D.S.; El-Kenawy, E.S.M.; Ibrahim, A.; Eid, M.M.; Abdelhamid, A.A. Pothole and Plain Road Classification Using Adaptive Mutation Dipper Throated Optimization and Transfer Learning for Self Driving Cars. IEEE Access 2022, 10, 84188–84211. [Google Scholar] [CrossRef]
  14. Abdelhamid, A.A.; El-Kenawy, E.S.M.; Alotaibi, B.; Amer, G.M.; Abdelkader, M.Y.; Ibrahim, A.; Eid, M.M. Robust Speech Emotion Recognition Using CNN + LSTM Based on Stochastic Fractal Search Optimization Algorithm. IEEE Access 2022, 10, 49265–49284. [Google Scholar] [CrossRef]
  15. El-kenawy, E.S.M.; Abdelhamid, A.A.; Ibrahim, A.; Mirjalili, S.; Khodadad, N.; Al duailij, M.A.; Alhussan, A.A.; Khafaga, D.S. Al-Biruni Earth Radius (BER) Metaheuristic Search Optimization Algorithm. Comput. Syst. Sci. Eng. 2023, 45, 1917–1934. [Google Scholar] [CrossRef]
  16. Bello, R.; Gomez, Y.; Nowe, A.; Garcia, M.M. Two-Step Particle Swarm Optimization to Solve the Feature Selection Problem. In Proceedings of the Seventh International Conference on Intelligent Systems Design and Applications (ISDA 2007), Rio de Janeiro, Brazil, 20–24 October 2007; pp. 691–696. [Google Scholar] [CrossRef]
  17. Kabeel, A.; El-Said, E.M. A hybrid solar desalination system of air humidification, dehumidification and water flashing evaporation: Part II. Experimental investigation. Desalination 2014, 341, 50–60. [Google Scholar] [CrossRef]
  18. Al-Tashi, Q.; Abdul Kadir, S.J.; Rais, H.M.; Mirjalili, S.; Alhussian, H. Binary Optimization Using Hybrid Grey Wolf Optimization for Feature Selection. IEEE Access 2019, 7, 39496–39508. [Google Scholar] [CrossRef]
  19. Şenel, F.A.; Gokçe, F.; Yuksel, A.S.; Yigit, T. A novel hybrid PSO–GWO algorithm for optimization problems. Eng. Comput. 2019, 35, 1359–1373. [Google Scholar] [CrossRef]
  20. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  21. Simon, D. Biogeography-Based Optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713. [Google Scholar] [CrossRef]
  22. Samareh Moosavi, S.H.; Khatibi Bardsiri, V. Satin Bowerbird Optimizer. Eng. Appl. Artif. Intell. 2017, 60, 1–15. [Google Scholar] [CrossRef]
  23. Fister, I.; Yang, X.S.; Fister, I.; Brest, J. Memetic firefly algorithm for combinatorial optimization. arXiv 2012, arXiv:1204.5165. [Google Scholar]
  24. Kabir, M.M.; Shahjahan, M.; Murase, K. A new local search based hybrid genetic algorithm for feature selection. Neurocomputing 2011, 74, 2914–2928. [Google Scholar] [CrossRef]
  25. Karakonstantis, I.; Vlachos, A. Bat algorithm applied to continuous constrained optimization problems. J. Inf. Optim. Sci. 2020, 42, 57–75. [Google Scholar] [CrossRef]
  26. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  27. El-kenawy, E.S.M.; Mirjalili, S.; Ghoneim, S.S.M.; Eid, M.M.; El-Said, M.; Khan, Z.S.; Ibrahim, A. Advanced Ensemble Model for Solar Radiation Forecasting using Sine Cosine Algorithm and Newton’s Laws. IEEE Access 2021, 9, 115750–115765. [Google Scholar] [CrossRef]
  28. Jang, S.; Jang, Y.E.; Kim, Y.J.; Yu, H. Input initialization for inversion of neural networks using k-nearest neighbor approach. Inf. Sci. 2020, 519, 229–242. [Google Scholar] [CrossRef]
  29. Tharwat, A. Parameter investigation of support vector machine classifier with kernel functions. Knowl. Inf. Syst. 2019, 61, 1269–1302. [Google Scholar] [CrossRef]
  30. Noi, P.T.; Kappas, M. Comparison of Random Forest, k-Nearest Neighbor, and Support Vector Machine Classifiers for Land Cover Classification Using Sentinel-2 Imagery. Sensors 2017, 18, 18. [Google Scholar] [CrossRef] [Green Version]
  31. El-Said, E.M.; Elaziz, M.A.; Elsheikh, A.H. Machine learning algorithms for improving the prediction of air injection effect on the thermohydraulic performance of shell and tube heat exchanger. Appl. Therm. Eng. 2021, 185, 116471. [Google Scholar] [CrossRef]
  32. Kabeel, A.; El-Said, E.M. A hybrid solar desalination system of air humidification dehumidification and water flashing evaporation: A comparison among different configurations. Desalination 2013, 330, 79–89. [Google Scholar] [CrossRef]
  33. Pandey, A.; Kumar, R.R.; B, K.; Laghari, I.A.; Samykano, M.; Kothari, R.; Abusorrah, A.M.; Sharma, K.; Tyagi, V. Utilization of solar energy for wastewater treatment: Challenges and progressive research trends. J. Environ. Manag. 2021, 297, 113300. [Google Scholar] [CrossRef]
  34. Rashidi, S.; Karimi, N.; Yan, W.M. Applications of machine learning techniques in performance evaluation of solar desalination system—A concise review. Eng. Anal. Bound. Elem. 2022, 144, 399–408. [Google Scholar] [CrossRef]
  35. Kumar, R.R.; Pandey, A.; Samykano, M.; Aljafari, B.; Ma, Z.; Bhattacharyya, S.; Goel, V.; Ali, I.; Kothari, R.; Tyagi, V. Phase change materials integrated solar desalination system: An innovative approach for sustainable and clean water production and storage. Renew. Sustain. Energy Rev. 2022, 165, 112611. [Google Scholar] [CrossRef]
  36. Gwon, H.; Ahn, I.; Kim, Y.; Kang, H.J.; Seo, H.; Cho, H.N.; Choi, H.; Jun, T.J.; Kim, Y.H. Self–Training With Quantile Errors for Multivariate Missing Data Imputation for Regression Problems in Electronic Medical Records: Algorithm Development Study. JMIR Public Health Surveill. 2021, 7, e30824. [Google Scholar] [CrossRef]
  37. Ibrahim, A.; Mirjalili, S.; El-Said, M.; Ghoneim, S.S.M.; Al-Harthi, M.M.; Ibrahim, T.F.; El-Kenawy, E.S.M. Wind Speed Ensemble Forecasting Based on Deep Learning Using Adaptive Dynamic Optimization Algorithm. IEEE Access 2021, 9, 125787–125804. [Google Scholar] [CrossRef]
  38. Kalantari, S.; Motameni, H.; Akbari, E.; Rabbani, M. Optimal components selection based on fuzzy-intra coupling density for component-based software systems under build-or-buy scheme. Complex Intell. Syst. 2021, 7, 3111–3134. [Google Scholar] [CrossRef]
  39. Emary, E.; Zawbaa, H.M.; Hassanien, A.E. Binary grey wolf optimization approaches for feature selection. Neurocomputing 2016, 172, 371–381. [Google Scholar] [CrossRef]
Figure 1. HDH-SSF experimental setup schematic diagram. F (Flow meter), T (Thermocouple), S (Pyrometer), H (Thermo hygrometer), V (Valve), P (Pump), LI (Level indicator), MT (Mixing Tank), TK (Tank), EH (Electric Heater), SV (Solenoid), SWH (Solar Water Heater), SAH (Solar Air Heater), HHEX (Helical Heat Exchanger), FC (Flashing Chamber), C1 (Condenser), and C2 (Dehumidifier).
Figure 2. The given bBER–PSO and comparable algorithms are depicted in a box plot over the average error metric.
Figure 3. For the HDH tested data, a box plot of the offered BER–PSO and comparing algorithms.
Figure 4. For the HDH-tested data, a histogram of the offered BER–PSO and comparing algorithms is shown.
Figure 5. BER–PSO and comparable methods for the HDH-tested data are provided in QQ and residual plots for the HDH-tested data.
Figure 6. ROC curve comparing the PSO algorithm and the provided BER–PSO algorithm for the HDH testing data.
Figure 7. For the SSF tested data, a box plot of the offered BER–PSO and comparing algorithms.
Figure 8. For the SSF tested data, a histogram of the offered BER–PSO and comparing algorithms is shown.
Figure 9. BER–PSO and comparable methods for the SSF tested data are provided in QQ and residual plots.
Figure 10. ROC curve comparing the PSO algorithm and the provided BER–PSO algorithm for the SSF testing data.
Table 1. Parameters for BER–PSO algorithm configuration.
Parameter(s) | Value(s)
No. of Agents | 10
No. of Iterations | 80
No. of Runs | 20
Dimension | No. of features
$W_{max}$, $W_{min}$ | [0.9, 0.6]
$C_1$, $C_2$ | [2, 2]
$\alpha$ of $F_n$ | 0.99
$\beta$ of $F_n$ | 0.01
Table 2. Parameter settings for the compared algorithms.
Algorithm | Parameter(s) | Value(s)
PSO | Acceleration constants | [2, 2]
PSO | Inertia $W_{max}$, $W_{min}$ | [0.6, 0.9]
PSO | Particles | 10
PSO | Iterations | 80
GWO | a | 2 to 0
GWO | Iterations | 80
GWO | Wolves | 10
GA | Crossover | 0.9
GA | Mutation ratio | 0.1
GA | Selection mechanism | Roulette wheel
GA | Iterations | 80
GA | Agents | 10
WOA | r | [0, 1]
WOA | Iterations | 80
WOA | Whales | 10
WOA | a | 2 to 0
SBO | Size of step | 0.94
SBO | Mutation probability | 0.05
SBO | Upper and lower limit | 0.02
FA | Number of fireflies | 10
BA | Pulse rate | 0.5
BA | Loudness | 0.5
BA | Frequency | [0, 1]
BBO | Probability of immigration | [0, 1]
BBO | Probability of mutation | 0.05
BBO | Probability of habitat modification | 1.0
BBO | Size of step | 1.0
BBO | Rate of migration | 1.0
BBO | Maximum immigration | 1.0
Table 3. Feature selection performance metrics.
Metric | Formula
Average Error | $1 - \frac{1}{M}\sum_{j=1}^{M}\frac{1}{N}\sum_{i=1}^{N} Match(C_i, L_i)$
Average Select Size | $\frac{1}{M}\sum_{j=1}^{M}\frac{size(g_j^{*})}{D}$
Average Fitness | $\frac{1}{M}\sum_{j=1}^{M} g_j^{*}$
Best Fitness | $\min_{j=1}^{M} g_j^{*}$
Worst Fitness | $\max_{j=1}^{M} g_j^{*}$
Standard Deviation | $\sqrt{\frac{1}{M-1}\sum_{j=1}^{M}\left(g_j^{*} - Mean\right)^{2}}$
Table 4. Results of the proposed bBER–PSO algorithm and comparison.
Performance Metric | bBER–PSO | bGWO | bGWO_PSO | bPSO | bBA | bWOA | bBBO | bSBO | bFA | bGA
Average error | 0.5623 | 0.5795 | 0.6188 | 0.6133 | 0.6229 | 0.6131 | 0.5815 | 0.6216 | 0.6117 | 0.5931
Average select size | 0.5151 | 0.7151 | 0.8484 | 0.7151 | 0.8545 | 0.8785 | 0.8789 | 0.8854 | 0.7496 | 0.6575
Average fitness | 0.6255 | 0.6417 | 0.65 | 0.6401 | 0.663 | 0.6479 | 0.6458 | 0.6798 | 0.692 | 0.6531
Best fitness | 0.5273 | 0.562 | 0.6035 | 0.6204 | 0.5527 | 0.612 | 0.6355 | 0.6229 | 0.6107 | 0.5564
Worst fitness | 0.6258 | 0.6289 | 0.7135 | 0.6881 | 0.6543 | 0.6881 | 0.722 | 0.7026 | 0.7083 | 0.6715
Standard deviation of fitness | 0.4478 | 0.4525 | 0.4707 | 0.4519 | 0.4618 | 0.4541 | 0.4968 | 0.5128 | 0.4887 | 0.4541
Table 5. Results of an ANOVA test comparing the presented bBER–PSO method to other algorithms.
 | SS | DF | MS | F (DFn, DFd) | p Value
Treatment (between columns) | 0.06272 | 9 | 0.006969 | F (9, 140) = 184.1 | p < 0.0001
Residual (within columns) | 0.005299 | 140 | 3.79 × 10⁻⁵ | - | -
Total | 0.06802 | 149 | - | - | -
Table 6. Results of the provided bBER–PSO method and compared algorithms in the Wilcoxon Signed Rank Test.
 | bBER–PSO | bGWO | bGWO_PSO | bPSO | bBA | bWOA | bBBO | bSBO | bFA | bGA
Theoretical median | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Actual median | 0.5623 | 0.5795 | 0.6188 | 0.6133 | 0.6229 | 0.6131 | 0.5815 | 0.6216 | 0.6117 | 0.5931
Number of values | 15 | 15 | 15 | 15 | 15 | 15 | 15 | 15 | 15 | 15
Wilcoxon Signed Rank Test
Sum of signed ranks (W) | 120 | 120 | 120 | 120 | 120 | 120 | 120 | 120 | 120 | 120
Sum of positive ranks | 120 | 120 | 120 | 120 | 120 | 120 | 120 | 120 | 120 | 120
Sum of negative ranks | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
p value (two tailed) | <0.0001 | <0.0001 | <0.0001 | <0.0001 | <0.0001 | <0.0001 | <0.0001 | <0.0001 | <0.0001 | <0.0001
Exact or estimate? | Exact | Exact | Exact | Exact | Exact | Exact | Exact | Exact | Exact | Exact
Significant (alpha = 0.05)? | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes
How big is the discrepancy?
Discrepancy | 0.5623 | 0.5795 | 0.6188 | 0.6133 | 0.6229 | 0.6131 | 0.5815 | 0.6216 | 0.6117 | 0.5931
Table 7. Performance evaluation metrics.
Metric | Formula
RMSE | $\sqrt{\frac{1}{N}\sum_{n=1}^{N}(\hat{V}_n - V_n)^{2}}$
RRMSE | $\frac{RMSE}{\sum_{n=1}^{N}\hat{V}_n} \times 100$
MAE | $\frac{1}{N}\sum_{n=1}^{N}\left|\hat{V}_n - V_n\right|$
NSE | $1 - \frac{\sum_{n=1}^{N}(V_n - \hat{V}_n)^{2}}{\sum_{n=1}^{N}(V_n - \bar{\hat{V}}_n)^{2}}$
MBE | $\frac{1}{N}\sum_{n=1}^{N}(\hat{V}_n - V_n)$
R2 | $1 - \frac{\sum_{n=1}^{N}(V_n - \hat{V}_n)^{2}}{\sum_{n=1}^{N}(V_n - \bar{V}_n)^{2}}$
WI | $1 - \frac{\sum_{n=1}^{N}\left|\hat{V}_n - V_n\right|}{\sum_{n=1}^{N}\left(\left|V_n - \bar{V}_n\right| + \left|\hat{V}_n - \bar{\hat{V}}_n\right|\right)}$
r | $\frac{\sum_{n=1}^{N}(\hat{V}_n - \bar{\hat{V}}_n)(V_n - \bar{V}_n)}{\sqrt{\sum_{n=1}^{N}(\hat{V}_n - \bar{\hat{V}}_n)^{2}\sum_{n=1}^{N}(V_n - \bar{V}_n)^{2}}}$
Table 8. Experimental findings of the BER–PSO-based model for the HDH test data in comparison to basic and ensemble models.
Model | MAE | MBE | RMSE | RRMSE% | r | NSE | WI | R2
DTR | 0.0473 | −0.00182 | 0.0636 | 17.0871 | 0.9627 | 0.9264 | 0.8870 | 0.9267
MLP | 0.0350 | 0.0008 | 0.04734 | 12.7164 | 0.9795 | 0.9593 | 0.9162 | 0.9594
KNR | 0.0129 | −0.0022 | 0.0188 | 5.0413 | 0.9969 | 0.9936 | 0.9692 | 0.9938
SVR | 0.0335 | 0.0048 | 0.0448 | 12.0484 | 0.9819 | 0.9634 | 0.9198 | 0.9641
RFR | 0.0788 | −0.0003 | 0.1021 | 27.4233 | 0.9005 | 0.8105 | 0.8118 | 0.8109
AVE | 0.0325 | 0.0003 | 0.0442 | 5.0413 | 0.9830 | 0.9644 | 0.9224 | 0.9664
EKNR | 0.0089 | −0.0010 | 0.0130 | 3.4871 | 0.9985 | 0.9969 | 0.9788 | 0.9970
BER–PSO | 0.0034 | 0.0008 | 0.0053 | 0.9815 | 0.9997 | 0.9993 | 0.9906 | 0.9994
Table 9. Descriptive statistics of the compared methods for the HDH data and the provided BER–PSO algorithm.
 | BER–PSO | PSO | GWO | WOA | GA
Number of values | 16 | 16 | 16 | 16 | 16
Minimum | 0.004314 | 0.006789 | 0.00712 | 0.00844 | 0.009178
25% Percentile | 0.005314 | 0.007789 | 0.00812 | 0.00944 | 0.009978
Median | 0.005314 | 0.007789 | 0.00812 | 0.00944 | 0.009978
75% Percentile | 0.005314 | 0.007789 | 0.00812 | 0.00944 | 0.009978
Maximum | 0.005314 | 0.007989 | 0.00912 | 0.00999 | 0.009998
Range | 0.001 | 0.0012 | 0.002 | 0.00155 | 0.00082
Mean | 0.005252 | 0.007739 | 0.008163 | 0.009431 | 0.009929
Std. Deviation | 0.00025 | 0.000258 | 0.000404 | 0.000305 | 0.0002
Std. Error of Mean | 6.25 × 10⁻⁵ | 6.46 × 10⁻⁵ | 0.000101 | 7.61 × 10⁻⁵ | 5.01 × 10⁻⁵
Sum | 0.08402 | 0.1238 | 0.1306 | 0.1509 | 0.1589
Table 10. Results of an ANOVA test comparing the proposed BER–PSO method against other algorithms.
 | SS | DF | MS | F (DFn, DFd) | p Value
Treatment (between columns) | 0.000214 | 4 | 5.35 × 10⁻⁵ | F (4, 75) = 628.5 | p < 0.0001
Residual (within columns) | 6.38 × 10⁻⁶ | 75 | 8.51 × 10⁻⁸ | - | -
Total | 0.00022 | 79 | - | - | -
Table 11. Results of the given BER–PSO algorithm and compared algorithms in the Wilcoxon Signed Rank Test.
 | BER–PSO | PSO | GWO | WOA | GA
Theoretical median | 0 | 0 | 0 | 0 | 0
Actual median | 0.005314 | 0.007789 | 0.00812 | 0.00944 | 0.009978
Number of values | 16 | 16 | 16 | 16 | 16
Wilcoxon Signed Rank Test
Sum of signed ranks (W) | 136 | 136 | 136 | 136 | 136
Sum of positive ranks | 136 | 136 | 136 | 136 | 136
Sum of negative ranks | 0 | 0 | 0 | 0 | 0
p value (two tailed) | <0.0001 | <0.0001 | <0.0001 | <0.0001 | <0.0001
Exact or estimate? | Exact | Exact | Exact | Exact | Exact
Significant (alpha = 0.05)? | Yes | Yes | Yes | Yes | Yes
How big is the discrepancy?
Discrepancy | 0.005314 | 0.007789 | 0.00812 | 0.00944 | 0.009978
Table 12. Results of the experimental comparison of the basic and ensemble models with the BER–PSO-based model for the SSF test data.
Model | MAE | MBE | RMSE | RRMSE% | r | NSE | WI | R2
DTR | 0.0533 | −0.0029 | 0.0708 | 18.4687 | 0.9522 | 0.9064 | 0.8689 | 0.9066
MLP | 0.0445 | −0.0010 | 0.0574 | 14.9891 | 0.9688 | 0.9384 | 0.8905 | 0.9385
KNR | 0.0252 | −0.0046 | 0.0362 | 9.4493 | 0.9880 | 0.9755 | 0.9380 | 0.9761
SVR | 0.0362 | 0.0061 | 0.0477 | 12.4410 | 0.9789 | 0.9575 | 0.9109 | 0.9583
RFR | 0.0807 | 0.0048 | 0.1040 | 27.1374 | 0.8949 | 0.7980 | 0.8013 | 0.8009
AVE | 0.0368 | 0.0005 | 0.0504 | 9.4493 | 0.9773 | 0.9526 | 0.9095 | 0.9551
EKNR | 0.0219 | −0.0018 | 0.0305 | 7.9679 | 0.9913 | 0.9826 | 0.9460 | 0.9827
BER–PSO | 0.0031 | 0.0003 | 0.00490 | 0.9031 | 0.9997 | 0.9994 | 0.9914 | 0.9994
Table 13. Descriptive statistics of the compared methods for the SSF data and the provided BER–PSO algorithm.
 | BER–PSO | PSO | GWO | WOA | GA
Number of values | 16 | 16 | 16 | 16 | 16
Minimum | 0.004698 | 0.00666 | 0.00713 | 0.00915 | 0.00916
25% Percentile | 0.004898 | 0.00766 | 0.00813 | 0.00945 | 0.00966
Median | 0.004898 | 0.00766 | 0.00813 | 0.00945 | 0.00966
75% Percentile | 0.004898 | 0.00766 | 0.00813 | 0.00945 | 0.00966
Maximum | 0.004898 | 0.00766 | 0.00913 | 0.0099 | 0.00998
Range | 0.0002 | 0.001 | 0.002 | 0.00075 | 0.00082
Mean | 0.004879 | 0.007591 | 0.008173 | 0.009478 | 0.009668
Std. Deviation | 5.44 × 10⁻⁵ | 0.00025 | 0.000403 | 0.000157 | 0.000172
Std. Error of Mean | 1.36 × 10⁻⁵ | 6.24 × 10⁻⁵ | 0.000101 | 3.93 × 10⁻⁵ | 4.29 × 10⁻⁵
Sum | 0.07807 | 0.1215 | 0.1308 | 0.1517 | 0.1547
Table 14. Results of the ANOVA test comparing the presented BER–PSO algorithm against other algorithms for the SSF data.
 | SS | DF | MS | F (DFn, DFd) | p Value
Treatment (between columns) | 0.000238 | 4 | 5.96 × 10⁻⁵ | F (4, 75) = 1057 | p < 0.0001
Residual (within columns) | 4.23 × 10⁻⁶ | 75 | 5.64 × 10⁻⁸ | - | -
Total | 0.000243 | 79 | - | - | -
Table 15. Results of the Wilcoxon Signed Rank Test for the SSF data using the provided BER–PSO algorithm and comparative algorithms.
 | BER–PSO | PSO | GWO | WOA | GA
Theoretical median | 0 | 0 | 0 | 0 | 0
Actual median | 0.004898 | 0.00766 | 0.00813 | 0.00945 | 0.00966
Number of values | 16 | 16 | 16 | 16 | 16
Wilcoxon Signed Rank Test
Sum of signed ranks (W) | 136 | 136 | 136 | 136 | 136
Sum of positive ranks | 136 | 136 | 136 | 136 | 136
Sum of negative ranks | 0 | 0 | 0 | 0 | 0
p value (two tailed) | <0.0001 | <0.0001 | <0.0001 | <0.0001 | <0.0001
Exact or estimate? | Exact | Exact | Exact | Exact | Exact
Significant (alpha = 0.05)? | Yes | Yes | Yes | Yes | Yes
How big is the discrepancy?
Discrepancy | 0.004898 | 0.00766 | 0.00813 | 0.00945 | 0.00966
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
