Article

Adaptive Network-Based Fuzzy Inference System Training Using Nine Different Metaheuristic Optimization Algorithms for Time-Series Analysis of Brent Oil Price and Detailed Performance Analysis

1 Department of Computer Engineering, Faculty of Engineering and Architecture, Nevşehir Hacı Bektaş Veli University, Nevşehir 50100, Türkiye
2 CEKA Software R&D Co. Ltd., Nevşehir 50100, Türkiye
3 Department of Mathematics, Faculty of Arts and Sciences, Nevşehir Hacı Bektaş Veli University, Nevşehir 50100, Türkiye
4 Department of Computer Technologies, Nevşehir Vocational School, Nevşehir Hacı Bektaş Veli University, Nevşehir 50100, Türkiye
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(5), 786; https://doi.org/10.3390/sym17050786
Submission received: 12 March 2025 / Revised: 6 May 2025 / Accepted: 14 May 2025 / Published: 19 May 2025

Abstract
Brent oil holds a significant position in the global energy market, as oil prices in many regions are indexed to it. Therefore, forecasting the future price of Brent oil is of great importance. In recent years, artificial intelligence techniques have been widely applied in modeling and prediction tasks. In this study, an Adaptive Neuro-Fuzzy Inference System (ANFIS), a well-established AI approach, was employed for the time-series forecasting of Brent oil prices. To ensure effective learning and improve prediction accuracy, ANFIS was trained using nine different metaheuristic algorithms: Artificial Bee Colony (ABC), Selfish Herd Optimizer (SHO), Biogeography-Based Optimization (BBO), Multi-Verse Optimizer (MVO), Teaching–Learning-Based Optimization (TLBO), Cuckoo Search (CS), Moth Flame Optimization (MFO), Marine Predator Algorithm (MPA), and Flower Pollination Algorithm (FPA). Symmetric training procedures were applied across all algorithms to ensure fair and consistent evaluation. The analyses were conducted on the lowest and highest daily, weekly, and monthly Brent oil prices. Mean squared error (MSE) was used as the primary performance metric. The results showed that all algorithms achieved effective prediction performance. Among them, BBO and TLBO demonstrated superior accuracy and stability, particularly in handling the complexities of Brent oil forecasting. This study contributes to the literature by combining ANFIS and metaheuristics within a symmetric framework of experimentation and evaluation.

1. Introduction

For many years, crude oil has been one of the world's most significant energy and financial assets, and it continues to be so. Roughly one-third of the world's energy is derived from crude oil, which can be refined into a variety of fuels to satisfy varied consumer needs [1]. It is also a common raw material for petroleum-based products in daily life. Since fluctuations in the price of crude oil have a domino effect on the global economy, forecasting its price is crucial for planning long-term strategies. Because of its highly volatile and turbulent structure, which is especially influenced by politics, accurate prediction is both difficult and essential [1,2]. Such predictions increasingly rely on artificial intelligence, a field that has grown rapidly in popularity and is now widely employed. The primary methods of artificial intelligence include heuristic optimization algorithms, fuzzy logic, artificial neural networks, and neuro-fuzzy systems, and they have been used to solve numerous real-world problems [3,4,5]. Fuzzy sets and inference systems are the preferred approach for handling ambiguous and imprecise situations [6]. However, they cannot generate rules on their own or carry out a learning process [7]. Artificial neural networks (ANNs), in contrast, are capable of self-organization, self-interaction, and learning from their environment [8]. The Adaptive Network-Based Fuzzy Inference System (ANFIS) [9], which integrates the characteristics of artificial neural networks and fuzzy inference systems, is one of the best-known neuro-fuzzy systems. It combines the strengths of both: fuzzy inference, which maps inputs effectively via membership functions and alpha cuts, and ANNs, which excel at self-organization [5,8]. Hence, it provides a reliable approach to problem modeling and identification.

2. Literature Review

Over the past few decades, conventional statistical and econometric methods have been used extensively to forecast the price of crude oil [10,11]. Amano proposed one of the first studies on oil market forecasting [12], predicting the oil market with a small-scale econometric model. Huntington used an advanced econometric model to forecast crude oil prices in the 1980s [13]. Furthermore, Abramson and Finizza used a probabilistic model to forecast oil prices [14]. When the price series being studied is linear or nearly linear, the models mentioned above can produce accurate forecasts. However, real-world crude oil price series exhibit a significant amount of nonlinearity and irregularity [10,11,15].
To deal with the limitations of classic models, some nonlinear and advanced artificial intelligence (AI) models have been applied to crude oil prediction [15]. Wang, Yu, and Lai integrated an ANN model with a knowledge database that includes historical events and their influence on oil prices. According to the authors, the hybrid ANN approach achieved 81% accuracy, compared with 61% for a pure ANN system [16]. Mirmirani and Li applied genetic algorithms to predict the price of crude oil and compared their findings with a VAR model [17].
Using intrinsic mode function inputs and an adaptive linear ANN learning paradigm, Yu, Wang, and Keung forecasted West Texas Intermediate (WTI) crude oil and Brent spot prices for the years 1986–2006 [10]. Kulkarni and Haidar provided an excellent description of building an ANN model [2]. They employed a multilayer feedforward neural network to estimate the direction of the crude oil spot price up to three days ahead, using data spanning 1996 to 2007. Their forecast accuracy was 78%, 66%, and 53% for one, two, and three days ahead, respectively.
Gori et al. trained and tested an ANFIS model that estimated oil prices for the years 1999 to 2003 using oil price data from July 1973 to January 1999 [18]. Chiroma et al. applied a novel approach, the co-active neuro-fuzzy inference system (CANFIS), to predict the crude oil price using monthly WTI data [19]. They developed this approach in place of ANFIS and other frequently used techniques to increase forecast accuracy. Mombeini and Yazdani suggested a hybrid model based on ARIMA (AutoRegressive Integrated Moving Average) and ANFIS to study the fluctuation and volatility of West Texas Intermediate (WTI) crude oil prices and to build a more exact and accurate model; several statistical analyses using the MAPE, R2, and PI tests were carried out for this purpose [20]. Abdollahi and Ebrahimi aimed to present a strong hybrid model for accurate Brent oil price forecasts [21]. Their model combines ANFIS, Autoregressive Fractionally Integrated Moving Average (ARFIMA), and Markov-switching models; these three techniques were combined to effectively capture both linear and nonlinear characteristics. The technique put forward by Abd Elaziz et al. relies on a modified salp swarm algorithm (SSA) to improve ANFIS performance [22]; they compared the outcome with nine other modified ANFIS approaches. Anshori et al. examined a case study and optimized the initial ANFIS parameters using the Cuckoo Search technique to estimate global crude oil prices [23]. Eliwa et al. [24] used 30 years of gasoline prices to forecast prices with an ANFIS model; by supporting this model with VAR (Vector Autoregression) and ARIMA models, they obtained high accuracy and significant correlation.
Recently, deep learning and hybrid models, such as LSTM (Long Short-Term Memory), GRU (Gated Recurrent Unit), SVM (Support Vector Machine), RF (Random Forest), and XGBoost (eXtreme Gradient Boosting), have begun to appear as novel methodologies in time-series analysis. Awijen et al. [25] presented a comparative study on the use of machine learning and deep learning to anticipate oil prices during crises, working primarily with RNN (recurrent neural network), LSTM, and SVM algorithms. Jabeur et al. [26] predicted the fall in oil prices using several machine learning techniques together with neural network models; among the methods applied, including RF, LightGBM (Light Gradient-Boosting Machine), XGBoost, and CatBoost, they found that RF and LightGBM offered the best results. Jiang et al. [27] compared the LSTM model with methods such as AR (Autoregression), SVR, RNN, and GRU and determined that LSTM produced better outcomes for China's crude oil forecast. To predict and test Brent and WTI crude oil prices at various time-series frequencies, Hasan et al. [28] presented a model they call LKDSR, which combines machine learning techniques such as k-nearest neighbor regression, linear regression, regression tree, support vector regression, and ridge regression. Furthermore, Sezer et al.'s study [29], a comprehensive literature review on the use of deep learning for forecasting financial time series, is valuable.
Iftikhar et al. [30] conducted a comprehensive analysis of Brent oil price forecasting by evaluating hybrid combinations of linear and nonlinear time-series models using the Hodrick–Prescott filter. Using European Brent crude oil spot data, Zhao et al. [31] constructed a three-layer LSTM model to predict prices, with highly positive outcomes. Dong et al. [32] used VMD (Variational Mode Decomposition) to eliminate noise from the data and PSR (Phase Space Reconstruction) to reconstruct the crude oil price series; lastly, they used CNN-BiLSTM (a hybrid bidirectional LSTM and CNN architecture) to make multi-step predictions. Naeem et al. [33] developed a hybrid model for crude oil price prediction by analyzing crude oil prices with SVM, ARIMA, and LSTM techniques. To improve the forecasting accuracy of crude oil prices and properly analyze their linear and nonlinear features, Xu et al. [34] developed hybrid approaches using models such as ARIMAX, GRU, LSTM, and MLP (Multilayer Perceptron). Sen et al. [35] investigated the prediction of crude oil prices using ANN, LSTM, and GRU models, optimizing the hyperparameters of LSTM and GRU with the PSO (Particle Swarm Optimization) method. Jin et al. [36] forecasted daily and monthly prices for Henry Hub natural gas, New York Harbor No. 2 heating oil, and WTI and Brent crude oil using nonlinear autoregressive neural network models; various model configurations, training methods, hidden neuron counts, delays, and data segmentations were taken into account in evaluating performance.
ANFIS was chosen due to its ability to combine the strengths of both neural networks and fuzzy logic, making it highly suitable for modeling complex, nonlinear systems such as time-series prediction problems. Additionally, ANFIS allows flexible adaptation through learning, while also maintaining interpretability through fuzzy rules.
When the proper parameters are chosen, ANFIS produces results for time-series analysis that are competitive with hybrid models. Hybrid models require more computation and make the model more complicated, yet they often fail to produce a noticeable improvement. ANFIS therefore achieves successful results in a simpler way, without the need for further processing.
In this study, metaheuristic algorithms were employed in the training process of ANFIS. Achieving effective results with ANFIS largely depends on the quality of the training process. A review of the literature shows that metaheuristic algorithms are commonly used for training ANFIS and have led to successful outcomes in various applications. Therefore, in the context of Brent oil price prediction, metaheuristic optimization was selected as the training strategy for ANFIS. Specifically, nine widely used and literature-supported metaheuristic algorithms—known for their strong performance in ANFIS training—were implemented in this study.
In this study, the entire experimental setup was constructed with a methodologically symmetric design. All metaheuristic algorithms were applied under the same training conditions, including identical datasets, ANFIS configurations, input–output pairings, and evaluation metrics. This symmetry in training and evaluation ensured fairness, consistency, and repeatability across the experiments. Such a structured and balanced framework contributes not only to the reliability of the comparative analysis but also reflects the core principles of symmetric design in artificial intelligence research.

3. Materials and Methods

3.1. Selfish Herd Optimizer

Each individual in a herd groups up with other conspecifics in an attempt to enhance its chance of avoiding predator attacks; however, it does not consider how this behavior may influence the chances of survival of other individuals [37]. SHO is an optimization algorithm based on the simulation of selfish herd behavior observed in individuals in animal herds at risk of predation. In this algorithm, there are two different kinds of search agents: packs of predators (P) and members of a selfish herd (H) known as the prey. The survival value of each individual in the population is obtained by the following formula:
$$SV_i = \frac{f_i - f_{best}}{f_{best} - f_{worst}}$$

where $f_i$ denotes the fitness value of individual $i$'s position, and $f_{best}$ and $f_{worst}$ are the best and worst fitness values reached while running the SHO [37]. The herd movement operator consists of two movements: the leader movement and the herd's following and deserting movements. The next iteration updates the herd leader's position using the following formula:
$$h_L^{t+1} = \begin{cases} h_L^t + c^t, & \text{if } SV_{h_L}^t = 1 \\ h_L^t + s^t, & \text{if } SV_{h_L}^t < 1 \end{cases}$$

where $c^t$ and $s^t$ are movement vectors that depend on the selfish repulsion experiment and the selfish attraction experiment, respectively [37].
Furthermore, members of an aggregation other than the leader are classified into two groups: herd followers (HFs) and herd defectors (HDs). Each herd member’s updated location is computed using the following equation [37]:
$$h_i^{t+1} = \begin{cases} h_i^t + f_i^t, & \text{if } h_i^t \in HF^t \\ h_i^t + d_i^t, & \text{if } h_i^t \in HD^t \end{cases}$$

where $f_i^t$ is the herd following vector and $d_i^t$ is the herd deserting vector. SHO then considers the location $h_r^t$ of a specific herd member while modeling the movement of each predator $p_i$ within the attacking pack $P$, as seen below:
$$p_i^{t+1} = p_i^t + 2q\left(h_r^t - p_i^t\right)$$

where $q$ is a random number in the interval $[0,1]$ [37].
Lastly, two phases are applied: the predation phase, which determines the predation probability for each individual in the threatened herd, and the renewal phase, which replenishes the population in the event that hunting causes it to decline by creating new individuals through mating operations. In the end, the iteration terminates if the stopping condition is satisfied.
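For concreteness, the survival-value normalization introduced at the start of this subsection can be sketched in a few lines of Python, assuming a minimization problem; the example fitness values are illustrative, not data from the study.

```python
import numpy as np

# A minimal sketch of the SHO survival-value computation in the equation above.
def survival_values(fitness: np.ndarray) -> np.ndarray:
    f_best, f_worst = fitness.min(), fitness.max()  # minimization convention
    # SV_i = (f_i - f_best) / (f_best - f_worst), as reconstructed above
    return (fitness - f_best) / (f_best - f_worst)

fitness = np.array([0.12, 0.35, 0.80, 0.05])
print(survival_values(fitness))  # the best individual maps to 0 under this form
```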

3.2. Biogeography-Based Optimization

This optimization technique was developed by Dan Simon in 2008 and is based on the migration and dispersal of living organisms in an ecosystem [38]. The method treats each solution conceptually as an island (habitat), and these islands are improved using the immigration and emigration behavior of species. The migration of species, the emergence of new species, and the extinction of existing ones are all captured by this mathematical model [39]. BBO uses its two operating mechanisms, migration and mutation, to decide whether migration and change will occur, respectively.
For the habitat suitability index (HSI), which corresponds to the fitness value, a habitat whose solution vector of suitability index variables (SIVs) fits it well is said to have a high HSI; otherwise, it has a low HSI. Here, the SIVs are the independent variables of the habitat, and the HSI is the dependent variable. In the migration step, the immigration rate λ is used to probabilistically decide whether each SIV of the i-th solution is replaced; if it is, the emigration rate μ is used to probabilistically select the solution from which the migrating SIV is taken [39]. If a solution has a low probability, its existence is unexpected, so it is likely to mutate into a different solution. In contrast, a solution with a high probability is less likely to change. Accordingly, the mutation rate of a solution is determined using the following equation:
$$m(S) = m_{max}\left(1 - \frac{P_s}{P_{max}}\right)$$

where $S$ is a solution, $P_s$ is its probability, $P_{max}$ is the largest probability in the population, and the parameter $m_{max}$ is user-defined [39].
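To illustrate how migration and mutation interact, the following Python sketch performs one BBO pass over a normalized population. The rank-based immigration and emigration rates and the uniform mutation are simplifying assumptions for illustration, not the exact scheme of [38,39].

```python
import numpy as np

rng = np.random.default_rng(0)

# A minimal sketch of one BBO migration/mutation pass (minimization assumed).
def bbo_step(pop, fitness, m_max=0.01):
    n, d = pop.shape
    order = np.argsort(fitness)               # rank habitats by HSI (fitness)
    ranks = np.empty(n)
    ranks[order] = np.arange(1, n + 1)
    lam = ranks / n                            # immigration rate: worse -> higher
    mu = 1.0 - lam                             # emigration rate: better -> higher
    new_pop = pop.copy()
    for i in range(n):
        for j in range(d):
            if rng.random() < lam[i]:          # immigrate this SIV?
                # pick a source habitat proportionally to emigration rates
                k = rng.choice(n, p=mu / mu.sum())
                new_pop[i, j] = pop[k, j]
            # mutation rate shrinks for "probable" (good) solutions, echoing m(S)
            if rng.random() < m_max * (1.0 - mu[i]):
                new_pop[i, j] = rng.uniform(0.0, 1.0)
    return new_pop
```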

3.3. Multi-Verse Optimizer

According to the Big Bang theory, the universe began with a massive explosion; the multi-verse interpretation holds that several such explosions occurred, each creating a new universe. The Multi-Verse Optimizer (MVO) is inspired by three major concepts in this theory: white holes, black holes, and wormholes [40,41]. The MVO algorithm is divided into two parts: exploration (using white and black holes) and exploitation (using wormholes). The exploration stage identifies the most promising regions for finding good local optima. In the second stage, exploitation, wormholes are used to search local areas around the global best. MVO uses a roulette wheel selection process to exchange variables (objects) among universes through white and black holes [40,41]. If we suppose that wormhole tunnels connect each universe with the best universe created so far, to allow local changes in each universe, this mechanism takes the following form [40]:
$$x_i^j = \begin{cases} \begin{cases} X_j + TDR \times \left((ub_j - lb_j) \times r_4 + lb_j\right), & \text{if } r_3 < 0.5 \\ X_j - TDR \times \left((ub_j - lb_j) \times r_4 + lb_j\right), & \text{if } r_3 \geq 0.5 \end{cases}, & \text{if } r_2 < WEP \\ x_i^j, & \text{if } r_2 \geq WEP \end{cases}$$

where $X_j$ denotes the $j$-th parameter of the best universe, $x_i^j$ is the $j$-th parameter of the $i$-th universe, $TDR$ and $WEP$ are two coefficients, $lb_j$ and $ub_j$ are the lower and upper bounds, and $r_2$, $r_3$, and $r_4$ are random values between 0 and 1. The travel distance ratio (TDR), which determines the distance rate (variation) over which an object can be transported, and the wormhole existence probability (WEP), which indicates the likelihood that wormholes exist in universes, are computed with the following formulas [40]:
$$WEP = min + l \times \left(\frac{max - min}{L}\right)$$

$$TDR = 1 - \frac{l^{1/p}}{L^{1/p}}$$

where $l$ is the current iteration, $L$ is the maximum number of iterations, $min$ and $max$ are the lower and upper limits of the WEP, and $p$ defines the exploitation accuracy over the iterations [40].
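The wormhole mechanism can be sketched as follows in Python. The WEP bounds (0.2 and 1.0) and the exploitation constant p = 6 are assumed defaults, not values given in the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# A minimal sketch of the MVO wormhole update given above.
def wormhole_update(x, best, lb, ub, it, max_it, wep_min=0.2, wep_max=1.0, p=6):
    wep = wep_min + it * (wep_max - wep_min) / max_it     # WEP formula
    tdr = 1.0 - it ** (1.0 / p) / max_it ** (1.0 / p)     # TDR formula
    x = x.copy()
    for j in range(x.size):
        if rng.random() < wep:                            # r2 < WEP: use wormhole
            step = tdr * ((ub[j] - lb[j]) * rng.random() + lb[j])
            x[j] = best[j] + step if rng.random() < 0.5 else best[j] - step
        # otherwise x[j] is left unchanged (the r2 >= WEP branch)
    return x
```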

3.4. Teaching–Learning-Based Optimization

This is an algorithm constructed around classical learning theory, which was inspired by the ability of students to acquire knowledge and the ability of teachers to teach. It is divided into two phases: learning from the teacher (teacher phase) and learning through student engagement (learner phase) [42].
The teacher phase begins with identifying the teacher, the individual providing the best solution in the population. The probability of success at this stage is distributed according to a Gaussian distribution. Although it is not realistic in practice, a competent teacher is expected to raise the level of the students to their own. In reality, the teacher attempts to increase the average class result according to their own ability. Therefore, at this stage, the current solution is updated according to the following expression [42]:
$$x_{new,i} = x_{old,i} + r_i\left(M_{new} - T_F M_i\right)$$

where $r_i$ is a random value between 0 and 1, $T_F$ is a teaching factor, $M_i$ is the current mean, and $M_{new}$ is the new mean. Here, $T_F$ can be either 1 or 2.
In the learner phase, students interact with one another and share information in order to raise their knowledge levels; interacting with a more knowledgeable student leads to learning new things. Depending on which of the two interacting students holds the better solution, the following updates are made [42]:
$$x_{new,i} = \begin{cases} x_{old,i} + r_i\left(x_i - x_j\right), & \text{if } f(x_i) < f(x_j) \\ x_{old,i} + r_i\left(x_j - x_i\right), & \text{if } f(x_j) < f(x_i) \end{cases}$$
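The two phases can be summarized in a short Python sketch for a generic minimization objective; the greedy acceptance of improved candidates follows common TLBO practice and is an assumption here rather than a detail quoted from the text.

```python
import numpy as np

rng = np.random.default_rng(2)

# A minimal sketch of one TLBO iteration (teacher phase, then learner phase).
def tlbo_step(pop, f):
    n, d = pop.shape
    fitness = np.apply_along_axis(f, 1, pop)
    # ---- teacher phase ----
    teacher = pop[fitness.argmin()]            # best individual acts as teacher
    mean = pop.mean(axis=0)
    tf = rng.integers(1, 3)                    # teaching factor T_F: 1 or 2
    for i in range(n):
        cand = pop[i] + rng.random(d) * (teacher - tf * mean)
        if f(cand) < fitness[i]:               # greedy acceptance
            pop[i], fitness[i] = cand, f(cand)
    # ---- learner phase ----
    for i in range(n):
        j = rng.choice([k for k in range(n) if k != i])
        if fitness[i] < fitness[j]:
            cand = pop[i] + rng.random(d) * (pop[i] - pop[j])
        else:
            cand = pop[i] + rng.random(d) * (pop[j] - pop[i])
        if f(cand) < fitness[i]:
            pop[i], fitness[i] = cand, f(cand)
    return pop

sphere = lambda x: float(np.sum(x ** 2))       # illustrative test objective
pop = tlbo_step(rng.random((20, 4)), sphere)
```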

3.5. Cuckoo Search

Some cuckoo species engage in brood parasitism, laying their eggs in the nests of other birds and tossing out the host's eggs to increase the survival probability of their own. This behavior served as the inspiration for the CS algorithm, in which each egg is considered a solution. The initial population consists of randomly generated nests, each holding an egg. From an egg the cuckoo chooses to place in a nest, a new solution is produced as follows [43]:
$$x_i^{t+1} = x_i^t + \alpha \, Levy(\lambda)$$

where $\alpha > 0$ denotes the step size and $Levy(\lambda)$ is a step drawn from a Levy distribution.
The best nests with high-quality eggs are kept for future generations, whereas a fraction $P_a$ of the remaining nests is abandoned and rebuilt. Here, $P_a \in [0,1]$ represents the probability that the cuckoo egg is detected by the host bird [43].
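A minimal sketch of these update and abandonment rules follows. Drawing Levy steps with Mantegna's algorithm and scaling the random walk by the distance to the best nest are common implementation choices, assumed here rather than taken from the text.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(3)

# Levy step sampler (Mantegna's algorithm), a standard CS implementation choice.
def levy(beta=1.5, size=1):
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

# x_i^{t+1} = x_i^t + alpha * Levy(lambda), per the equation above.
def cuckoo_update(x, best, alpha=0.01):
    return x + alpha * levy(size=x.size) * (x - best)

# Abandon a fraction Pa of the nests and rebuild them at random.
def abandon_nests(nests, pa=0.25):
    mask = rng.random(nests.shape[0]) < pa
    nests[mask] = rng.random((mask.sum(), nests.shape[1]))
    return nests
```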

3.6. Moth Flame Optimization

This algorithm was inspired by the motions of moths around a light source. The developed MFO technique treats possible solutions as moths and problem variables as their location in space. Because the MFO method is population-based, each moth represents a potential solution, and each location is represented as a decision matrix [44]. Another key component of this method is flames, which are represented by a matrix similar to the moth matrix. In this technique, both moths and flames represent solutions; the only difference is how they are handled and transformed at each iteration. While moths are the basic search components that move around the search space using the spiral flight mechanism, flames indicate the best locations obtained by the moths up to the relevant iteration [44]. The algorithm defines the main update mechanism as a logarithmic spiral, with the moth as the starting point and the flame position as the end point, as follows:
$$S(M_i, F_j) = D_i \cdot e^{bt} \cdot \cos(2\pi t) + F_j$$

where $D_i$ is the distance of the $i$-th moth from the $j$-th flame, the constant $b$ defines the shape of the logarithmic spiral, and $t$ is a random value between −1 and 1 [44].
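The spiral update can be sketched directly; the spiral-shape constant b = 1 in the example below is an assumed value, not one stated in the text.

```python
import numpy as np

rng = np.random.default_rng(4)

# A minimal sketch of the logarithmic-spiral update S(M_i, F_j) above.
def spiral_move(moth, flame, b=1.0):
    d = np.abs(flame - moth)                  # distance D_i between moth and flame
    t = rng.uniform(-1.0, 1.0, moth.size)     # random t in [-1, 1]
    return d * np.exp(b * t) * np.cos(2 * np.pi * t) + flame

moth = np.array([0.2, 0.7])
flame = np.array([0.5, 0.5])
print(spiral_move(moth, flame))               # new moth position near the flame
```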

3.7. Marine Predator Algorithm

Inspired by the predator–prey relationship between marine predators and their prey, this algorithm models the probability of encounters between them [45,46]. The initial solutions start as random variables in the search space, and the best solution values for prey and predators are stored in two same-sized matrices named Prey and Elite [45]. In the first phase, the prey moves rapidly in an exploratory search for food using Brownian motion, while the predator monitors the prey's movements and remains motionless. The Prey matrix is then updated according to the following formulas [45]:
$$stepsize_i = R_B \otimes \left(Elite_i - R_B \otimes Prey_i\right), \qquad Prey_i = Prey_i + P \cdot R \otimes stepsize_i$$

where $i = 1, 2, \ldots, n$, the constant $P = 0.5$, $R \in [0,1]$ is a vector of uniform random numbers, and $R_B$ is a vector of random numbers based on Brownian motion.
In Phase 2, the prey and predator move at the same speed; the predator uses Brownian motion, and the prey uses Levy motion. The following formulas are used to update the prey matrices [45]:
$$stepsize_i = R_L \otimes \left(Elite_i - R_L \otimes Prey_i\right), \qquad Prey_i = Prey_i + P \cdot R \otimes stepsize_i$$

where $i = 1, 2, \ldots, n/2$ and $R_L$ is a vector of random numbers drawn from a Levy distribution. For the second half of the population, the update is
$$stepsize_i = R_B \otimes \left(R_B \otimes Elite_i - Prey_i\right), \qquad Prey_i = Elite_i + P \cdot CF \otimes stepsize_i$$

Here, $i = n/2, \ldots, n$, and $CF = \left(1 - \frac{Iter}{MaxIter}\right)^{2\,\frac{Iter}{MaxIter}}$ is an adaptive parameter controlling the predator's step size.
In the third phase, the algorithm employs the predator's Levy motion for the remainder of the iterations, assuming that the predator now moves faster than the prey. The Prey matrix is then updated using the following formulas [45]:
$$stepsize_i = R_L \otimes \left(R_L \otimes Elite_i - Prey_i\right), \qquad Prey_i = Elite_i + P \cdot CF \otimes stepsize_i$$
The procedure continues until the algorithm’s stopping condition is satisfied or the maximum number of iterations is reached.
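The three phases combine into one update routine, as sketched below. The phase switching at thirds of the iteration budget follows the usual MPA description; drawing Brownian steps as standard normals and Levy steps via Mantegna's algorithm are implementation assumptions.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(5)

def levy(shape, beta=1.5):
    """Levy steps via Mantegna's algorithm (an implementation assumption)."""
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, shape) / np.abs(rng.normal(0, 1, shape)) ** (1 / beta)

# A minimal sketch of the three MPA phases above for one iteration; P = 0.5.
def mpa_update(prey, elite, it, max_it, P=0.5):
    n, d = prey.shape
    R = rng.random((n, d))
    CF = (1 - it / max_it) ** (2 * it / max_it)
    third = max_it / 3
    if it < third:                     # Phase 1: exploration, Brownian prey
        RB = rng.normal(size=(n, d))
        return prey + P * R * (RB * (elite - RB * prey))
    elif it < 2 * third:               # Phase 2: half Levy prey, half Brownian predator
        out = prey.copy()
        RL = levy((n // 2, d))
        out[: n // 2] += P * R[: n // 2] * (RL * (elite[: n // 2] - RL * prey[: n // 2]))
        RB = rng.normal(size=(n - n // 2, d))
        out[n // 2 :] = elite[n // 2 :] + P * CF * (RB * (RB * elite[n // 2 :] - prey[n // 2 :]))
        return out
    else:                              # Phase 3: exploitation, Levy predator
        RL = levy((n, d))
        return elite + P * CF * (RL * (RL * elite - prey))
```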

3.8. Flower Pollination Algorithm

The FPA was inspired by the reproductive process of flowering plants. Flowers are pollinated in two main ways: biotic and abiotic. Biotic pollination, carried out by organisms such as flies, bees, and butterflies, serves the majority of flowering plants, while abiotic pollination, by inanimate agents such as the wind, serves the rest. Pollen can travel over great distances thanks to insects' long-range flight capabilities, which helps ensure the best reproduction during pollination. This global pollination step can be expressed mathematically as follows [47]:
$$x_i^{t+1} = x_i^t + \gamma L\left(g_* - x_i^t\right)$$

where the current best solution is represented by $g_*$, the step size is adjusted by $\gamma$, the updated solution vector is $x_i^{t+1}$, and the Levy-distributed step $L > 0$ denotes the strength of the pollination.
The rule for local pollination is as follows [47]:
$$x_i^{t+1} = x_i^t + \gamma L\left(x_j^t - x_k^t\right)$$

where $x_j^t$ and $x_k^t$ denote pollen from different flowers of the same plant species.
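Both pollination rules fit in a short sketch. The switch probability p = 0.8 and the uniform random step in the local rule are common FPA implementation choices and are assumptions here, not values given in the text.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(6)

def levy(size, beta=1.5):
    """Levy steps via Mantegna's algorithm (an implementation assumption)."""
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, size) / np.abs(rng.normal(0, 1, size)) ** (1 / beta)

# A minimal sketch of one FPA generation over a population of flowers.
def pollinate(pop, g_best, gamma_=0.1, p=0.8):
    new_pop = pop.copy()
    n, d = pop.shape
    for i in range(n):
        if rng.random() < p:   # global (biotic) pollination toward g*
            new_pop[i] = pop[i] + gamma_ * levy(d) * (g_best - pop[i])
        else:                  # local (abiotic) pollination between two flowers
            j, k = rng.choice(n, size=2, replace=False)
            new_pop[i] = pop[i] + gamma_ * rng.random(d) * (pop[j] - pop[k])
    return new_pop
```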

3.9. Artificial Bee Colony Algorithm

This is a metaheuristic optimization technique inspired by the foraging behavior of bees. The algorithm includes three types of bees: the onlooker bee, which waits in the dance area to select a food source; the employed bee, which searches for nectar by visiting known food sources and sharing the source information with the onlooker bees; and the scout bee, which searches for food at random around the hive [48,49]. The colony in ABC contains equal numbers of employed and onlooker bees, with one employed bee per food source; in other words, the number of food sources around the hive equals the number of employed bees. The ABC algorithm begins by spreading randomly generated solutions over the initial population. If the nectar amount of a new source is greater than that of the previous one, the bee forgets the former location and memorizes the new one. To find a candidate solution, employed bees search for a better position in the neighborhood of their food source, as described below [49]:
$$v_{ij} = x_{ij} + \phi_{ij}\left(x_{ij} - x_{kj}\right)$$

where $\phi_{ij}$ is a random number between −1 and 1, and $x_k$ is a randomly selected neighboring solution with $k \neq i$. In the second step, the employed bees share their knowledge, while the onlooker bees choose food sources and evaluate their nectar amounts. The third step sends the designated scout bees to search for new food sources. These three steps are repeated until the stopping criterion is satisfied, after which the algorithm terminates.
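The employed-bee neighborhood search above reduces to a few lines; the single-dimension perturbation follows the standard ABC formulation, and the population itself is an illustrative placeholder.

```python
import numpy as np

rng = np.random.default_rng(7)

# A minimal sketch of the employed-bee candidate generation v_ij above.
def neighbor_candidate(pop, i):
    n, d = pop.shape
    j = rng.integers(d)                              # dimension to perturb
    k = rng.choice([m for m in range(n) if m != i])  # random neighbor, k != i
    phi = rng.uniform(-1.0, 1.0)                     # phi_ij in [-1, 1]
    v = pop[i].copy()
    v[j] = pop[i, j] + phi * (pop[i, j] - pop[k, j])
    return v

pop = rng.random((10, 4))
print(neighbor_candidate(pop, 0))                    # candidate near solution 0
```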

3.10. Adaptive Network-Based Fuzzy Inference System (ANFIS)

ANFIS is a hybrid artificial intelligence method that combines artificial neural networks and fuzzy inference systems and uses different methods for its parameter calculations. An effective structure is created in ANFIS by combining the learning ability of artificial neural networks with the IF–THEN rules linking the inputs and outputs of fuzzy logic. The ANFIS structure is divided into five layers, as seen in Figure 1, and they are explained below [9,50].
Layer 1. This layer, known as the fuzzification layer, employs membership functions to generate fuzzy sets from the inputs. The shapes of the membership functions are determined by the premise parameters in their structure.
Layer 2. The rule layer calculates the product of the membership values from the preceding layer to generate the firing strengths ($w_i$) as follows:

$$O_i^2 = w_i = \mu_{A_i}(x)\,\mu_{B_i}(y)$$

where $i = 1, 2$. The outputs of this layer are used as input weights for the following layer.
Layer 3. This layer is known as the normalization layer, and the normalized firing strengths are calculated from the firing strengths obtained in previous layers using the following formula:
$$O_i^3 = \bar{w}_i = \frac{w_i}{w_1 + w_2}$$

where $i = 1, 2$.
Layer 4. In the defuzzification layer, the normalized firing strength from the previous layer is multiplied by a first-order linear function, and the output values are calculated as follows:

$$O_i^4 = \bar{w}_i f_i = \bar{w}_i \left(p_i x + q_i y + r_i\right)$$

Here, the parameter set $\{p_i, q_i, r_i\}$ is known as the consequent parameter set.
Layer 5. The summation layer sums the results of each rule in the defuzzification layer to generate the ANFIS output, as shown below:
$$O_i^5 = \sum_i \bar{w}_i f_i$$
To summarize, ANFIS training includes determining structural parameters using an optimization approach, and successful training is necessary for producing successful outcomes with ANFIS.
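To make the five layers concrete, the following Python sketch implements a forward pass for a two-input, two-rule Sugeno ANFIS with generalized bell membership functions; all parameter values are illustrative placeholders, not values from the study.

```python
import numpy as np

def gbellmf(x, a, b, c):
    """Generalized bell MF: 1 / (1 + |(x - c) / a|^(2b))."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

# A minimal sketch of the five-layer ANFIS forward pass described above.
def anfis_forward(x, y, premise, consequent):
    # Layer 1: fuzzification of each input via gbellmf
    mu_a = [gbellmf(x, *p) for p in premise["A"]]
    mu_b = [gbellmf(y, *p) for p in premise["B"]]
    # Layer 2: firing strengths w_i = mu_Ai(x) * mu_Bi(y)
    w = np.array([mu_a[0] * mu_b[0], mu_a[1] * mu_b[1]])
    # Layer 3: normalization
    w_bar = w / w.sum()
    # Layer 4: rule outputs f_i = p_i*x + q_i*y + r_i
    f = np.array([p * x + q * y + r for (p, q, r) in consequent])
    # Layer 5: weighted sum
    return float(np.dot(w_bar, f))

premise = {"A": [(1.0, 2.0, 0.3), (1.0, 2.0, 0.7)],
           "B": [(1.0, 2.0, 0.3), (1.0, 2.0, 0.7)]}
consequent = [(0.5, 0.2, 0.1), (-0.3, 0.8, 0.0)]
print(anfis_forward(0.4, 0.6, premise, consequent))
```

In metaheuristic training, the premise and consequent parameters are flattened into a single search vector, and the MSE between predicted and target outputs serves as the fitness function that each algorithm minimizes.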

4. Simulation Results and Discussion

Within the scope of this study, ANFIS training was carried out using nine different metaheuristic algorithms to estimate the daily, weekly, and monthly minimum and maximum values of Brent oil price, and the obtained results were analyzed in detail. The algorithms used in ANFIS training were SHO, BBO, MVO, TLBO, CS, MFO, MPA, FPA, and ABC.
The Brent oil data used in this study were taken from the investing.com website as daily, weekly, and monthly datasets. The daily dataset covers the period between 3 January 2022 and 29 December 2023; note that the daily data do not include weekends and holidays. The weekly dataset contains data from 1 January 2014 to 31 December 2023, and the monthly dataset spans from 1 January 2014 to 1 January 2024. We collected 515 data points for the daily dataset and 522 for the weekly dataset. Since the monthly series contains fewer observations over a wider range, 121 data points were collected to ensure consistency. These daily, weekly, and monthly data were acquired separately for the lowest and highest values within the specified date ranges.
In this study, six prediction problems, as outlined in Table 1, are addressed. Specifically, the aim is to estimate the minimum and maximum values that Brent oil prices can reach on a daily, weekly, and monthly basis. The data used in these estimations are structured as time series and were transformed into input–output pairs suitable for training ANFIS. The main goal of this transformation is to predict future values from past data. However, in time-series problems, it is not always clear how many past values should be used to achieve the best prediction accuracy. To reduce this uncertainty, separate datasets with two, three, and four inputs and a single output were prepared for each of the daily, weekly, and monthly prediction problems. Through these multiple configurations, the effect of the number of inputs on prediction performance was systematically investigated; a sketch of this windowing step is given below.
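As a concrete illustration of the transformation, the sketch below builds input–output pairs from a price series with a sliding window; the example prices are illustrative, not values from the dataset.

```python
import numpy as np

# A minimal sketch of the time-series-to-supervised transformation described
# above: each sample uses the previous n_inputs prices as inputs and the
# next price as the output.
def make_windows(series, n_inputs):
    X = np.array([series[i : i + n_inputs] for i in range(len(series) - n_inputs)])
    y = np.array(series[n_inputs:])
    return X, y

prices = [78.1, 79.4, 80.2, 79.8, 81.0, 82.3]
X, y = make_windows(prices, n_inputs=3)
# X[0] = [78.1, 79.4, 80.2] -> y[0] = 79.8
```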
The ANFIS structures used in the study are illustrated in Figure 2. Figure 2a shows the block diagram of an ANFIS model with two inputs, while Figure 2b presents the training process and error calculation steps for a three-input model. Figure 2c displays the system structure when four inputs are used. In all these models, the output represents the subsequent time step value in the series.
In the preprocessing phase, normalization plays an important role, especially when the dataset is large or contains high variability. For this reason, all input and output values were normalized to the [0, 1] range. All results and evaluations were made based on these normalized values.
Another critical factor that influences the performance of ANFIS is the type and number of membership functions (MFs). According to the literature, generalized bell-shaped membership functions (gbellmf) are effective in modeling normalized time-series data. Therefore, gbellmf was selected in this study to remain consistent with existing studies and to avoid unnecessary experimentation.
Additionally, the number of membership functions significantly affects learning performance. In this work, systems with different input counts were trained using two, three, and four membership functions, respectively, in order to analyze their impact on prediction accuracy. Since the number of parameters to be optimized increases with more inputs, the number of MFs was adjusted accordingly. For example, in the model with four inputs, only two MFs were used to reduce the complexity of the learning process.
In all experiments, 80% of the obtained dataset was used for training and 20% for testing. All error values in the study are calculated as the mean squared error (MSE). To compare the results obtained with each algorithm fairly, the same population size and maximum generation number were used for all algorithms, set to 20 and 2500, respectively. A minimal sketch of these preprocessing and evaluation steps is given below.
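The following sketch illustrates the normalization, split, and error metric. The text does not state whether the 80/20 split was chronological or random; a chronological split is assumed here.

```python
import numpy as np

# Min-max normalization to [0, 1], as applied to all inputs and outputs.
def min_max_normalize(x):
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

# 80/20 train/test split; keeping the first 80% in time is an assumption.
def train_test_split_80_20(X, y):
    cut = int(0.8 * len(X))
    return X[:cut], y[:cut], X[cut:], y[cut:]

# Mean squared error, the primary performance metric of the study.
def mse(y_true, y_pred):
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))
```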
Table 2 presents a summary of the overall workflow followed in this study. The process begins with data collection, where time-series data relevant to Brent oil prices are obtained. This is followed by the normalization and feature setup stage, where raw data are scaled to a uniform range and input–output pairs are prepared for model training. In the next step, the ANFIS structure is defined, including the selection of membership function types and their quantities. The model is then subjected to training and testing, allowing performance evaluation based on different configurations. Finally, the results are assessed through various evaluation metrics to determine prediction accuracy and overall model effectiveness.

4.1. Analysis for Predicting the Lowest and Highest Daily Prices of Brent Oil

The results obtained in estimating the daily minimum Brent oil price are presented in Table 3. Across all applications of all algorithms, the mean training errors do not exceed the 10⁻³ level, falling in the range of 1.7 × 10⁻³ to 2.6 × 10⁻³. The best mean training error, 1.7 × 10⁻³, was achieved with TLBO, BBO, and MPA, while the worst was found with SHO; the results of the other algorithms are 1.9 × 10⁻³ or better. Due to the structure of the problem, increasing the number of membership functions did not clearly improve or worsen the mean training errors, and only minor behavioral differences were observed between the algorithms. The best single training error, 1.5 × 10⁻³, was found on the four-input system with BBO. Low standard deviations were achieved in training, mostly at the 10⁻⁵ level except for a few applications. The best mean test error was 1.6 × 10⁻³, obtained by many algorithms in many applications; no training algorithm pushed the mean test error below this value. The best mean training and test errors are thus parallel to each other. The best single test error, 1.4 × 10⁻³, can be obtained with different algorithms, and good test standard deviations were achieved in all but a few applications. The comparison of the real output and the predicted output for the best training error obtained with BBO is presented in Figure 3.
Table 4 presents a comparison of the estimates for the highest daily price of Brent oil. Changing the number of membership functions affected the results differently depending on the training algorithm. In the training results, the change in the number of membership functions does not alter the mean error obtained with ABC, CS, and MVO: all applications of these algorithms reached a mean error of 1.1 × 10⁻³. Increasing the number of membership functions improved the mean training error for BBO and TLBO, while the opposite was observed for MPA. The best mean training error, 9.3 × 10⁻⁴, was obtained on a two-input system using MPA, followed by BBO. The best single training error, 7.7 × 10⁻⁴, was found on the four-input system using TLBO; for at least one application of every algorithm other than CS, the best training errors are at the 10⁻⁴ level. Effective standard deviations were obtained, especially because the training errors found by the algorithms were close to one another. The mean test errors are likewise mostly close to each other, lying in the range of 1.0 × 10⁻³ to 1.6 × 10⁻³. The best mean test error, 1.0 × 10⁻³, was found with BBO, MFO, and TLBO, and across all test results a mean error of 1.1 × 10⁻³ is frequently obtained; these results parallel the training results. As with the mean test error, the best single test error, 8.3 × 10⁻⁴, also belongs to BBO, and effective standard deviations were obtained in the test results as well. Figure 4 compares the real and predicted outputs plotted according to the best training error obtained for the highest daily Brent oil price; the prediction is very successful, and the predicted output mostly coincides with the actual output.

4.2. Analysis for Predicting the Lowest and Highest Weekly Prices of Brent Oil

Table 5 shows the results obtained with the training algorithms for predicting the lowest weekly price. Although it varies with the training algorithm, increasing the number of membership functions usually does not significantly affect the mean training errors. Except for a few applications of SHO, mean training errors at the 10⁻⁴ level were achieved. The best mean training error, 7.1 × 10⁻⁴, was found with BBO, MPA, and TLBO, while the worst was obtained on the four-input system using SHO; in other words, the mean training errors across all applications lie in the range of 7.1 × 10⁻⁴ to 1.1 × 10⁻³. The best single training error, 6.6 × 10⁻⁴, was found using TLBO and BBO. Overall, all algorithms produced successful training results, supported by low standard deviations at the 10⁻⁴, 10⁻⁵, and 10⁻⁶ levels. No algorithm achieved a mean test error better than 1.0 × 10⁻³, so the test errors lag behind the training errors in this respect. The best mean test error was found using FPA, BBO, MFO, CS, MPA, MVO, and TLBO, and the worst was reached with SHO; that is, the mean test errors lie in the range of 1.0 × 10⁻³ to 1.5 × 10⁻³. The best single test error was 9.4 × 10⁻⁴. Effective standard deviations were reached in testing as well as in training. Figure 5 shows the comparison of the predicted and real outputs for the mean error value of 7.1 × 10⁻⁴; since this result was obtained with several algorithms, the graph was drawn considering only the result found with TLBO. The large overlap of the two curves indicates that the training process was successful.
Table 6 shows the results obtained for predicting the highest weekly price. Increasing the number of membership functions of the inputs improved the solutions in ABC, BBO, and MFO, while mixed behavior was observed in the other algorithms. The best mean training error, 6.0 × 10⁻⁴, was obtained with the BBO and MPA algorithms on the two-input system. The worst mean training error, 1.0 × 10⁻³, was found with SHO; the other application results fall between these values. The best single training error, 5.4 × 10⁻⁴, was found using MFO. Effective standard deviations were reached in training, at the 10⁻⁵ level except for a few applications. In testing, increasing the number of membership functions improved the mean test error especially in FPA and TLBO; a similar situation was observed in the three-input systems of ABC, BBO, and MVO, whereas other applications did not exhibit stable behavior. The best mean test error, 5.8 × 10⁻⁴, was found on the four-input systems with TLBO and MVO. In addition, the best single test error, 4.6 × 10⁻⁴, was obtained with four different training algorithms. Successful standard deviations were achieved in testing as well, with values at the 10⁻⁴ or 10⁻⁵ level in all applications. Figure 6 shows a comparison of the predicted and real outputs based on the best training result; the curves overlap except for a few output values, showing that the training process for this problem was successful.

4.3. Analysis for Predicting the Lowest and Highest Monthly Prices of Brent Oil

Table 7 shows the results of estimating the lowest monthly price of Brent oil. According to the mean training errors, increasing the number of membership functions in the ABC, BBO, MFO, and TLBO algorithms generally produced partial improvements. In FPA and in some results of CS, the increase in the number of membership functions does not affect the result. MPA produced effective results with low membership function counts, and CS performed better on two-input systems; in both cases, the effect of the number of membership functions is limited. In SHO, increasing the number of membership functions worsened the mean error. The best mean training error, 2.9 × 10⁻³, was found with BBO, while the MPA and TLBO algorithms reached 3.0 × 10⁻³. The worst mean value among all algorithms, 4.3 × 10⁻³, was found with SHO on the four-input system. The best single training error, 2.1 × 10⁻³, was achieved with the BBO algorithm, followed by TLBO with 2.2 × 10⁻³. Standard deviations at the 10⁻⁴ and 10⁻⁵ levels were reached in training; it is particularly noteworthy that the standard deviations of the CS training results are at the 10⁻⁵ level. Considering all algorithms, the mean test errors fall in the range of 4.7 × 10⁻³ to 6.3 × 10⁻³. The best mean test error, 4.7 × 10⁻³, was found with MFO on a three-input system. Especially in BBO, MFO, MPA, MVO, SHO, and TLBO, the mean test errors worsened as the number of membership functions increased. The best single test error, 4.0 × 10⁻³, was obtained using SHO; the other algorithms mostly produced error values of 4.3 × 10⁻³, 4.4 × 10⁻³, 4.5 × 10⁻³, and 4.6 × 10⁻³. Effective standard deviations were generally achieved in testing, at the 10⁻⁴ level except for some applications of MFO, CS, and SHO. As stated before, the best mean training error, 2.9 × 10⁻³, was obtained with BBO; Figure 7 compares the predicted and real outputs for this result, which are consistent with each other except at a few points.
The results obtained for estimating the highest monthly price of Brent oil are given in Table 8. Increasing the number of membership functions improved the mean training errors, especially in ABC, BBO, MFO, and TLBO; a similar situation was observed in MVO, except for one application. For MPA, all mean training errors are the same except for the two-input system with two gbellmf, which produced a different value. The best mean training error among all applications, 2.0 × 10⁻³, was found using three gbellmf on a three-input system with the BBO and TLBO algorithms; apart from these two, the best mean training error belongs to MPA. The worst mean training error, 3.1 × 10⁻³, belongs to SHO. In light of this information, the mean training errors for estimating the monthly maximum lie between 2.0 × 10⁻³ and 3.1 × 10⁻³. It was also determined that the mean training errors of two- and three-input systems are generally better than those of four-input systems, and the best mean training results were obtained on these systems. The best single training error, 1.6 × 10⁻³, was found using TLBO; the best training errors found with BBO and MPA are 1.7 × 10⁻³ and 1.9 × 10⁻³, respectively. Effective training standard deviations at the 10⁻⁴ or 10⁻⁵ level were obtained. In the test results, changing the number of membership functions generally affected the outcomes differently, and the mean test errors of BBO and TLBO were not as good as their mean training errors. The best mean test error, 7.5 × 10⁻³, was obtained using MFO and MVO, and the best single test error, 4.9 × 10⁻³, was found with MFO. The test standard deviations were at the 10⁻³ and 10⁻⁴ levels, consistent with the results obtained. Figure 8 compares the real and predicted outputs for the best training error value; the two curves largely overlap, which is one of the indicators that the training process was successful.
The findings obtained in this study should be interpreted within the scope of certain limitations. In particular, the number of inputs, the number and type of membership functions, population size, and the maximum number of generations are the primary constraints considered. Exploring alternative configurations beyond these parameters may lead to higher-quality solutions. However, due to the significant time and computational cost required for such evaluations, the scope of the study was intentionally limited.
The use of historical data proved to be more effective in daily and weekly forecasts. The results clearly show that daily and weekly predictions outperform monthly predictions, as demonstrated in the related tables and figures. One possible reason for this difference is the limited number of data points in the monthly dataset. Additionally, the longer time gap between the input data and the predicted output in monthly forecasts increases the likelihood of external factors influencing the outcome, which may also reduce accuracy.
When examining both training and test results for daily, weekly, and monthly predictions, it is observed that standard deviation values are generally low. This indicates that the successful outcomes obtained are consistent and repeatable. The fact that test results largely aligned with training results suggests that the training process was effectively conducted and that no overfitting occurred.
Although all algorithms yielded effective results, it can be stated that BBO and TLBO performed better than the others in solving the target problem. This does not imply that the other algorithms were unsuccessful, but rather that these two algorithms were more suitable for this specific problem and configuration. It should also be considered that the observed performance differences may be influenced by the previously mentioned limitations of the study.
Furthermore, an increase in the number of inputs and membership functions also increases the number of parameters to be optimized during training, which extends the overall training time. Considering that each algorithm was executed 30 times for statistical validity, the increase in training duration should be considered a critical factor.
In addition to the findings discussed above, the study was conducted within a methodologically symmetric framework. Each of the nine metaheuristic algorithms was applied under identical conditions, using the same datasets, ANFIS configurations, and performance metrics. The number of runs, input–output structures, and evaluation criteria were all uniformly defined across algorithms and prediction types (daily, weekly, and monthly). This symmetric design ensured a fair comparison and enhanced the reproducibility and objectivity of the results. Such a balanced experimental structure not only strengthens the internal validity of the study but also aligns with the scientific focus of symmetry, where consistency and structured modeling approaches are emphasized.
These findings offer valuable insights into the energy market, particularly in the context of crude oil price forecasting. The results indicate that accurate short- and medium-term predictions can be achieved using ANFIS models trained with metaheuristic algorithms. The superior performance observed in daily and weekly forecasts, compared to monthly ones, highlights the potential of the proposed approach for short-term decision-making processes such as risk management, investment planning, and dynamic pricing strategies. Moreover, the consistently low standard deviation values observed across multiple runs suggest that the model is stable and reliable, which is essential for supporting data-driven decisions in volatile energy markets like crude oil trading.

5. Conclusions

In this study, an Adaptive Neuro-Fuzzy Inference System (ANFIS) was trained using nine different metaheuristic algorithms—namely SHO, BBO, MVO, TLBO, CS, MFO, MPA, FPA, and ABC—to predict short- and medium-term Brent crude oil prices. The main objective was to evaluate the performance of these training algorithms in estimating the daily, weekly, and monthly minimum and maximum price levels of Brent oil. The raw time-series data were preprocessed and transformed into ANFIS-compatible input–output structures, allowing the use of past values to predict future ones. In addition, the effects of different input sizes and varying numbers of membership functions were also systematically examined. The key findings of the study can be summarized as follows:
  • All nine metaheuristic algorithms produced effective results when used to train ANFIS models for Brent oil price prediction. These findings confirm that ANFIS models enhanced with metaheuristic optimization are suitable tools for handling nonlinear and dynamic characteristics in energy market forecasting.
  • A strong consistency was observed between training and test results, indicating that the models were not overfitted and maintained generalizability across different problem sets. This also highlights the robustness of the training process and the reliability of the optimized ANFIS configurations.
  • Distinct mean error values were observed for daily, weekly, and monthly predictions. While some algorithms performed well in daily or weekly predictions, their effectiveness declined in monthly contexts. This observation emphasizes the need for context-specific tuning of algorithms depending on the prediction window.
  • The number and type of membership functions significantly influenced the prediction performance in some cases, while in others, variations in membership function count had a negligible impact. This suggests that for certain complex problem structures, the optimization process may reach saturation, preventing further improvement regardless of parameter adjustments. Such limitations might stem from the intrinsic difficulty of the problem or insufficient input diversity.
  • Across all experiments, both training and testing phases yielded low standard deviation values. This indicates that the algorithms provided stable and repeatable solutions even though each training phase started from random initial conditions. This reinforces the statistical reliability of the proposed approach.
  • Although all algorithms achieved acceptable performance levels, BBO and TLBO consistently outperformed the others in most scenarios. These two algorithms demonstrated stronger optimization capabilities in guiding ANFIS training toward more accurate predictions, especially in terms of lower error rates and faster convergence.
  • The symmetric structure of the evaluation process—equal iterations, consistent datasets, and identical parameter settings—strengthens the objectivity and reliability of the findings.
Overall, this study provides evidence that combining ANFIS with metaheuristic optimization offers a powerful framework for short- and medium-term energy price forecasting. The models developed here are particularly useful for stakeholders in the energy market who require reliable, interpretable, and adaptable prediction tools. These results are not only relevant in the context of Brent oil price forecasting but also offer broader insights into the use of adaptive hybrid AI systems in energy economics. The methodology demonstrated here can be adapted to other commodities and financial markets where nonlinear patterns dominate, and high prediction reliability is required. Future research may focus on exploring hybrid training strategies, incorporating additional economic indicators, or evaluating the models under real-time constraints to enhance practical applicability.

Author Contributions

E.K.: conceptualization, methodology, validation, software, review and editing, original draft preparation, supervision; A.K.: methodology, software, writing, data curation, original draft preparation; C.B.K.: software, writing, data curation, example analysis, visualization, original draft preparation. All authors have read and agreed to the published version of the manuscript.

Funding

This research was produced from the project supported by TUBITAK—TEYDEB (The Scientific and Technological Research Council of Türkiye—Technology and Innovation Funding Programmes Directorate) (Project No. 3230705).

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

Author Ebubekir Kaya is primarily affiliated with Nevşehir Hacı Bektaş Veli University and is also employed part-time at CEKA Software R&D Co. Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. No financial support was received from CEKA Software R&D Co. Ltd.; the company solely contributed to the provision of a productive working environment. CEKA Software R&D Co. Ltd. was not involved in the study design, data collection, analysis, interpretation, writing of the manuscript, or the decision to submit the article for publication.

References

  1. Gupta, N.; Nigam, S. Crude oil price prediction using artificial neural network. Procedia Comput. Sci. 2020, 170, 642–647. [Google Scholar] [CrossRef]
  2. Kulkarni, S.; Haidar, I. Forecasting model for crude oil price using artificial neural networks and commodity futures prices. arXiv 2009, arXiv:0906.4838. [Google Scholar]
  3. Amin, F.; Fahmi, A.; Abdullah, S. Dealer using a new trapezoidal cubic hesitant fuzzy TOPSIS method and application to group decision-making program. Soft Comput. 2019, 23, 5353–5366. [Google Scholar] [CrossRef]
  4. Fahmi, A.; Abdullah, S.; Amin, F.; Khan, M.S.A. Trapezoidal cubic fuzzy number Einstein hybrid weighted averaging operators and its application to decision making. Soft Comput. 2019, 23, 5753–5783. [Google Scholar] [CrossRef]
  5. Karaboga, D.; Kaya, E. Estimation of number of foreign visitors with ANFIS by using ABC algorithm. Soft Comput. 2020, 24, 7579–7591. [Google Scholar] [CrossRef]
  6. Ata, R.; Koçyigit, Y. An adaptive neuro-fuzzy inference system approach for prediction of tip speed ratio in wind turbines. Expert Syst. Appl. 2010, 37, 5454–5460. [Google Scholar] [CrossRef]
  7. Bisht, D.C.; Jangid, A. Discharge modelling using adaptive neuro-fuzzy inference system. Int. J. Adv. Sci. Technol. 2011, 31, 99–114. [Google Scholar]
  8. Okwu, M.O.; Tartibu, L.K.; Ojo, E.; Adume, S.; Gidiagba, J.; Fadeyi, J. ANFIS model for cost analysis in a dual source multi-destination system. Procedia Comput. Sci. 2023, 217, 1266–1279. [Google Scholar] [CrossRef]
  9. Jang, J.-S. ANFIS: Adaptive-network-based fuzzy inference system. IEEE Trans. Syst. Man Cybern. 1993, 23, 665–685. [Google Scholar] [CrossRef]
  10. Yu, L.; Wang, S.; Lai, K.K. Forecasting crude oil price with an EMD-based neural network ensemble learning paradigm. Energy Econ. 2008, 30, 2623–2635. [Google Scholar] [CrossRef]
  11. Lang, K.; Auer, B.R. The economic and financial properties of crude oil: A review. N. Am. J. Econ. Financ. 2020, 52, 100914. [Google Scholar] [CrossRef]
  12. Amano, A. A small forecasting model of the world oil market. J. Policy Model. 1987, 9, 615–635. [Google Scholar] [CrossRef]
  13. Huntington, H.G. Oil price forecasting in the 1980s: What went wrong? Energy J. 1994, 15, 1–22. [Google Scholar] [CrossRef]
  14. Abramson, B.; Finizza, A. Probabilistic forecasts from probabilistic models: A case study in the oil market. Int. J. Forecast. 1995, 11, 63–72. [Google Scholar] [CrossRef]
  15. Hamdi, M.; Aloui, C. Forecasting crude oil price using artificial neural networks: A literature survey. Econ. Bull. 2015, 35, 1339–1359. [Google Scholar]
  16. Wang, S.; Yu, L.; Lai, K.K. A novel hybrid AI system framework for crude oil price forecasting. In Proceedings of the Chinese Academy of Sciences Symposium on Data Mining and Knowledge Management, Beijing, China, 12–14 July 2004; pp. 233–242. [Google Scholar]
  17. Mirmirani, S.; Cheng Li, H. A comparison of VAR and neural networks with genetic algorithm in forecasting price of oil. In Applications of Artificial Intelligence in Finance and Economics; Emerald Group Publishing Limited: Leeds, England, 2004; pp. 203–223. [Google Scholar]
  18. Gori, F.; Ludovisi, D.; Cerritelli, P. Forecast of oil price and consumption in the short term under three scenarios: Parabolic, linear and chaotic behaviour. Energy 2007, 32, 1291–1296. [Google Scholar] [CrossRef]
  19. Chiroma, H.; Abdulkareem, S.; Abubakar, A.; Zeki, A.; Gital, A.Y.; Usman, M.J. Co-active neuro-fuzzy inference systems model for predicting crude oil price based on OECD inventories. In Proceedings of the 2013 International Conference on Research and Innovation in Information Systems (ICRIIS), Kuala Lumpur, Malaysia, 27–28 November 2013; pp. 232–235. [Google Scholar]
  20. Mombeini, H.; Yazdani-Chamzini, A. Developing a new approach for forecasting the trends of oil price. Bus. Manag. Rev. 2014, 4, 120. [Google Scholar]
  21. Abdollahi, H.; Ebrahimi, S.B. A new hybrid model for forecasting Brent crude oil price. Energy 2020, 200, 117520. [Google Scholar] [CrossRef]
  22. Abd Elaziz, M.; Ewees, A.A.; Alameer, Z. Improving adaptive neuro-fuzzy inference system based on a modified salp swarm algorithm using genetic algorithm to forecast crude oil price. Nat. Resour. Res. 2020, 29, 2671–2686. [Google Scholar] [CrossRef]
  23. Anshori, M.Y.; Rahmalia, D.; Herlambang, T.; Karya, D.F. Optimizing Adaptive Neuro Fuzzy Inference System (ANFIS) parameters using Cuckoo Search (Case study of world crude oil price estimation). J. Phys. Conf. Ser. 2021, 1836, 012041. [Google Scholar] [CrossRef]
  24. Eliwa, E.H.I.; El Koshiry, A.M.; Abd El-Hafeez, T.; Omar, A. Optimal gasoline price predictions: Leveraging the ANFIS regression model. Int. J. Intell. Syst. 2024, 1, 8462056. [Google Scholar] [CrossRef]
  25. Awijen, H.; Ben Ameur, H.; Ftiti, Z.; Louhichi, W. Forecasting oil price in times of crisis: A new evidence from machine learning versus deep learning models. Ann. Oper. Res. 2025, 345, 979–1002. [Google Scholar] [CrossRef]
  26. Jabeur, S.B.; Khalfaoui, R.; Arfi, W.B. The effect of green energy, global environmental indexes, and stock markets in predicting oil price crashes: Evidence from explainable machine learning. J. Environ. Manag. 2021, 298, 113511. [Google Scholar] [CrossRef] [PubMed]
  27. Jiang, Z.; Zhang, L.; Zhang, L.; Wen, B. Investor sentiment and machine learning: Predicting the price of China’s crude oil futures market. Energy 2022, 247, 123471. [Google Scholar] [CrossRef]
  28. Hasan, M.; Abedin, M.Z.; Hajek, P.; Coussement, K.; Sultan, M.N.; Lucey, B. A blending ensemble learning model for crude oil price forecasting. Ann. Oper. Res. 2024, 1–31. [Google Scholar] [CrossRef]
  29. Sezer, O.B.; Gudelek, M.U.; Ozbayoglu, A.M. Financial time series forecasting with deep learning: A systematic literature review: 2005–2019. Appl. Soft Comput. 2020, 90, 106181. [Google Scholar] [CrossRef]
  30. Iftikhar, H.; Zafar, A.; Turpo-Chaparro, J.E.; Canas Rodrigues, P.; López-Gonzales, J.L. Forecasting day-ahead Brent crude oil prices using hybrid combinations of time series models. Mathematics 2023, 11, 3548. [Google Scholar] [CrossRef]
  31. Zhao, Y.; Hu, B.; Wang, S. Prediction of Brent crude oil price based on LSTM model under the background of low-carbon transition. arXiv 2024, arXiv:2409.12376. [Google Scholar]
  32. Dong, Y.; Jiang, H.; Guo, Y.; Wang, J. A novel crude oil price forecasting model using decomposition and deep learning networks. Eng. Appl. Artif. Intell. 2024, 133, 108111. [Google Scholar] [CrossRef]
  33. Naeem, M.; Aamir, M.; Yu, J.; Albalawi, O. A novel approach for reconstruction of IMFs of decomposition and ensemble model for forecasting of crude oil prices. IEEE Access 2024, 12, 34192–34207. [Google Scholar] [CrossRef]
  34. Xu, Y.; Liu, T.; Fang, Q.; Du, P.; Wang, J. Crude oil price forecasting with multivariate selection, machine learning, and a nonlinear combination strategy. Eng. Appl. Artif. Intell. 2025, 139, 109510. [Google Scholar] [CrossRef]
  35. Sen, A.; Choudhury, K.D. Forecasting the Crude Oil prices for last four decades using deep learning approach. Resour. Policy 2024, 88, 104438. [Google Scholar] [CrossRef]
  36. Jin, B.; Xu, X. Price forecasting through neural networks for crude oil, heating oil, and natural gas. Meas. Energy 2024, 1, 100001. [Google Scholar] [CrossRef]
  37. Fausto, F.; Cuevas, E.; Valdivia, A.; González, A. A global optimization algorithm inspired in the behavior of selfish herds. Biosystems 2017, 160, 39–55. [Google Scholar] [CrossRef]
  38. Saraçoğlu, B.; Güvenç, U.; Dursun, M.; Poyraz, G.; Duman, S. Biyocoğrafya Tabanlı Optimizasyon Metodu Kullanarak Asenkron Motor Parametre Tahmini [Induction motor parameter estimation using biogeography-based optimization]. İleri Teknol. Bilim. Derg. 2013, 2, 46–54. [Google Scholar]
  39. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713. [Google Scholar] [CrossRef]
  40. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  41. Rosales Muñoz, A.A.; Grisales-Noreña, L.F.; Montano, J.; Montoya, O.D.; Perea-Moreno, A.-J. Application of the multiverse optimization method to solve the optimal power flow problem in alternating current networks. Electronics 2022, 11, 1287. [Google Scholar] [CrossRef]
  42. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  43. Yang, X.-S.; Deb, S. Engineering optimisation by cuckoo search. Int. J. Math. Model. Numer. Optim. 2010, 1, 330–343. [Google Scholar] [CrossRef]
  44. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  45. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  46. Mugemanyi, S.; Qu, Z.; Rugema, F.X.; Dong, Y.; Wang, L.; Bananeza, C.; Nshimiyimana, A.; Mutabazi, E. Marine predators algorithm: A comprehensive review. Mach. Learn. Appl. 2023, 12, 100471. [Google Scholar] [CrossRef]
  47. Yang, X.-S. Flower pollination algorithm for global optimization. In Proceedings of the International Conference on Unconventional Computing and Natural Computation, Orléans, France, 3–7 September 2012; pp. 240–249. [Google Scholar]
  48. Kaya, E. Adaptif ağ tabanlı bulanık çıkarım sistemleri (ANFIS)'nin yapay arı koloni algoritması ile eğitilmesi (Adaptive network-based fuzzy inference system (ANFIS) training by using artificial bee colony algorithm). Ph.D. Thesis, Erciyes University, Kayseri, Türkiye, 2017. [Google Scholar]
  49. Karaboga, D. Artificial bee colony algorithm. Scholarpedia 2010, 5, 6915. [Google Scholar] [CrossRef]
  50. Karaboga, D.; Kaya, E. Adaptive network based fuzzy inference system (ANFIS) training approaches: A comprehensive survey. Artif. Intell. Rev. 2019, 52, 2263–2293. [Google Scholar] [CrossRef]
Figure 1. The general ANFIS structure consists of two inputs and one output [48].
Figure 2. Block diagram created for Brent oil price prediction using (a) two-, (b) three-, and (c) four-input systems.
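As a concrete reading of Figure 2, the snippet below shows one plausible way the two-, three-, and four-input systems can be fed: each input is a consecutive past price and the target is the next price. This sliding-window construction, the function name make_lagged, and the sample prices are illustrative assumptions consistent with the block diagrams, not a verbatim reproduction of the paper's preprocessing.

```python
# Sketch of building 2-, 3-, and 4-input datasets from a price series (assumed windowing).
import numpy as np

def make_lagged(series: np.ndarray, n_inputs: int):
    """Inputs = n_inputs consecutive prices, target = the next price."""
    X = np.column_stack([series[i:len(series) - n_inputs + i] for i in range(n_inputs)])
    y = series[n_inputs:]
    return X, y

prices = np.array([71.2, 72.0, 70.8, 69.9, 71.5, 73.1, 72.4])
for k in (2, 3, 4):                     # the three block diagrams in Figure 2
    X, y = make_lagged(prices, k)
    print(k, X.shape, y.shape)          # e.g. 2 (5, 2) (5,)
```

For k = 2 this yields input pairs (x<sub>t−2</sub>, x<sub>t−1</sub>) with target x<sub>t</sub>, matching the two-input diagram; k = 3 and k = 4 extend the window accordingly.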
Figure 3. Comparison of graphs of real and predicted outputs plotted according to the best training error obtained for DMin.
Figure 4. Comparison of graphs of real and predicted outputs plotted according to the best training error obtained for DMax.
Figure 5. Comparison of graphs of real and predicted outputs plotted according to the best training error obtained for WMin.
Figure 6. Comparison of graphs of real and predicted outputs plotted according to the best training error obtained for WMax.
Figure 7. Comparison of graphs of real and predicted outputs plotted according to the best training error obtained for MMin.
Figure 8. Comparison of graphs of real and predicted outputs plotted according to the best training error obtained for MMax.
Table 1. List of problems regarding the forecasting of Brent oil.

Problem | Definition
DMin | Predicting the lowest daily price of Brent oil
DMax | Predicting the highest daily price of Brent oil
WMin | Predicting the lowest weekly price of Brent oil
WMax | Predicting the highest weekly price of Brent oil
MMin | Predicting the lowest monthly price of Brent oil
MMax | Predicting the highest monthly price of Brent oil
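For readers reproducing the setup, the six target series in Table 1 can be derived from daily low/high quotes roughly as follows. This is a hedged sketch: the column names, the sample values, and the pandas resampling rules ('W', 'ME', requiring pandas ≥ 2.2) are assumptions for illustration, not details taken from the paper.

```python
# Illustrative derivation of the DMin..MMax target series from daily low/high quotes.
import pandas as pd

df = pd.DataFrame(
    {"Low": [70.1, 69.8, 70.5, 71.0, 70.2], "High": [72.3, 71.9, 72.8, 73.4, 72.6]},
    index=pd.date_range("2024-01-01", periods=5, freq="D"),
)

targets = {
    "DMin": df["Low"],                         # lowest daily price
    "DMax": df["High"],                        # highest daily price
    "WMin": df["Low"].resample("W").min(),     # lowest price in each week
    "WMax": df["High"].resample("W").max(),    # highest price in each week
    "MMin": df["Low"].resample("ME").min(),    # lowest price in each month
    "MMax": df["High"].resample("ME").max(),   # highest price in each month
}
print(targets["WMax"])
```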
Table 2. Step-by-step workflow of the proposed ANFIS model.

Step | Description
1 | Data Collection
2 | Normalization and Feature Setup
3 | ANFIS Structure Definition
4 | Training and Testing
5 | Evaluation Metrics
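Steps 2, 4, and 5 of the workflow in Table 2 can be illustrated with a short sketch: min–max normalization, a chronological train/test split, and MSE as the evaluation metric. The 80/20 split ratio, the synthetic series, and the naive placeholder predictor are assumptions for illustration only, not the paper's exact settings.

```python
# Sketch of workflow steps 2, 4, and 5 from Table 2 (assumed 80/20 split and [0, 1] scaling).
import numpy as np

def min_max(x):                      # step 2: scale prices into [0, 1]
    return (x - x.min()) / (x.max() - x.min())

def mse(y_true, y_pred):             # step 5: the study's primary metric
    return np.mean((y_true - y_pred) ** 2)

prices = min_max(np.cumsum(np.random.default_rng(1).normal(size=200)) + 50.0)
split = int(0.8 * len(prices))       # step 4: earlier data trains, later data tests
train, test = prices[:split], prices[split:]
naive_pred = test[:-1]               # placeholder model: predict the previous value
print("test MSE:", mse(test[1:], naive_pred))
```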
Table 3. Comparison of the results obtained for the solution of the DMin problem.

Algorithm | Number of Inputs | Number of MFs | Train Mean | Train Best | Train Std. Dev. | Test Mean | Test Best | Test Std. Dev.
ABC | 2 | 2 | 1.9 × 10−3 | 1.8 × 10−3 | 4.2 × 10−5 | 1.6 × 10−3 | 1.4 × 10−3 | 1.1 × 10−4
ABC | 2 | 3 | 1.9 × 10−3 | 1.8 × 10−3 | 3.5 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 1.0 × 10−4
ABC | 2 | 4 | 1.8 × 10−3 | 1.7 × 10−3 | 4.6 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 1.2 × 10−4
ABC | 3 | 2 | 1.9 × 10−3 | 1.7 × 10−3 | 6.3 × 10−5 | 1.6 × 10−3 | 1.4 × 10−3 | 1.0 × 10−4
ABC | 3 | 3 | 1.9 × 10−3 | 1.7 × 10−3 | 5.0 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 1.1 × 10−4
ABC | 4 | 2 | 1.9 × 10−3 | 1.8 × 10−3 | 5.4 × 10−5 | 1.7 × 10−3 | 1.5 × 10−3 | 1.4 × 10−4
FPA | 2 | 2 | 1.9 × 10−3 | 1.7 × 10−3 | 3.5 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 9.5 × 10−5
FPA | 2 | 3 | 1.9 × 10−3 | 1.8 × 10−3 | 2.1 × 10−5 | 1.6 × 10−3 | 1.4 × 10−3 | 1.2 × 10−4
FPA | 2 | 4 | 1.9 × 10−3 | 1.8 × 10−3 | 2.3 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 9.0 × 10−5
FPA | 3 | 2 | 1.9 × 10−3 | 1.8 × 10−3 | 3.1 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 7.1 × 10−5
FPA | 3 | 3 | 1.9 × 10−3 | 1.8 × 10−3 | 4.1 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 1.1 × 10−4
FPA | 4 | 2 | 1.9 × 10−3 | 1.8 × 10−3 | 4.7 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 7.9 × 10−5
BBO | 2 | 2 | 1.8 × 10−3 | 1.6 × 10−3 | 8.8 × 10−5 | 1.6 × 10−3 | 1.4 × 10−3 | 1.3 × 10−4
BBO | 2 | 3 | 1.7 × 10−3 | 1.6 × 10−3 | 6.1 × 10−5 | 1.7 × 10−3 | 1.6 × 10−3 | 9.9 × 10−5
BBO | 2 | 4 | 1.7 × 10−3 | 1.6 × 10−3 | 8.3 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 7.5 × 10−5
BBO | 3 | 2 | 1.7 × 10−3 | 1.6 × 10−3 | 5.5 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 3.1 × 10−5
BBO | 3 | 3 | 1.7 × 10−3 | 1.6 × 10−3 | 7.2 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 5.7 × 10−5
BBO | 4 | 2 | 1.7 × 10−3 | 1.5 × 10−3 | 1.1 × 10−4 | 1.6 × 10−3 | 1.5 × 10−3 | 3.9 × 10−3
MFO | 2 | 2 | 1.8 × 10−3 | 1.7 × 10−3 | 7.4 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 3.9 × 10−5
MFO | 2 | 3 | 1.8 × 10−3 | 1.6 × 10−3 | 7.8 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 4.8 × 10−5
MFO | 2 | 4 | 1.8 × 10−3 | 1.6 × 10−3 | 7.8 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 5.6 × 10−5
MFO | 3 | 2 | 1.8 × 10−3 | 1.7 × 10−3 | 8.3 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 6.7 × 10−5
MFO | 3 | 3 | 1.8 × 10−3 | 1.6 × 10−3 | 7.5 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 6.2 × 10−5
MFO | 4 | 2 | 1.8 × 10−3 | 1.7 × 10−3 | 6.7 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 6.0 × 10−5
CS | 2 | 2 | 1.8 × 10−3 | 1.7 × 10−3 | 3.8 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 1.7 × 10−4
CS | 2 | 3 | 1.8 × 10−3 | 1.8 × 10−3 | 2.7 × 10−5 | 1.6 × 10−3 | 1.4 × 10−3 | 7.7 × 10−5
CS | 2 | 4 | 1.8 × 10−3 | 1.8 × 10−3 | 2.7 × 10−5 | 1.6 × 10−3 | 1.4 × 10−3 | 1.1 × 10−4
CS | 3 | 2 | 1.9 × 10−3 | 1.8 × 10−3 | 2.7 × 10−5 | 1.6 × 10−3 | 1.4 × 10−3 | 9.3 × 10−5
CS | 3 | 3 | 1.9 × 10−3 | 1.8 × 10−3 | 2.1 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 7.6 × 10−5
CS | 4 | 2 | 1.9 × 10−3 | 1.8 × 10−3 | 3.5 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 9.8 × 10−5
MPA | 2 | 2 | 1.7 × 10−3 | 1.6 × 10−3 | 7.4 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 5.8 × 10−5
MPA | 2 | 3 | 1.7 × 10−3 | 1.6 × 10−3 | 8.5 × 10−5 | 1.6 × 10−3 | 1.4 × 10−3 | 6.6 × 10−5
MPA | 2 | 4 | 1.7 × 10−3 | 1.6 × 10−3 | 6.8 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 7.1 × 10−5
MPA | 3 | 2 | 1.7 × 10−3 | 1.6 × 10−3 | 8.1 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 5.6 × 10−5
MPA | 3 | 3 | 1.8 × 10−3 | 1.6 × 10−3 | 6.9 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 4.6 × 10−5
MPA | 4 | 2 | 1.8 × 10−3 | 1.6 × 10−3 | 6.7 × 10−5 | 1.6 × 10−3 | 1.4 × 10−3 | 8.6 × 10−5
MVO | 2 | 2 | 1.8 × 10−3 | 1.7 × 10−3 | 5.9 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 7.0 × 10−5
MVO | 2 | 3 | 1.8 × 10−3 | 1.7 × 10−3 | 6.0 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 7.9 × 10−5
MVO | 2 | 4 | 1.8 × 10−3 | 1.7 × 10−3 | 6.4 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 4.8 × 10−5
MVO | 3 | 2 | 1.8 × 10−3 | 1.7 × 10−3 | 5.2 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 7.9 × 10−5
MVO | 3 | 3 | 1.8 × 10−3 | 1.6 × 10−3 | 6.0 × 10−5 | 1.6 × 10−3 | 1.4 × 10−3 | 7.3 × 10−5
MVO | 4 | 2 | 1.8 × 10−3 | 1.7 × 10−3 | 5.4 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 5.3 × 10−5
SHO | 2 | 2 | 1.9 × 10−3 | 1.8 × 10−3 | 1.1 × 10−4 | 1.6 × 10−3 | 1.5 × 10−3 | 1.4 × 10−4
SHO | 2 | 3 | 1.9 × 10−3 | 1.8 × 10−3 | 1.1 × 10−4 | 1.7 × 10−3 | 1.5 × 10−3 | 1.6 × 10−4
SHO | 2 | 4 | 1.9 × 10−3 | 1.8 × 10−3 | 9.0 × 10−5 | 1.7 × 10−3 | 1.5 × 10−3 | 1.0 × 10−4
SHO | 3 | 2 | 1.9 × 10−3 | 1.8 × 10−3 | 1.3 × 10−4 | 1.6 × 10−3 | 1.4 × 10−3 | 1.5 × 10−4
SHO | 3 | 3 | 2.1 × 10−3 | 1.9 × 10−3 | 1.4 × 10−4 | 1.9 × 10−3 | 1.5 × 10−3 | 3.1 × 10−4
SHO | 4 | 2 | 2.6 × 10−3 | 1.9 × 10−3 | 2.0 × 10−3 | 2.3 × 10−3 | 1.4 × 10−3 | 2.1 × 10−3
TLBO | 2 | 2 | 1.8 × 10−3 | 1.6 × 10−3 | 8.6 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 8.7 × 10−5
TLBO | 2 | 3 | 1.8 × 10−3 | 1.6 × 10−3 | 8.8 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 2.9 × 10−4
TLBO | 2 | 4 | 1.7 × 10−3 | 1.6 × 10−3 | 9.1 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 1.0 × 10−4
TLBO | 3 | 2 | 1.8 × 10−3 | 1.7 × 10−3 | 7.4 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 6.1 × 10−5
TLBO | 3 | 3 | 1.7 × 10−3 | 1.6 × 10−3 | 7.4 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 4.9 × 10−5
TLBO | 4 | 2 | 1.8 × 10−3 | 1.6 × 10−3 | 8.8 × 10−5 | 1.6 × 10−3 | 1.5 × 10−3 | 5.6 × 10−5
Best results are given in bold.
Table 4. Comparison of the results obtained for the solution of the DMax problem.

Algorithm | Number of Inputs | Number of MFs | Train Mean | Train Best | Train Std. Dev. | Test Mean | Test Best | Test Std. Dev.
ABC | 2 | 2 | 1.1 × 10−3 | 1.0 × 10−3 | 5.1 × 10−5 | 1.2 × 10−3 | 9.3 × 10−4 | 1.0 × 10−4
ABC | 2 | 3 | 1.1 × 10−3 | 9.3 × 10−4 | 5.6 × 10−5 | 1.1 × 10−3 | 9.6 × 10−4 | 7.1 × 10−5
ABC | 2 | 4 | 1.1 × 10−3 | 1.0 × 10−3 | 2.7 × 10−5 | 1.1 × 10−3 | 9.2 × 10−4 | 9.2 × 10−5
ABC | 3 | 2 | 1.1 × 10−3 | 9.5 × 10−4 | 6.1 × 10−5 | 1.1 × 10−3 | 8.8 × 10−4 | 1.1 × 10−4
ABC | 3 | 3 | 1.1 × 10−3 | 1.0 × 10−3 | 4.2 × 10−5 | 1.1 × 10−3 | 8.9 × 10−4 | 1.8 × 10−4
ABC | 4 | 2 | 1.1 × 10−3 | 1.0 × 10−3 | 4.1 × 10−5 | 1.1 × 10−3 | 1.0 × 10−3 | 9.1 × 10−4
FPA | 2 | 2 | 1.1 × 10−3 | 9.9 × 10−4 | 3.7 × 10−5 | 1.2 × 10−3 | 1.0 × 10−3 | 9.5 × 10−5
FPA | 2 | 3 | 1.1 × 10−3 | 1.1 × 10−3 | 2.4 × 10−5 | 1.1 × 10−3 | 9.2 × 10−4 | 8.9 × 10−5
FPA | 2 | 4 | 1.1 × 10−3 | 1.1 × 10−3 | 1.7 × 10−5 | 1.1 × 10−3 | 9.4 × 10−4 | 8.2 × 10−5
FPA | 3 | 2 | 1.1 × 10−3 | 9.1 × 10−4 | 4.8 × 10−5 | 1.1 × 10−3 | 9.7 × 10−4 | 1.9 × 10−4
FPA | 3 | 3 | 1.2 × 10−3 | 1.1 × 10−3 | 2.3 × 10−5 | 1.2 × 10−3 | 1.0 × 10−3 | 1.4 × 10−4
FPA | 4 | 2 | 1.2 × 10−3 | 1.1 × 10−3 | 4.2 × 10−5 | 1.51 × 10−3 | 9.7 × 10−4 | 3.2 × 10−4
BBO | 2 | 2 | 1.0 × 10−3 | 8.6 × 10−4 | 8.9 × 10−5 | 1.1 × 10−3 | 8.4 × 10−4 | 9.6 × 10−5
BBO | 2 | 3 | 9.5 × 10−4 | 8.4 × 10−4 | 9.9 × 10−5 | 1.0 × 10−3 | 8.9 × 10−4 | 8.1 × 10−5
BBO | 2 | 4 | 9.4 × 10−4 | 8.1 × 10−4 | 1.1 × 10−4 | 1.0 × 10−3 | 8.3 × 10−4 | 9.8 × 10−5
BBO | 3 | 2 | 1.1 × 10−3 | 8.2 × 10−4 | 9.1 × 10−5 | 1.1 × 10−3 | 8.8 × 10−4 | 1.7 × 10−4
BBO | 3 | 3 | 9.8 × 10−4 | 8.1 × 10−4 | 1.0 × 10−4 | 1.1 × 10−3 | 8.7 × 10−4 | 1.7 × 10−4
BBO | 4 | 2 | 9.6 × 10−4 | 8.1 × 10−4 | 9.7 × 10−5 | 1.3 × 10−3 | 9.8 × 10−4 | 2.0 × 10−4
MFO | 2 | 2 | 1.1 × 10−3 | 8.7 × 10−4 | 7.1 × 10−5 | 1.1 × 10−3 | 1.0 × 10−3 | 5.0 × 10−5
MFO | 2 | 3 | 1.0 × 10−3 | 8.5 × 10−4 | 1.0 × 10−4 | 1.1 × 10−3 | 1.0 × 10−3 | 4.9 × 10−5
MFO | 2 | 4 | 1.0 × 10−3 | 8.1 × 10−4 | 1.1 × 10−4 | 1.1 × 10−3 | 9.1 × 10−4 | 5.7 × 10−5
MFO | 3 | 2 | 1.1 × 10−3 | 9.3 × 10−4 | 4.2 × 10−5 | 1.1 × 10−3 | 9.4 × 10−4 | 8.4 × 10−5
MFO | 3 | 3 | 1.1 × 10−3 | 8.5 × 10−4 | 6.3 × 10−5 | 1.0 × 10−3 | 9.2 × 10−4 | 7.3 × 10−5
MFO | 4 | 2 | 1.0 × 10−3 | 8.4 × 10−4 | 7.5 × 10−5 | 1.4 × 10−3 | 1.0 × 10−3 | 2.4 × 10−4
CS | 2 | 2 | 1.1 × 10−3 | 1.0 × 10−3 | 2.4 × 10−5 | 1.4 × 10−3 | 9.2 × 10−4 | 1.0 × 10−4
CS | 2 | 3 | 1.1 × 10−3 | 1.0 × 10−3 | 2.1 × 10−5 | 1.2 × 10−3 | 1.0 × 10−3 | 9.6 × 10−5
CS | 2 | 4 | 1.1 × 10−3 | 1.0 × 10−3 | 2.5 × 10−5 | 1.2 × 10−3 | 1.0 × 10−3 | 1.1 × 10−4
CS | 3 | 2 | 1.1 × 10−3 | 1.0 × 10−3 | 3.5 × 10−5 | 1.2 × 10−3 | 9.2 × 10−4 | 1.3 × 10−4
CS | 3 | 3 | 1.1 × 10−3 | 1.0 × 10−3 | 3.1 × 10−5 | 1.2 × 10−3 | 9.6 × 10−4 | 2.2 × 10−4
CS | 4 | 2 | 1.1 × 10−3 | 1.0 × 10−3 | 2.7 × 10−5 | 1.3 × 10−3 | 9.3 × 10−4 | 2.2 × 10−4
MPA | 2 | 2 | 9.3 × 10−4 | 8.2 × 10−4 | 1.0 × 10−4 | 1.1 × 10−3 | 8.9 × 10−4 | 7.1 × 10−5
MPA | 2 | 3 | 9.7 × 10−4 | 8.3 × 10−4 | 1.1 × 10−4 | 1.1 × 10−3 | 9.2 × 10−4 | 7.4 × 10−5
MPA | 2 | 4 | 1.0 × 10−3 | 8.4 × 10−4 | 1.0 × 10−4 | 1.1 × 10−3 | 1.0 × 10−3 | 1.1 × 10−4
MPA | 3 | 2 | 9.8 × 10−4 | 8.5 × 10−4 | 8.9 × 10−5 | 1.1 × 10−3 | 9.7 × 10−4 | 7.1 × 10−5
MPA | 3 | 3 | 1.0 × 10−3 | 8.4 × 10−4 | 8.4 × 10−5 | 1.1 × 10−3 | 9.9 × 10−4 | 7.6 × 10−5
MPA | 4 | 2 | 9.8 × 10−4 | 8.6 × 10−4 | 7.1 × 10−5 | 1.3 × 10−3 | 1.0 × 10−3 | 2.0 × 10−4
MVO | 2 | 2 | 1.1 × 10−3 | 9.1 × 10−4 | 4.8 × 10−5 | 1.1 × 10−3 | 9.9 × 10−4 | 5.1 × 10−5
MVO | 2 | 3 | 1.1 × 10−3 | 9.5 × 10−4 | 5.7 × 10−5 | 1.1 × 10−3 | 9.7 × 10−4 | 7.0 × 10−5
MVO | 2 | 4 | 1.1 × 10−3 | 9.7 × 10−4 | 4.3 × 10−5 | 1.1 × 10−3 | 9.9 × 10−4 | 8.2 × 10−5
MVO | 3 | 2 | 1.1 × 10−3 | 9.1 × 10−4 | 5.4 × 10−5 | 1.1 × 10−3 | 9.0 × 10−4 | 1.0 × 10−4
MVO | 3 | 3 | 1.1 × 10−3 | 9.2 × 10−4 | 5.3 × 10−5 | 1.1 × 10−3 | 9.2 × 10−4 | 1.2 × 10−4
MVO | 4 | 2 | 1.1 × 10−3 | 9.1 × 10−4 | 5.5 × 10−5 | 1.3 × 10−3 | 9.1 × 10−4 | 2.2 × 10−4
SHO | 2 | 2 | 1.1 × 10−3 | 9.2 × 10−4 | 8.7 × 10−5 | 1.2 × 10−3 | 9.0 × 10−4 | 1.6 × 10−4
SHO | 2 | 3 | 1.1 × 10−3 | 8.8 × 10−4 | 1.2 × 10−4 | 1.2 × 10−3 | 8.5 × 10−4 | 1.6 × 10−4
SHO | 2 | 4 | 1.1 × 10−3 | 1.0 × 10−3 | 8.4 × 10−5 | 1.2 × 10−3 | 9.2 × 10−4 | 1.7 × 10−4
SHO | 3 | 2 | 1.1 × 10−3 | 1.0 × 10−3 | 5.2 × 10−5 | 1.3 × 10−3 | 9.0 × 10−4 | 3.6 × 10−4
SHO | 3 | 3 | 1.0 × 10−3 | 1.1 × 10−3 | 2.0 × 10−4 | 1.6 × 10−3 | 1.0 × 10−3 | 5.5 × 10−4
SHO | 4 | 2 | 1.3 × 10−3 | 1.0 × 10−3 | 1.4 × 10−4 | 1.6 × 10−3 | 9.9 × 10−4 | 3.3 × 10−4
TLBO | 2 | 2 | 1.1 × 10−3 | 8.3 × 10−4 | 6.8 × 10−5 | 1.1 × 10−3 | 1.0 × 10−3 | 4.3 × 10−5
TLBO | 2 | 3 | 1.0 × 10−3 | 8.5 × 10−4 | 1.2 × 10−4 | 1.1 × 10−3 | 9.8 × 10−4 | 6.3 × 10−5
TLBO | 2 | 4 | 9.7 × 10−4 | 8.3 × 10−4 | 1.1 × 10−4 | 1.1 × 10−3 | 9.9 × 10−4 | 5.4 × 10−5
TLBO | 3 | 2 | 1.0 × 10−3 | 8.2 × 10−4 | 1.0 × 10−4 | 1.1 × 10−3 | 9.5 × 10−4 | 4.2 × 10−5
TLBO | 3 | 3 | 9.9 × 10−4 | 8.2 × 10−4 | 1.2 × 10−4 | 1.0 × 10−3 | 9.1 × 10−4 | 1.1 × 10−4
TLBO | 4 | 2 | 9.6 × 10−4 | 7.7 × 10−4 | 8.3 × 10−5 | 1.4 × 10−3 | 9.3 × 10−4 | 3.1 × 10−4
Best results are given in bold.
Table 5. Comparison of the results obtained for the solution of the WMin problem.

Algorithm | Number of Inputs | Number of MFs | Train Mean | Train Best | Train Std. Dev. | Test Mean | Test Best | Test Std. Dev.
ABC | 2 | 2 | 7.8 × 10−4 | 7.2 × 10−4 | 3.2 × 10−5 | 1.1 × 10−3 | 9.8 × 10−4 | 8.0 × 10−5
ABC | 2 | 3 | 7.8 × 10−4 | 7.2 × 10−4 | 3.6 × 10−5 | 1.1 × 10−3 | 9.4 × 10−4 | 6.4 × 10−5
ABC | 2 | 4 | 7.8 × 10−4 | 7.4 × 10−4 | 2.2 × 10−5 | 1.1 × 10−3 | 1.0 × 10−3 | 6.7 × 10−5
ABC | 3 | 2 | 8.1 × 10−4 | 7.1 × 10−4 | 5.1 × 10−5 | 1.1 × 10−3 | 9.7 × 10−4 | 7.2 × 10−5
ABC | 3 | 3 | 8.2 × 10−4 | 7.5 × 10−4 | 3.8 × 10−5 | 1.1 × 10−3 | 9.9 × 10−4 | 7.7 × 10−5
ABC | 4 | 2 | 8.6 × 10−4 | 7.9 × 10−4 | 5.0 × 10−5 | 1.2 × 10−3 | 1.0 × 10−3 | 7.8 × 10−5
FPA | 2 | 2 | 7.5 × 10−4 | 7.3 × 10−4 | 1.2 × 10−5 | 1.0 × 10−3 | 9.4 × 10−4 | 4.4 × 10−5
FPA | 2 | 3 | 7.7 × 10−4 | 7.3 × 10−4 | 2.1 × 10−5 | 1.1 × 10−3 | 8.7 × 10−4 | 7.8 × 10−5
FPA | 2 | 4 | 7.7 × 10−4 | 7.4 × 10−4 | 1.4 × 10−5 | 1.1 × 10−3 | 9.9 × 10−4 | 5.7 × 10−5
FPA | 3 | 2 | 7.9 × 10−4 | 7.4 × 10−4 | 3.0 × 10−5 | 1.1 × 10−3 | 1.0 × 10−3 | 6.1 × 10−5
FPA | 3 | 3 | 8.1 × 10−4 | 7.6 × 10−4 | 2.7 × 10−5 | 1.1 × 10−3 | 9.5 × 10−4 | 6.1 × 10−5
FPA | 4 | 2 | 8.4 × 10−4 | 7.7 × 10−4 | 4.4 × 10−5 | 1.1 × 10−3 | 1.0 × 10−3 | 8.9 × 10−5
BBO | 2 | 2 | 7.2 × 10−4 | 7.1 × 10−4 | 6.8 × 10−6 | 1.0 × 10−3 | 9.7 × 10−4 | 2.8 × 10−5
BBO | 2 | 3 | 7.2 × 10−4 | 7.0 × 10−4 | 4.3 × 10−6 | 1.0 × 10−3 | 9.6 × 10−4 | 4.4 × 10−5
BBO | 2 | 4 | 7.1 × 10−4 | 6.6 × 10−4 | 1.4 × 10−5 | 1.0 × 10−3 | 9.9 × 10−4 | 3.0 × 10−5
BBO | 3 | 2 | 7.2 × 10−4 | 7.0 × 10−4 | 9.1 × 10−6 | 1.1 × 10−3 | 9.5 × 10−4 | 5.7 × 10−5
BBO | 3 | 3 | 7.1 × 10−4 | 7.0 × 10−4 | 7.8 × 10−6 | 1.1 × 10−3 | 9.9 × 10−4 | 5.0 × 10−5
BBO | 4 | 2 | 7.2 × 10−4 | 6.6 × 10−4 | 1.6 × 10−5 | 1.1 × 10−3 | 9.6 × 10−4 | 5.6 × 10−5
MFO | 2 | 2 | 7.4 × 10−4 | 7.2 × 10−4 | 4.5 × 10−6 | 1.0 × 10−3 | 9.9 × 10−4 | 2.4 × 10−5
MFO | 2 | 3 | 7.3 × 10−4 | 7.2 × 10−4 | 4.1 × 10−6 | 1.0 × 10−3 | 9.9 × 10−4 | 2.2 × 10−5
MFO | 2 | 4 | 7.2 × 10−4 | 7.1 × 10−4 | 8.6 × 10−6 | 1.0 × 10−3 | 9.9 × 10−4 | 3.1 × 10−5
MFO | 3 | 2 | 7.3 × 10−4 | 7.0 × 10−4 | 9.6 × 10−6 | 1.0 × 10−3 | 9.8 × 10−4 | 2.1 × 10−5
MFO | 3 | 3 | 7.3 × 10−4 | 7.1 × 10−4 | 1.1 × 10−5 | 1.1 × 10−3 | 9.8 × 10−4 | 5.1 × 10−5
MFO | 4 | 2 | 7.4 × 10−4 | 7.2 × 10−4 | 1.6 × 10−5 | 1.1 × 10−3 | 1.0 × 10−3 | 8.7 × 10−5
CS | 2 | 2 | 7.4 × 10−4 | 7.3 × 10−4 | 5.3 × 10−6 | 1.0 × 10−3 | 9.4 × 10−4 | 3.6 × 10−5
CS | 2 | 3 | 7.4 × 10−4 | 7.3 × 10−4 | 8.4 × 10−6 | 1.0 × 10−3 | 9.6 × 10−4 | 4.7 × 10−5
CS | 2 | 4 | 7.5 × 10−4 | 7.3 × 10−4 | 1.3 × 10−5 | 1.0 × 10−3 | 9.6 × 10−4 | 4.7 × 10−5
CS | 3 | 2 | 7.6 × 10−4 | 7.4 × 10−4 | 1.5 × 10−5 | 1.1 × 10−3 | 9.7 × 10−4 | 9.9 × 10−5
CS | 3 | 3 | 7.8 × 10−4 | 7.5 × 10−4 | 1.5 × 10−5 | 1.1 × 10−3 | 1.0 × 10−3 | 6.1 × 10−5
CS | 4 | 2 | 8.0 × 10−4 | 7.6 × 10−4 | 2.7 × 10−5 | 1.1 × 10−3 | 9.9 × 10−4 | 7.9 × 10−5
MPA | 2 | 2 | 7.2 × 10−4 | 7.1 × 10−4 | 6.2 × 10−6 | 1.0 × 10−3 | 9.9 × 10−4 | 1.5 × 10−5
MPA | 2 | 3 | 7.2 × 10−4 | 7.0 × 10−4 | 7.0 × 10−6 | 1.0 × 10−3 | 9.8 × 10−4 | 1.9 × 10−5
MPA | 2 | 4 | 7.2 × 10−4 | 7.1 × 10−4 | 5.8 × 10−6 | 1.0 × 10−3 | 9.7 × 10−4 | 2.3 × 10−5
MPA | 3 | 2 | 7.1 × 10−4 | 6.7 × 10−4 | 1.1 × 10−5 | 1.1 × 10−3 | 9.7 × 10−4 | 4.7 × 10−5
MPA | 3 | 3 | 7.1 × 10−4 | 6.8 × 10−4 | 1.2 × 10−5 | 1.1 × 10−3 | 9.6 × 10−4 | 1.3 × 10−4
MPA | 4 | 2 | 7.1 × 10−4 | 6.9 × 10−4 | 6.9 × 10−6 | 1.1 × 10−3 | 1.0 × 10−3 | 2.8 × 10−5
MVO | 2 | 2 | 7.3 × 10−4 | 7.2 × 10−4 | 4.1 × 10−6 | 1.0 × 10−3 | 9.9 × 10−4 | 2.6 × 10−5
MVO | 2 | 3 | 7.3 × 10−4 | 7.2 × 10−4 | 5.8 × 10−6 | 1.0 × 10−3 | 9.9 × 10−4 | 1.9 × 10−5
MVO | 2 | 4 | 7.3 × 10−4 | 7.1 × 10−4 | 9.2 × 10−6 | 1.0 × 10−3 | 9.8 × 10−4 | 2.8 × 10−5
MVO | 3 | 2 | 7.3 × 10−4 | 7.1 × 10−4 | 9.8 × 10−6 | 1.1 × 10−3 | 9.7 × 10−4 | 4.3 × 10−5
MVO | 3 | 3 | 7.3 × 10−4 | 7.2 × 10−4 | 1.4 × 10−5 | 1.1 × 10−3 | 9.4 × 10−4 | 9.5 × 10−5
MVO | 4 | 2 | 7.4 × 10−4 | 7.2 × 10−4 | 2.4 × 10−5 | 1.1 × 10−3 | 9.5 × 10−4 | 5.5 × 10−5
SHO | 2 | 2 | 7.5 × 10−4 | 7.2 × 10−4 | 3.9 × 10−5 | 1.1 × 10−3 | 9.6 × 10−4 | 7.0 × 10−5
SHO | 2 | 3 | 7.9 × 10−4 | 7.3 × 10−4 | 6.2 × 10−5 | 1.1 × 10−3 | 9.9 × 10−4 | 1.6 × 10−4
SHO | 2 | 4 | 8.1 × 10−4 | 7.4 × 10−4 | 6.3 × 10−5 | 1.1 × 10−3 | 9.9 × 10−4 | 9.8 × 10−5
SHO | 3 | 2 | 8.2 × 10−4 | 7.1 × 10−4 | 1.4 × 10−4 | 1.1 × 10−3 | 9.9 × 10−4 | 1.8 × 10−4
SHO | 3 | 3 | 1.0 × 10−3 | 7.7 × 10−4 | 1.8 × 10−4 | 1.1 × 10−3 | 1.0 × 10−3 | 2.6 × 10−4
SHO | 4 | 2 | 1.1 × 10−3 | 7.5 × 10−4 | 3.1 × 10−4 | 1.5 × 10−3 | 1.1 × 10−3 | 5.1 × 10−4
TLBO | 2 | 2 | 7.2 × 10−4 | 7.0 × 10−4 | 8.8 × 10−6 | 1.0 × 10−3 | 9.9 × 10−4 | 1.6 × 10−5
TLBO | 2 | 3 | 7.2 × 10−4 | 6.9 × 10−4 | 1.1 × 10−5 | 1.0 × 10−3 | 9.8 × 10−4 | 2.7 × 10−5
TLBO | 2 | 4 | 7.2 × 10−4 | 6.9 × 10−4 | 1.2 × 10−5 | 1.0 × 10−3 | 9.7 × 10−4 | 2.6 × 10−5
TLBO | 3 | 2 | 7.2 × 10−4 | 6.8 × 10−4 | 1.4 × 10−5 | 1.1 × 10−3 | 1.0 × 10−3 | 3.1 × 10−5
TLBO | 3 | 3 | 7.1 × 10−4 | 6.6 × 10−4 | 1.5 × 10−5 | 1.1 × 10−3 | 9.9 × 10−4 | 4.7 × 10−5
TLBO | 4 | 2 | 7.2 × 10−4 | 6.7 × 10−4 | 1.5 × 10−5 | 1.1 × 10−3 | 9.9 × 10−4 | 5.4 × 10−5
Best results are given in bold.
Table 6. Comparison of the results obtained for the solution of the WMax problem.

Algorithm | Number of Inputs | Number of MFs | Train Mean | Train Best | Train Std. Dev. | Test Mean | Test Best | Test Std. Dev.
ABC | 2 | 2 | 7.5 × 10−4 | 6.3 × 10−4 | 5.2 × 10−5 | 7.0 × 10−4 | 5.1 × 10−4 | 2.7 × 10−4
ABC | 2 | 3 | 7.4 × 10−4 | 6.6 × 10−4 | 4.2 × 10−5 | 7.0 × 10−4 | 4.7 × 10−4 | 1.4 × 10−4
ABC | 2 | 4 | 7.2 × 10−4 | 6.3 × 10−4 | 4.1 × 10−5 | 8.1 × 10−4 | 5.4 × 10−4 | 7.6 × 10−4
ABC | 3 | 2 | 7.8 × 10−4 | 6.2 × 10−4 | 6.4 × 10−5 | 7.3 × 10−4 | 5.0 × 10−4 | 2.2 × 10−4
ABC | 3 | 3 | 7.7 × 10−4 | 6.5 × 10−4 | 4.6 × 10−5 | 6.9 × 10−4 | 5.5 × 10−4 | 5.1 × 10−5
ABC | 4 | 2 | 8.2 × 10−4 | 7.1 × 10−4 | 4.8 × 10−5 | 6.8 × 10−4 | 5.2 × 10−4 | 1.2 × 10−4
FPA | 2 | 2 | 7.2 × 10−4 | 6.1 × 10−4 | 3.8 × 10−5 | 6.4 × 10−4 | 4.7 × 10−4 | 1.2 × 10−4
FPA | 2 | 3 | 7.4 × 10−4 | 6.7 × 10−4 | 3.2 × 10−5 | 6.3 × 10−4 | 5.0 × 10−4 | 8.9 × 10−5
FPA | 2 | 4 | 7.4 × 10−4 | 6.8 × 10−4 | 3.1 × 10−5 | 6.1 × 10−4 | 5.0 × 10−4 | 7.8 × 10−5
FPA | 3 | 2 | 7.7 × 10−4 | 6.7 × 10−4 | 3.7 × 10−5 | 6.5 × 10−4 | 5.0 × 10−4 | 1.2 × 10−4
FPA | 3 | 3 | 8.0 × 10−4 | 7.1 × 10−4 | 3.3 × 10−5 | 6.3 × 10−4 | 5.3 × 10−4 | 6.9 × 10−5
FPA | 4 | 2 | 8.4 × 10−4 | 7.7 × 10−4 | 4.0 × 10−5 | 6.8 × 10−4 | 5.4 × 10−4 | 8.31 × 10−5
BBO | 2 | 2 | 6.3 × 10−4 | 5.7 × 10−4 | 3.5 × 10−5 | 6.7 × 10−4 | 4.6 × 10−4 | 1.1 × 10−4
BBO | 2 | 3 | 6.1 × 10−4 | 5.5 × 10−4 | 3.7 × 10−5 | 7.4 × 10−4 | 4.8 × 10−4 | 2.3 × 10−4
BBO | 2 | 4 | 6.0 × 10−4 | 5.6 × 10−4 | 2.7 × 10−5 | 7.1 × 10−4 | 4.7 × 10−4 | 2.5 × 10−4
BBO | 3 | 2 | 6.4 × 10−4 | 5.5 × 10−4 | 5.7 × 10−5 | 6.9 × 10−4 | 5.1 × 10−4 | 2.1 × 10−4
BBO | 3 | 3 | 6.3 × 10−4 | 5.6 × 10−4 | 5.2 × 10−5 | 6.4 × 10−4 | 5.0 × 10−4 | 7.7 × 10−4
BBO | 4 | 2 | 6.6 × 10−4 | 5.6 × 10−4 | 6.6 × 10−5 | 6.6 × 10−4 | 4.8 × 10−4 | 1.6 × 10−4
MFO | 2 | 2 | 7.1 × 10−4 | 6.1 × 10−4 | 9.3 × 10−5 | 6.7 × 10−4 | 5.1 × 10−4 | 1.1 × 10−4
MFO | 2 | 3 | 6.6 × 10−4 | 5.5 × 10−4 | 5.6 × 10−5 | 6.9 × 10−4 | 4.6 × 10−4 | 1.2 × 10−4
MFO | 2 | 4 | 6.4 × 10−4 | 5.4 × 10−4 | 4.6 × 10−5 | 6.5 × 10−4 | 5.1 × 10−4 | 9.4 × 10−5
MFO | 3 | 2 | 6.9 × 10−4 | 5.8 × 10−4 | 6.0 × 10−5 | 6.6 × 10−4 | 5.1 × 10−4 | 9.4 × 10−5
MFO | 3 | 3 | 6.6 × 10−4 | 6.0 × 10−4 | 3.9 × 10−5 | 6.9 × 10−4 | 4.8 × 10−4 | 2.0 × 10−4
MFO | 4 | 2 | 7.0 × 10−4 | 5.5 × 10−4 | 6.0 × 10−5 | 6.1 × 10−4 | 4.8 × 10−4 | 8.2 × 10−5
CS | 2 | 2 | 6.6 × 10−4 | 6.4 × 10−4 | 2.7 × 10−5 | 6.3 × 10−4 | 4.8 × 10−4 | 1.1 × 10−4
CS | 2 | 3 | 7.0 × 10−4 | 6.7 × 10−4 | 1.7 × 10−5 | 6.0 × 10−4 | 4.9 × 10−4 | 5.5 × 10−5
CS | 2 | 4 | 7.0 × 10−4 | 6.7 × 10−4 | 2.0 × 10−5 | 6.0 × 10−4 | 5.1 × 10−4 | 2.6 × 10−5
CS | 3 | 2 | 7.2 × 10−4 | 7.0 × 10−4 | 1.9 × 10−5 | 6.3 × 10−4 | 5.2 × 10−4 | 3.9 × 10−5
CS | 3 | 3 | 7.4 × 10−4 | 7.0 × 10−4 | 3.1 × 10−5 | 6.4 × 10−4 | 5.3 × 10−4 | 1.6 × 10−5
CS | 4 | 2 | 7.1 × 10−4 | 7.0 × 10−4 | 1.6 × 10−5 | 6.6 × 10−4 | 5.4 × 10−4 | 3.3 × 10−5
MPA | 2 | 2 | 6.1 × 10−4 | 5.7 × 10−4 | 2.6 × 10−5 | 6.9 × 10−4 | 4.7 × 10−4 | 9.2 × 10−5
MPA | 2 | 3 | 6.0 × 10−4 | 5.7 × 10−4 | 2.0 × 10−5 | 6.5 × 10−4 | 4.7 × 10−4 | 1.2 × 10−4
MPA | 2 | 4 | 6.2 × 10−4 | 5.8 × 10−4 | 3.8 × 10−5 | 6.8 × 10−4 | 4.6 × 10−4 | 1.7 × 10−4
MPA | 3 | 2 | 6.4 × 10−4 | 5.8 × 10−4 | 4.5 × 10−5 | 6.4 × 10−4 | 5.0 × 10−4 | 6.4 × 10−5
MPA | 3 | 3 | 6.3 × 10−4 | 5.8 × 10−4 | 2.8 × 10−5 | 6.7 × 10−4 | 5.4 × 10−4 | 1.2 × 10−4
MPA | 4 | 2 | 6.5 × 10−4 | 5.7 × 10−4 | 5.7 × 10−5 | 6.3 × 10−4 | 5.2 × 10−4 | 6.1 × 10−5
MVO | 2 | 2 | 6.6 × 10−4 | 6.0 × 10−4 | 4.4 × 10−5 | 6.8 × 10−4 | 5.3 × 10−4 | 1.1 × 10−4
MVO | 2 | 3 | 6.5 × 10−4 | 6.0 × 10−4 | 4.0 × 10−5 | 7.1 × 10−4 | 5.1 × 10−4 | 1.7 × 10−4
MVO | 2 | 4 | 6.5 × 10−4 | 6.1 × 10−4 | 3.9 × 10−5 | 6.4 × 10−4 | 4.9 × 10−4 | 1.1 × 10−4
MVO | 3 | 2 | 6.8 × 10−4 | 6.0 × 10−4 | 5.1 × 10−5 | 6.4 × 10−4 | 5.3 × 10−4 | 1.7 × 10−4
MVO | 3 | 3 | 6.8 × 10−4 | 6.1 × 10−4 | 5.0 × 10−5 | 6.3 × 10−4 | 4.7 × 10−4 | 8.9 × 10−5
MVO | 4 | 2 | 7.2 × 10−4 | 6.2 × 10−4 | 6.5 × 10−5 | 5.8 × 10−4 | 4.6 × 10−4 | 6.2 × 10−5
SHO | 2 | 2 | 7.2 × 10−4 | 6.1 × 10−4 | 8.1 × 10−5 | 6.6 × 10−4 | 5.2 × 10−4 | 1.1 × 10−4
SHO | 2 | 3 | 7.5 × 10−4 | 6.2 × 10−4 | 7.8 × 10−5 | 6.4 × 10−4 | 4.7 × 10−4 | 7.7 × 10−5
SHO | 2 | 4 | 7.8 × 10−4 | 6.3 × 10−4 | 7.1 × 10−5 | 6.8 × 10−4 | 5.6 × 10−4 | 9.3 × 10−5
SHO | 3 | 2 | 8.6 × 10−4 | 6.7 × 10−4 | 1.8 × 10−4 | 7.1 × 10−4 | 4.9 × 10−4 | 2.3 × 10−4
SHO | 3 | 3 | 1.0 × 10−3 | 7.5 × 10−4 | 2.9 × 10−4 | 7.2 × 10−4 | 5.6 × 10−4 | 3.1 × 10−4
SHO | 4 | 2 | 9.5 × 10−4 | 6.4 × 10−4 | 1.7 × 10−4 | 8.6 × 10−4 | 4.9 × 10−4 | 2.5 × 10−4
TLBO | 2 | 2 | 6.3 × 10−4 | 5.8 × 10−4 | 2.7 × 10−5 | 7.4 × 10−4 | 5.3 × 10−4 | 8.0 × 10−5
TLBO | 2 | 3 | 6.1 × 10−4 | 5.6 × 10−4 | 3.3 × 10−5 | 7.0 × 10−4 | 4.8 × 10−4 | 1.2 × 10−4
TLBO | 2 | 4 | 6.2 × 10−4 | 5.5 × 10−4 | 4.5 × 10−5 | 6.4 × 10−4 | 4.7 × 10−4 | 1.2 × 10−4
TLBO | 3 | 2 | 6.3 × 10−4 | 5.7 × 10−4 | 5.1 × 10−5 | 7.3 × 10−4 | 5.1 × 10−4 | 8.9 × 10−5
TLBO | 3 | 3 | 6.2 × 10−4 | 5.5 × 10−4 | 3.8 × 10−5 | 6.7 × 10−4 | 5.0 × 10−4 | 1.3 × 10−4
TLBO | 4 | 2 | 6.6 × 10−4 | 5.5 × 10−4 | 7.1 × 10−5 | 5.8 × 10−4 | 4.8 × 10−4 | 6.4 × 10−5
Best results are given in bold.
Table 7. Comparison of the results obtained for the solution of the MMin problem.

Algorithm | Number of Inputs | Number of MFs | Train Mean | Train Best | Train Std. Dev. | Test Mean | Test Best | Test Std. Dev.
ABC | 2 | 2 | 3.5 × 10−3 | 2.9 × 10−3 | 1.7 × 10−4 | 5.2 × 10−3 | 4.5 × 10−3 | 8.2 × 10−4
ABC | 2 | 3 | 3.4 × 10−3 | 2.9 × 10−3 | 1.6 × 10−4 | 5.3 × 10−3 | 4.4 × 10−3 | 6.1 × 10−4
ABC | 2 | 4 | 3.3 × 10−3 | 3.0 × 10−3 | 9.4 × 10−5 | 5.3 × 10−3 | 4.5 × 10−3 | 5.4 × 10−4
ABC | 3 | 2 | 3.5 × 10−3 | 3.0 × 10−3 | 1.5 × 10−4 | 5.4 × 10−3 | 4.3 × 10−3 | 9.2 × 10−4
ABC | 3 | 3 | 3.4 × 10−3 | 3.0 × 10−3 | 1.4 × 10−4 | 5.4 × 10−3 | 4.5 × 10−3 | 6.8 × 10−4
ABC | 4 | 2 | 3.4 × 10−3 | 3.2 × 10−3 | 1.1 × 10−4 | 5.6 × 10−3 | 4.5 × 10−3 | 6.6 × 10−4
FPA | 2 | 2 | 3.5 × 10−3 | 3.4 × 10−3 | 5.5 × 10−5 | 4.8 × 10−3 | 4.5 × 10−3 | 2.6 × 10−4
FPA | 2 | 3 | 3.5 × 10−3 | 3.4 × 10−3 | 6.8 × 10−5 | 5.1 × 10−3 | 4.5 × 10−3 | 5.7 × 10−4
FPA | 2 | 4 | 3.5 × 10−3 | 3.3 × 10−3 | 7.7 × 10−5 | 5.0 × 10−3 | 4.5 × 10−3 | 4.9 × 10−4
FPA | 3 | 2 | 3.6 × 10−3 | 3.4 × 10−3 | 9.4 × 10−5 | 5.0 × 10−3 | 4.4 × 10−3 | 3.3 × 10−4
FPA | 3 | 3 | 3.6 × 10−3 | 3.4 × 10−3 | 1.0 × 10−4 | 5.3 × 10−3 | 4.5 × 10−3 | 5.3 × 10−4
FPA | 4 | 2 | 3.7 × 10−3 | 3.4 × 10−3 | 1.8 × 10−4 | 5.4 × 10−3 | 4.5 × 10−3 | 8.0 × 10−4
BBO | 2 | 2 | 3.3 × 10−3 | 2.6 × 10−3 | 2.7 × 10−4 | 5.0 × 10−3 | 4.5 × 10−3 | 3.9 × 10−4
BBO | 2 | 3 | 3.0 × 10−3 | 2.2 × 10−3 | 3.1 × 10−4 | 5.2 × 10−3 | 4.5 × 10−3 | 4.0 × 10−4
BBO | 2 | 4 | 3.0 × 10−3 | 2.4 × 10−3 | 2.5 × 10−4 | 5.4 × 10−3 | 4.6 × 10−3 | 4.7 × 10−4
BBO | 3 | 2 | 3.2 × 10−3 | 2.5 × 10−3 | 2.5 × 10−4 | 5.1 × 10−3 | 4.3 × 10−3 | 3.8 × 10−4
BBO | 3 | 3 | 2.9 × 10−3 | 2.1 × 10−3 | 3.6 × 10−4 | 5.4 × 10−3 | 4.6 × 10−3 | 5.8 × 10−4
BBO | 4 | 2 | 3.0 × 10−3 | 2.4 × 10−3 | 2.7 × 10−4 | 5.2 × 10−3 | 4.3 × 10−3 | 6.8 × 10−4
MFO | 2 | 2 | 3.5 × 10−3 | 2.7 × 10−3 | 2.4 × 10−4 | 4.8 × 10−3 | 4.5 × 10−3 | 2.7 × 10−4
MFO | 2 | 3 | 3.4 × 10−3 | 2.6 × 10−3 | 2.7 × 10−4 | 4.9 × 10−3 | 4.5 × 10−3 | 2.8 × 10−4
MFO | 2 | 4 | 3.3 × 10−3 | 2.6 × 10−3 | 2.4 × 10−4 | 5.0 × 10−3 | 4.5 × 10−3 | 3.1 × 10−4
MFO | 3 | 2 | 3.4 × 10−3 | 2.6 × 10−3 | 2.7 × 10−4 | 4.7 × 10−3 | 4.4 × 10−3 | 3.2 × 10−4
MFO | 3 | 3 | 3.2 × 10−3 | 2.5 × 10−3 | 2.6 × 10−4 | 5.0 × 10−3 | 4.4 × 10−3 | 4.3 × 10−4
MFO | 4 | 2 | 3.3 × 10−3 | 2.6 × 10−3 | 2.4 × 10−4 | 5.4 × 10−3 | 4.5 × 10−3 | 1.8 × 10−3
CS | 2 | 2 | 3.5 × 10−3 | 3.3 × 10−3 | 5.2 × 10−5 | 5.2 × 10−3 | 4.5 × 10−3 | 1.3 × 10−3
CS | 2 | 3 | 3.4 × 10−3 | 3.3 × 10−3 | 4.4 × 10−5 | 5.0 × 10−3 | 4.5 × 10−3 | 4.3 × 10−4
CS | 2 | 4 | 3.4 × 10−3 | 3.3 × 10−3 | 5.5 × 10−5 | 5.2 × 10−3 | 4.54 × 10−3 | 4.9 × 10−4
CS | 3 | 2 | 3.5 × 10−3 | 3.4 × 10−3 | 5.2 × 10−5 | 5.1 × 10−3 | 4.3 × 10−3 | 4.8 × 10−4
CS | 3 | 3 | 3.5 × 10−3 | 3.4 × 10−3 | 3.3 × 10−5 | 5.4 × 10−3 | 4.5 × 10−3 | 4.7 × 10−4
CS | 4 | 2 | 3.5 × 10−3 | 3.4 × 10−3 | 8.9 × 10−5 | 6.3 × 10−3 | 4.6 × 10−3 | 1.7 × 10−3
MPA | 2 | 2 | 3.1 × 10−3 | 2.5 × 10−3 | 3.0 × 10−4 | 5.0 × 10−3 | 4.5 × 10−3 | 3.5 × 10−4
MPA | 2 | 3 | 3.2 × 10−3 | 2.7 × 10−3 | 2.4 × 10−4 | 5.1 × 10−3 | 4.5 × 10−3 | 4.8 × 10−4
MPA | 2 | 4 | 3.1 × 10−3 | 2.5 × 10−3 | 2.4 × 10−4 | 5.3 × 10−3 | 4.5 × 10−3 | 5.2 × 10−4
MPA | 3 | 2 | 3.0 × 10−3 | 2.4 × 10−3 | 2.9 × 10−4 | 4.9 × 10−3 | 4.3 × 10−3 | 3.1 × 10−4
MPA | 3 | 3 | 3.1 × 10−3 | 2.5 × 10−3 | 2.7 × 10−4 | 5.4 × 10−3 | 4.5 × 10−3 | 5.4 × 10−4
MPA | 4 | 2 | 3.0 × 10−3 | 2.3 × 10−3 | 3.2 × 10−4 | 5.6 × 10−3 | 4.5 × 10−3 | 9.0 × 10−4
MVO | 2 | 2 | 3.4 × 10−3 | 2.8 × 10−3 | 1.6 × 10−4 | 4.8 × 10−3 | 4.4 × 10−3 | 3.7 × 10−4
MVO | 2 | 3 | 3.4 × 10−3 | 3.0 × 10−3 | 1.3 × 10−4 | 5.0 × 10−3 | 4.6 × 10−3 | 3.0 × 10−4
MVO | 2 | 4 | 3.3 × 10−3 | 2.9 × 10−3 | 1.7 × 10−4 | 5.1 × 10−3 | 4.6 × 10−3 | 4.2 × 10−4
MVO | 3 | 2 | 3.4 × 10−3 | 2.8 × 10−3 | 1.8 × 10−4 | 4.8 × 10−3 | 4.4 × 10−3 | 2.5 × 10−4
MVO | 3 | 3 | 3.4 × 10−3 | 3.1 × 10−3 | 1.1 × 10−4 | 5.2 × 10−3 | 4.5 × 10−3 | 4.4 × 10−4
MVO | 4 | 2 | 3.3 × 10−3 | 3.0 × 10−3 | 1.4 × 10−4 | 5.3 × 10−3 | 4.5 × 10−3 | 6.0 × 10−4
SHO | 2 | 2 | 3.5 × 10−3 | 3.2 × 10−3 | 2.5 × 10−4 | 4.9 × 10−3 | 4.5 × 10−3 | 3.7 × 10−4
SHO | 2 | 3 | 3.5 × 10−3 | 2.7 × 10−3 | 3.1 × 10−4 | 5.2 × 10−3 | 4.4 × 10−3 | 4.9 × 10−4
SHO | 2 | 4 | 3.7 × 10−3 | 3.0 × 10−3 | 5.5 × 10−4 | 5.5 × 10−3 | 4.6 × 10−3 | 6.4 × 10−4
SHO | 3 | 2 | 3.8 × 10−3 | 3.1 × 10−3 | 4.2 × 10−4 | 5.8 × 10−3 | 4.0 × 10−3 | 3.0 × 10−3
SHO | 3 | 3 | 4.2 × 10−3 | 3.4 × 10−3 | 7.9 × 10−4 | 6.0 × 10−3 | 4.8 × 10−3 | 1.0 × 10−3
SHO | 4 | 2 | 4.3 × 10−3 | 3.3 × 10−3 | 7.5 × 10−4 | 6.1 × 10−3 | 4.8 × 10−3 | 1.4 × 10−3
TLBO | 2 | 2 | 3.4 × 10−3 | 2.7 × 10−3 | 1.9 × 10−4 | 4.8 × 10−3 | 4.5 × 10−3 | 2.1 × 10−4
TLBO | 2 | 3 | 3.3 × 10−3 | 2.6 × 10−3 | 1.7 × 10−4 | 5.0 × 10−3 | 4.6 × 10−3 | 3.5 × 10−4
TLBO | 2 | 4 | 3.1 × 10−3 | 2.4 × 10−3 | 2.8 × 10−4 | 5.2 × 10−3 | 4.4 × 10−3 | 4.4 × 10−4
TLBO | 3 | 2 | 3.1 × 10−3 | 2.2 × 10−3 | 3.7 × 10−4 | 4.9 × 10−3 | 4.4 × 10−3 | 3.1 × 10−4
TLBO | 3 | 3 | 3.1 × 10−3 | 2.3 × 10−3 | 2.8 × 10−4 | 5.3 × 10−3 | 4.6 × 10−3 | 4.0 × 10−4
TLBO | 4 | 2 | 3.0 × 10−3 | 2.3 × 10−3 | 3.8 × 10−4 | 5.3 × 10−3 | 4.5 × 10−3 | 5.8 × 10−4
Best results are given in bold.
Table 8. Comparison of the results obtained for the solution of the MMax problem.

Algorithm | Number of Inputs | Number of MFs | Train Mean | Train Best | Train Std. Dev. | Test Mean | Test Best | Test Std. Dev.
ABC | 2 | 2 | 2.6 × 10−3 | 2.2 × 10−3 | 1.4 × 10−4 | 8.6 × 10−3 | 6.6 × 10−3 | 1.1 × 10−3
ABC | 2 | 3 | 2.4 × 10−3 | 2.2 × 10−3 | 9.0 × 10−5 | 8.3 × 10−3 | 5.4 × 10−3 | 1.0 × 10−3
ABC | 2 | 4 | 2.3 × 10−3 | 2.1 × 10−3 | 7.08 × 10−5 | 8.2 × 10−3 | 6.8 × 10−3 | 8.9 × 10−4
ABC | 3 | 2 | 2.5 × 10−3 | 2.2 × 10−3 | 1.5 × 10−4 | 8.4 × 10−3 | 5.2 × 10−3 | 1.3 × 10−3
ABC | 3 | 3 | 2.3 × 10−3 | 2.1 × 10−3 | 1.1 × 10−4 | 8.4 × 10−3 | 6.0 × 10−3 | 1.2 × 10−3
ABC | 4 | 2 | 2.4 × 10−3 | 2.0 × 10−3 | 1.5 × 10−4 | 8.2 × 10−3 | 5.6 × 10−3 | 1.4 × 10−3
FPA | 2 | 2 | 2.6 × 10−3 | 2.3 × 10−3 | 1.1 × 10−4 | 8.1 × 10−3 | 7.0 × 10−3 | 7.8 × 10−4
FPA | 2 | 3 | 2.6 × 10−3 | 2.4 × 10−3 | 8.6 × 10−5 | 8.3 × 10−3 | 7.0 × 10−3 | 1.0 × 10−3
FPA | 2 | 4 | 2.5 × 10−3 | 2.4 × 10−3 | 7.4 × 10−5 | 8.2 × 10−3 | 6.6 × 10−3 | 7.0 × 10−4
FPA | 3 | 2 | 2.7 × 10−3 | 2.4 × 10−3 | 9.2 × 10−5 | 8.0 × 10−3 | 6.4 × 10−3 | 8.9 × 10−4
FPA | 3 | 3 | 2.6 × 10−3 | 2.4 × 10−3 | 1.1 × 10−4 | 8.3 × 10−3 | 5.8 × 10−3 | 1.1 × 10−3
FPA | 4 | 2 | 2.8 × 10−3 | 2.4 × 10−3 | 1.9 × 10−4 | 8.5 × 10−3 | 5.9 × 10−3 | 1.5 × 10−3
BBO | 2 | 2 | 2.3 × 10−3 | 2.0 × 10−3 | 1.5 × 10−4 | 7.7 × 10−3 | 5.5 × 10−3 | 9.4 × 10−4
BBO | 2 | 3 | 2.2 × 10−3 | 2.0 × 10−3 | 1.4 × 10−4 | 7.7 × 10−3 | 6.2 × 10−3 | 9.4 × 10−4
BBO | 2 | 4 | 2.1 × 10−3 | 2.0 × 10−3 | 1.2 × 10−4 | 7.9 × 10−3 | 6.2 × 10−3 | 1.1 × 10−3
BBO | 3 | 2 | 2.2 × 10−3 | 1.9 × 10−3 | 1.8 × 10−4 | 8.7 × 10−3 | 6.6 × 10−3 | 1.1 × 10−3
BBO | 3 | 3 | 2.0 × 10−3 | 1.7 × 10−3 | 1.7 × 10−4 | 8.8 × 10−3 | 5.9 × 10−3 | 2.7 × 10−3
BBO | 4 | 2 | 2.2 × 10−3 | 1.9 × 10−3 | 2.0 × 10−4 | 8.1 × 10−3 | 6.2 × 10−3 | 1.4 × 10−3
MFO | 2 | 2 | 2.5 × 10−3 | 2.1 × 10−3 | 2.5 × 10−4 | 7.5 × 10−3 | 6.1 × 10−3 | 6.9 × 10−4
MFO | 2 | 3 | 2.4 × 10−3 | 2.0 × 10−3 | 2.1 × 10−4 | 7.8 × 10−3 | 6.4 × 10−3 | 7.5 × 10−4
MFO | 2 | 4 | 2.3 × 10−3 | 2.1 × 10−3 | 1.6 × 10−4 | 7.5 × 10−3 | 6.4 × 10−3 | 7.2 × 10−4
MFO | 3 | 2 | 2.5 × 10−3 | 2.1 × 10−3 | 2.2 × 10−4 | 8.2 × 10−3 | 6.5 × 10−3 | 7.9 × 10−4
MFO | 3 | 3 | 2.4 × 10−3 | 2.0 × 10−3 | 1.8 × 10−4 | 8.3 × 10−3 | 6.5 × 10−3 | 7.4 × 10−4
MFO | 4 | 2 | 2.4 × 10−3 | 2.1 × 10−3 | 2.3 × 10−4 | 8.2 × 10−3 | 4.9 × 10−3 | 1.1 × 10−3
CS | 2 | 2 | 2.5 × 10−3 | 2.4 × 10−3 | 7.4 × 10−5 | 7.9 × 10−3 | 6.5 × 10−3 | 7.1 × 10−4
CS | 2 | 3 | 2.5 × 10−3 | 2.4 × 10−3 | 6.7 × 10−5 | 7.8 × 10−3 | 5.6 × 10−3 | 9.2 × 10−4
CS | 2 | 4 | 2.5 × 10−3 | 2.4 × 10−3 | 4.2 × 10−5 | 8.1 × 10−3 | 6.1 × 10−3 | 1.0 × 10−3
CS | 3 | 2 | 2.6 × 10−3 | 2.4 × 10−3 | 8.7 × 10−5 | 8.8 × 10−3 | 6.4 × 10−3 | 2.5 × 10−3
CS | 3 | 3 | 2.5 × 10−3 | 2.3 × 10−3 | 1.0 × 10−4 | 8.5 × 10−3 | 6.2 × 10−3 | 1.3 × 10−3
CS | 4 | 2 | 2.7 × 10−3 | 2.5 × 10−3 | 1.2 × 10−4 | 8.1 × 10−3 | 5.6 × 10−3 | 1.3 × 10−3
MPA | 2 | 2 | 2.3 × 10−3 | 2.1 × 10−3 | 9.3 × 10−5 | 7.8 × 10−3 | 6.2 × 10−3 | 9.4 × 10−4
MPA | 2 | 3 | 2.2 × 10−3 | 2.1 × 10−3 | 9.7 × 10−5 | 8.3 × 10−3 | 5.9 × 10−3 | 1.1 × 10−3
MPA | 2 | 4 | 2.2 × 10−3 | 2.0 × 10−3 | 1.1 × 10−4 | 8.0 × 10−3 | 6.0 × 10−3 | 9.5 × 10−4
MPA | 3 | 2 | 2.2 × 10−3 | 1.9 × 10−3 | 1.4 × 10−4 | 8.4 × 10−3 | 6.5 × 10−3 | 1.4 × 10−3
MPA | 3 | 3 | 2.2 × 10−3 | 1.9 × 10−3 | 1.3 × 10−4 | 8.6 × 10−3 | 6.4 × 10−3 | 9.0 × 10−4
MPA | 4 | 2 | 2.2 × 10−3 | 1.9 × 10−3 | 2.1 × 10−4 | 9.0 × 10−3 | 6.4 × 10−3 | 1.3 × 10−3
MVO | 2 | 2 | 2.5 × 10−3 | 2.2 × 10−3 | 1.8 × 10−4 | 7.5 × 10−3 | 6.2 × 10−3 | 7.7 × 10−4
MVO | 2 | 3 | 2.4 × 10−3 | 2.2 × 10−3 | 1.1 × 10−4 | 7.8 × 10−3 | 5.8 × 10−3 | 8.9 × 10−4
MVO | 2 | 4 | 2.4 × 10−3 | 2.2 × 10−3 | 1.2 × 10−4 | 8.2 × 10−3 | 6.1 × 10−3 | 9.4 × 10−4
MVO | 3 | 2 | 2.5 × 10−3 | 2.1 × 10−3 | 2.0 × 10−4 | 8.0 × 10−3 | 6.1 × 10−3 | 9.1 × 10−4
MVO | 3 | 3 | 2.4 × 10−3 | 2.2 × 10−3 | 1.4 × 10−4 | 8.7 × 10−3 | 6.1 × 10−3 | 2.7 × 10−3
MVO | 4 | 2 | 2.5 × 10−3 | 2.1 × 10−3 | 2.8 × 10−4 | 7.8 × 10−3 | 5.7 × 10−3 | 1.3 × 10−3
SHO | 2 | 2 | 2.7 × 10−3 | 2.3 × 10−3 | 2.4 × 10−4 | 8.2 × 10−3 | 6.1 × 10−3 | 8.8 × 10−4
SHO | 2 | 3 | 2.6 × 10−3 | 2.1 × 10−3 | 3.9 × 10−4 | 8.9 × 10−3 | 7.1 × 10−3 | 1.3 × 10−3
SHO | 2 | 4 | 2.7 × 10−3 | 2.1 × 10−3 | 3.7 × 10−4 | 8.6 × 10−3 | 6.4 × 10−3 | 1.1 × 10−3
SHO | 3 | 2 | 2.9 × 10−3 | 2.2 × 10−3 | 4.9 × 10−4 | 8.5 × 10−3 | 5.9 × 10−3 | 1.5 × 10−3
SHO | 3 | 3 | 3.1 × 10−3 | 2.4 × 10−3 | 4.8 × 10−4 | 9.0 × 10−3 | 7.4 × 10−3 | 1.2 × 10−3
SHO | 4 | 2 | 3.1 × 10−3 | 2.3 × 10−3 | 4.1 × 10−4 | 8.6 × 10−3 | 5.9 × 10−3 | 2.3 × 10−3
TLBO | 2 | 2 | 2.4 × 10−3 | 2.0 × 10−3 | 1.6 × 10−4 | 8.0 × 10−3 | 6.5 × 10−3 | 1.1 × 10−3
TLBO | 2 | 3 | 2.3 × 10−3 | 1.9 × 10−3 | 1.6 × 10−4 | 8.0 × 10−3 | 5.7 × 10−3 | 1.3 × 10−3
TLBO | 2 | 4 | 2.2 × 10−3 | 1.8 × 10−3 | 1.5 × 10−4 | 7.9 × 10−3 | 6.3 × 10−3 | 7.7 × 10−4
TLBO | 3 | 2 | 2.3 × 10−3 | 2.0 × 10−3 | 1.8 × 10−4 | 8.3 × 10−3 | 6.7 × 10−3 | 9.3 × 10−4
TLBO | 3 | 3 | 2.0 × 10−3 | 1.6 × 10−3 | 2.0 × 10−4 | 9.2 × 10−3 | 7.5 × 10−3 | 1.1 × 10−3
TLBO | 4 | 2 | 2.1 × 10−3 | 1.6 × 10−3 | 2.4 × 10−4 | 8.9 × 10−3 | 6.0 × 10−3 | 1.4 × 10−3
Best results are given in bold.