Article

Applying Particle Swarm Optimization Variations to Solve the Transportation Problem Effectively

by Chrysanthi Aroniadi and Grigorios N. Beligiannis *
Department of Food Science and Technology, University of Patras, Agrinio Campus, G. Seferi 2, 30100 Agrinio, Greece
*
Author to whom correspondence should be addressed.
Algorithms 2023, 16(8), 372; https://doi.org/10.3390/a16080372
Submission received: 14 July 2023 / Revised: 28 July 2023 / Accepted: 30 July 2023 / Published: 3 August 2023
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimal Design of Engineering Problems)

Abstract:
The Transportation Problem (TP) is a special type of linear programming problem, where the objective is to minimize the cost of distributing a product from a number of sources to a number of destinations. Many methods for solving the TP have been studied over time. However, exact methods do not always succeed in finding the optimal solution or a solution that effectively approximates the optimal one. This paper introduces two new variations of the well-established Particle Swarm Optimization (PSO) algorithm, named the Trigonometric Acceleration Coefficients-PSO (TrigAc-PSO) and the Four Sectors Varying Acceleration Coefficients PSO (FSVAC-PSO), and applies them to solve the TP. The performances of the proposed variations are examined and validated by carrying out extensive experimental tests. To demonstrate the efficiency of the proposed PSO variations, thirty-two problems of different sizes have been solved. Moreover, the proposed PSO variations were compared with exact methods such as Vogel’s Approximation Method (VAM), the Total Differences Method 1 (TDM1), the Total Opportunity Cost Matrix-Minimal Total (TOCM-MT), the Juman and Hoque Method (JHM) and the Bilqis Chastine Erma method (BCE). Last but not least, the proposed variations were also compared with other PSO variations that are well known for their completeness and efficiency, such as Decreasing Weight Particle Swarm Optimization (DWPSO) and Time-Varying Acceleration Coefficients (TVAC). Experimental results show that the proposed variations achieve very satisfactory results in terms of efficiency and effectiveness compared to existing exact and heuristic methods.

1. Introduction

The Transportation Problem (TP) is one of the most significant types of linear programming problems. The aim of the TP is to minimize the cost of transporting a given commodity from a number of sources or origins (e.g., a factory or manufacturing facility) to a number of destinations (e.g., a warehouse or store) [1]. Over the years, many classical and stochastic search approaches have been applied for the purpose of solving the TP.
The Northwest Corner method (NWC) is one of the methods that obtains a basic feasible solution to various transportation problems [2]. This process allocates the amounts very easily when only a few supply and demand stations exist. However, the resulting solution frequently does not approach the optimal one. The Minimum Cost Method (MCM) [3] is an alternative method which can yield an initial basic feasible solution. The MCM succeeds in lowering total costs by taking the lowest available cost values into consideration while finding the initial solution. An innovative approach comes from the Vogel Approximation Method (VAM); the VAM is an upgraded version of the MCM which results in a basic feasible solution close to the optimal solution [3]. Both of them take the unit transportation costs into account and obtain satisfactory results; however, VAM is rather slow and computationally intensive for a large range of values. Nevertheless, it has been proven that in problems with a small range of values and a relatively small number of variables, the above exact methods are quite efficient.
In some cases, the TP has a complex structure, multifaceted parameters and a huge amount of data to be studied. Therefore, exact methods do not succeed in finding a suitable solution in an acceptable time period; as a result, it is impractical to use them. Taking the above into consideration, apart from conventional solution techniques, various heuristic and metaheuristic methods have been designed to capitalize on their potential capabilities. Specifically, metaheuristic algorithms attempt to find the best feasible solution, surpassing other techniques both in solution quality and in computational time [4]. Mitsuo Gen, Fulya Altiparmak and Lin Lin applied a Genetic Algorithm (GA) to a two-stage TP using priority-based encoding, showing that the GA has been receiving great attention and can be successfully applied to combinatorial optimization problems [5]. Ant Colony Optimization (ACO) algorithms have already proven their efficiency in many complex problems; they constitute a very useful optimization tool for many transportation problems in cases where it is impossible to find an algorithm that finds the optimal solution, or where the available time does not make it possible to prove a solution optimal [6]. Applications of hybrid methods combining two or more heuristic, metaheuristic or even exact methods are also widespread. Interesting research was undertaken in 2019 by Mohammad Bagher Fakhrzad, Fariba Goodarzian and Golmohammadi [7]. In their study, four metaheuristic algorithms, including the Red Deer Algorithm (RDA), Stochastic Fractal Search (SFS), the Genetic Algorithm (GA) and Simulated Annealing (SA), as well as two hybrid algorithms, the hybrid RDA and GA (HRDGA) algorithm and a hybrid SFS and SA algorithm, were utilized to solve the TP, demonstrating significant effectiveness [7].
Motivated by the above-mentioned applications of metaheuristic algorithms to the TP, this work deals with the application of Particle Swarm Optimization (PSO) to solve the TP effectively. The PSO algorithm was first introduced by Kennedy and Eberhart in 1995 as a novel population-based stochastic algorithm for working out complex non-linear optimization problems [8]. The basic idea was originally inspired by simulations of the social behavior of animals, such as bird flocking, fish schooling, etc. Possessing their own intelligence, birds of the group connect with each other, sharing their experiences, and follow and trust the mass in order to reach their food or migrate safely without knowing in advance the optimal way to achieve it. It has been observed that the original PSO suffers from premature convergence, especially for problems with multiple local optima [9]. The swarm’s ability to combine social experience with personal experience is expressed in the algorithm through two stochastic acceleration components, known as the cognitive and social components [10]. These components have the aptitude to guide the particles of the original PSO method to the optimum point, as the correct selection of their values is the key influence on the success and efficiency of the algorithm; much research has therefore focused on finding the best combination of these components [10]. The research proposed here aims to enhance the way both the social and the personal behavior of the swarm are exploited.
First, this paper examines approaches that have already been applied with great success to solve the TP. In addition, two new PSO variations are presented and applied to solve the TP by applying suitable transformations to the main PSO parameters. Experimental results show that these new PSO variations achieve very good performance and efficiency in solving the TP compared to the former methods.
In order to confirm the technical merit and the applied value of our study, 32 instances of the TP with different sizes have been solved to evaluate and demonstrate the performance of the proposed PSO variations. Their experimental results are compared with those of well-known exact methods, proving their superiority over them. One major innovation of the proposed variations is the appropriate combination of the acceleration coefficients (parameters c1, c2) and the inertia weight (parameter w) [11] (see Section 3) in order to obtain better computational results compared to existing approaches. Exhaustive experimental results demonstrate that the new PSO variations achieve significantly higher performance not only compared to the exact methods already applied to solve the TP but also compared to other PSO variations already introduced in the respective literature. Furthermore, in order to check the stability of the proposed PSO variations, many different combinations of the main PSO parameters were tested and validated.
The contributions of the paper are as follows:
  • To the best of our knowledge, PSO has so far been applied to the fixed-charge TP, and a heuristic approach has been used to find the shortest path in a network of routes with a standard number of points connected to each other. For the first time, PSO-based algorithms are applied to solve the basic TP effectively on a large number of test instances, not only finding the optimal distribution of items but also discovering the optimal objective value.
  • Moreover, two new PSO variations are introduced, which sustain a balance between exploration and exploitation of the search space. These variations proved to be very efficient in solving the TP, achieving better results compared not only to deterministic methods but also to other already-known PSO-based methods.
  • A thorough experimental analysis has been performed on the PSO variations applied to solve the TP to prove their efficiency and stability.
The remainder of the paper is organized as follows: Section 2 presents the mathematical formulation of the TP. The PSO algorithm is briefly described in Section 3. Section 4 presents the initialization procedure of the basic feasible solutions and the steps of the PSO algorithm for the TP. Both the existing PSO variations as well as the new ones are presented in detail in Section 5. A well-documented case study is conducted in Section 6, in order to compare the performance of five exact methods with the classic PSO and its variations. Lastly, conclusive remarks and future recommendations are presented in Section 7.

2. Transportation Problem (TP)

Many researchers have developed various types of transportation models. The most prevalent was presented by Hitchcock in 1941 [12]. Similar studies were conducted later by Koopmans in 1949 [13] and by Dantzig in 1951 [14]. It is well known that the problem has become quite widespread, so several extensions of the transportation model and its methods have subsequently been developed. However, how is the Transportation Problem defined?
The TP can be described as a distribution problem with m suppliers S_i (warehouses or factories) and n destinations D_j (customers or demand points). Each of the m suppliers can ship to any of the n destinations at a unit shipping cost c_{ij}, which corresponds to the route from point i to point j. The available quantity of each supplier S_i, i = 1, 2, …, m is denoted by s_i, and the requirement of each destination D_j, j = 1, 2, …, n is denoted by d_j. The objective is to determine how to allocate the available amounts from the supply stations to the destination stations while simultaneously achieving the minimum transport cost and satisfying the demand and supply constraints [12].
The mathematical model of the TP can be formulated as follows:
min Z = Σ_{i=1}^{m} Σ_{j=1}^{n} c_{ij} · x_{ij};   (1)
Σ_{i=1}^{m} x_{ij} ≥ d_j   for j = 1, 2, …, n;   (2)
Σ_{j=1}^{n} x_{ij} ≤ s_i   for i = 1, 2, …, m;   (3)
x_{ij} ≥ 0   for i = 1, 2, …, m, j = 1, 2, …, n.   (4)
Equation (1) represents the objective function to be minimized. Equation (2) contains the demand constraints, according to which the amount delivered to each destination point must be greater than or equal to the quantity it demands. Respectively, the total amount transferred from each source S_i must be less than or equal to the available quantity that it possesses, as presented in Equation (3). A necessary condition is depicted in Equation (4), as the units x_{ij} must take non-negative integer values. Without loss of generality, we assume in this paper that total supply equals total demand, following the balanced condition model.
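As a quick illustration of Equations (1)–(4), the following sketch evaluates the objective and checks feasibility for a small balanced instance; the cost, supply and demand values are invented for illustration and do not come from the paper's test set:

```python
# Hypothetical 2x3 balanced instance (all numbers illustrative).
cost   = [[4, 6, 8],
          [5, 3, 7]]
supply = [30, 40]          # s_i, i = 1..m
demand = [20, 25, 25]      # d_j, j = 1..n  (balanced: totals are equal)

def total_cost(x, cost):
    """Objective of Equation (1): sum of c_ij * x_ij over all cells."""
    return sum(c * q for crow, xrow in zip(cost, x) for c, q in zip(crow, xrow))

def is_feasible(x, supply, demand):
    """Check the constraints of Equations (2)-(4) for an allocation x."""
    row_ok  = all(sum(xrow) <= s for xrow, s in zip(x, supply))       # Eq. (3)
    col_ok  = all(sum(col) >= d for col, d in zip(zip(*x), demand))   # Eq. (2)
    sign_ok = all(q >= 0 for xrow in x for q in xrow)                 # Eq. (4)
    return row_ok and col_ok and sign_ok

plan = [[20, 10, 0],
        [0, 15, 25]]   # one feasible shipping plan
```

For this plan, total_cost returns 4·20 + 6·10 + 3·15 + 7·25 = 360, and is_feasible confirms all constraints hold.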
As already mentioned, there are several methods which can lead to finding a basic feasible solution. However, most of the currently used methods for solving transportation problems are considered complex and very expensive in terms of execution time. As a result, it is appealing to seek a metaheuristic approach based on the PSO algorithm to solve the TP efficiently and effectively.

3. Particle Swarm Optimization (PSO)

The Particle Swarm Optimization (PSO) algorithm is considered one of the modern innovative heuristic algorithms; over the years, its methodology has become extremely prevalent due to its simplicity of implementation, and it leads very easily to satisfactory solutions [15]. Within the PSO paradigm, the collective behavior of animals has been analyzed in detail so that it can function as a robust method for solving optimization problems in a wide variety of applications [16].
In PSO, each candidate solution can be defined as a particle, and the whole swarm can be considered the population of the algorithm. The particles improve themselves by cooperating and sharing information within the swarm, and they thereby succeed in learning and improving so as to provide the highest possible efficiency. More precisely, each particle moves through the search space trying to find the best value of its individual fitness and, simultaneously, to minimize the objective function while satisfying all the constraints of the problem. Each particle is characterized by three different parameters: its position, its velocity and its previous best position.
Consequently, in an n-dimensional search space, each particle i of the swarm is represented by x_i = (x_{i1}, x_{i2}, …, x_{in}), and the equation of its position is as follows:
x_{ij}^{t+1} = x_{ij}^{t} + v_{ij}^{t+1},   i = 1, 2, …, n and j = 1, 2, …, n,   (5)
where x_{ij}^{t+1} is the current position, x_{ij}^{t} is the previous position and v_{ij}^{t+1} is the velocity, which determines the movement of each particle in the current iteration (t + 1).
Respectively, the velocity of the particle is denoted by v_{ij} and is given by the following equation:
v_{ij}^{t+1} = w · v_{ij}^{t} + c_1 r_1 (pbest_{ij}^{t} − x_{ij}^{t}) + c_2 r_2 (gbest_{ij}^{t} − x_{ij}^{t}),   i = 1, 2, …, n and j = 1, 2, …, n,   (6)
where
  • v_{ij}^{t+1} denotes the velocity in the current iteration and v_{ij}^{t} is the velocity in the previous iteration.
  • w is the inertia weight, used to balance global exploration and local exploitation; it provides a memory of the particle’s previous direction, which prevents major changes in the suggested direction of the particles.
  • r_1 and r_2 are two variables randomly drawn from the uniform distribution in the range [0, 1].
  • c_1 and c_2 are defined as “acceleration coefficients”, which have a huge effect on the efficiency of the PSO method. The constant c_1 conveys how much confidence a particle has in itself, while c_2 expresses how much confidence a particle has in the swarm.
  • The variable pbest_{ij}^{t} is the best position of the particle up to iteration t, whereas gbest_{ij}^{t} is the best position of the whole swarm up to the same iteration.
  • The term c_1 r_1 (pbest_{ij}^{t} − x_{ij}^{t}) is known as the cognitive component; it acts as a kind of memory that stores the best previous positions that the particle has achieved. The cognitive component reflects the tendency of the particles to return to their best positions.
  • The term c_2 r_2 (gbest_{ij}^{t} − x_{ij}^{t}) is called the social component. In this particular case, the particles behave according to the knowledge that they have obtained from the swarm, having the swarm’s best position as a guide.
The acceleration coefficients c 1   and c 2 , together with the random variables r 1 and r 2 ,   affect to a great extent the evolution of cognitive and social component and hence the velocity value, which, as is known, is mainly responsible for the ultimate direction of the particles.
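The position and velocity updates of Equations (5) and (6) can be sketched for a single particle as follows; the particle is stored as a flat list of coordinates, and all parameter values are illustrative:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, rng=random.Random(0)):
    """One velocity and position update per Equations (5) and (6)
    for a particle represented as a flat list of coordinates."""
    r1, r2 = rng.random(), rng.random()
    v_new = [w * vi + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    x_new = [xi + vi for xi, vi in zip(x, v_new)]
    return x_new, v_new

# When the particle already sits at its personal and global best,
# both attraction terms vanish and only the inertia term w * v remains.
x_new, v_new = pso_step([1.0, 2.0], [0.5, -0.5], [1.0, 2.0], [1.0, 2.0], w=0.5)
```

In the example call, the cognitive and social differences are zero, so the new velocity is exactly 0.5 times the old one, regardless of r_1 and r_2.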

4. The Basic PSO for Solving the TP

The proposed PSO algorithm used to solve the TP is presented in this section. The primary step is the initialization of the particles according to the problem’s instances. This is achieved through a sub-algorithm (an initialization algorithm), as presented below. Initially, the amounts of supply and demand are defined in tables. Subsequently, through control conditions, the amounts are randomly distributed so as to satisfy the constraints on the sums of supply and demand.
First, Algorithm 1 creates two vectors, namely, Supply and Demand, which are its input, as shown in lines 1 and 2. Next, variables m and n are computed; these are the lengths of the Supply and Demand vectors, respectively. Then, a matrix is created consisting of random real numbers (line 7). In line 10, the elements of the candidate solution matrix are rounded to the nearest integer, as the amounts of commodities should be non-negative integer values. In the following lines of the algorithm, a process of readjustment and redistribution of matrix L begins so that its values correspond to the given Supply and Demand amounts. In lines 11 and 12, the sum of all elements of each row of matrix L is stored in vector Sumrow, while the sum of all elements of each column of matrix L is stored in vector Sumcol. Then, two new vectors, namely, s and d, are created by subtracting Sumrow from Supply and Sumcol from Demand, respectively. In the following lines, for each cell of the final matrix, the remaining deficits are located and distributed appropriately to the cells, zeroing out the excess amounts of vectors s and d. The output of Algorithm 1 is a matrix consisting of the initial solutions (Initial Basic Feasible Solutions, IBFS), which comprises the input of Algorithm 2 (see below). All possible IBFS are non-negative integer values satisfying the supply and demand constraints.
Algorithm 1: Initialization algorithm
[Pseudocode listing provided as an image in the original article.]
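Since the listing itself is only available as an image, the following Python sketch illustrates the spirit of Algorithm 1. It is a simplified variant: rather than repairing a rounded random matrix as the paper describes, it visits the cells in random order and greedily allocates whatever is still available, which for a balanced instance always yields a non-negative integer IBFS:

```python
import random

def initial_solution(supply, demand, rng=random.Random(42)):
    """Sketch of Algorithm 1's goal: build a non-negative integer
    allocation whose row sums match Supply and whose column sums match
    Demand.  Simplified variant (not the paper's exact repair loop):
    visit cells in random order and allocate the largest feasible
    shipment; for a balanced instance this always terminates with a
    feasible IBFS."""
    m, n = len(supply), len(demand)
    s, d = list(supply), list(demand)          # remaining row/column amounts
    x = [[0] * n for _ in range(m)]
    cells = [(i, j) for i in range(m) for j in range(n)]
    rng.shuffle(cells)                          # randomize the visiting order
    for i, j in cells:
        q = min(s[i], d[j])                     # largest feasible shipment here
        x[i][j] = q
        s[i] -= q
        d[j] -= q
    return x

ibfs = initial_solution([30, 40], [20, 25, 25])  # illustrative balanced instance
```

After every cell has been visited, either the row or the column of each cell is exhausted, so all remaining supply and demand vectors reach zero in a balanced instance.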
Next, we present the structure of the basic PSO algorithm, which will be applied to solve the TP (Algorithm 2). The process starts with the initialization of the population size npop, the maximum number of iterations t_max, the personal and social acceleration coefficients c_1 and c_2, the random variables r_1 and r_2 and, finally, the inertia weight w (line 1). Subsequently, the Supply, Demand and Cost matrices are defined (line 2).
Line 6 calculates the total transport cost of each particle. Then, lines 7 and 8 check whether the total cost of the current particle is less than the minimum transport cost calculated up to that point. If so, the value of the global best cost is updated, and this particle is defined as the new best. This process is repeated for all candidate particles. In lines 9 to 14, through an iterative loop, the position and velocity of the particles are calculated using Equations (5) and (6). The algorithm outputs the particle with the optimal position and its respective optimal transport cost.
Algorithm 2: Particle Swarm Optimization algorithm
[Pseudocode listing provided as an image in the original article.]
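The control flow of Algorithm 2 can be sketched as a generic PSO minimization loop. The TP-specific pieces (the Algorithm 1 initialization and the repair steps of Algorithms 3 and 4) are omitted from this sketch, and a simple sphere function stands in for the transport-cost objective:

```python
import random

def pso(objective, dim, npop=20, t_max=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Skeleton of Algorithm 2: initialize a swarm, then repeatedly
    evaluate particles, update pbest/gbest and apply the velocity and
    position equations.  All parameter values are illustrative."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(npop)]
    V = [[0.0] * dim for _ in range(npop)]
    pbest = [row[:] for row in X]
    pcost = [objective(p) for p in pbest]
    g = min(range(npop), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(t_max):
        for i in range(npop):
            r1, r2 = rng.random(), rng.random()
            for j in range(dim):
                V[i][j] = (w * V[i][j]
                           + c1 * r1 * (pbest[i][j] - X[i][j])
                           + c2 * r2 * (gbest[j] - X[i][j]))
                X[i][j] += V[i][j]
            c = objective(X[i])
            if c < pcost[i]:                  # update personal best
                pbest[i], pcost[i] = X[i][:], c
                if c < gcost:                 # update global best
                    gbest, gcost = X[i][:], c
    return gbest, gcost

best, best_cost = pso(lambda x: sum(xi * xi for xi in x), dim=3)
```

With these stable parameter settings, the swarm reliably drives the sphere objective close to its minimum at the origin.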
The exported values of the particle’s position, although satisfying the demand and supply constraints, were occasionally observed to take negative and/or fractional values. Such values cannot form part of a valid solution, since they represent shipped quantities (only non-negative integer values are allowed); therefore, appropriate modifications have been made to obtain the final form of the particle position.
Two sub-algorithms were designed to repair the solutions, replacing negative and fractional amounts with natural numbers without violating the supply and demand conditions.
Algorithm 3 takes as input a matrix, particle(i).Position, that has negative values in its cells. The aim is to eliminate the negative elements, as in [17]. Through the iterative process illustrated in line 3, the algorithm checks each cell of the matrix row by row and sets as neg the value of any cell with a negative value. Subsequently, it searches for the maximum element of the column where the negative element was found, as shown in lines 5 and 6. The absolute value of the negative element is subtracted from the cell with the largest value, while the cell with the negative value is set to zero. In line 9, a cell is randomly selected from the row that corresponds to the negative element; if its value is positive, the absolute value of the negative element is subtracted from it. Simultaneously, another cell of this row is counterbalanced by adding the absolute value of the negative cell to it, as shown in line 13. Algorithm 3 outputs particle(i).Position with non-negative values, while sustaining the supply and demand conditions.
Algorithm 3: Negative values repair algorithm
[Pseudocode listing provided as an image in the original article.]
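The essence of Algorithm 3, removing a negative cell while keeping every row and column sum intact, can be illustrated with a 2×2 pivot. This is a simplified sketch, not the paper's exact procedure: the donor cells are chosen as the largest entries in the affected row and column, and the sketch assumes they can always cover the deficit (true for mild cases like the one shown, not guaranteed in general):

```python
def repair_negatives(x):
    """Sketch of the idea behind Algorithm 3: clear each negative entry
    with a 2x2 pivot that leaves all row and column sums unchanged.
    Simplified variant; donor choice and termination guard are
    assumptions of this sketch."""
    m, n = len(x), len(x[0])
    while True:
        neg = [(i, j) for i in range(m) for j in range(n) if x[i][j] < 0]
        if not neg:
            return x
        i, j = min(neg, key=lambda ij: x[ij[0]][ij[1]])   # most negative cell
        v = -x[i][j]
        k = max((r for r in range(m) if r != i), key=lambda r: x[r][j])
        l = max((c for c in range(n) if c != j), key=lambda c: x[i][c])
        assert x[k][j] >= v and x[i][l] >= v, "donor cells too small"
        x[i][j] += v      # clear the negative cell
        x[k][j] -= v      # compensate within the same column
        x[i][l] -= v      # compensate within the same row
        x[k][l] += v      # restore the crossing row/column sums

repaired = repair_negatives([[-2, 5], [6, 1]])   # illustrative matrix
```

The pivot adds and subtracts the same amount in each affected row and column, so feasibility with respect to supply and demand is preserved by construction.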
Applying the above transformation, the result is a matrix with positive but possibly fractional elements. Algorithm 4 takes as its input the matrix of particles’ positions after removing the negative elements. In line 3, a new matrix named pos is defined, containing the integer parts of the elements of matrix particle(i).Position. In line 4, a vector named sumrow is created which contains the sum of each row of the pos matrix, while in line 5, a vector named sumcol is created, containing the sum of each column. In lines 6 and 7, the differences between the Supply and sumrow quantities and between the Demand and sumcol quantities are recorded, respectively, in order to capture the quantities missing from the pos matrix. Then, through an iterative loop, cell u of s is compared with cell v of d. The minimum of these two quantities is selected and entered into the pos matrix, reallocating the integer amounts in an appropriate manner to satisfy the available supply and demand items. The algorithm terminates when vectors s and d are zeroed and the integer quantities have been inserted into the pos matrix, which is the output of Algorithm 4. The final solution is a non-negative integer solution matrix satisfying the requested constraints.
Algorithm 4: Amend fractions
[Pseudocode listing provided as an image in the original article.]
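A sketch of Algorithm 4 in the same spirit: floor every entry, compute the row and column deficits, and greedily redistribute the missing integer amounts. A balanced instance is assumed, and the variable names mirror the description above rather than the paper's exact listing:

```python
import math

def amend_fractions(x, supply, demand):
    """Sketch of Algorithm 4: truncate every entry to its integer part,
    then redistribute the missing integer amounts so that row sums match
    Supply and column sums match Demand again (balanced instance assumed)."""
    pos = [[math.floor(q) for q in row] for row in x]
    s = [su - sum(row) for su, row in zip(supply, pos)]          # row deficits
    d = [de - sum(col) for de, col in zip(demand, zip(*pos))]    # column deficits
    for u in range(len(s)):
        for v in range(len(d)):
            q = min(s[u], d[v])     # amount both row u and column v still miss
            pos[u][v] += q
            s[u] -= q
            d[v] -= q
    return pos

amended = amend_fractions([[1.5, 2.5], [3.5, 2.5]], supply=[4, 6], demand=[5, 5])
```

Because total supply equals total demand, the row deficits and column deficits have the same total, so the greedy double loop zeroes both vectors.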

5. Variations of PSO

This section presents two already-known and two new variations of the classical PSO implementation, which are used in this contribution to solve the TP. These variations are investigated in order to improve the performance of the classical PSO algorithm.

5.1. Decreasing Weight Particle Swarm Optimization (DWPSO)

The inertia weight w is the most influential parameter with respect to both the success rate and the number of function evaluations [18]. In DWPSO, the inertia factor is linearly decreasing. The decision to use this variation was not arbitrary; DWPSO is one of the classic and very effective PSO variations, and its superiority has persisted over the years. Through DWPSO, the algorithm focuses on diversity in earlier iterations and on convergence in later ones [18]. The right and proper selection of the inertia weight provides a balance between global exploration and local exploitation and results, on average, in fewer iterations to find a sufficiently optimal solution [19]. Exploitation is the capacity of particles to converge to the same peak of the objective function and remain there without seeking better solutions in their wider neighborhood. On the contrary, in the exploration mode, the particles are in constant search, discovering beneficial solutions. After extensive research on the configuration of the inertia weight, Shi and Eberhart concluded that values in the interval [0.9, 1.2] have a positive effect on the improvement of the solution [20]. A linearly decreasing inertia weight with c_1 = 2, c_2 = 2 and w between 0.4 and 0.9 was also used by Shi and Eberhart. According to their claim, w_new is the new inertia weight, which linearly decreases from 0.9 to 0.4.
Equation (7) for DWPSO is given as
w_new(t) = w_max − (w_max − w_min) · t / t_max,   (7)
where w_max is set to 0.9, performing extensive exploration, and w_min is equal to 0.4, performing more exploitation. Moreover, t is the current iteration of the algorithm and t_max is the maximum number of iterations. A large portion of researchers’ results illustrate that a linearly decreasing inertia weight can greatly improve the performance of PSO, yielding better results than the classic implementation of the algorithm.
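Equation (7) is straightforward to implement; the sketch below uses the w_max = 0.9 and w_min = 0.4 values quoted above:

```python
def dwpso_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight of Equation (7):
    w_max at the first iteration, w_min at the last."""
    return w_max - (w_max - w_min) * t / t_max
```

The weight starts at 0.9 (broad exploration), passes through 0.65 at the midpoint, and ends at 0.4 (fine exploitation).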

5.2. Time-Varying Acceleration Coefficients (TVAC)

In population-based optimization methods, proper control of global and local exploration is essential for the efficient identification of the optimum solution.
Ratnaweera et al. introduced TVAC in PSO [11]. According to their research, the cognitive parameter c_1 starts with a high value and linearly decreases to a low value, whereas the social parameter c_2 starts with a low value and linearly increases to a high value [21]. On the one hand, with a large value of the cognitive parameter and a small value of the social parameter at the beginning, particles move according to their own experience and their own best positions, being able to move freely without following the mass. On the other hand, a small value of the cognitive parameter and a large value of the social parameter help the particles to escape from the area around their personal best positions and enhance the global search in the latter stages of the optimization procedure, converging toward the global optimum. This concept can be mathematically represented as
c_1 = c_{1i} − (c_{1i} − c_{1f}) · t / t_max;   (8)
c_2 = c_{2i} − (c_{2i} − c_{2f}) · t / t_max,   (9)
where c_{1i} defines the value of c_1 in the first iteration, equal to 2.5, and c_{1f} defines the value of c_1 in the last iteration, equal to 0.5. Respectively, the value of c_2 in the first iteration is c_{2i} and is set to 0.5, while the value of c_2 in the last iteration is c_{2f} and is set to 2.5 [21].
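The TVAC schedules can be sketched directly from the description above, with c_1 decreasing linearly from 2.5 to 0.5 and c_2 increasing linearly from 0.5 to 2.5:

```python
def tvac(t, t_max, c1i=2.5, c1f=0.5, c2i=0.5, c2f=2.5):
    """Time-varying acceleration coefficients: c1 interpolates linearly
    from c1i down to c1f, c2 from c2i up to c2f."""
    c1 = c1i - (c1i - c1f) * t / t_max
    c2 = c2i - (c2i - c2f) * t / t_max
    return c1, c2
```

Both coefficients cross at 1.5 halfway through the run, shifting the emphasis from personal to social experience.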

5.3. Trigonometric Acceleration Coefficients-PSO (TrigAC-PSO)

In this subsection, a new variation is introduced, in which the impact of parameters c_1 and c_2 is extensively studied. First, each particle is guided by the knowledge and experience gained by the swarm (the value of c_2 is considerably bigger than the value of c_1). Next, relying on the learning mechanism, each particle builds its own strategy and acquires its own experience: the value of c_2 becomes smaller while the value of c_1 becomes bigger (see Equations (10) and (11)). This decrement of c_2 and increment of c_1 take place until both parameters are equalized to 2 in the last generation of the algorithm.
The following equations are used to calculate the cognitive and social acceleration parameters:
c_1 = c_{1f} / 2 + sin(2 · c_{1i} · (t / t_max) · π/2);   (10)
c_2 = c_{2i} + cos(c_{2f} · π · t / (2 · t_max)) − 1/2.   (11)
Here, in the first iteration, c 1 i , which is the personal acceleration value, is equal to 0.5, while c 2 i , which is the social acceleration value, is equal to 3.5. In the last iteration of the algorithm, both personal c 1 f and social c 2 f are equal to 2.
The value of inertia weight w varies according to the number of the current iteration t and the number of maximum iterations t m a x .
It is described as follows in Equation (12):
w = (t_max − t) / t_max.   (12)
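The flattened formulas for Equations (10) and (11) in the source are ambiguous, so the sketch below uses one plausible reading under which, as the text requires, both coefficients equal 2 in the last iteration; treat the exact functional forms as an assumption of this sketch:

```python
import math

def trigac(t, t_max, c1i=0.5, c1f=2.0, c2i=3.5, c2f=2.0):
    """One possible reading of the TrigAC-PSO schedules: c1 grows with a
    sine term and c2 shrinks with a cosine term, and both reach 2 at the
    final iteration.  The functional forms are reconstructed, not quoted."""
    c1 = c1f / 2 + math.sin(2 * c1i * (t / t_max) * math.pi / 2)
    c2 = c2i + math.cos(c2f * math.pi * t / (2 * t_max)) - 0.5
    return c1, c2
```

Under this reading, c_2 dominates c_1 early in the run (swarm-guided search) and both schedules meet at 2 when t = t_max.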

5.4. Four Sectors Varying Acceleration Coefficients PSO (FSVAC-PSO)

In the following section, a new variation is developed. This variation is novel and comprises the major technical merit of this contribution. Its key feature is the multiple changes of the coefficient parameters c_1 and c_2. In this case, the solution is approached both through the knowledge of each particle and through the experience of the whole swarm. The number of iterations is divided into four sectors. In the first iteration, the social and cognitive acceleration coefficients are both initialized to 2. In the first sector of iterations, the value of c_1 increases while the value of c_2 decreases; as a result, each particle is mostly influenced by its own knowledge, while the influence of the swarm on it is limited. In the second sector, the value of c_1 decreases while the value of c_2 increases, establishing an equilibrium between the knowledge the particle gained in the previous sector and the experience of the swarm. In the third sector, the value of c_1 keeps decreasing while the value of c_2 keeps increasing; explicitly, the particles are allowed to move towards the global best position, following the swarm’s movements, so information about the global best is redistributed to all the particles for more exploration before the swarm finally converges [11]. In the fourth sector, the particles head toward both their own personal best and the global best observed by the whole swarm. The concept of this variation is based on the combination of all these different searching behaviors, as they arise for different values of the acceleration coefficients, culminating in an equilibrium between exploitation and exploration of the search space. Finally, in the last iteration, the two coefficient parameters are equated.
The formulation is represented in detail below:
  • In the first iteration, as already mentioned:
    c_1 = 2;
    c_2 = 2;
  • In the first sector:
    c_1 = 2 · (c_{1f} − c_{1i}) · t / (t_max − 1);   c_2 = c_{2i} − 2 · c_{2f} · t / t_max,
    where c_{1i} = 2, c_{1f} = 3, c_{2i} = 2 and c_{2f} = 1;
  • In the second sector:
    c_1 = c_{1f} · c_{1i} · t / (2 · (t_max − 1));   c_2 = 0.5 + ((c_{2f} + c_{2i}) / 2) · t / t_max,
    where c_{1i} = 3, c_{1f} = 2, c_{2i} = 1 and c_{2f} = 2;
  • In the third sector:
    c_1 = 3/2 − (c_{1i} − c_{1f}) · t / t_max;   c_2 = 0.5 + (2 · c_{2i} + 1 − c_{2f}) · t / t_max,
    where c_{1i} = 2, c_{1f} = 0.5, c_{2i} = 2 and c_{2f} = 2.5;
  • In the fourth sector:
    c_1 = (4 · c_{1i} + c_{1f}) · t / t_max;   c_2 = 3/2 + (c_{2f} − c_{2i}) · t / t_max,
    where c_{1i} = 0.5, c_{1f} = 2, c_{2i} = 2.5 and c_{2f} = 2;
  • In the last iteration:
    c_1 = 2;
    c_2 = 2.
In the above formulations, c_{1i}, c_{1f}, c_{2i} and c_{2f} are the initial and final values of the cognitive and social acceleration coefficients, respectively. To improve the solution quality, these coefficients are updated in such a way that their values increase and decrease at a steady pace. With this approach, the solution avoids being trapped in a local optimum, as shown by the experimental results presented in Section 6.
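As the per-sector formulas are hard to recover exactly from the source, the sketch below instead interpolates linearly between the sector endpoint values quoted above (c1: 2, 3, 2, 0.5, 2 and c2: 2, 1, 2, 2.5, 2 at the sector boundaries). It illustrates the four-sector behavior, not the paper's exact equations:

```python
def fsvac(t, t_max):
    """Illustrative piecewise-linear schedule for the four-sector idea.
    Knot values are the sector endpoints given in the text; the linear
    interpolation between them is an assumption of this sketch."""
    knots   = [0.0, 0.25, 0.5, 0.75, 1.0]        # sector boundaries (t / t_max)
    c1_vals = [2.0, 3.0, 2.0, 0.5, 2.0]
    c2_vals = [2.0, 1.0, 2.0, 2.5, 2.0]
    r = t / t_max
    for k in range(4):
        if r <= knots[k + 1]:
            frac = (r - knots[k]) / (knots[k + 1] - knots[k])
            c1 = c1_vals[k] + frac * (c1_vals[k + 1] - c1_vals[k])
            c2 = c2_vals[k] + frac * (c2_vals[k + 1] - c2_vals[k])
            return c1, c2
    return c1_vals[-1], c2_vals[-1]
```

The schedule starts and ends with both coefficients at 2, with self-confidence peaking at the end of the first sector and swarm-confidence peaking at the end of the third.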
As for the inertia weight w, Equation (12) is used to provide the necessary momentum for particles to roam across the search space.

6. Case Studies and Experimental Results

In this section, the proposed variations of the PSO algorithm are applied to thirty-two well-known numerical examples of the TP, as shown in Table 1. The numerical examples of this study come from the research of B. Amaliah, who compared five different methods, briefly presented below, regarding their performance in solving the TP [22].
Vogel’s Approximation Method (VAM) is an iterative procedure in which, at each step, proper penalties for each available row and column are computed from the least cost and the second-least cost of the transportation matrix [22]. The Total Differences Method 1 (TDM1) was introduced by Hosseini in 2017. The method is based on VAM’s innovation of using penalties for all rows and columns of the transportation matrix; TDM1 was developed by calculating penalty values only for the rows of the transportation matrix [23]. Amaliah et al., in 2019, presented their new method, known as the Total Opportunity Cost Matrix-Minimal Total (TOCM-MT). This method has a mechanism for checking the value of the least-cost cell before allocating the maximum units xij, in contrast to TDM1, which directly allocates the maximum units xij to the least-cost cell [24]. Juman and Hoque, in 2015, developed a method called the Juman and Hoque Method (JHM). Their approach is based on the distribution of supply and demand quantities, taking into account the two minimum-cost cells and their redistribution through penalties [25]. Finally, the last method presented is known as the Bilqis Chastine Erma Method (BCE), which constitutes an enhanced version of the JHM [26].
The whole algorithmic approach was implemented using MATLAB R2021b. The algorithm was tested on a set of problems of different dimensions. All parameters of the proposed algorithm were selected after exhaustive experimental testing: each of the four variations was tested with different parameter values, and the values whose computational results were superior were selected. The number of iterations is set to 100. The parameter r1 is set as a random number drawn from the uniform distribution on [0, 1], and r2 is set as the complement of r1; that is, r2 = 1 − r1. This modification plays a significant part, as it differs from the customary application in which both r1 and r2 are drawn independently and uniformly from [0, 1]. Using this relationship between r1 and r2, we achieve stronger control over these parameters’ values.
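The complementary coupling of r1 and r2 described above can be illustrated with a standard one-dimensional PSO velocity update. This is a hypothetical sketch, not the authors’ MATLAB code; the function and parameter names are ours.

```python
import random

def update_velocity(v, x, pbest, gbest, w, c1, c2, rng=random):
    """One PSO velocity update with the r2 = 1 - r1 coupling.

    Standard PSO update with a single modification: r2 is the
    complement of r1 rather than an independent uniform draw.
    """
    r1 = rng.random()        # uniform in [0, 1]
    r2 = 1.0 - r1            # complement, not an independent draw
    return (w * v
            + c1 * r1 * (pbest - x)     # cognitive term
            + c2 * r2 * (gbest - x))    # social term
```

With this coupling, when c1 = c2 the combined random weight on the cognitive and social terms always sums to c1, which is the stronger control over the stochastic parameters mentioned above.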
In the following table (Table 2) and Figure 1, the performance of both the exact methods and the PSO-based ones is presented for 30 Monte Carlo runs; more precisely, the best value achieved by each method is depicted. The last column presents the optimal solution of each numerical problem. As shown, the Vogel method manages to find the optimal solution in 9 out of the 32 test instances (28.13%); the Total Differences Method 1 (TDM1) succeeds in finding more optimal solutions than the Vogel method, reaching 13 out of 32 (40.63%); the TOCM-MT method performs better still, finding the optimum in 23 out of 32 test instances (71.88%); the JHM method, which accumulated 21 optimal solutions (65.63%), was less effective than TOCM-MT; the BCE method, which achieved 27 out of 32 test instances (84.38%), proved to be the most efficient of the exact methods; the classic PSO, TVAC, TrigAC-PSO and FSVAC-PSO achieve the optimum in 31 out of 32 test instances (96.88%), while DWPSO achieves the optimum in 30 out of 32 (93.75%).
One significant finding of our research is that the new PSO variation, TrigAC-PSO, which is first presented in this study, achieved very good results. The following table (Table 3) examines the deviation of VAM, TDM1, TOCM-MT, JHM, BCE, PSO, DWPSO, TVAC, TrigAC-PSO and FSVAC-PSO. The deviation measures the difference between the observed value and the expected value of a variable and is given by the following formula:
Dev = (xij − optimal)/optimal,
where xij is the cost of the current solution and optimal is the known optimal cost.
Considering Table 3 and Figure 2, it is evident that the methods VAM, TDM1, TOCM-MT and JHM are less efficient, deviating from the optimal solution at a significant scale. More precisely, the results of Table 3 show that the solutions achieved by VAM differ from the optimal solution by 5.29% on average, those of TDM1 by 3.58%, those of TOCM-MT by 1.4% and those of JHM by 1.4%. The BCE method presented higher levels of efficiency, since its deviation values were negligible. Analysis of the data in Table 3 reveals that the percentage deviation of the classic PSO, as well as of its variations, was almost zero. Furthermore, TVAC was nearest to the optimal solution, followed by TrigAC-PSO, FSVAC-PSO and finally DWPSO.
The findings from the current study provide the basic information for an extensive meta-analysis, allowing us to investigate which of the presented PSO variations has the best performance in solving the TP. To this end, many experiments were carried out investigating different values of the PSO population size (number of particles). The classic PSO, as well as each of its variations (DWPSO, TVAC, TrigAC-PSO, FSVAC-PSO), was tested with 10, 15 and 20 particles on all 32 numerical examples. The results presented in Table 4, Table 5 and Table 6 show the performance of the classic PSO as well as of its variations over 30 independent runs. The number of generations was fixed at 100 for all runs.
The results of this study, presented in Table 4, show the accuracy rate of each algorithm for 10 particles. The accuracy rate is given by the following formula:
Accuracy = TOR/TR,
where TOR is the total number of runs in which the optimal solution was found and TR is the total number of runs.
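The two evaluation measures used in Tables 3–6 are straightforward to compute for a single problem instance. The helpers below are a hypothetical sketch (the function names are ours):

```python
def deviation(cost, optimal):
    """Relative deviation of a solution cost from the known optimum (Table 3).

    For example, deviation(955, 880) reproduces the VAM entry for Pr.01
    in Table 3 (approximately 0.0852).
    """
    return (cost - optimal) / optimal

def accuracy(run_costs, optimal):
    """Accuracy = TOR / TR: the fraction of independent runs that found the optimum."""
    tor = sum(1 for cost in run_costs if cost == optimal)
    return tor / len(run_costs)
```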
Table 4 shows that the classic PSO obtained a 38.33% accuracy rate. A significant increase in the accuracy rate for 10 particles was evident in DWPSO, which achieved 59.58% accuracy, substantially higher than that of the classic PSO. Moreover, TVAC obtained a 61.15% accuracy rate. The best results came from TrigAC-PSO, which achieved a 62.81% accuracy rate. Last but not least, FSVAC achieved a 59.46% accuracy rate.
Table 4. Accuracy of PSO, DWPSO, TVAC, TrigAC-PSO and FSVAC for 10 particles.

|         | PSO      | DWPSO    | TVAC     | TrigAC-PSO | FSVAC    |
| Pr.01   | 0.0333   | 0.2      | 0.4666   | 0.5667     | 0.2333   |
| Pr.02   | 0.7667   | 1        | 1        | 1          | 1        |
| Pr.03   | 1        | 1        | 1        | 1          | 1        |
| Pr.04   | 1        | 1        | 1        | 1          | 1        |
| Pr.05   | 0.2333   | 0.6667   | 0.8333   | 0.8667     | 0.8333   |
| Pr.06   | 0.3667   | 1        | 0.9667   | 1          | 1        |
| Pr.07   | 0.7667   | 0.9      | 1        | 1          | 0.9      |
| Pr.08   | 0        | 0        | 0.2667   | 0.1667     | 0.1334   |
| Pr.09   | 0.0333   | 0.3      | 0.2      | 0.2667     | 0.1      |
| Pr.10   | 0        | 0        | 0.0333   | 0          | 0.0333   |
| Pr.11   | 0.1      | 0.1      | 0.1333   | 0.1        | 0.0667   |
| Pr.12   | 0        | 0.3      | 0.4      | 0.4667     | 0.3667   |
| Pr.13   | 0.5334   | 1        | 1        | 0.8667     | 1        |
| Pr.14   | 0        | 0        | 0        | 0.3333     | 0        |
| Pr.15   | 0.7      | 1        | 1        | 1          | 1        |
| Pr.16   | 0        | 0.4667   | 0.4667   | 0.6667     | 0.5667   |
| Pr.17   | 0.4333   | 0.9      | 0.9333   | 0.9        | 0.8333   |
| Pr.18   | 1        | 1        | 1        | 1          | 1        |
| Pr.19   | 0.4667   | 0.7      | 0.7667   | 0.5        | 0.7333   |
| Pr.20   | 0.4667   | 0.8333   | 0.6667   | 0.8667     | 0.7667   |
| Pr.21   | 0.5667   | 0.5333   | 0.3333   | 0.5333     | 0.3939   |
| Pr.22   | 0.3667   | 1        | 0.9667   | 0.8333     | 1        |
| Pr.23   | 0.0333   | 0.0333   | 0.1667   | 0.1667     | 0.1667   |
| Pr.24   | 0        | 0        | 0        | 0          | 0        |
| Pr.25   | 0.5333   | 0.9667   | 0.9333   | 0.9667     | 1        |
| Pr.26   | 0.4667   | 0.6      | 0.7      | 0.7        | 0.7333   |
| Pr.27   | 0.0333   | 0.4667   | 0.5667   | 0.4        | 0.4667   |
| Pr.28   | 0.2      | 0.2667   | 0.0667   | 0.2        | 0        |
| Pr.29   | 0.8333   | 1        | 1        | 1          | 1        |
| Pr.30   | 0.8667   | 1        | 1        | 1          | 1        |
| Pr.31   | 0.4      | 0.8333   | 0.7      | 0.7        | 0.7      |
| Pr.32   | 0.0667   | 0        | 0        | 0.0333     | 0        |
| Average | 0.383338 | 0.595834 | 0.611459 | 0.6281313  | 0.594603 |
The accuracy rate results for 15 particles are presented in Table 5. DWPSO achieved 65.31% accuracy, whereas TVAC reached 66.99%. It is of particular interest that TrigAC-PSO achieved the highest accuracy rate once again by reaching 69.8%. Finally, FSVAC obtained an accuracy rate equal to 66.56%.
Table 5. Accuracy of PSO, DWPSO, TVAC, TrigAC-PSO and FSVAC for 15 particles.

|         | PSO      | DWPSO    | TVAC     | TrigAC-PSO | FSVAC    |
| Pr.01   | 0.0667   | 0.6      | 0.6333   | 0.6333     | 0.4333   |
| Pr.02   | 0.9667   | 1        | 1        | 1          | 1        |
| Pr.03   | 1        | 1        | 1        | 1          | 1        |
| Pr.04   | 1        | 1        | 1        | 1          | 1        |
| Pr.05   | 0.7      | 0.9      | 1        | 1          | 1        |
| Pr.06   | 0.7      | 1        | 0.9667   | 1          | 1        |
| Pr.07   | 0.7667   | 1        | 1        | 0.9667     | 0.9667   |
| Pr.08   | 0        | 0.0667   | 0.3333   | 0.1667     | 0.0333   |
| Pr.09   | 0.0667   | 0.2333   | 0.2333   | 0.2        | 0.1333   |
| Pr.10   | 0.2667   | 0        | 0        | 0.0667     | 0        |
| Pr.11   | 0.2333   | 0.0667   | 0.3      | 0.2        | 0.0333   |
| Pr.12   | 0.1      | 0.4      | 0.4333   | 0.3333     | 0.2      |
| Pr.13   | 0.5667   | 1        | 1        | 0.9667     | 1        |
| Pr.14   | 0        | 0.0333   | 0        | 0.0667     | 0.1      |
| Pr.15   | 0.3      | 0.9333   | 0.9667   | 1          | 1        |
| Pr.16   | 0        | 0.6      | 0.5      | 0.9333     | 0.5333   |
| Pr.17   | 0.5667   | 0.9      | 1        | 0.9667     | 1        |
| Pr.18   | 1        | 1        | 1        | 1          | 1        |
| Pr.19   | 0.5333   | 0.9      | 0.9      | 0.6667     | 1        |
| Pr.20   | 0.6334   | 0.9667   | 0.8667   | 0.9333     | 1        |
| Pr.21   | 0.6667   | 0.4333   | 0.6667   | 1          | 0.4667   |
| Pr.22   | 0.6      | 1        | 1        | 1          | 1        |
| Pr.23   | 0.0667   | 0.2      | 0.3667   | 0.2333     | 0.2      |
| Pr.24   | 0.6666   | 0        | 0        | 0          | 0        |
| Pr.25   | 0.6333   | 1        | 0.8333   | 1          | 1        |
| Pr.26   | 0.3333   | 1        | 0.8      | 0.6667     | 1        |
| Pr.27   | 0.3667   | 0.4      | 0.4667   | 0.8333     | 1        |
| Pr.28   | 0.3333   | 0.1333   | 0.1      | 0.3333     | 0.1667   |
| Pr.29   | 1        | 1        | 1        | 1          | 1        |
| Pr.30   | 0.9333   | 1        | 1        | 1          | 1        |
| Pr.31   | 0.3667   | 1        | 0.9333   | 1          | 1        |
| Pr.32   | 0.1333   | 0.1333   | 0.1      | 0.1667     | 0.0333   |
| Average | 0.486463 | 0.653122 | 0.669894 | 0.69791875 | 0.665622 |
The accuracy rate results for 20 particles are presented in Table 6 and Figure 3. The classic PSO obtained a rate of 53.33%. The rates of DWPSO and TVAC were very close, at 66.77% and 66.56%, respectively. FSVAC, one of the variations proposed in this research, achieved an accuracy rate equal to 66%. This new method showed positive results in terms of its validity and effectiveness. Last but not least, TrigAC-PSO demonstrated the best performance compared to all other variations, achieving 74.3%. Running the algorithm with 20 particles, TrigAC-PSO found the optimum in 31 out of 32 test instances, reaching 96.88%. Moreover, in 20 out of 32 numerical examples, this variation managed to reach the optimum in all 30 runs, a success rate of 62.5%. The accuracy of this method approaches 75%; hence, this variation is established, compared to the other variations, as the best option for the solution of the TP.
Table 6. Accuracy of PSO, DWPSO, TVAC, TrigAC-PSO and FSVAC for 20 particles.

|         | PSO      | DWPSO    | TVAC     | TrigAC-PSO | FSVAC    |
| Pr.01   | 0.1      | 0.6667   | 0.7333   | 0.7        | 0.6      |
| Pr.02   | 1        | 1        | 1        | 1          | 1        |
| Pr.03   | 1        | 1        | 1        | 1          | 1        |
| Pr.04   | 1        | 1        | 1        | 1          | 1        |
| Pr.05   | 0.7333   | 1        | 1        | 1          | 1        |
| Pr.06   | 0.5667   | 1        | 1        | 0.9667     | 1        |
| Pr.07   | 0.9      | 0.9667   | 0.9667   | 1          | 1        |
| Pr.08   | 0.0667   | 0.2333   | 0.3      | 0.0667     | 0.1      |
| Pr.09   | 0.0333   | 0.4      | 0.2      | 0.3333     | 0.4      |
| Pr.10   | 0.2      | 0        | 0        | 0.1333     | 0.0667   |
| Pr.11   | 0.3333   | 0.3      | 0.2      | 0.4        | 0.1      |
| Pr.12   | 0.2333   | 0.5      | 0.4667   | 0.5333     | 0.7      |
| Pr.13   | 0.8333   | 1        | 1        | 1          | 1        |
| Pr.14   | 0        | 0.0333   | 0.0333   | 0.3        | 0.1      |
| Pr.15   | 0.6667   | 1        | 0.9667   | 1          | 1        |
| Pr.16   | 0.0333   | 0.6667   | 0.7      | 1          | 0.6      |
| Pr.17   | 0.5333   | 0.9      | 1        | 1          | 1        |
| Pr.18   | 1        | 1        | 1        | 1          | 1        |
| Pr.19   | 0.5333   | 0.6333   | 0.5      | 0.6333     | 1        |
| Pr.20   | 1        | 1        | 0.9333   | 0.9333     | 0.9667   |
| Pr.21   | 0.7333   | 0.4      | 0.9      | 1          | 0.3      |
| Pr.22   | 0.5333   | 1        | 0.9      | 1          | 0.3      |
| Pr.23   | 0.1      | 0.3333   | 0.3333   | 0.7333     | 0.4667   |
| Pr.24   | 0.0667   | 0        | 0        | 0          | 0        |
| Pr.25   | 0.8333   | 1        | 1        | 1          | 0.9667   |
| Pr.26   | 0.6667   | 1        | 1        | 1          | 1        |
| Pr.27   | 0.4667   | 0.1333   | 0        | 0.6        | 0.2667   |
| Pr.28   | 0.4667   | 0.1      | 0.1      | 0.2        | 0.1      |
| Pr.29   | 1        | 1        | 1        | 1          | 1        |
| Pr.30   | 0.8667   | 1        | 1        | 1          | 1        |
| Pr.31   | 0.3667   | 1        | 1        | 1          | 1        |
| Pr.32   | 0.2      | 0.1      | 0.0667   | 0.2333     | 0.0667   |
| Average | 0.533331 | 0.667706 | 0.665625 | 0.7427031  | 0.659381 |
In summary, the proposed method FSVAC-PSO, although it did not demonstrate the highest average accuracy rate, was very accurate in calculating the optimal solution in cases where the aforementioned variations were unable to approach it. In more detail, this research experimented with population sizes of 10, 15 and 20 particles over 32 well-known test instances from the respective literature. For each problem, as already mentioned, 30 independent experimental runs were conducted. In the case of 10 particles, the classic PSO found the optimal solution in all 30 runs in only 3 out of 32 test instances (9.4%); DWPSO did so in 10 out of 32 (31.25%); TVAC and TrigAC-PSO managed it in 9 out of 32 (28.13%); finally, FSVAC was shown to be the best PSO variation in this respect, finding the optimal solution in all 30 runs in 11 out of 32 test instances (34.4%).
In the case of 15 particles, FSVAC again showed the best performance, finding the optimal solution in all 30 runs in 18 out of 32 test instances (56.25%); the classic PSO achieved this in 4 out of 32 test instances (12.5%), and TVAC in 11 out of 32; last but not least, both DWPSO and TrigAC-PSO found the optimal value in all 30 runs in 13 out of 32 test instances (40.63%).
In the case of 20 particles, the variations TrigAC-PSO and FSVAC are still more accurate than the other PSO variations since they succeeded in finding the optimal solution in 18 out of 32 and in 17 out of 32 test instances in all 30 runs, respectively. The other PSO variations attained relatively lower success rates in finding the optimal solution in all of their runs.
In the following table (Table 7), the most important statistical measures for the case of 20 particles and 30 independent runs are presented for all PSO variations. These experimental results demonstrate the very good performance and stability of the proposed PSO variations in solving the TP. As presented, in all cases, the mean value is very close to the best one, showing that all these variations are not only efficient but also quite stable. The value of the Coefficient of Variation (CV), a basic measure of the stability of stochastic algorithms, is quite small for all PSO variations; more precisely, the mean CV value for each PSO variation is as follows: classic PSO, 2.12%; DWPSO, 1.32%; TVAC, 0.87%; TrigAC-PSO, 0.66%; and FSVAC, 1.26%. These values show that TrigAC-PSO, which is one of the new PSO variations presented in this work, is the most stable one.
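The CV reported in the statistical tables is the sample standard deviation expressed as a percentage of the mean of the run costs. A minimal sketch (the function name is ours):

```python
import statistics

def coefficient_of_variation(costs):
    """CV (%) = sample standard deviation / mean, as the cv% rows of Table 7."""
    return 100.0 * statistics.stdev(costs) / statistics.mean(costs)
```

A small CV means the costs found across independent runs cluster tightly around their mean, which is why it serves here as a stability measure for the stochastic PSO variants.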
The above results urged us to continue the research with an even greater number of particles, in order to study the behavior of the new variations when more candidate solutions are explored simultaneously.
More specifically, the aforementioned variations were also tested with 40 and 50 particles. In this case, 10 independent runs were carried out for each test instance instead of 30, which reduces the chances of finding the optimal solution compared to the previous experiments. Nevertheless, selecting more particles revealed significant results.
The results showed, once again, the consistent superiority of the proposed variations. Table 8 and Figure 4 show the accuracy achieved by each variation for 40 particles. These results provide further support for the hypothesis that TrigAC-PSO and FSVAC are more accurate than the other PSO variations, since they attained accuracy rates of 88.31% and 77.5%, respectively; the DWPSO method follows with 75.94%, and TVAC with 74.38%; last is the classic PSO with 51.56%, roughly 13 percentage points above its 10-particle accuracy rate and close to its performance for 15 and 20 particles.
In the following table (Table 9), the most important statistical measures in the case of 40 particles for 10 independent runs are presented for all PSO variations. According to the particularly low values of the Coefficient of Variation (CV), we can infer that the PSO variations are extremely stable; more precisely, the mean CV value for each PSO variation is as follows: classic PSO, 2.14%; DWPSO, 0.93%; TVAC, 0.86%; TrigAC-PSO, 0.47%; and FSVAC, 0.81%. These values show that TrigAC-PSO, which is one of the new PSO variations presented in this work, is once again the most stable method.
The following table (Table 10) and Figure 5 present the accuracy for 50 particles. The accuracy for each PSO variation is as follows: classic PSO, 52.5%; DWPSO, 74.3%; TVAC, 76.56%; TrigAC-PSO, 86.88%; and FSVAC, 82.19%. The two new variations rank at the highest levels. These are particularly promising results, demonstrating that an increase in the number of particles leads to an increase in the PSO variations’ accuracy, especially in the case of TrigAC-PSO and FSVAC. The results for 50 particles are equal to or better than those obtained with fewer particles. Overall, TrigAC-PSO obtained the most robust results.
The results of Table 11 lead to similar conclusions. In order to examine the stability for 50 particles, it is worth comparing the CV values of the proposed variations with those of the traditional ones. TrigAC-PSO achieved the best result, with a CV value equal to 0.4%, followed by FSVAC with 0.59%. The values for the other variations were as follows: classic PSO, 2.19%; DWPSO, 0.77%; and TVAC, 0.83%.
The overall results lead to two conclusions of decisive importance: first, the PSO algorithm and its variations solve the TP with high accuracy and efficiency; second, TrigAC-PSO is, beyond any doubt, the leading option for solving the TP in terms of both stability and solution quality.

7. Conclusions

As technology develops, improving the distribution of products at the lowest possible cost becomes a high priority. In this work, the PSO algorithm was applied successfully to solve the TP. Furthermore, two new variations were introduced and compared with already-known variations. These variations produced exceptional results and demonstrated their superiority over the existing variations and the well-known exact methods in the literature. The proposed PSO variations have been tested on a variety of test instances with different combined values of the inertia weight as well as of the social and personal acceleration parameters. The results clearly show that solution quality is inseparably linked to the selection of proper values for the algorithm’s control parameters. In order to assess the effectiveness and stability of the proposed variations, we compared their results with those of other PSO variations on the same instances. Remarkably, the accuracy of one of our variations rose to 88%, establishing it as the best option, compared to all other variations, for the solution of the TP.
It can easily be observed that this PSO variation is simple compared to other variations with complex structures. It was a challenge to achieve better results by creating and running simple computational algorithms, showing that keeping a balance between human insight and artificial intelligence is a key to the success of computational intelligence.
A more comprehensive analysis may be needed in order to examine the TP to a greater extent. Moreover, the proposed PSO variations could be applied to more complex networks, such as the Sioux Falls network [27], in order to demonstrate the algorithm’s good performance independently of the network’s size. In addition, other realistic constraints could be introduced in order to find the optimal solution of the TP with PSO algorithm variations, not only for balanced instances but also for more realistic unbalanced instances. Finally, combining the proposed PSO variations with other meta-heuristic methods to solve the TP will be an interesting challenge.

Author Contributions

Conceptualization, C.A. and G.N.B.; methodology, C.A. and G.N.B.; software, C.A.; validation, C.A. and G.N.B.; formal analysis, C.A. and G.N.B.; investigation, C.A. and G.N.B.; resources, C.A.; data curation, C.A.; writing—original draft preparation, C.A.; writing—review and editing, C.A. and G.N.B.; visualization, C.A.; supervision, G.N.B.; project administration, G.N.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data and the programming code used in this paper can be sent, upon request, to the interested reader. Please contact: [email protected].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Karagul, K.; Sahin, Y. A novel approximation method to obtain initial basic feasible solution of transportation problem. J. King Saud Univ. Eng. Sci. 2019, 32, 211–218.
  2. Deshpande, V.A. An optimal method for obtaining initial basic feasible solution of the transportation problem. In Proceedings of the National Conference on Emerging Trends in Mechanical Engineering (ETME-2009), Patel College of Engineering & Technology (GCET), Vallabh Vidyanagar, India, 5–6 March 2010; Volume 20, p. 21.
  3. Taha, H.A. Operations Research: An Introduction, 8th ed.; Pearson Prentice Hall: Hoboken, NJ, USA, 2007.
  4. Mostafa, R.R.; El-Attar, N.E.; Sabbeh, S.F.; Vidyarthi, A.; Hashim, F.A. ST-AL: A hybridized search based metaheuristic computational algorithm towards optimization of high dimensional industrial datasets. Soft Comput. 2022, 27, 13553–13581.
  5. Gen, M.; Altiparmak, F.; Lin, L. A genetic algorithm for two-stage transportation problem using priority-based encoding. OR Spectr. 2006, 28, 337–354.
  6. Swiatnicki, Z. Application of ant colony optimization algorithms for transportation problems using the example of the travelling salesman problem. In Proceedings of the 2015 4th International Conference on Advanced Logistics and Transport (ICALT), Valenciennes, France, 20 May 2015.
  7. Fakhrzadi, M.; Goodarziani, F.; Golmohammadi, A.M. Addressing a fixed charge transportation problem with multiroute and different capacities by novel hybrid meta-heuristics. J. Ind. Syst. Eng. 2019, 12, 167–184.
  8. Eberhart, R.; Kennedy, J. A New Optimizer Using Particle Swarm Theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4 October 1995.
  9. Salehizadeh, S.M.A.; Yadmellat, P.; Menhaj, M.B. Local Optima Avoidable Particle Swarm Optimization. In Proceedings of the IEEE Swarm Intelligence Symposium, Nashville, TN, USA, 15 May 2009.
  10. Lin, S.-W.; Ying, K.-C.; Chen, S.-C.; Lee, Z.-J. Particle swarm optimization for parameter determination and feature selection of support vector machines. Expert Syst. Appl. 2008, 35, 1817–1824.
  11. Ratnaweera, A.; Halgamuge, S.K.; Watson, H.C. Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Trans. Evol. Comput. 2004, 8, 240–255.
  12. Hitchcock, F.L. The Distribution of a Product from Several Sources to Numerous Localities. J. Math. Phys. 1941, 20, 224–230.
  13. Koopmans, T. Optimum Utilization of the Transportation System. Econometrica 1949, 17, 136–146.
  14. Dantzig, G.B. Application of the simplex method to a transportation problem. In Activity Analysis of Production and Allocation; Koopmans, T.C., Ed.; John Wiley and Sons: New York, NY, USA, 1951; pp. 359–373.
  15. Rosendo, M.; Pozo, A. A hybrid particle swarm optimization algorithm for combinatorial optimization problems. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Barcelona, Spain, 18–23 July 2010; pp. 1–8.
  16. Wang, D.; Tan, D.; Liu, L. Particle swarm optimization algorithm: An overview. Soft Comput. 2017, 22, 387–408.
  17. Huang, H.; Zhifang, H. Particle swarm optimization algorithm for transportation problems. In Particle Swarm Optimization; Intech: Shanghai, China, 2009; pp. 275–290.
  18. Wang, J.; Wang, X.; Li, X.; Yi, J. A Hybrid Particle Swarm Optimization Algorithm with Dynamic Adjustment of Inertia Weight Based on a New Feature Selection Method to Optimize SVM Parameters. Entropy 2023, 25, 531.
  19. Jain, M.; Saihjpal, V.; Singh, N.; Singh, S.B. An Overview of Variants and Advancements of PSO Algorithm. Appl. Sci. 2022, 12, 8392.
  20. Shi, Y.; Eberhart, R. A modified particle swarm optimizer. In Proceedings of the IEEE International Conference on Evolutionary Computation, Anchorage, AK, USA, 4–9 May 1998; pp. 69–73.
  21. Sengupta, S.; Basak, S.; Peters, R.A., II. Particle Swarm Optimization: A Survey of Historical and Recent Developments with Hybridization Perspectives. Mach. Learn. Knowl. Extr. 2019, 1, 157–191.
  22. Amaliah, B.; Fatichah, C.; Suryani, E. A new heuristic method of finding the initial basic feasible solution to solve the transportation problem. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 2298–2307.
  23. Hosseini, E. Three new methods to find initial basic feasible solution of transportation problems. Appl. Math. Sci. 2017, 11, 1803–1814.
  24. Amaliah, B.; Fatichah, C.; Suryani, E. Total opportunity cost matrix—Minimal total: A new approach to determine initial basic feasible solution of a transportation problem. Egypt. Inform. J. 2019, 20, 131–141.
  25. Juman, Z.A.M.S.; Hoque, M.A. An efficient heuristic to obtain a better initial feasible solution to the transportation problem. Appl. Soft Comput. 2015, 34, 813–826.
  26. Amaliah, B.; Fatichah, C.; Suryani, E. A Supply Selection Method for better Feasible Solution of balanced transportation problem. Expert Syst. Appl. 2022, 203, 117399.
  27. Sun, D.; Chang, Y.; Zhang, L. An ant colony optimisation model for traffic counting location problem. Transport 2012, 165, 175–185.
Figure 1. The number of optimal solutions that every method achieved.
Figure 2. Average percentage deviation for each method.
Figure 3. Accuracy for 20 particles.
Figure 4. Accuracy for 40 particles.
Figure 5. Accuracy for 50 particles.
Table 1. Detail of 32 numerical examples of the TP.

| No | From Journal                     | Name  | Problem Size | Optimal Solution |
| 1  | Srinivasan and Thompson (1977)   | Pr.1  | 3·4 | 880  |
| 2  | Deshmukh (2012)                  | Pr.2  | 3·4 | 743  |
| 3  | Ramadan and Ramadan (2012)       | Pr.3  | 3·3 | 5600 |
| 4  | Schrenk et al. (2011)            | Pr.4  | 3·4 | 59   |
| 5  | Samuel (2012)                    | Pr.5  | 3·4 | 28   |
| 6  | Imam et al. (2009)               | Pr.6  | 3·4 | 435  |
| 7  | Adlakha and Kowalski (2009)      | Pr.7  | 4·5 | 390  |
| 8  | Kaur et al. (2018)               | Pr.8  | 3·5 | 1580 |
| 9  | G. Patel et al. (2017)           | Pr.9  | 4·4 | 49   |
| 10 | Ahmed et al. (2016b)             | Pr.10 | 4·4 | 410  |
| 11 | Ahmed et al. (2016b)             | Pr.11 | 3·4 | 2850 |
| 12 | Ahmed et al. (2016a)             | Pr.12 | 3·5 | 183  |
| 13 | Uddin and Khan (2016)            | Pr.13 | 3·4 | 799  |
| 14 | Uddin and Khan (2016)            | Pr.14 | 3·5 | 273  |
| 15 | Das et al. (2014a)               | Pr.15 | 3·4 | 1160 |
| 16 | Khan et al. (2015a)              | Pr.16 | 3·4 | 200  |
| 17 | Azad and Hossain (2017)          | Pr.17 | 3·4 | 240  |
| 18 | Morade (2017)                    | Pr.18 | 3·3 | 820  |
| 19 | Jude (2016)                      | Pr.19 | 3·4 | 190  |
| 20 | Jude (2016)                      | Pr.20 | 4·4 | 83   |
| 21 | Hosseini (2017)                  | Pr.21 | 3·4 | 3460 |
| 22 | Amaliah et al. (2019)            | Pr.22 | 3·4 | 910  |
| 23 | Amaliah et al. (2019)            | Pr.23 | 4·4 | 1670 |
| 24 | Amaliah et al. (2019)            | Pr.24 | 4·4 | 2280 |
| 25 | Amaliah et al. (2019)            | Pr.25 | 3·4 | 2460 |
| 26 | Amaliah et al. (2019)            | Pr.26 | 3·3 | 291  |
| 27 | Juman and Hoque (2015)           | Pr.27 | 3·3 | 4525 |
| 28 | Juman and Hoque (2015)           | Pr.28 | 3·4 | 920  |
| 29 | Juman and Hoque (2015)           | Pr.29 | 3·4 | 809  |
| 30 | Juman and Hoque (2015)           | Pr.30 | 3·4 | 417  |
| 31 | Juman and Hoque (2015)           | Pr.31 | 4·5 | 3458 |
| 32 | Juman and Hoque (2015)           | Pr.32 | 4·6 | 109  |
Table 2. The optimal solution of each method for the 32 test instances.

| No. | Name  | VAM  | TDM1 | TOCM-MT | JHM  | BCE  | PSO  | DWPSO | TVAC | TrigAC-PSO | FSVAC | Optimal (Op) |
| 1   | Pr.1  | 955  | 880  | 880  | 880  | 880  | 880  | 880  | 880  | 880  | 880  | 880  |
| 2   | Pr.2  | 779  | 779  | 743  | 743  | 743  | 743  | 743  | 743  | 743  | 743  | 743  |
| 3   | Pr.3  | 5600 | 5600 | 5600 | 5600 | 5600 | 5600 | 5600 | 5600 | 5600 | 5600 | 5600 |
| 4   | Pr.4  | 59   | 59   | 61   | 59   | 59   | 59   | 59   | 59   | 59   | 59   | 59   |
| 5   | Pr.5  | 28   | 28   | 28   | 28   | 28   | 28   | 28   | 28   | 28   | 28   | 28   |
| 6   | Pr.6  | 475  | 475  | 435  | 460  | 435  | 435  | 435  | 435  | 435  | 435  | 435  |
| 7   | Pr.7  | 390  | 400  | 390  | 390  | 390  | 390  | 390  | 390  | 390  | 390  | 390  |
| 8   | Pr.8  | 1600 | 1595 | 1580 | 1580 | 1580 | 1580 | 1580 | 1580 | 1580 | 1580 | 1580 |
| 9   | Pr.9  | 49   | 53   | 53   | 49   | 49   | 49   | 49   | 49   | 49   | 49   | 49   |
| 10  | Pr.10 | 470  | 435  | 435  | 420  | 410  | 410  | 411  | 410  | 410  | 410  | 410  |
| 11  | Pr.11 | 2850 | 2850 | 2850 | 2850 | 2850 | 2850 | 2850 | 2850 | 2850 | 2850 | 2850 |
| 12  | Pr.12 | 187  | 186  | 187  | 183  | 187  | 183  | 183  | 183  | 183  | 183  | 183  |
| 13  | Pr.13 | 859  | 859  | 799  | 799  | 799  | 799  | 799  | 799  | 799  | 799  | 799  |
| 14  | Pr.14 | 273  | 273  | 273  | 273  | 273  | 290  | 273  | 273  | 273  | 273  | 273  |
| 15  | Pr.15 | 1220 | 1160 | 1160 | 1170 | 1170 | 1160 | 1160 | 1160 | 1160 | 1160 | 1160 |
| 16  | Pr.16 | 204  | 200  | 200  | 218  | 204  | 200  | 200  | 200  | 200  | 200  | 200  |
| 17  | Pr.17 | 248  | 248  | 240  | 240  | 240  | 240  | 240  | 240  | 240  | 240  | 240  |
| 18  | Pr.18 | 820  | 820  | 820  | 820  | 820  | 820  | 820  | 820  | 820  | 820  | 820  |
| 19  | Pr.19 | 190  | 190  | 190  | 190  | 192  | 190  | 190  | 190  | 190  | 190  | 190  |
| 20  | Pr.20 | 92   | 83   | 83   | 83   | 83   | 83   | 83   | 83   | 83   | 83   | 83   |
| 21  | Pr.21 | 3520 | 3570 | 3460 | 3460 | 3460 | 3460 | 3460 | 3460 | 3460 | 3460 | 3460 |
| 22  | Pr.22 | 990  | 990  | 910  | 960  | 910  | 910  | 910  | 910  | 910  | 910  | 910  |
| 23  | Pr.23 | 1680 | 1670 | 1670 | 1690 | 1670 | 1670 | 1670 | 1670 | 1670 | 1670 | 1670 |
| 24  | Pr.24 | 2400 | 2400 | 2400 | 2340 | 2280 | 2280 | 2286 | 2281 | 2284 | 2288 | 2280 |
| 25  | Pr.25 | 2980 | 2980 | 2500 | 2500 | 2460 | 2460 | 2460 | 2460 | 2460 | 2460 | 2460 |
| 26  | Pr.26 | 327  | 291  | 291  | 327  | 291  | 291  | 291  | 291  | 291  | 291  | 291  |
| 27  | Pr.27 | 5125 | 4550 | 5225 | 4525 | 4525 | 4525 | 4525 | 4525 | 4525 | 4525 | 4525 |
| 28  | Pr.28 | 960  | 960  | 930  | 920  | 920  | 920  | 920  | 920  | 920  | 920  | 920  |
| 29  | Pr.29 | 859  | 849  | 809  | 809  | 809  | 809  | 809  | 809  | 809  | 809  | 809  |
| 30  | Pr.30 | 476  | 465  | 417  | 417  | 417  | 417  | 417  | 417  | 417  | 417  | 417  |
| 31  | Pr.31 | 3778 | 3572 | 3513 | 3487 | 3487 | 3458 | 3458 | 3458 | 3458 | 3458 | 3458 |
| 32  | Pr.32 | 112  | 117  | 109  | 112  | 109  | 109  | 109  | 109  | 109  | 109  | 109  |
Table 3. The deviation (dev) of the methods for 32 numerical examples.

|         | VAM      | TDM1     | TOCM-MT  | JHM      | BCE      | PSO      | DWPSO    | TVAC     | TrigAC-PSO | FSVAC    |
| Pr.01   | 0.085227 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Pr.02   | 0.048452 | 0.048452 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Pr.03   | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Pr.04   | 0 | 0 | 0.033898 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Pr.05   | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Pr.06   | 0.091954 | 0.091954 | 0 | 0.057471 | 0 | 0 | 0 | 0 | 0 | 0 |
| Pr.07   | 0 | 0.025641 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Pr.08   | 0.012658 | 0.009494 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Pr.09   | 0 | 0.081633 | 0.081633 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Pr.10   | 0.146341 | 0.060976 | 0.060976 | 0.02439 | 0 | 0 | 0.002439 | 0 | 0 | 0 |
| Pr.11   | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Pr.12   | 0.021858 | 0.016393 | 0.021858 | 0 | 0.021858 | 0 | 0 | 0 | 0 | 0 |
| Pr.13   | 0.075094 | 0.075094 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Pr.14   | 0 | 0 | 0 | 0 | 0 | 0.062271 | 0 | 0 | 0 | 0 |
| Pr.15   | 0.051724 | 0 | 0 | 0.008621 | 0.008621 | 0 | 0 | 0 | 0 | 0 |
| Pr.16   | 0.02 | 0 | 0 | 0.09 | 0.02 | 0 | 0 | 0 | 0 | 0 |
| Pr.17   | 0.0333 | 0.0333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Pr.18   | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Pr.19   | 0 | 0 | 0 | 0 | 0.010526 | 0 | 0 | 0 | 0 | 0 |
| Pr.20   | 0.108434 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Pr.21   | 0.017341 | 0.031792 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Pr.22   | 0.087912 | 0.087912 | 0 | 0.054945 | 0 | 0 | 0 | 0 | 0 | 0 |
| Pr.23   | 0.005988 | 0 | 0 | 0.011976 | 0 | 0 | 0 | 0 | 0 | 0 |
| Pr.24   | 0.052632 | 0.052632 | 0.052632 | 0.026316 | 0 | 0 | 0.002632 | 0.000439 | 0.001754 | 0.003509 |
| Pr.25   | 0.211382 | 0.211382 | 0.01626 | 0.01626 | 0 | 0 | 0 | 0 | 0 | 0 |
| Pr.26   | 0.123711 | 0 | 0 | 0.123711 | 0 | 0 | 0 | 0 | 0 | 0 |
| Pr.27   | 0.132596 | 0.005525 | 0.154696 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Pr.28   | 0.043478 | 0.043478 | 0.01087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Pr.29   | 0.061805 | 0.049444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Pr.30   | 0.141487 | 0.115108 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Pr.31   | 0.092539 | 0.032967 | 0.015905 | 0.008386 | 0.008386 | 0 | 0 | 0 | 0 | 0 |
| Pr.32   | 0.027523 | 0.073394 | 0 | 0.027523 | 0 | 0 | 0 | 0 | 0 | 0 |
| Average | 0.05292 | 0.03583 | 0.014023 | 0.01405 | 0.002168 | 0.001946 | 0.000158 | 0.000013 | 0.000054 | 0.00011 |
Table 7. Statistical measures for 20 particles.

|       |        | PSO | DWPSO | TVAC | TrigAC-PSO | FSVAC |
| Pr.01 | Mean   | 894.7666 | 884.3 | 883.6 | 882.86667 | 884.4 |
|       | St.Dev | 22.51847 | 10.61278 | 11.2544 | 7.41263 | 8.76356 |
|       | Min    | 880 | 880 | 880 | 880 | 880 |
|       | Max    | 965 | 928 | 940 | 910 | 917 |
|       | cv%    | 2.51668658 | 1.200133 | 1.273701 | 0.8396096 | 0.990905 |
| Pr.02 | Mean   | 743 | 743 | 743 | 743 | 743 |
|       | St.Dev | 0 | 0 | 0 | 0 | 0 |
|       | Min    | 743 | 743 | 743 | 743 | 743 |
|       | Max    | 743 | 743 | 743 | 743 | 743 |
|       | cv%    | 0 | 0 | 0 | 0 | 0 |
| Pr.03 | Mean   | 5600 | 5600 | 5600 | 5600 | 5600 |
|       | St.Dev | 0 | 0 | 0 | 0 | 0 |
|       | Min    | 5600 | 5600 | 5600 | 5600 | 5600 |
|       | Max    | 5600 | 5600 | 5600 | 5600 | 5600 |
|       | cv%    | 0 | 0 | 0 | 0 | 0 |
| Pr.04 | Mean   | 59 | 59 | 59 | 59 | 59 |
|       | St.Dev | 0 | 0 | 0 | 0 | 0 |
|       | Min    | 59 | 59 | 59 | 59 | 59 |
|       | Max    | 59 | 59 | 59 | 59 | 59 |
|       | cv%    | 0 | 0 | 0 | 0 | 0 |
| Pr.05 | Mean   | 28.266666 | 28 | 28 | 28 | 28 |
|       | St.Dev | 0.4497764 | 0 | 0 | 0 | 0 |
|       | Min    | 28 | 28 | 28 | 28 | 28 |
|       | Max    | 29 | 28 | 28 | 28 | 28 |
|       | cv%    | 1.591190254 | 0 | 0 | 0 | 0 |
| Pr.06 | Mean   | 441.1333333 | 435 | 435 | 435 | 435 |
|       | St.Dev | 7.619092775 | 0 | 0 | 0 | 0 |
|       | Min    | 435 | 435 | 435 | 435 | 435 |
|       | Max    | 463 | 435 | 435 | 435 | 435 |
|       | cv%    | 1.72716324 | 0 | 0 | 0 | 0 |
| Pr.07 | Mean   | 391.5 | 390.0333 | 390.4667 | 390 | 390 |
|       | St.Dev | 5.015493237 | 0.182574 | 2.55603 | 0 | 0 |
|       | Min    | 390 | 390 | 390 | 390 | 390 |
|       | Max    | 410 | 391 | 404 | 390 | 390 |
|       | cv%    | 1.281096612 | 3.290831 | 0.654611 | 0 | 0 |
| Pr.08 | Mean   | 1650.933333 | 1593.033 | 1598.7 | 1592.1333 | 1629.533 |
|       | St.Dev | 44.7667775 | 17.95873 | 23.31367 | 12.23824 | 40.14347 |
|       | Min    | 1580 | 1580 | 1580 | 1580 | 1580 |
|       | Max    | 1790 | 1642 | 1661 | 1623 | 1712 |
|       | cv%    | 2.711604194 | 1.127329 | 1.45829 | 0.7686698 | 2.463495 |
| Pr.09 | Mean   | 52.6 | 51.3 | 52.06667 | 51.6 | 51.16667 |
|       | St.Dev | 2.40114915 | 2.768667 | 1.79910 | 1.90462 | 1.89524 |
|       | Min    | 49 | 109 | 49 | 49 | 49 |
|       | Max    | 63 | 122 | 55 | 53 | 53 |
|       | cv%    | 4.564922339 | 5.397012 | 3.455389 | 3.6911285 | 3.704062 |
| Pr.10 | Mean   | 427.4333333 | 427.7667 | 428.1 | 428.3 | 425.8 |
|       | St.Dev | 8.935336025 | 4.38401 | 5.16853 | 6.25410 | 5.71386 |
|       | Min    | 410 | 411 | 411 | 410 | 410 |
|       | Max    | 434 | 431 | 431 | 432 | 430 |
|       | cv%    | 2.09046308 | 1.02486 | 1.20732 | 1.4602153 | 1.341913 |
| Pr.11 | Mean   | 2913.833333 | 2934.8 | 2864.633 | 2857.1 | 3036.767 |
|       | St.Dev | 186.8538214 | 220.8248 | 226.892 | 11.66885 | 380.4261 |
|       | Min    | 2850 | 2850 | 2850 | 2850 | 2850 |
|       | Max    | 3850 | 3945 | 2977 | 2891 | 4554 |
|       | cv%    | 6.412646162 | 7.524358 | 1.117014 | 0.4084159 | 12.52734 |
| Pr.12 | Mean   | 190.5666667 | 184.4333 | 186.0333 | 184.33333 | 183.9 |
|       | St.Dev | 7.623391106 | 1.50134 | 4.60496 | 1.49327 | 1.39827 |
|       | Min    | 183 | 183 | 183 | 183 | 183 |
|       | Max    | 206 | 186 | 200 | 186 | 186 |
|       | cv%    | 4.00038015 | 0.814029 | 2.475347 | 0.8100976 | 0.760345 |
| Pr.13 | Mean   | 805.3666667 | 799 | 799 | 799 | 799 |
|       | St.Dev | 16.77535823 | 0 | 0 | 0 | 0 |
|       | Min    | 799 | 799 | 799 | 799 | 799 |
|       | Max    | 878 | 799 | 799 | 799 | 799 |
|       | cv%    | 2.082946678 | 0 | 0 | 0 | 0 |
| Pr.14 | Mean   | 319.9333333 | 302.2 | 290.4 | 290.1 | 292.2667 |
|       | St.Dev | 21.78251962 | 17.70525 | 6.69328 | 16.159442 | 8.10250 |
|       | Min    | 292 | 273 | 273 | 273 | 273 |
|       | Max    | 378 | 335 | 317 | 327 | 306 |
|       | cv%    | 6.808455809 | 5.858785 | 2.304849 | 5.5703008 | 2.772298 |
| Pr.15 | Mean   | 1186.1 | 1160 | 1160.067 | 1160 | 1160 |
|       | St.Dev | 73.12122669 | 0 | 0.36514 | 0 | 0 |
|       | Min    | 1160 | 1160 | 1160 | 1160 | 1160 |
|       | Max    | 1401 | 1160 | 1162 | 1160 | 1160 |
|       | cv%    | 6.164845012 | 0 | 0.031476 | 0 | 0 |
| Pr.16 | Mean   | 217.1333333 | 202.9667 | 202.7333 | 200 | 203.2333 |
|       | St.Dev | 7.946950546 | 5.979755 | 6.53338 | 0 | 6.76034 |
|       | Min    | 200 | 200 | 200 | 200 | 109 |
|       | Max    | 237 | 218 | 220 | 200 | 119 |
|       | cv%    | 3.659940381 | 2.946176 | 3.222647 | 0 | 3.326397 |
| Pr.17 | Mean   | 244.6333333 | 241.6667 | 240 | 240 | 240 |
|       | St.Dev | 6.025711195 | 5.195046 | 0 | 0 | 0 |
|       | Min    | 240 | 240 | 240 | 240 | 240 |
|       | Max    | 256 | 259 | 240 | 240 | 240 |
|       | cv%    | 2.463160319 | 2.149674 | 0 | 0 | 0 |
| Pr.18 | Mean   | 820 | 820 | 820 | 820 | 820 |
|       | St.Dev | 0 | 0 | 0 | 0 | 0 |
|       | Min    | 820 | 820 | 820 | 820 | 820 |
|       | Max    | 820 | 820 | 820 | 820 | 820 |
|       | cv%    | 0 | 0 | 0 | 0 | 0 |
| Pr.19 | Mean   | 190.8 | 190.7 | 190.9333 | 190.56667 | 190 |
|       | St.Dev | 0.924755326 | 0.952311 | 0.98026 | 0.81720 | 0 |
|       | Min    | 190 | 190 | 190 | 190 | 190 |
|       | Max    | 192 | 192 | 192 | 192 | 190 |
|       | cv%    | 0.484672603 | 0.499377 | 0.513407 | 0.4288264 | 0 |
| Pr.20 | Mean   | 83 | 83 | 83.3 | 83.2 | 83.1 |
|       | St.Dev | 0 | 0 | 0.91538 | 0.761124 | 0.54772 |
|       | Min    | 83 | 83 | 83 | 83 | 83 |
|       | Max    | 83 | 83 | 86 | 86 | 86 |
|       | cv%    | 0 | 0 | 1.098902 | 0.914813 | 0.659113 |
| Pr.21 | Mean   | 3484.5 | 3536.467 | 3468.2 | 3460 | 3536.1 |
|       | St.Dev | 58.99371679 | 71.42864 | 34.7804 | 0 | 63.5839 |
|       | Min    | 3460 | 3460 | 3460 | 3460 | 3460 |
|       | Max    | 3745 | 3646 | 3645 | 3460 | 3644 |
|       | cv%    | 1.693032481 | 2.019774 | 1.00284 | 0 | 1.798138 |
| Pr.22 | Mean   | 928.3 | 910 | 913.6 | 910 | 910 |
|       | St.Dev | 28.61534433 | 0 | 14.8686 | 0 | 0 |
|       | Min    | 910 | 910 | 910 | 910 | 910 |
|       | Max    | 990 | 910 | 990 | 910 | 910 |
|       | cv%    | 3.08255352 | 0 | 1.627476 | 0 | 0 |
| Pr.23 | Mean   | 1679.1 | 1671.133 | 1670.733 | 1671 | 1671.367 |
|       | St.Dev | 14.23291474 | 1.136642 | 0.58329 | 2.34888 | 2.02541 |
|       | Min    | 1670 | 1670 | 1670 | 1670 | 1670 |
|       | Max    | 1724 | 1675 | 1672 | 1679 | 1678 |
|       | cv%    | 0.847651405 | 0.068016 | 0.034912 | 0.1405674 | 0.121183 |
| Pr.24 | Mean   | 2403.966667 | 2366.4 | 2372.8 | 2361.5667 | 2333.833 |
|       | St.Dev | 44.98005944 | 29.09627 | 31.0332 | 18.34287 | 117.1320 |
|       | Min    | 2280 | 2317 | 2320 | 2322 | 2292 |
|       | Max    | 2495 | 2430 | 2424 | 2390 | 2420 |
|       | cv%    | 1.871076669 | 1.229559 | 1.307874 | 0.7767247 | 1.47626 |
| Pr.25 | Mean   | 2468.766667 | 2460 | 2460 | 2460 | 2460 |
|       | St.Dev | 24.21695521 | 0 | 0 | 0 | 0 |
|       | Min    | 2460 | 2460 | 2460 | 2460 | 2460 |
|       | Max    | 2563 | 2460 | 2460 | 2460 | 2460 |
|       | cv%    | 0.980933335 | 0 | 0 | 0 | 0 |
| Pr.26 | Mean   | 292.6 | 291 | 291 | 291 | 291 |
|       | St.Dev | 2.485821865 | 0 | 0 | 0 | 0 |
|       | Min    | 291 | 291 | 291 | 291 | 291 |
|       | Max    | 299 | 291 | 291 | 291 | 291 |
|       | cv%    | 0.84956318 | 0 | 0 | 0 | 0 |
| Pr.27 | Mean   | 4574.233333 | 4639.967 | 4666.433 | 4535 | 4634.9 |
|       | St.Dev | 73.42821187 | 60.22915 | 29.96973 | 18.34910 | 67.4073 |
|       | Min    | 4525 | 4525 | 4529 | 4525 | 4525 |
|       | Max    | 4753 | 4675 | 4677 | 4585 | 4675 |
|       | cv%    | 1.605257243 | 1.298051 | 0.642241 | 0.4046109 | 1.454343 |
| Pr.28 | Mean   | 941.7 | 953.3667 | 953.0667 | 947.26667 | 947.9667 |
|       | St.Dev | 22.49314072 | 19.67404 | 13.35957 | 17.26454 | 13.60396 |
|       | Min    | 920 | 920 | 920 | 920 | 920 |
|       | Max    | 974 | 992 | 960 | 960 | 968 |
|       | cv%    | 2.38856756 | 2.063638 | 1.401746 | 1.8225639 | 1.435068 |
| Pr.29 | Mean   | 809 | 809 | 809 | 809 | 809 |
|       | St.Dev | 0 | 0 | 0 | 0 | 0 |
|       | Min    | 809 | 809 | 809 | 809 | 809 |
|       | Max    | 809 | 809 | 809 | 809 | 809 |
|       | cv%    | 0 | 0 | 0 | 0 | 0 |
| Pr.30 | Mean   | 419.5333333 | 417 | 417 | 417 | 417 |
|       | St.Dev | 7.103827688 | 0 | 0 | 0 | 0 |
|       | Min    | 417 | 417 | 417 | 417 | 417 |
|       | Max    | 445 | 417 | 417 | 417 | 417 |
|       | cv%    | 1.693268955 | 0 | 0 | 0 | 0 |
| Pr.31 | Mean   | 3480.066667 | 3458 | 3458 | 3458 | 3458 |
|       | St.Dev | 33.27620392 | 0 | 0 | 0 | 0 |
|       | Min    | 3458 | 3458 | 3458 | 3458 | 3458 |
|       | Max    | 3587 | 3458 | 3458 | 3458 | 3458 |
|       | cv%    | 0.956194438 | 0 | 0 | 0 | 0 |
| Pr.32 | Mean   | 114.3 | 116.9333 | 114.4333 | 114.7 | 118.4333 |
|       | St.Dev | 3.761419395 | 4.532894 | 3.549485 | 3.77057 | 4.44648 |
|       | Min    | 109 | 109 | 109 | 109 | 109 |
Max122125127119125
cv%3.2908306163.8764773.1017943.28733723.754424
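Each block of statistics rows above reports Mean, St.Dev (standard deviation), Min, Max and cv% over the independent runs of each algorithm, where cv% is the coefficient of variation, 100 × St.Dev / Mean. A minimal sketch of how these summary measures can be computed (the run costs below are hypothetical; only the final cross-check uses a pair of values taken from the table):

```python
import statistics

def summarize(costs):
    """Summary statistics as used in the results tables: Mean, St.Dev, Min, Max, cv%."""
    mean = statistics.mean(costs)
    stdev = statistics.stdev(costs)  # sample standard deviation over the runs
    cv = 100 * stdev / mean if mean else 0.0  # coefficient of variation, in percent
    return {"Mean": mean, "St.Dev": stdev, "Min": min(costs), "Max": max(costs), "cv%": cv}

# Hypothetical final costs of five runs, for illustration only
stats = summarize([435, 435, 437, 441, 435])
print(stats["Min"], round(stats["cv%"], 3))

# Cross-check against a tabulated pair (Pr.08, PSO column):
# cv% = 100 * 44.7667775 / 1650.933333 ≈ 2.7116
print(round(100 * 44.7667775 / 1650.933333, 4))
```

A cv% of 0 (as in Pr.18 or Pr.29) means every run ended at exactly the same cost.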
Table 8. Accuracy of PSO, DWPSO, TVAC, TrigAC-PSO and FSVAC for 40 particles.
| Problem | PSO | DWPSO | TVAC | TrigAC-PSO | FSVAC |
| --- | --- | --- | --- | --- | --- |
| Pr.01 | 0.2 | 0.8 | 0.8 | 1 | 0.8 |
| Pr.02 | 1 | 1 | 1 | 1 | 1 |
| Pr.03 | 1 | 1 | 1 | 1 | 1 |
| Pr.04 | 1 | 1 | 1 | 1 | 1 |
| Pr.05 | 1 | 1 | 1 | 1 | 1 |
| Pr.06 | 0.4 | 1 | 1 | 1 | 1 |
| Pr.07 | 0.8 | 1 | 1 | 1 | 1 |
| Pr.08 | 0.2 | 0.2 | 0.3 | 0.7 | 0.1 |
| Pr.09 | 0 | 0.5 | 0.3 | 0.6 | 0.3 |
| Pr.10 | 0.2 | 0.5 | 0.2 | 0.5 | 0.6 |
| Pr.11 | 0 | 0.5 | 0.4 | 0.8 | 0.4 |
| Pr.12 | 0.2 | 0.7 | 0.7 | 0.7 | 1 |
| Pr.13 | 0.7 | 1 | 1 | 1 | 1 |
| Pr.14 | 0 | 0.3 | 0.5 | 0.7 | 0.5 |
| Pr.15 | 0.8 | 1 | 1 | 1 | 1 |
| Pr.16 | 0.2 | 1 | 0.8 | 1 | 1 |
| Pr.17 | 0.3 | 1 | 1 | 1 | 1 |
| Pr.18 | 1 | 1 | 1 | 1 | 1 |
| Pr.19 | 0.2 | 0.9 | 0.6 | 0.9 | 1 |
| Pr.20 | 1 | 0.9 | 1 | 1 | 1 |
| Pr.21 | 0.6 | 0.5 | 0.6 | 0.7 | 0.4 |
| Pr.22 | 0.7 | 1 | 1 | 1 | 1 |
| Pr.23 | 0.1 | 0.7 | 0.9 | 1 | 0.8 |
| Pr.24 | 0 | 0 | 0 | 0 | 0 |
| Pr.25 | 0.7 | 1 | 1 | 1 | 1 |
| Pr.26 | 0.8 | 1 | 1 | 1 | 1 |
| Pr.27 | 0.8 | 0.5 | 0.2 | 0.9 | 0.6 |
| Pr.28 | 0 | 0.1 | 0.4 | 0.6 | 0.3 |
| Pr.29 | 1 | 1 | 1 | 1 | 1 |
| Pr.30 | 1 | 1 | 1 | 1 | 1 |
| Pr.31 | 0.5 | 1 | 1 | 1 | 1 |
| Pr.32 | 0.1 | 0.2 | 0.1 | 0.2 | 0 |
| Average | 0.515625 | 0.759375 | 0.74375 | 0.853125 | 0.775 |
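The accuracy scores above are the fraction of independent runs in which an algorithm's final cost equals the best-known cost of the problem (1 means every run reached it, 0 means none did), and the Average row is the mean of these fractions over all 32 problems. A small sketch of this bookkeeping, using hypothetical run results:

```python
def accuracy(run_costs, best_known):
    """Fraction of runs whose final cost equals the best-known value (minimization)."""
    hits = sum(1 for c in run_costs if c == best_known)
    return hits / len(run_costs)

# Hypothetical example: 7 of 10 runs reach the best-known cost 820
runs = [820] * 7 + [826, 831, 824]
print(accuracy(runs, 820))  # 0.7

# The per-column "Average" row is just the mean accuracy over the problems
scores = {"Pr.18": 1.0, "Pr.24": 0.0, "Pr.28": 0.4}  # hypothetical subset
print(sum(scores.values()) / len(scores))
```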
Table 9. Statistical measures for 40 particles.
| Problem | Measure | PSO | DWPSO | TVAC | TrigAC-PSO | FSVAC |
| --- | --- | --- | --- | --- | --- | --- |
| Pr.01 | Mean | 900.6 | 882.6 | 879.6 | 880 | 886.5 |
| Pr.01 | Var | 32.69455 | 7.229569 | 2.065591 | 0 | 13.94633 |
| Pr.01 | Min | 880 | 880 | 874 | 880 | 880 |
| Pr.01 | Max | 975 | 903 | 882 | 880 | 918 |
| Pr.01 | cv% | 3.630307 | 0.819122 | 0.234833 | 0 | 1.57319 |
| Pr.02 | Mean | 743 | 743 | 743 | 743 | 743 |
| Pr.02 | Var | 0 | 0 | 0 | 0 | 0 |
| Pr.02 | Min | 743 | 743 | 743 | 743 | 743 |
| Pr.02 | Max | 743 | 743 | 743 | 743 | 743 |
| Pr.02 | cv% | 0 | 0 | 0 | 0 | 0 |
| Pr.03 | Mean | 5600 | 5600 | 5600 | 5600 | 5600 |
| Pr.03 | Var | 0 | 0 | 0 | 0 | 0 |
| Pr.03 | Min | 5600 | 5600 | 5600 | 5600 | 5600 |
| Pr.03 | Max | 5600 | 5600 | 5600 | 5600 | 5600 |
| Pr.03 | cv% | 0 | 0 | 0 | 0 | 0 |
| Pr.04 | Mean | 59 | 59 | 59 | 59 | 59 |
| Pr.04 | Var | 0 | 0 | 0 | 0 | 0 |
| Pr.04 | Min | 59 | 59 | 59 | 59 | 59 |
| Pr.04 | Max | 59 | 59 | 59 | 59 | 59 |
| Pr.04 | cv% | 0 | 0 | 0 | 0 | 0 |
| Pr.05 | Mean | 28 | 28 | 28 | 28 | 28 |
| Pr.05 | Var | 0 | 0 | 0 | 0 | 0 |
| Pr.05 | Min | 28 | 28 | 28 | 28 | 28 |
| Pr.05 | Max | 28 | 28 | 28 | 28 | 28 |
| Pr.05 | cv% | 0 | 0 | 0 | 0 | 0 |
| Pr.06 | Mean | 443.2 | 435 | 435 | 435 | 435 |
| Pr.06 | Var | 11.51617 | 0 | 0 | 0 | 0 |
| Pr.06 | Min | 435 | 435 | 435 | 435 | 435 |
| Pr.06 | Max | 472 | 435 | 435 | 435 | 435 |
| Pr.06 | cv% | 2.598414 | 0 | 0 | 0 | 0 |
| Pr.07 | Mean | 393.9 | 390 | 390 | 390 | 390 |
| Pr.07 | Var | 8.2253 | 0 | 0 | 0 | 0 |
| Pr.07 | Min | 390 | 390 | 390 | 390 | 390 |
| Pr.07 | Max | 410 | 390 | 390 | 390 | 390 |
| Pr.07 | cv% | 2.08817 | 0 | 0 | 0 | 0 |
| Pr.08 | Mean | 1659.4 | 1595.4 | 1611.5 | 1586.3 | 1623.8 |
| Pr.08 | Var | 74.99363 | 23.41035 | 53.60193 | 18.53555 | 32.25179 |
| Pr.08 | Min | 1580 | 1580 | 1571 | 1580 | 1580 |
| Pr.08 | Max | 1802 | 1650 | 1717 | 1639 | 1671 |
| Pr.08 | cv% | 4.519322 | 1.467365 | 3.326213 | 1.168477 | 1.986192 |
| Pr.09 | Mean | 52.4 | 50.2 | 50.5 | 49.7 | 50.4 |
| Pr.09 | Var | 1.577621 | 1.619328 | 1.509231 | 1.251666 | 1.074968 |
| Pr.09 | Min | 50 | 49 | 49 | 49 | 49 |
| Pr.09 | Max | 55 | 53 | 53 | 53 | 52 |
| Pr.09 | cv% | 3.010728 | 3.225752 | 2.988576 | 2.518442 | 2.132872 |
| Pr.10 | Mean | 421.7 | 417 | 417.2 | 414.3 | 414.7 |
| Pr.10 | Var | 11.72888 | 9.092121 | 6.924995 | 5.945119 | 6.848357 |
| Pr.10 | Min | 410 | 410 | 410 | 410 | 410 |
| Pr.10 | Max | 434 | 430 | 430 | 425 | 430 |
| Pr.10 | cv% | 2.781333 | 2.180365 | 1.659874 | 1.434979 | 1.6514 |
| Pr.11 | Mean | 3273.9 | 2894 | 2852.8 | 2850.2 | 3016.8 |
| Pr.11 | Var | 557.6408 | 121.5237 | 4.391912 | 0.421637 | 255.7915 |
| Pr.11 | Min | 2851 | 2850 | 2850 | 2850 | 2850 |
| Pr.11 | Max | 4360 | 3237 | 2864 | 2851 | 3513 |
| Pr.11 | cv% | 17.03292 | 4.199159 | 0.153951 | 0.014793 | 8.478901 |
| Pr.12 | Mean | 190.9 | 185.2 | 183.7 | 183.9 | 183 |
| Pr.12 | Var | 7.218033 | 4.289522 | 1.251666 | 1.449138 | 0 |
| Pr.12 | Min | 183 | 183 | 183 | 183 | 183 |
| Pr.12 | Max | 203 | 196 | 186 | 186 | 183 |
| Pr.12 | cv% | 3.781054 | 2.316157 | 0.681364 | 0.788003 | 0 |
| Pr.13 | Mean | 807.4 | 799 | 799 | 799 | 799 |
| Pr.13 | Var | 13.52528 | 0 | 0 | 0 | 0 |
| Pr.13 | Min | 799 | 799 | 799 | 799 | 799 |
| Pr.13 | Max | 827 | 799 | 799 | 799 | 799 |
| Pr.13 | cv% | 1.675165 | 0 | 0 | 0 | 0 |
| Pr.14 | Mean | 300.9 | 284.6 | 280 | 276.9 | 281.8 |
| Pr.14 | Var | 15.16905 | 10.25454 | 8.628119 | 6.707376 | 9.29516 |
| Pr.14 | Min | 290 | 273 | 273 | 273 | 273 |
| Pr.14 | Max | 330 | 302 | 290 | 291 | 292 |
| Pr.14 | cv% | 5.041225 | 3.603141 | 3.081471 | 2.42231 | 3.298495 |
| Pr.15 | Mean | 1160.4 | 1160 | 1160 | 1160 | 1160 |
| Pr.15 | Var | 0.843274 | 0 | 0 | 0 | 0 |
| Pr.15 | Min | 1160 | 1160 | 1160 | 1160 | 1160 |
| Pr.15 | Max | 1162 | 1160 | 1160 | 1160 | 1160 |
| Pr.15 | cv% | 0.072671 | 0 | 0 | 0 | 0 |
| Pr.16 | Mean | 215.2 | 200 | 201.6 | 200 | 200 |
| Pr.16 | Var | 8.243516 | 0 | 4.718757 | 0 | 0 |
| Pr.16 | Min | 200 | 200 | 200 | 200 | 200 |
| Pr.16 | Max | 221 | 200 | 215 | 200 | 200 |
| Pr.16 | cv% | 3.83063 | 0 | 2.340653 | 0 | 0 |
| Pr.17 | Mean | 244.7 | 240 | 240 | 240 | 240 |
| Pr.17 | Var | 6.236986 | 0 | 0 | 0 | 0 |
| Pr.17 | Min | 240 | 240 | 240 | 240 | 240 |
| Pr.17 | Max | 256 | 240 | 240 | 240 | 240 |
| Pr.17 | cv% | 2.54883 | 0 | 0 | 0 | 0 |
| Pr.18 | Mean | 820 | 820 | 820 | 820 | 820 |
| Pr.18 | Var | 0 | 0 | 0 | 0 | 0 |
| Pr.18 | Min | 820 | 820 | 820 | 820 | 820 |
| Pr.18 | Max | 820 | 820 | 820 | 820 | 820 |
| Pr.18 | cv% | 0 | 0 | 0 | 0 | 0 |
| Pr.19 | Mean | 191.4 | 190.2 | 190.7 | 190.1 | 190 |
| Pr.19 | Var | 0.843274 | 0.632456 | 0.948683 | 0.316228 | 0 |
| Pr.19 | Min | 190 | 190 | 190 | 190 | 190 |
| Pr.19 | Max | 192 | 192 | 192 | 191 | 190 |
| Pr.19 | cv% | 0.440582 | 0.332521 | 0.497474 | 0.166348 | 0 |
| Pr.20 | Mean | 83.3 | 83.3 | 83 | 83 | 83 |
| Pr.20 | Var | 0.948683 | 0.948683 | 0 | 0 | 0 |
| Pr.20 | Min | 83 | 83 | 83 | 83 | 83 |
| Pr.20 | Max | 86 | 86 | 83 | 83 | 83 |
| Pr.20 | cv% | 1.138876 | 1.138876 | 0 | 0 | 0 |
| Pr.21 | Mean | 3529.4 | 3482.4 | 3479.3 | 3469.3 | 3482 |
| Pr.21 | Var | 136.2777 | 28.33608 | 40.89567 | 16.54657 | 28.15828 |
| Pr.21 | Min | 3460 | 3460 | 3460 | 3460 | 3460 |
| Pr.21 | Max | 3805 | 3545 | 3590 | 3502 | 3544 |
| Pr.21 | cv% | 3.861213 | 0.813694 | 1.175399 | 0.476943 | 0.808681 |
| Pr.22 | Mean | 919.6 | 910 | 910 | 910 | 910 |
| Pr.22 | Var | 16.46005 | 0 | 0 | 0 | 0 |
| Pr.22 | Min | 910 | 910 | 910 | 910 | 910 |
| Pr.22 | Max | 954 | 910 | 910 | 910 | 910 |
| Pr.22 | cv% | 1.789914 | 0 | 0 | 0 | 0 |
| Pr.23 | Mean | 1674.4 | 1670.3 | 1670.1 | 1670 | 1671 |
| Pr.23 | Var | 4.718757 | 0.483046 | 0.316228 | 0 | 2.538591 |
| Pr.23 | Min | 1670 | 1670 | 1670 | 1670 | 1670 |
| Pr.23 | Max | 1686 | 1671 | 1671 | 1670 | 1678 |
| Pr.23 | cv% | 0.281818 | 0.02892 | 0.018935 | 0 | 0.15192 |
| Pr.24 | Mean | 2412.6 | 2370 | 2352.5 | 2349 | 2351.1 |
| Pr.24 | Var | 20.74823 | 19.47648 | 34.42302 | 21.34895 | 29.51252 |
| Pr.24 | Min | 2371 | 2341 | 2287 | 2315 | 2296 |
| Pr.24 | Max | 2441 | 2397 | 2419 | 2393 | 2394 |
| Pr.24 | cv% | 0.859994 | 0.821792 | 1.463253 | 0.908853 | 1.255264 |
| Pr.25 | Mean | 2479 | 2460 | 2460 | 2460 | 2460 |
| Pr.25 | Var | 31.34042 | 0 | 0 | 0 | 0 |
| Pr.25 | Min | 2460 | 2460 | 2460 | 2460 | 2460 |
| Pr.25 | Max | 2540 | 2460 | 2460 | 2460 | 2460 |
| Pr.25 | cv% | 1.264237 | 0 | 0 | 0 | 0 |
| Pr.26 | Mean | 292.2 | 291 | 291 | 291 | 291 |
| Pr.26 | Var | 2.699794 | 0 | 0 | 0 | 0 |
| Pr.26 | Min | 291 | 291 | 291 | 291 | 291 |
| Pr.26 | Max | 299 | 291 | 291 | 291 | 291 |
| Pr.26 | cv% | 0.923954 | 0 | 0 | 0 | 0 |
| Pr.27 | Mean | 4550.9 | 4538.4 | 4578.5 | 4525.6 | 4556.4 |
| Pr.27 | Var | 54.61471 | 20.28245 | 59.81871 | 1.897367 | 62.54634 |
| Pr.27 | Min | 4525 | 4525 | 4525 | 4525 | 4525 |
| Pr.27 | Max | 4657 | 4585 | 4675 | 4531 | 4675 |
| Pr.27 | cv% | 1.200086 | 0.446908 | 1.306513 | 0.041925 | 1.372714 |
| Pr.28 | Mean | 971.9 | 931 | 934.1 | 924.7 | 930.3 |
| Pr.28 | Var | 11.97637 | 14.96663 | 15.05139 | 8.525126 | 12.5614 |
| Pr.28 | Min | 960 | 920 | 920 | 920 | 920 |
| Pr.28 | Max | 993 | 969 | 955 | 947 | 960 |
| Pr.28 | cv% | 1.232263 | 1.607586 | 1.611326 | 0.921934 | 1.350253 |
| Pr.29 | Mean | 809 | 809 | 809 | 809 | 809 |
| Pr.29 | Var | 0 | 0 | 0 | 0 | 0 |
| Pr.29 | Min | 809 | 809 | 809 | 809 | 809 |
| Pr.29 | Max | 809 | 809 | 809 | 809 | 809 |
| Pr.29 | cv% | 0 | 0 | 0 | 0 | 0 |
| Pr.30 | Mean | 417 | 417 | 417 | 417 | 417 |
| Pr.30 | Var | 0 | 0 | 0 | 0 | 0 |
| Pr.30 | Min | 417 | 417 | 417 | 417 | 417 |
| Pr.30 | Max | 417 | 417 | 417 | 417 | 417 |
| Pr.30 | cv% | 0 | 0 | 0 | 0 | 0 |
| Pr.31 | Mean | 3469.5 | 3458 | 3458 | 3458 | 3458 |
| Pr.31 | Var | 12.40296 | 0 | 0 | 0 | 0 |
| Pr.31 | Min | 3458 | 3458 | 3458 | 3458 | 3458 |
| Pr.31 | Max | 3483 | 3458 | 3458 | 3458 | 3458 |
| Pr.31 | cv% | 0.357485 | 0 | 0 | 0 | 0 |
| Pr.32 | Mean | 115 | 118.6 | 121.4 | 115 | 117.1 |
| Pr.32 | Var | 2.94392 | 8.126773 | 8.448537 | 4.944132 | 2.330951 |
| Pr.32 | Min | 109 | 109 | 109 | 109 | 112 |
| Pr.32 | Max | 119 | 129 | 129 | 123 | 120 |
| Pr.32 | cv% | 2.559931 | 6.852254 | 6.959256 | 4.299245 | 1.990565 |
Table 10. Accuracy of PSO, DWPSO, TVAC, TrigAC-PSO and FSVAC for 50 particles.
| Problem | PSO | DWPSO | TVAC | TrigAC-PSO | FSVAC |
| --- | --- | --- | --- | --- | --- |
| Pr.01 | 0.2 | 1 | 1 | 1 | 1 |
| Pr.02 | 1 | 1 | 1 | 1 | 1 |
| Pr.03 | 1 | 1 | 1 | 1 | 1 |
| Pr.04 | 1 | 1 | 1 | 1 | 1 |
| Pr.05 | 1 | 1 | 1 | 1 | 1 |
| Pr.06 | 0.7 | 1 | 1 | 1 | 1 |
| Pr.07 | 0.9 | 1 | 1 | 1 | 1 |
| Pr.08 | 0.2 | 0.3 | 0.3 | 0.4 | 0.1 |
| Pr.09 | 0 | 0.6 | 0.5 | 0.8 | 0.5 |
| Pr.10 | 0.1 | 0.1 | 0.4 | 0.7 | 0.5 |
| Pr.11 | 0.3 | 0.4 | 0.6 | 1 | 0.5 |
| Pr.12 | 0.2 | 0.7 | 0.5 | 0.8 | 1 |
| Pr.13 | 0.8 | 1 | 1 | 1 | 1 |
| Pr.14 | 0 | 0.3 | 0.2 | 0.7 | 1 |
| Pr.15 | 0.8 | 1 | 1 | 1 | 1 |
| Pr.16 | 0.1 | 1 | 1 | 1 | 1 |
| Pr.17 | 0.3 | 0.7 | 1 | 1 | 1 |
| Pr.18 | 1 | 1 | 1 | 1 | 1 |
| Pr.19 | 0.2 | 0.9 | 0.8 | 0.9 | 1 |
| Pr.20 | 0.7 | 1 | 1 | 1 | 1 |
| Pr.21 | 0.5 | 0.3 | 0.6 | 1 | 0.4 |
| Pr.22 | 0.8 | 1 | 1 | 1 | 1 |
| Pr.23 | 0.2 | 0.8 | 0.6 | 1 | 0.8 |
| Pr.24 | 0 | 0 | 0.1 | 0 | 0 |
| Pr.25 | 1 | 1 | 1 | 1 | 1 |
| Pr.26 | 0.7 | 1 | 1 | 1 | 1 |
| Pr.27 | 0.5 | 0.3 | 0.1 | 0.6 | 0.5 |
| Pr.28 | 0.1 | 0.4 | 0.7 | 0.7 | 0.7 |
| Pr.29 | 1 | 1 | 1 | 1 | 1 |
| Pr.30 | 1 | 1 | 1 | 1 | 1 |
| Pr.31 | 0.5 | 1 | 1 | 1 | 1 |
| Pr.32 | 0 | 0 | 0.1 | 0.2 | 0.3 |
| Average | 0.525 | 0.74375 | 0.765625 | 0.86875 | 0.821875 |
Table 11. Statistical measures for 50 particles.
| Problem | Measure | PSO | DWPSO | TVAC | TrigAC-PSO | FSVAC |
| --- | --- | --- | --- | --- | --- | --- |
| Pr.01 | Mean | 896.7 | 880 | 880 | 880 | 880 |
| Pr.01 | Var | 17.79544 | 0 | 0 | 0 | 0 |
| Pr.01 | Min | 880 | 880 | 880 | 880 | 880 |
| Pr.01 | Max | 929 | 880 | 880 | 880 | 880 |
| Pr.01 | cv% | 1.984548 | 0 | 0 | 0 | 0 |
| Pr.02 | Mean | 743 | 743 | 743 | 743 | 743 |
| Pr.02 | Var | 0 | 0 | 0 | 0 | 0 |
| Pr.02 | Min | 743 | 743 | 743 | 743 | 743 |
| Pr.02 | Max | 743 | 743 | 743 | 743 | 743 |
| Pr.02 | cv% | 0 | 0 | 0 | 0 | 0 |
| Pr.03 | Mean | 5600 | 5600 | 5600 | 5600 | 5600 |
| Pr.03 | Var | 0 | 0 | 0 | 0 | 0 |
| Pr.03 | Min | 5600 | 5600 | 5600 | 5600 | 5600 |
| Pr.03 | Max | 5600 | 5600 | 5600 | 5600 | 5600 |
| Pr.03 | cv% | 0 | 0 | 0 | 0 | 0 |
| Pr.04 | Mean | 59 | 59 | 59 | 59 | 59 |
| Pr.04 | Var | 0 | 0 | 0 | 0 | 0 |
| Pr.04 | Min | 59 | 59 | 59 | 59 | 59 |
| Pr.04 | Max | 59 | 59 | 59 | 59 | 59 |
| Pr.04 | cv% | 0 | 0 | 0 | 0 | 0 |
| Pr.05 | Mean | 28 | 28 | 28 | 28 | 28 |
| Pr.05 | Var | 0 | 0 | 0 | 0 | 0 |
| Pr.05 | Min | 28 | 28 | 28 | 28 | 28 |
| Pr.05 | Max | 28 | 28 | 28 | 28 | 28 |
| Pr.05 | cv% | 0 | 0 | 0 | 0 | 0 |
| Pr.06 | Mean | 438.9 | 435 | 435 | 435 | 435 |
| Pr.06 | Var | 6.279597 | 0 | 0 | 0 | 0 |
| Pr.06 | Min | 435 | 435 | 435 | 435 | 435 |
| Pr.06 | Max | 448 | 435 | 435 | 435 | 435 |
| Pr.06 | cv% | 1.430758 | 0 | 0 | 0 | 0 |
| Pr.07 | Mean | 392 | 390 | 390 | 390 | 390 |
| Pr.07 | Var | 6.324555 | 0 | 0 | 0 | 0 |
| Pr.07 | Min | 390 | 390 | 390 | 390 | 390 |
| Pr.07 | Max | 410 | 390 | 390 | 390 | 390 |
| Pr.07 | cv% | 1.613407 | 0 | 0 | 0 | 0 |
| Pr.08 | Mean | 1677.3 | 1611.7 | 1601.7 | 1585.9 | 1634.5 |
| Pr.08 | Var | 73.73986 | 47.30058 | 31.18778 | 7.125073 | 41.31518 |
| Pr.08 | Min | 1580 | 1580 | 1580 | 1580 | 1580 |
| Pr.08 | Max | 1794 | 1713 | 1672 | 1595 | 1705 |
| Pr.08 | cv% | 4.396343 | 2.934825 | 1.947168 | 0.449276 | 2.527696 |
| Pr.09 | Mean | 52.4 | 50.3 | 50.2 | 49.4 | 50.8 |
| Pr.09 | Var | 0.966092 | 1.888562 | 1.549193 | 0.843274 | 2.043961 |
| Pr.09 | Min | 51 | 49 | 49 | 49 | 49 |
| Pr.09 | Max | 53 | 53 | 53 | 51 | 54 |
| Pr.09 | cv% | 1.843687 | 3.754597 | 3.086043 | 1.707032 | 4.023546 |
| Pr.10 | Mean | 424.1 | 421 | 418.6 | 412.6 | 416.1 |
| Pr.10 | Var | 9.362455 | 8.589399 | 9.057839 | 4.299871 | 8.292567 |
| Pr.10 | Min | 410 | 410 | 410 | 410 | 410 |
| Pr.10 | Max | 432 | 430 | 430 | 421 | 430 |
| Pr.10 | cv% | 2.207605 | 2.040237 | 2.163841 | 1.04214 | 1.992926 |
| Pr.11 | Mean | 3149.1 | 2870.4 | 2856.9 | 2850 | 2898.3 |
| Pr.11 | Var | 611.9779 | 44.53014 | 14.0115 | 0 | 95.44405 |
| Pr.11 | Min | 2850 | 2850 | 2850 | 2850 | 2850 |
| Pr.11 | Max | 4429 | 2990 | 2894 | 2850 | 3091 |
| Pr.11 | cv% | 19.43342 | 1.551357 | 0.490444 | 0 | 3.293105 |
| Pr.12 | Mean | 187.4 | 183.9 | 184.5 | 183.6 | 183 |
| Pr.12 | Var | 3.687818 | 1.449138 | 1.581139 | 1.264911 | 0 |
| Pr.12 | Min | 183 | 183 | 183 | 183 | 183 |
| Pr.12 | Max | 193 | 186 | 186 | 186 | 183 |
| Pr.12 | cv% | 1.967886 | 0.788003 | 0.856986 | 0.688949 | 0 |
| Pr.13 | Mean | 804.6 | 799 | 799 | 799 | 799 |
| Pr.13 | Var | 11.80584 | 0 | 0 | 0 | 0 |
| Pr.13 | Min | 799 | 799 | 799 | 799 | 799 |
| Pr.13 | Max | 827 | 799 | 799 | 799 | 799 |
| Pr.13 | cv% | 1.467293 | 0 | 0 | 0 | 0 |
| Pr.14 | Mean | 302.8 | 284.2 | 289.2 | 278.1 | 273 |
| Pr.14 | Var | 15.38975 | 7.871185 | 15.59772 | 8.491172 | 0 |
| Pr.14 | Min | 290 | 273 | 273 | 273 | 273 |
| Pr.14 | Max | 328 | 291 | 328 | 295 | 273 |
| Pr.14 | cv% | 5.082481 | 2.769594 | 5.393403 | 3.05328 | 0 |
| Pr.15 | Mean | 1184.3 | 1160 | 1160 | 1160 | 1160 |
| Pr.15 | Var | 76.14321 | 0 | 0 | 0 | 0 |
| Pr.15 | Min | 1160 | 1160 | 1160 | 1160 | 1160 |
| Pr.15 | Max | 1401 | 1160 | 1160 | 1160 | 1160 |
| Pr.15 | cv% | 6.429386 | 0 | 0 | 0 | 0 |
| Pr.16 | Mean | 212.9 | 200 | 200 | 200 | 200 |
| Pr.16 | Var | 8.69802 | 0 | 0 | 0 | 0 |
| Pr.16 | Min | 200 | 200 | 200 | 200 | 200 |
| Pr.16 | Max | 221 | 200 | 200 | 200 | 200 |
| Pr.16 | cv% | 4.085496 | 0 | 0 | 0 | 0 |
| Pr.17 | Mean | 246.9 | 241.6 | 240 | 240 | 240 |
| Pr.17 | Var | 6.773314 | 2.796824 | 0 | 0 | 0 |
| Pr.17 | Min | 240 | 240 | 240 | 240 | 240 |
| Pr.17 | Max | 256 | 248 | 240 | 240 | 240 |
| Pr.17 | cv% | 2.743343 | 1.157626 | 0 | 0 | 0 |
| Pr.18 | Mean | 820 | 820 | 820 | 820 | 820 |
| Pr.18 | Var | 0 | 0 | 0 | 0 | 0 |
| Pr.18 | Min | 820 | 820 | 820 | 820 | 820 |
| Pr.18 | Max | 820 | 820 | 820 | 820 | 820 |
| Pr.18 | cv% | 0 | 0 | 0 | 0 | 0 |
| Pr.19 | Mean | 191.6 | 190.1 | 190.3 | 190.1 | 190 |
| Pr.19 | Var | 0.843274 | 0.316228 | 0.674949 | 0.316228 | 0 |
| Pr.19 | Min | 190 | 190 | 190 | 190 | 190 |
| Pr.19 | Max | 192 | 191 | 192 | 191 | 190 |
| Pr.19 | cv% | 0.440122 | 0.166348 | 0.354676 | 0.166348 | 0 |
| Pr.20 | Mean | 83.9 | 83 | 83 | 83 | 83 |
| Pr.20 | Var | 1.449138 | 0 | 0 | 0 | 0 |
| Pr.20 | Min | 83 | 83 | 83 | 83 | 83 |
| Pr.20 | Max | 86 | 83 | 83 | 83 | 83 |
| Pr.20 | cv% | 1.72722 | 0 | 0 | 0 | 0 |
| Pr.21 | Mean | 3493.6 | 3468.4 | 3477.6 | 3460 | 3470.8 |
| Pr.21 | Var | 57.87765 | 13.1673 | 24.84262 | 0 | 12.76105 |
| Pr.21 | Min | 3460 | 3456 | 3460 | 3460 | 3460 |
| Pr.21 | Max | 3625 | 3492 | 3519 | 3460 | 3495 |
| Pr.21 | cv% | 1.656676 | 0.379636 | 0.714361 | 0 | 0.367669 |
| Pr.22 | Mean | 914.7 | 910 | 910 | 910 | 910 |
| Pr.22 | Var | 10.133 | 0 | 0 | 0 | 0 |
| Pr.22 | Min | 910 | 910 | 910 | 910 | 910 |
| Pr.22 | Max | 938 | 910 | 910 | 910 | 910 |
| Pr.22 | cv% | 1.107795 | 0 | 0 | 0 | 0 |
| Pr.23 | Mean | 1675.2 | 1670.2 | 1672 | 1670 | 1670.6 |
| Pr.23 | Var | 7.743097 | 0.421637 | 3.018462 | 0 | 1.577621 |
| Pr.23 | Min | 1670 | 1670 | 1670 | 1670 | 1670 |
| Pr.23 | Max | 1694 | 1671 | 1679 | 1670 | 1675 |
| Pr.23 | cv% | 0.462219 | 0.025245 | 0.18053 | 0 | 0.094434 |
| Pr.24 | Mean | 2423.4 | 2354.5 | 2336.8 | 2347 | 2355 |
| Pr.24 | Var | 13.35165 | 31.64824 | 37.90573 | 26.48689 | 28.5151 |
| Pr.24 | Min | 2401 | 2288 | 2280 | 2307 | 2309 |
| Pr.24 | Max | 2446 | 2390 | 2404 | 2388 | 2395 |
| Pr.24 | cv% | 0.550947 | 1.34416 | 1.622121 | 1.128543 | 1.210832 |
| Pr.25 | Mean | 2460 | 2460 | 2460 | 2460 | 2460 |
| Pr.25 | Var | 0 | 0 | 0 | 0 | 0 |
| Pr.25 | Min | 2460 | 2460 | 2460 | 2460 | 2460 |
| Pr.25 | Max | 2460 | 2460 | 2460 | 2460 | 2460 |
| Pr.25 | cv% | 0 | 0 | 0 | 0 | 0 |
| Pr.26 | Mean | 292.2 | 291 | 291 | 291 | 291 |
| Pr.26 | Var | 1.932184 | 0 | 0 | 0 | 0 |
| Pr.26 | Min | 291 | 291 | 291 | 291 | 291 |
| Pr.26 | Max | 295 | 291 | 291 | 291 | 291 |
| Pr.26 | cv% | 0.661254 | 0 | 0 | 0 | 0 |
| Pr.27 | Mean | 4585.8 | 4549.3 | 4552.3 | 4532 | 4595.2 |
| Pr.27 | Var | 69.97904 | 45.93486 | 42.05829 | 16.57307 | 75.36843 |
| Pr.27 | Min | 4525 | 4525 | 4525 | 4525 | 4525 |
| Pr.27 | Max | 4675 | 4675 | 4668 | 4577 | 4675 |
| Pr.27 | cv% | 1.525994 | 1.009713 | 0.923891 | 0.36569 | 1.640156 |
| Pr.28 | Mean | 957.3 | 928 | 929.9 | 923.3 | 923.6 |
| Pr.28 | Var | 21.12424 | 12.26558 | 17.85404 | 8.420214 | 9.070097 |
| Pr.28 | Min | 920 | 920 | 920 | 920 | 920 |
| Pr.28 | Max | 990 | 960 | 967 | 947 | 949 |
| Pr.28 | cv% | 2.206647 | 1.321722 | 1.919995 | 0.911969 | 0.982037 |
| Pr.29 | Mean | 809 | 809 | 809 | 809 | 809 |
| Pr.29 | Var | 0 | 0 | 0 | 0 | 0 |
| Pr.29 | Min | 809 | 809 | 809 | 809 | 809 |
| Pr.29 | Max | 809 | 809 | 809 | 809 | 809 |
| Pr.29 | cv% | 0 | 0 | 0 | 0 | 0 |
| Pr.30 | Mean | 417 | 417 | 417 | 417 | 417 |
| Pr.30 | Var | 0 | 0 | 0 | 0 | 0 |
| Pr.30 | Min | 417 | 417 | 417 | 417 | 417 |
| Pr.30 | Max | 417 | 417 | 417 | 417 | 417 |
| Pr.30 | cv% | 0 | 0 | 0 | 0 | 0 |
| Pr.31 | Mean | 3470.4 | 3458 | 3458 | 3458 | 3458 |
| Pr.31 | Var | 18.42221 | 0 | 0 | 0 | 0 |
| Pr.31 | Min | 3458 | 3458 | 3458 | 3458 | 3458 |
| Pr.31 | Max | 3508 | 3458 | 3458 | 3458 | 3458 |
| Pr.31 | cv% | 0.530838 | 0 | 0 | 0 | 0 |
| Pr.32 | Mean | 118.1 | 122.3 | 123.7 | 114.1 | 112.6 |
| Pr.32 | Var | 5.384133 | 6.498718 | 8.590046 | 3.784471 | 3.306559 |
| Pr.32 | Min | 112 | 112 | 109 | 109 | 109 |
| Pr.32 | Max | 127 | 129 | 129 | 120 | 119 |
| Pr.32 | cv% | 4.558961 | 5.313751 | 6.944257 | 3.316802 | 2.936553 |