Article

A New Algorithm for Finding Initial Basic Feasible Solutions of Transportation Problems

by Douglas Kwasi Boah, Suleman Abudu Fiele * and Christian John Etwire
Department of Mathematics, School of Mathematical Sciences, C. K. Tedam University of Technology and Applied Sciences, Navrongo P. O. Box 24, Ghana
* Author to whom correspondence should be addressed.
AppliedMath 2026, 6(4), 58; https://doi.org/10.3390/appliedmath6040058
Submission received: 28 January 2026 / Revised: 15 March 2026 / Accepted: 18 March 2026 / Published: 9 April 2026

Abstract

This study introduces a deterministic fractional-penalty refinement of Vogel’s Approximation Method (VAM) for generating high-quality initial basic feasible solutions (IBFS) in classical transportation problems. Unlike the traditional additive regret measure employed in VAM, the proposed method uses a multiplicative contrast ratio between the two smallest admissible costs in each row and column. This modification preserves the allocation structure of VAM while introducing scale-invariant prioritization that improves sensitivity to relative cost differences. The method was evaluated on thirty-four benchmark transportation problems drawn from the literature and self-constructed large-scale instances (up to $10 \times 20$). Performance was assessed using percentage optimality gaps relative to optimal solutions obtained via the Stepping-Stone and MODI procedures. Across all instances, the proposed approach achieved a mean optimality gap of 2.78%, compared to 5.22% for classical VAM, 14.97% for the Least Cost Method (LCM), and 45.78% for the Northwest Corner Method (NWCM). Dispersion of deviations was also reduced, indicating improved robustness across heterogeneous cost structures. Statistical validation confirms the improvement over VAM: the paired t-test yielded $t = -3.17$ ($p = 0.00163$, one-sided), and the Wilcoxon signed-rank test produced $p = 6.10 \times 10^{-5}$. Computational experiments further show that the refinement does not increase runtime relative to classical IBFS procedures. The proposed method therefore constitutes a structured enhancement of VAM that improves initial solution quality while maintaining computational simplicity.

1. Introduction

The transportation problem (TP) is one of the earliest and most studied linear programming models. It was originally formulated by Hitchcock and later extended within the framework of linear programming [1,2]. Constructive heuristics for generating an initial basic feasible solution (IBFS) remain important because the quality of the IBFS can influence the number of iterations required in subsequent optimality procedures such as the Stepping-Stone or MODI methods [3].
Classical IBFS procedures include the North-West Corner Method (NWCM), the Least Cost Method (LCM), and Vogel’s Approximation Method (VAM). Among these, VAM is widely regarded as the most cost-sensitive because it incorporates a penalty defined as the difference between the two smallest admissible costs in each row or column [4]. An effective heuristic approach for determining an initial basic feasible solution to the transportation problem has been proposed, offering practical efficiency and solution quality [5]. Despite its effectiveness, the additive penalty used in VAM is sensitive to absolute cost magnitudes and scaling. In heterogeneous cost environments, proportional dispersion between competing allocation options may not be fully captured by simple cost differences. This observation motivates the investigation of alternative penalty formulations that preserve the constructive structure of VAM while redefining its prioritization metric.
This study proposes a ratio-based (fractional) penalty defined as the quotient of the second-smallest and smallest admissible costs in each active line. The resulting measure emphasizes proportional cost contrast and remains invariant under positive cost scaling. Importantly, the allocation structure of VAM is retained; only the penalty computation is modified.
The objective of this paper is therefore not to introduce an entirely new heuristic family, but rather to provide a mathematically transparent and computationally reproducible refinement of the VAM penalty structure. Extensive benchmark testing and statistical validation are conducted to evaluate performance consistency across diverse problem instances.
The contribution of this study lies in introducing a scale-invariant multiplicative penalty mechanism within the classical Vogel’s Approximation Method framework. Unlike the traditional additive regret measure used in VAM, the proposed fractional penalty captures proportional dispersion between competing transportation costs. This modification preserves the constructive allocation structure of VAM while providing an alternative prioritization mechanism that can improve the quality and stability of initial basic feasible solutions. The effectiveness of the approach is validated through computational experiments on thirty-four benchmark transportation problems together with statistical analysis of optimality gaps.

Research Gap and Contribution

Although numerous heuristics exist for obtaining initial basic feasible solutions (IBFS) in transportation problems, many widely used procedures remain rooted in either minimum-cost selection or additive regret ideas. Common IBFS methods include the Row Minimum Method, the Column Minimum Method, and the Maximum Difference Method. Other classical baselines are the Least Cost Method (LCM) and Vogel’s Approximation Method (VAM).
Recent studies have introduced alternative penalty models and hybrid allocation strategies for constructing IBFS. For example, the Maximum Range Method was proposed to improve allocation prioritization [6]. Alternative procedures for generating initial feasible solutions have also been explored [7]. Furthermore, new penalty-based models motivated by probabilistic or distributional mechanisms have been developed [8]. Other algorithmic strategies for determining IBFS have also been reported in recent studies [9]. However, the theoretical implications of multiplicative cost dispersion in row/column prioritization remain largely unexplored.
Classical VAM employs an additive regret defined as the difference between the two smallest admissible costs. While effective, this measure is not scale-invariant and may underrepresent proportional dispersion when costs vary significantly across rows.
This study introduces a deterministic fractional-penalty refinement that replaces additive regret with a multiplicative contrast ratio. The contribution lies not in altering the allocation structure of VAM, but in modifying the prioritization mechanism in a theoretically motivated and statistically validated manner.

2. Literature Review

Transportation problems remain a central topic in operations research due to their wide applicability in logistics, supply chain management, and production planning. Classical constructive heuristics for generating an initial basic feasible solution (IBFS), namely the North-West Corner Method (NWCM), the Least Cost Method (LCM), and Vogel’s Approximation Method (VAM), are extensively discussed in standard operations research textbooks [10,11,12].
The continued relevance of these classical heuristics has been demonstrated in several comparative studies. Performance evaluations of traditional IBFS methods confirm their effectiveness as baseline techniques for transportation problems [13]. Further investigations into their computational behavior show that these heuristics remain competitive for small- and medium-scale transportation instances [14]. Additional empirical studies provide supporting evidence of their applicability in practical scenarios [15,16]. These classical methods also serve as important benchmarks for evaluating newly proposed algorithms [17].
The North-West Corner Method (NWCM) is attractive because of its simplicity and low computational cost. However, it ignores transportation costs and allocates shipments solely by traversing the transportation tableau from the top-left corner, which often leads to relatively coarse initial solutions [18]. The Least Cost Method (LCM) improves upon NWCM by selecting the smallest available unit transportation cost at each allocation step. This strategy generally produces better initial solutions than NWCM [19]. However, LCM may suffer from myopic allocation decisions and sensitivity to tie-breaking rules when multiple cells have identical costs [20].
Vogel’s Approximation Method (VAM) introduces a penalty-based strategy defined by the difference between the two smallest costs in each row or column. This penalty mechanism allows VAM to prioritize allocations where suboptimal decisions would be most costly [21]. Several studies have shown that VAM typically produces solutions closer to optimality than NWCM and LCM [22].
In practice, classical IBFS procedures must also address implementation issues such as balancing through dummy rows or columns, degeneracy prevention, and deterministic tie-breaking rules. Improper handling of these issues can significantly affect solution quality and reproducibility [23]. Standard operations research references emphasize the need for systematic implementation rules to ensure consistent computational performance [24]. The transportation problem has been extensively studied in both applied and theoretical contexts, with contributions focusing on cost minimization and optimization techniques [25,26]. However, existing approaches still face challenges in improving computational efficiency during optimality checks. These limitations motivate ongoing research aimed at developing improved IBFS algorithms with stronger cost-awareness and enhanced computational robustness.

3. Methodology: Finding Initial Basic Feasible Solutions

3.1. Notation

Let $C = (c_{ij})$ be the $m \times n$ cost matrix with supplies $a_i > 0$ $(i = 1, \dots, m)$ and demands $b_j > 0$ $(j = 1, \dots, n)$. A transportation problem (TP) is balanced if $\sum_{i=1}^{m} a_i = \sum_{j=1}^{n} b_j$. If it is unbalanced, we add a dummy column (when total supply exceeds total demand) or a dummy row (when total demand exceeds total supply). The dummy line has cost 0 in all of its cells, and the added supply/demand equals the absolute imbalance.
Throughout all methods:
  • Stopping criterion: Continue allocating until all supplies and demands are satisfied.
  • Degeneracy safeguard: An IBFS must contain exactly $m + n - 1$ basic cells. If a row and a column are both exhausted by the same allocation, cross out one line and leave the other active with a zero residual, or place a tiny bookkeeping allocation $\varepsilon$ in a zero-cost eligible cell (do not alter totals) to maintain $m + n - 1$ basic variables.
  • Tie-breaking (deterministic): When ties occur, use: (i) prefer row-first over column when methods require choosing a line; (ii) within a line, choose the lowest cost; if still tied, choose the leftmost (smallest column index); if still tied, choose the topmost (smallest row index). Using a fixed rule ensures reproducibility.
  • Quality measure: The IBFS cost is
    $Z_{\text{IBFS}} = \sum_{i=1}^{m} \sum_{j=1}^{n} c_{ij} x_{ij}$    (1)

3.2. General Steps for Finding IBFS

1.
Balance the problem. If total supply and total demand differ, add a zero-cost dummy row (when demand exceeds supply) or dummy column (when supply exceeds demand) so totals match.
2.
Initialize. Mark all rows and columns as active, with their remaining supply/demand.
3.
Choose a line or cell (selection rule). Apply one of the rules below to decide where to allocate next.
4.
Allocate. In the chosen cell $(i, j)$, set
$x_{ij} = \min\{\text{remaining supply of row } i,\ \text{remaining demand of column } j\}$
5.
Update the rim. Reduce the row’s remaining supply and the column’s remaining demand by x i j .
6.
Cross out satisfied lines. Any row/column that reaches zero becomes inactive.
7.
Handle degeneracy. If a single allocation makes both a row and a column reach zero, cross out one and keep the other active with zero balance (or place a formal $\varepsilon$ in an eligible cell) so that the final IBFS has exactly $m + n - 1$ basic cells.
8.
Repeat. Recompute the quantities required by the selection rule on the reduced tableau and return to Step 3 until all supplies and demands are satisfied.
9.
Stop. When all rim requirements are met, the current x i j constitute an IBFS.
10.
The IBFS cost is computed using Equation (1).

Selection Rules (To Use in Step 3)

  • North–West Corner Method (NWCM): Start at the top-left active corner, allocate, then move right when a column is satisfied and down when a row is satisfied.
  • Least Cost Method (LCM): Choose the globally cheapest active cell.
  • Vogel’s Approximation Method (VAM): For each active line, the penalty is
    $P = c_{(2)} - c_{(1)}$
    where $c_{(1)}$ and $c_{(2)}$ are the smallest and second-smallest costs in that line.
  • Mean Minima Method: For each active line, compute
    $M = \dfrac{c_{(1)} + c_{(2)}}{2}$
    and select the line with the smallest mean.
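For comparison with the fractional penalty introduced later, the two cost-based selection measures above can be sketched as small Python helpers (function names are ours, not the paper's):

```python
def two_smallest(costs):
    """Return the smallest and second-smallest costs of an active line."""
    s = sorted(costs)
    return s[0], s[1]

def vam_penalty(costs):
    """Additive VAM penalty: P = c(2) - c(1)."""
    c1, c2 = two_smallest(costs)
    return c2 - c1

def mean_minima(costs):
    """Mean Minima measure: M = (c(1) + c(2)) / 2; the smallest mean wins."""
    c1, c2 = two_smallest(costs)
    return (c1 + c2) / 2

row = [25, 30, 20, 40, 45, 22]
print(vam_penalty(row))   # 2
print(mean_minima(row))   # 21.0
```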

3.3. Computational Complexity Analysis

At each allocation step, the algorithm requires identifying the two smallest elements in each active row and column. This operation has linear complexity in the number of active cells per iteration. Since exactly $m + n - 1$ allocations are performed in constructing an IBFS, the overall worst-case computational complexity remains comparable to classical VAM, namely $O(mn)$ per iteration and
$O(mn(m + n))$ overall.
Because the allocation structure is identical to VAM and only the penalty computation differs, the proposed algorithm does not introduce additional asymptotic computational burden.

4. Results and Discussions

4.1. The New Algorithm for Finding Initial Basic Feasible Solutions (IBFS) of Transportation Problems

The following are the steps involved in the new algorithm.

4.1.1. Step 1: Balance Check

If the total supply does not equal total demand, the problem must be balanced. This condition is expressed as
$\sum_{i=1}^{m} a_i = \sum_{j=1}^{n} b_j$
If this balance condition does not hold, a dummy row (for excess demand) or dummy column (for excess supply) with zero transportation costs is introduced to balance the problem.
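As a minimal sketch (function and variable names are ours), the balancing step can be implemented as:

```python
def balance(costs, supply, demand):
    """Add a zero-cost dummy row or column so total supply equals total demand."""
    total_s, total_d = sum(supply), sum(demand)
    costs = [row[:] for row in costs]          # work on a copy
    if total_s > total_d:                      # excess supply -> dummy column
        for row in costs:
            row.append(0)
        demand = list(demand) + [total_s - total_d]
    elif total_d > total_s:                    # excess demand -> dummy row
        costs.append([0] * len(demand))
        supply = list(supply) + [total_d - total_s]
    return costs, list(supply), list(demand)

# Supply 70 vs demand 60: a dummy column with demand 10 is appended.
c, s, d = balance([[4, 6], [5, 8]], [30, 40], [50, 10])
```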

4.1.2. Step 2: Row and Column Fractional Penalties

For each active row i, determine the two smallest costs among the admissible cells. These are defined as
$m_i^{(1)} = \min_j c_{ij}$
and
$m_i^{(2)} = \text{second-smallest of } \{ c_{ij} \}$
The fractional penalty for row i is then defined as
$p_i = \dfrac{m_i^{(2)}}{m_i^{(1)}}$
Similarly, for each active column j, the two smallest costs are obtained and the column penalty is defined as
$q_j = \dfrac{m_j^{(2)}}{m_j^{(1)}}$
If $m^{(1)} = 0$, the corresponding line is treated as having very large priority to avoid division by zero.
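A direct Python rendering of the line penalty with the zero-cost safeguard (the infinite sentinel is our implementation choice for "very large priority"):

```python
BIG = float("inf")  # sentinel priority for lines whose smallest cost is zero

def fractional_penalty(costs):
    """p = m(2) / m(1) over the admissible costs of one active line."""
    s = sorted(costs)
    m1, m2 = s[0], s[1]
    if m1 == 0:
        return BIG          # zero-cost safeguard: maximal priority
    return m2 / m1

assert fractional_penalty([20, 25, 40]) == 1.25
assert fractional_penalty([0, 5, 9]) == BIG
```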

4.1.3. Step 3: Selection and Allocation

The row or column with the largest penalty is selected according to
$P = \max\{ p_i, q_j \}$
Within the selected row or column, the minimum-cost cell $(i, j)$ is chosen and the allocation is computed as
$x_{ij} = \min(a_i, b_j)$
The supply and demand values are then updated accordingly.

4.1.4. Step 4: Iteration

After each allocation, the penalties are recomputed for the reduced tableau and Steps 2–3 are repeated until all supplies and demands are satisfied.

4.1.5. Step 5: IBFS Cost

The total cost of the obtained initial basic feasible solution is computed as
$Z_{\text{IBFS}} = \sum_{i=1}^{m} \sum_{j=1}^{n} c_{ij} x_{ij}$
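For concreteness, Steps 1 to 5 can be combined into one routine. The sketch below is our own minimal Python rendering (the paper's implementation is in MATLAB); it assumes a balanced problem, uses simple ascending-index tie-breaking, and omits the $\varepsilon$ degeneracy bookkeeping described in Section 3.1:

```python
def fractional_vam(costs, supply, demand):
    """IBFS via the fractional (ratio-based) penalty; returns (allocation, cost).

    Simplified sketch: balanced problem assumed, no epsilon bookkeeping.
    """
    supply, demand = list(supply), list(demand)
    rows, cols = set(range(len(supply))), set(range(len(demand)))
    alloc = {}

    def penalty(line):
        s = sorted(line)
        if len(s) < 2:                       # only one admissible cell left
            return 0.0
        return float("inf") if s[0] == 0 else s[1] / s[0]

    while rows and cols:
        # Step 2: fractional penalties of all active rows and columns.
        cand = [("row", i, penalty([costs[i][j] for j in cols])) for i in sorted(rows)]
        cand += [("col", j, penalty([costs[i][j] for i in rows])) for j in sorted(cols)]
        kind, k, _ = max(cand, key=lambda t: t[2])   # Step 3: largest penalty
        # Minimum-cost cell inside the selected line.
        if kind == "row":
            i, j = k, min(sorted(cols), key=lambda col: costs[k][col])
        else:
            j, i = k, min(sorted(rows), key=lambda row: costs[row][k])
        x = min(supply[i], demand[j])                # Step 3: allocation
        alloc[(i, j)] = alloc.get((i, j), 0) + x
        supply[i] -= x
        demand[j] -= x
        # Step 4: cross out one satisfied line per iteration.
        if supply[i] == 0:
            rows.discard(i)
        elif demand[j] == 0:
            cols.discard(j)

    # Step 5: IBFS cost.
    z = sum(costs[i][j] * x for (i, j), x in alloc.items())
    return alloc, z
```

On a small balanced 2 x 3 instance of our own, `fractional_vam([[4, 1, 3], [6, 5, 2]], [40, 30], [20, 30, 20])` allocates all 70 units across 4 = m + n - 1 basic cells.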

4.2. Monotonicity Property of the Fractional Penalty

The ratio-based penalty also exhibits a monotonicity property that clarifies its prioritization behavior relative to the smallest admissible cost in a row or column.
Lemma 1. 
Let $m^{(1)}$ and $m^{(2)}$ denote the smallest and second-smallest admissible costs in an active row (or column), with $m^{(1)} > 0$ and $m^{(2)} \ge m^{(1)}$. Define the fractional penalty
$p = \dfrac{m^{(2)}}{m^{(1)}}.$
Then the penalty p is strictly increasing in $m^{(2)}$ and strictly decreasing in $m^{(1)}$.
Proof.  
Consider the penalty function $p(m^{(1)}, m^{(2)}) = \dfrac{m^{(2)}}{m^{(1)}}$ for $m^{(1)} > 0$. The partial derivatives are
$\dfrac{\partial p}{\partial m^{(2)}} = \dfrac{1}{m^{(1)}} > 0,$
and
$\dfrac{\partial p}{\partial m^{(1)}} = -\dfrac{m^{(2)}}{(m^{(1)})^2} < 0.$
Thus, increasing the second-smallest admissible cost increases the penalty, while increasing the smallest admissible cost decreases the penalty. Consequently, rows or columns with stronger proportional cost contrast receive higher priority in the allocation process.    □
This monotonicity property provides a theoretical justification for the prioritization mechanism of the fractional penalty. In contrast to the additive penalty used in classical Vogel’s Approximation Method, the ratio-based formulation emphasizes proportional cost dispersion, thereby capturing relative dominance among competing allocation options.

4.3. Demonstration of the New Algorithm

To illustrate the procedure, we consider a transportation problem adapted from standard literature [27]. The cost matrix, supplies, and demands are given in Table 1.
Since total supply equals total demand (105), the problem is balanced.

4.3.1. Iteration 1

The computed row and column penalties for the first iteration are presented in Table 2.
$p_i = \dfrac{m_i^{(2)}}{m_i^{(1)}}, \qquad q_j = \dfrac{m_j^{(2)}}{m_j^{(1)}}$
The maximum penalty occurs in column $D_5$. The minimum cost in column $D_5$ is $c_{45} = 30$.
$x_{45} = \min(14, 20) = 14$
Row $P_4$ is exhausted; the demand of $D_5$ reduces to 6.

4.3.2. Iteration 2

Recompute penalties on the reduced tableau.
A tie occurs between row $P_1$ and column $D_2$. Using the deterministic row-first rule, select row $P_1$. The minimum cost in row $P_1$ is at $D_3$:
$x_{13} = \min(37, 15) = 15$
Column $D_3$ is satisfied; the supply of $P_1$ reduces to 22.

4.3.3. Iteration 3

Row $P_2$ has the largest penalty. The minimum cost in row $P_2$ is at $D_6$:
$x_{26} = \min(22, 10) = 10$
Column $D_6$ is satisfied; the supply of $P_2$ reduces to 12.

4.3.4. Iteration 4

Row $P_3$ has the largest penalty. The minimum cost in row $P_3$ is at $D_2$:
$x_{32} = \min(32, 20) = 20$
Column $D_2$ is satisfied; the supply of $P_3$ reduces to 12.

4.3.5. Iteration 5

Row $P_1$ now has the largest penalty. The minimum cost in row $P_1$ is at $D_1$:
$x_{11} = \min(22, 15) = 15$
Column $D_1$ is satisfied; the supply of $P_1$ reduces to 7.

4.3.6. Iteration 6

Row $P_2$ has the largest penalty. The minimum cost in row $P_2$ is at $D_4$:
$x_{24} = \min(12, 25) = 12$
Row $P_2$ is exhausted; the demand of $D_4$ reduces to 13.

4.3.7. Iteration 7

Row $P_3$ has the largest penalty. The minimum cost in row $P_3$ is at $D_4$:
$x_{34} = \min(12, 13) = 12$
Row $P_3$ is exhausted; the demand of $D_4$ reduces to 1.

4.3.8. Iteration 8

Only row $P_1$ remains active.
$x_{14} = \min(7, 1) = 1$
Column $D_4$ is satisfied; the supply of $P_1$ reduces to 6.

4.3.9. Iteration 9

Final allocation:
$x_{15} = 6$
All supplies and demands are satisfied.
Final Initial Basic Feasible Solution
$X = \begin{pmatrix} 15 & 0 & 15 & 1 & 6 & 0 \\ 0 & 0 & 0 & 12 & 0 & 10 \\ 0 & 20 & 0 & 12 & 0 & 0 \\ 0 & 0 & 0 & 0 & 14 & 0 \end{pmatrix}$
Number of allocations:
$m + n - 1 = 4 + 6 - 1 = 9$
Hence, the solution is non-degenerate.
IBFS Cost
$Z_{\text{IBFS}} = 25(15) + 20(15) + 40(1) + 45(6) + 30(12) + 20(10) + 20(20) + 35(12) + 30(14)$
Thus, the fractional-penalty algorithm produces an initial solution with total transportation cost:
$Z_{\text{IBFS}} = 2785.$
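The reported total can be verified directly from the basic cells of the final allocation, using the unit costs and quantities listed above (a quick check in Python):

```python
# Basic cells of the worked example: (unit cost, allocated quantity) pairs.
basic_cells = [(25, 15), (20, 15), (40, 1), (45, 6),
               (30, 12), (20, 10), (20, 20), (35, 12), (30, 14)]
z = sum(c * x for c, x in basic_cells)
assert z == 2785                       # matches the Z_IBFS reported above
assert len(basic_cells) == 4 + 6 - 1   # non-degenerate: m + n - 1 = 9 cells
```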
To further substantiate performance consistency, we computed the mean, median, and standard deviation of optimality gaps across all benchmark instances. Additionally, paired comparisons between VAM and the new method were conducted to determine whether improvements were systematic.

4.4. Optimality Gap Analysis

To evaluate the practical relevance of the new method, we compare the IBFS costs produced by each heuristic against the known optimal solution cost $Z^{*}$ obtained via standard optimality procedures.
For each benchmark instance, we compute the percentage optimality gap as:
$\text{Gap}(\%) = \dfrac{Z_{\text{IBFS}} - Z^{*}}{Z^{*}} \times 100$
This measure provides a normalized assessment of solution quality independent of cost magnitude. Lower values indicate closer proximity to optimality.
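A one-line Python helper makes the gap computation explicit (function name is ours):

```python
def optimality_gap(z_ibfs, z_opt):
    """Percentage optimality gap: (Z_IBFS - Z*) / Z* * 100."""
    return (z_ibfs - z_opt) / z_opt * 100

assert optimality_gap(2785, 2785) == 0.0          # IBFS already optimal
assert round(optimality_gap(105, 100), 2) == 5.0  # 5% above optimum
```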
Across the benchmark set, the fractional-penalty variant frequently exhibits equal or smaller gaps compared to VAM, LCM, and NWCM. While improvements are moderate in magnitude for some instances, they are consistent across several categories of cost structures.

4.5. Comparison of the New Algorithm with Existing Algorithms in Terms of Solutions

Tables 3–8 compare the initial basic feasible solution costs produced by the new algorithm with those obtained by VAM, NWCM, and LCM across the benchmark instances.

4.6. Statistical Validation of Comparative Results

To determine whether observed performance differences are systematic rather than incidental, aggregated statistics were computed across all benchmark instances. These include the mean, median, maximum, and standard deviation of optimality gaps for each heuristic.
Additionally, paired statistical tests were conducted between VAM and the proposed method. Both a paired t-test and a Wilcoxon signed-rank test were applied to the set of instance-wise gap differences.
The results indicate that the fractional-penalty variant achieves statistically significant improvements over classical VAM at conventional confidence levels in several benchmark subsets. However, in certain cases performance remains equivalent, confirming that the proposed method should be interpreted as a structured refinement rather than a universal dominance mechanism.

Reference Optimal Values

For each instance, optimality gaps were computed relative to the best-known optimal solution obtained via the New Optimal Method, Stepping-Stone, or MODI procedures. In cases where identical values were produced by multiple methods, the minimum verified solution value was used as the reference benchmark.

4.7. Interpretation of Aggregated Optimality Gap Statistics

Table 9 reports aggregated optimality-gap statistics computed across the thirty-four benchmark instances. For each instance, the optimality gap is defined by Equation (28), where $Z^{*}$ denotes the optimal transportation cost obtained by the optimality procedures (Stepping-Stone/MODI).
The Northwest Corner Method (NWCM) produces the largest deviations from optimality, with a mean gap of 45.78% and a maximum gap of 403.33%. This confirms the well-known limitation of NWCM as a feasibility-driven rule that does not exploit cost information.
The Least Cost Method (LCM) improves substantially over NWCM, reducing the mean gap to 14.97% and the median to 2.57%. However, its dispersion remains relatively high (standard deviation 29.07%), indicating sensitivity to problem structure and allocation order.
Vogel’s Approximation Method (VAM) further tightens the initial solution quality, achieving a mean gap of 5.22% and a median gap of 0.00%. The zero median indicates that at least half of the instances produce VAM initial costs equal to the optimal value.
The proposed fractional-penalty variant yields the smallest overall deviation, with a mean gap of 2.78%, median gap of 0.00%, and lower variability (standard deviation 5.89%) than the other heuristics. This suggests that the proposed penalty refinement improves both average performance and stability across heterogeneous cost structures.
Although the maximum gap for the proposed method reaches 31.25% on the most challenging instance, the aggregated statistics consistently indicate closer-to-optimal initial solutions than NWCM, LCM, and classical VAM under the same benchmark set.

4.8. Wilcoxon Signed-Rank Test Formulation

To assess whether the fractional-penalty variant produces systematically smaller optimality gaps than classical VAM, a paired Wilcoxon signed-rank test is conducted on the instance-wise optimality gaps.
Let
$d_i = \text{Gap}_{\text{Proposed},i} - \text{Gap}_{\text{VAM},i}, \qquad i = 1, 2, \dots, n$
where $n = 34$ is the number of benchmark instances. Differences with $d_i = 0$ are discarded, leaving $n_0$ nonzero paired differences.
Rank the absolute nonzero differences $|d_i|$ in ascending order and denote the rank of $|d_i|$ by $R_i$ (average ranks are used in case of ties). Define
$W^{+} = \sum_{d_i > 0} R_i, \qquad W^{-} = \sum_{d_i < 0} R_i$
The Wilcoxon signed-rank test statistic is commonly taken as
$T = \min(W^{+}, W^{-})$
For large samples, a normal approximation can be used:
$Z = \dfrac{W^{+} - \mu_W}{\sigma_W}$
where
$\mu_W = \dfrac{n_0(n_0+1)}{4}, \qquad \sigma_W = \sqrt{\dfrac{n_0(n_0+1)(2n_0+1)}{24}}$
The hypotheses are
$H_0: \operatorname{median}(d_i) = 0 \quad \text{vs.} \quad H_1: \operatorname{median}(d_i) < 0$
where $H_1$ corresponds to the new method having smaller gaps than VAM.
Using the 34 benchmark instances, the Wilcoxon signed-rank test (one-sided; Proposed < VAM) yielded $p = 6.10 \times 10^{-5}$, indicating a statistically significant reduction in optimality gaps for the new method.
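For readers who wish to reproduce the procedure, the normal-approximation formulation above can be coded directly from the formulas using only the standard library. The sample differences in the demonstration are illustrative; they are not the paper's 34 instance-wise gaps:

```python
import math

def wilcoxon_normal_approx(d):
    """One-sided Wilcoxon signed-rank p-value (normal approx., H1: median < 0).

    Follows the formulation in the text: zero differences are discarded,
    average ranks are used for ties, and Z = (W+ - mu_W) / sigma_W.
    """
    d = [x for x in d if x != 0]                 # discard zero differences
    n0 = len(d)
    # Assign average ranks to |d_i| in ascending order.
    order = sorted(range(n0), key=lambda i: abs(d[i]))
    ranks = [0.0] * n0
    i = 0
    while i < n0:
        j = i
        while j + 1 < n0 and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1                    # average rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for r, x in zip(ranks, d) if x > 0)
    mu = n0 * (n0 + 1) / 4
    sigma = math.sqrt(n0 * (n0 + 1) * (2 * n0 + 1) / 24)
    z = (w_plus - mu) / sigma
    # One-sided p-value P(Z <= z) via the standard normal CDF.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Predominantly negative differences give a small one-sided p-value.
p = wilcoxon_normal_approx([-3, -2, -5, -1, -4, 1, -2.5, -6, -1.5, -3.5])
```

For the small sample here the normal approximation is rough; for $n_0$ near 34, as in the paper, it is standard practice.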

4.9. Paired t-Test Formulation

To further evaluate whether the proposed fractional-penalty variant yields smaller optimality gaps than classical VAM, a paired t-test is conducted.
Let
$d_i = \text{Gap}_{\text{Proposed},i} - \text{Gap}_{\text{VAM},i}, \qquad i = 1, 2, \dots, n$
where $n = 34$. The sample mean of the paired differences is
$\bar{d} = \dfrac{1}{n} \sum_{i=1}^{n} d_i$
and the sample standard deviation of the paired differences is
$s_d = \sqrt{\dfrac{1}{n-1} \sum_{i=1}^{n} (d_i - \bar{d})^2}$
The paired t statistic is
$t = \dfrac{\bar{d}}{s_d / \sqrt{n}}$
Under the null hypothesis
$H_0: \mu_d = 0$
the statistic follows a t distribution with $n - 1$ degrees of freedom. The directional alternative hypothesis is
$H_1: \mu_d < 0$
corresponding to the new method having smaller gaps than VAM.
Using the 34 benchmark instances, the paired differences in optimality gaps yielded $\bar{d} = -2.44\%$ with standard deviation $s_d = 4.49\%$. The resulting statistic was $t = -3.17$ with 33 degrees of freedom. The one-sided p-value (Proposed < VAM) was $p = 0.00163$ (two-sided $p = 0.00327$), indicating a statistically significant improvement at the 5% level.
Normality of the paired differences was inspected prior to the t-test; because mild departures from normality cannot be ruled out, both parametric and non-parametric tests were reported.
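The reported statistic follows directly from the summary values ($\bar{d} = -2.44$, $s_d = 4.49$, $n = 34$; the negative sign reflects the proposed method's smaller gaps); a two-line check:

```python
import math

d_bar, s_d, n = -2.44, 4.49, 34      # summary statistics reported in the text
t = d_bar / (s_d / math.sqrt(n))     # paired t statistic
assert round(t, 2) == -3.17          # matches the reported value
```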

4.10. Graphical Comparison of Optimality Gaps

To further illustrate the comparative performance of the methods, we present a graphical representation of the optimality gaps across all benchmark instances as shown in Figure 1.
The mean optimality gaps for each heuristic are presented in Figure 2.

4.11. Benchmark Characteristics

The benchmark dataset comprises thirty-four transportation problems drawn from the literature and supplemented with self-generated cases. Problem sizes range from small (3 × 3) to medium and larger instances (up to 10 × 20).
Cost matrices exhibit diverse structural patterns, including:
  • Uniformly distributed costs,
  • Highly skewed cost distributions,
  • Clustered near-equal costs,
  • Instances containing zero or near-zero costs.
This diversity enables evaluation of how multiplicative penalties behave under varying dispersion conditions.

4.12. Comparison of the Computational Speed of the New Algorithm with VAM

The New Algorithm has been compared with Vogel’s Approximation Method (VAM). All computational experiments were conducted using MATLAB R2022b on a computer equipped with an Intel Core i7 processor (3.20 GHz) and 16 GB RAM, running Windows 10 (64-bit). An elapsed time counter was incorporated into the implementation. Each algorithm was executed five times per dataset, and the average execution time was recorded to enhance reproducibility. Tables 10–14 present the results of this comparison.

5. Theoretical Justification of the Fractional Penalty

This section provides analytical clarification regarding the structural properties of the fractional penalty and its relationship to classical VAM.

5.1. Scale Invariance Property

Unlike the additive penalty used in VAM, the proposed ratio-based penalty possesses a scale-invariance property, which is formally stated below.
Proposition 1. 
Let all transportation costs be multiplied by a positive scalar $\alpha > 0$. The fractional penalty
$p_i = \dfrac{m_i^{(2)}}{m_i^{(1)}}$
remains unchanged under this transformation.
Proof.  
Let the transformed costs be
$\tilde{c}_{ij} = \alpha c_{ij}.$
Then the smallest and second-smallest costs in any active row become
$\tilde{m}_i^{(1)} = \alpha m_i^{(1)}, \qquad \tilde{m}_i^{(2)} = \alpha m_i^{(2)}.$
Hence, the transformed penalty becomes
$\tilde{p}_i = \dfrac{\tilde{m}_i^{(2)}}{\tilde{m}_i^{(1)}} = \dfrac{\alpha m_i^{(2)}}{\alpha m_i^{(1)}} = \dfrac{m_i^{(2)}}{m_i^{(1)}} = p_i.$
Therefore, the ratio-based penalty is invariant under positive cost scaling.    □
This property ensures prioritization stability across heterogeneous cost magnitudes, a characteristic not shared by additive difference penalties.
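A quick numerical illustration of Proposition 1 (the costs and the scale factor are arbitrary choices of ours): the ratio penalty is unchanged under scaling, while the additive VAM penalty is multiplied by $\alpha$.

```python
import math

def fractional_penalty(line):
    s = sorted(line)
    return s[1] / s[0]

row = [20, 25, 40, 45]
alpha = 7.3
scaled = [alpha * c for c in row]

# Ratio penalty is scale-invariant: 25/20 before and after scaling.
assert math.isclose(fractional_penalty(row), fractional_penalty(scaled))
# Additive penalty scales with alpha: (25 - 20) becomes alpha * 5.
assert math.isclose(sorted(scaled)[1] - sorted(scaled)[0], alpha * (25 - 20))
```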

5.2. Structural Relationship to VAM

The proposed method preserves the allocation structure of VAM. The only modification lies in the prioritization criterion.
Proposition 2. 
If the ranking of rows and columns induced by the ratio-based penalty coincides with the ranking induced by the additive penalty, then the proposed method generates the same IBFS as VAM.
Proof. 
If both penalties produce identical ordering of lines at every iteration, then the selected allocation sequence is identical. Since allocation feasibility rules remain unchanged, the final IBFS coincides with that of VAM.    □
This explains why identical solutions are sometimes observed in benchmark results. Improvements occur precisely when multiplicative contrast alters prioritization order.

5.3. Degeneracy Handling

As with classical VAM, degeneracy occurs when the number of allocations is fewer than $m + n - 1$. In such cases, an $\varepsilon$-allocation is inserted in a zero-cost admissible cell to maintain the required number of basic variables without disturbing feasibility.
Across the 34 benchmark instances, degeneracy occurred in only a limited subset of cases, and handling procedures were identical for VAM and the proposed method.

5.4. Dominance Considerations: Additive vs. Multiplicative Ordering

It is natural to ask under what conditions the multiplicative penalty produces a different prioritization order from the classical additive penalty used in VAM.
Let two candidate rows i and k have smallest and second-smallest admissible costs
$(m_i^{(1)}, m_i^{(2)})$ and $(m_k^{(1)}, m_k^{(2)}).$
The additive penalties are
$\Delta_i = m_i^{(2)} - m_i^{(1)}, \qquad \Delta_k = m_k^{(2)} - m_k^{(1)},$
while the multiplicative penalties are
$p_i = \dfrac{m_i^{(2)}}{m_i^{(1)}}, \qquad p_k = \dfrac{m_k^{(2)}}{m_k^{(1)}}.$
If the relative dispersion between the two smallest costs is proportional across rows, i.e., if
$m_i^{(2)} = \lambda m_i^{(1)}, \qquad m_k^{(2)} = \lambda m_k^{(1)}$
for the same $\lambda$, then both additive and multiplicative penalties induce identical rankings.
However, when proportional contrasts differ across rows—particularly in heterogeneous cost environments—the ratio-based penalty may alter prioritization even when additive differences are similar. In such cases, multiplicative ordering emphasizes relative dominance rather than absolute deviation.
Therefore, the new method does not universally dominate VAM; rather, it modifies prioritization when proportional dispersion differs from additive dispersion. This explains why identical IBFS outcomes are sometimes observed, while improvements arise in other benchmark categories.
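A small numerical example of our own construction shows the two orderings diverging:

```python
def additive(m1, m2):
    """Additive VAM regret for a line with two smallest costs m1 <= m2."""
    return m2 - m1

def multiplicative(m1, m2):
    """Fractional (ratio-based) penalty for the same line."""
    return m2 / m1

row_i = (1, 3)     # small costs, strong proportional contrast
row_k = (10, 13)   # larger costs, larger absolute difference

# Additive regret prioritizes row k; the ratio penalty prioritizes row i.
assert additive(*row_k) > additive(*row_i)              # 3 > 2
assert multiplicative(*row_i) > multiplicative(*row_k)  # 3.0 > 1.3
```

Here the additive rule would allocate in row k first, while the fractional rule favors row i, where the cheapest cost dominates its alternative by a factor of three.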

5.5. Discussions

The comparative analysis across thirty-four datasets shows that the fractional-penalty algorithm frequently matches or improves upon the IBFS cost obtained by NWCM, LCM, and VAM. Improvements are particularly noticeable in instances where cost structures exhibit proportional clustering or multiple near-equal minima.
The computational experiments indicate that the fractional-penalty prioritization can improve the quality of initial basic feasible solutions in several heterogeneous cost environments. Although the structural framework of the method remains consistent with classical VAM, the multiplicative penalty emphasizes proportional dispersion between competing costs rather than absolute cost differences. This distinction becomes particularly relevant when transportation costs vary significantly across rows or columns. In such cases, the ratio-based prioritization may alter allocation ordering and lead to improved initial solutions. At the same time, the proposed method retains the simplicity, transparency, and computational efficiency of classical IBFS procedures, making it suitable for practical implementation.
However, the magnitude of improvement varies across datasets. In several benchmark cases, the proposed method yields identical results to VAM, indicating structural equivalence in those instances. Runtime differences are modest and should be interpreted cautiously due to microsecond-level variability inherent in MATLAB timing measurements.
Overall, the results suggest that the fractional penalty provides an alternative prioritization mechanism that can enhance IBFS quality without increasing asymptotic complexity.

5.6. Sensitivity and Boundary Conditions

The fractional penalty behaves similarly to VAM when cost dispersion within rows/columns is nearly uniform. However, when proportional contrast between the smallest two costs is high, the ratio-based penalty can alter prioritization order.
If m^(1) approaches zero, special care is required to avoid numerical instability; in the implementation, zero-cost safeguards were included.
These observations clarify that the proposed method is best interpreted as a structured refinement whose effectiveness depends on cost dispersion characteristics.
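A minimal sketch of such a safeguard (the constant EPS is an assumption; the paper does not state its exact value) bounds the denominator away from zero so that a zero-cost row or column receives a very large, finite penalty and is therefore allocated first:

```python
import math

EPS = 1e-9  # hypothetical safeguard constant, not taken from the paper

def fractional_penalty(costs, eps=EPS):
    # Contrast ratio m^(2)/m^(1) with a zero-cost safeguard in the denominator.
    m1, m2 = sorted(costs)[:2]
    return m2 / max(m1, eps)

# Ordinary case: two smallest costs are 2 and 4, so the penalty is 2.0.
assert fractional_penalty([4, 2, 8]) == 2.0
# Zero-cost case: the penalty stays finite but very large, so the
# zero-cost line is (sensibly) prioritized first.
assert math.isfinite(fractional_penalty([0, 5, 7]))
```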

6. Conclusions

This study introduced a ratio-based fractional penalty refinement of Vogel’s Approximation Method for constructing initial basic feasible solutions to transportation problems. By redefining the penalty as a proportional contrast measure rather than an additive difference, the method preserves the constructive allocation structure of VAM while offering scale-invariant prioritization.
Extensive benchmark testing across thirty-four datasets shows that the proposed algorithm produces IBFS costs equal to or lower than those of the classical heuristics in the large majority of cases. Statistical evaluation confirms that the improvements are systematic rather than incidental.
The method retains the same computational complexity as VAM and incorporates deterministic tie-breaking and degeneracy safeguards to ensure reproducibility. Future research may investigate hybrid additive–multiplicative penalty combinations, theoretical dominance conditions, and large-scale industrial datasets. The new method should therefore be interpreted as a structured penalty refinement within the VAM framework, supported by analytical properties and empirical validation, rather than as a paradigm replacement.
Additional computational details, extended results, and MATLAB R2022b implementations are provided in Appendix A, Appendix B, Appendix C and Appendix D.

Author Contributions

Conceptualization, D.K.B., S.A.F., and C.J.E.; Methodology, D.K.B., S.A.F., and C.J.E.; Software, S.A.F.; Validation, D.K.B., S.A.F., and C.J.E.; Formal Analysis, D.K.B., S.A.F., and C.J.E.; Investigation, D.K.B., S.A.F., and C.J.E.; Resources, D.K.B., S.A.F., and C.J.E.; Data Curation, D.K.B., S.A.F., and C.J.E.; Writing—Original Draft, D.K.B. and S.A.F.; Writing—Review and Editing, S.A.F. and C.J.E.; Visualization, D.K.B. and S.A.F.; Supervision, D.K.B. and C.J.E.; Project Administration, D.K.B., S.A.F., and C.J.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. MATLAB Codes for the New Algorithm

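The published MATLAB listings appear only as images; the following is a Python sketch of the procedure as described in the paper, not the authors' code. It follows classical VAM but scores each row/column by the contrast ratio m^(2)/m^(1) instead of the additive regret; the tie-breaking order (rows before columns) and the zero-cost safeguard constant are assumptions.

```python
import numpy as np

def fractional_vam(C, supply, demand, eps=1e-9):
    """Fractional-penalty VAM sketch for a balanced transportation problem."""
    C = np.asarray(C, dtype=float)
    s, d = list(map(float, supply)), list(map(float, demand))
    alloc = np.zeros_like(C)
    rows, cols = list(range(C.shape[0])), list(range(C.shape[1]))

    def ratio(vals):
        # Contrast ratio of the two smallest admissible costs,
        # with a zero-cost safeguard in the denominator.
        v = sorted(vals)
        return v[1] / max(v[0], eps)

    while rows and cols:
        if len(rows) == 1 or len(cols) == 1:
            # Only one line left: fill its remaining cells in least-cost order.
            i, j = min(((i, j) for i in rows for j in cols),
                       key=lambda ij: C[ij])
        else:
            r = max(rows, key=lambda i: ratio(C[i, cols]))
            c = max(cols, key=lambda j: ratio(C[rows, j]))
            if ratio(C[r, cols]) >= ratio(C[rows, c]):  # assumed tie-break
                i, j = r, min(cols, key=lambda j: C[r, j])
            else:
                i, j = min(rows, key=lambda i: C[i, c]), c
        q = min(s[i], d[j])          # allocate as much as possible
        alloc[i, j] += q
        s[i] -= q
        d[j] -= q
        if s[i] == 0:
            rows.remove(i)
        if d[j] == 0:
            cols.remove(j)
    return alloc

# Check against Example 11 of Table 4 ([31]), where the paper reports 1200.
C = [[11, 9, 6], [12, 14, 11], [10, 8, 10]]
A = fractional_vam(C, [40, 50, 40], [55, 45, 30])
assert float((A * np.asarray(C)).sum()) == 1200.0
```

On this instance the sketch reproduces the reported IBFS cost of 1200; on other instances the exact allocation may differ from the authors' implementation wherever the assumed tie-breaking rule differs.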

Appendix B. MATLAB Codes for the Vogel’s Approximation Method (VAM)


Appendix C. MATLAB Codes for the Least Cost Method (LCM)


Appendix D. MATLAB Codes for the North-West Corner Method (NWCM)


References

1. Hitchcock, F.L. The distribution of a product from several sources to numerous localities. J. Math. Phys. 1941, 20, 224–230.
2. Dantzig, G.B. Linear Programming and Extensions; Princeton University Press: Princeton, NJ, USA, 1951.
3. Charnes, A.; Cooper, W.W. The stepping-stone method of explaining linear programming calculations in transportation problems. Manag. Sci. 1954, 1, 49–69.
4. Vogel, W.R. A method for finding initial solutions of the transportation problem. Northwestern Univ. Ind. Eng. Bull. 1958, 3, 1–15.
5. Russell, R.A. An effective heuristic for the transportation problem. Oper. Res. 1969, 17, 187–191.
6. Wireko, F.A.; Mensah, I.D.K.; Aborhey, E.N.A.; Appiah, S.A.; Sebil, C.; Ackora-Prah, J. The maximum range method for finding IBFS for transportation problems. Results Control Optim. 2025, 19, 100551.
7. Abdelati, M. A new approach for finding an initial basic feasible solution. J. Adv. Eng. Trends 2024, 43, 77–85.
8. Amreen, M.; Venkateswarlu, B. A new strategy using exponential distribution for IBFS. Baghdad Sci. J. 2025, 22, 23.
9. Ekanayake, D.; Ekanayake, U. A novel approach algorithm for determining the initial basic feasible solution. Indones. J. Innov. Appl. Sci. 2022, 2, 234–246.
10. Winston, W.L.; Goldberg, J.B. Operations Research: Applications and Algorithms, 5th ed.; Cengage: Boston, MA, USA, 2022.
11. Taha, H.A. Operations Research: An Introduction, 10th ed.; Pearson: London, UK, 2017.
12. Hillier, F.S.; Lieberman, G.J. Introduction to Operations Research, 10th ed.; McGraw-Hill: Columbus, OH, USA, 2015.
13. Hasan, M. On initial feasible solutions for the transportation problem. Int. J. Math. Oper. Res. 2012, 4, 357–369.
14. Shaikh, A.; Pathan, R.; Khan, S. Hybrid initial solution strategies for the transportation problem. J. Appl. Res. Ind. Eng. 2018, 5, 274–284.
15. Sabir, M. Optimal solutions to transportation problems: A case-based analysis. Int. J. Logist. Syst. 2009, 4, 211–224.
16. Sen, D.; Pal, S.; Roy, S. Large-scale cost matrices and industrial logistics planning. Int. J. Ind. Eng. Comput. 2020, 11, 423–440.
17. Dnyaneshwar, D.; Shinde, P.; Patil, S. Exact and heuristic approaches for transportation problems: A survey. Int. J. Oper. Res. 2016, 13, 77–88.
18. Bazaraa, M.S.; Jarvis, J.J.; Sherali, H.D. Linear Programming and Network Flows, 4th ed.; Wiley: Hoboken, NJ, USA, 2010.
19. Rezaul, K.M.; Rahman, M.; Islam, S. On improved initial solutions and optimality tests in TP. J. Appl. Math. Comput. 2021, 67, 201–222.
20. Murugesan, G.; Esakkiammal, C. On efficient optimality checks for the transportation problem. Int. J. Sci. Technol. Res. 2019, 8, 1202–1208.
21. Sanaullah, M.; Fatichah, C.; Erma, S. Total opportunity cost matrix: A new approach to determine IBFS. Egypt. Inform. J. 2019, 20, 131–141.
22. Sharma, J.K. Operations Research: Theory and Applications, 5th ed.; Macmillan: New York, NY, USA, 2014.
23. Pandian, P.; Natarajan, G. A new method for finding an optimal solution for transportation problems. Int. J. Math. Arch. 2012, 3, 1831–1834.
24. Balakrishnan, N. Modified Vogel's approximation method for transportation problems. Appl. Math. Lett. 1990, 3, 1–3.
25. Aliyu, M.L.; Usman, U.; Babayaro, Z.; Aminu, M. Minimization of transportation cost in a multi-source network. Am. J. Oper. Res. 2019, 9, 1–7.
26. Rardin, R.L. Optimization in Operations Research; Prentice Hall: Upper Saddle River, NJ, USA, 1998.
27. Koruloğlu, S.; Ballı, S. Large-scale transportation problem instances and solution characteristics. Comput. Ind. Eng. 2011, 61, 330–340.
28. Al-Sabawi, A.; Hayawi, K. A comparative study of initial solution methods for the transportation problem. J. Oper. Res. 2002, 5, 123–131.
29. Al-Badri, K.; Saleh, H. Efficient approaches for solving transportation problems. Appl. Math. Comput. 2007, 185, 123–132.
30. Hakim, A. Variants of the MODI method for faster optimality in transportation problems. Appl. Math. Sci. 2012, 6, 3621–3632.
31. Aramuthakanan, R. An empirical investigation of solution times in small transportation problems. Asian J. Math. Stat. 2013, 6, 89–96.
32. Sridhar, A.; Pitchai, R. Performance evaluation of optimality methods for large transportation instances. Int. J. Pure Appl. Math. 2018, 118, 3567–3584.
Figure 1. Distribution of optimality gaps across the 34 instances.
Figure 2. Mean optimality gaps for each heuristic.
Table 1. Transportation Tableau (BUA Cement Company Data).

        D1   D2   D3   D4   D5   D6   Supply
P1      25   30   20   40   45   37     37
P2      30   25   20   30   40   20     22
P3      40   20   40   35   45   22     32
P4      25   24   50   27   30   25     14
Demand  15   20   15   25   20   10    105
Table 2. Iteration 1: Penalty Workout (Rows and Columns).

Line     m^(1)   m^(2)   Penalty
Row P1    20      25     p1 = 25/20 = 1.25
Row P2    20      20     p2 = 20/20 = 1.00
Row P3    20      22     p3 = 22/20 = 1.10
Row P4    24      25     p4 = 25/24 ≈ 1.04
Col D1    25      25     q1 = 25/25 = 1.00
Col D2    20      24     q2 = 24/20 = 1.20
Col D3    20      20     q3 = 20/20 = 1.00
Col D4    27      30     q4 = 30/27 ≈ 1.11
Col D5    30      40     q5 = 40/30 ≈ 1.33
Col D6    20      22     q6 = 22/20 = 1.10
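The penalties in Table 2 can be recomputed directly from the Table 1 cost matrix; the short check below (a Python sketch, not the authors' MATLAB) confirms the row and column contrast ratios and that column D5 carries the sharpest contrast, so it is selected first.

```python
# Cost matrix from Table 1 (BUA Cement Company data).
C = [[25, 30, 20, 40, 45, 37],
     [30, 25, 20, 30, 40, 20],
     [40, 20, 40, 35, 45, 22],
     [25, 24, 50, 27, 30, 25]]

def contrast(vals):
    # Ratio of the two smallest costs in a row or column.
    m1, m2 = sorted(vals)[:2]
    return m2 / m1

row_p = [contrast(r) for r in C]            # p1..p4
col_p = [contrast(col) for col in zip(*C)]  # q1..q6

assert row_p[0] == 25 / 20              # p1 = 1.25
assert col_p[4] == 40 / 30              # q5 ≈ 1.33
assert max(row_p + col_p) == col_p[4]   # column D5 is prioritized first
```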
Table 3. Comparison of the New Algorithm with the Existing Algorithms in Terms of Their Solutions (Examples 1–4).

Example 1 (5 × 5, [28]):
C_ij = [4 3 1 2 6; 5 2 3 4 5; 3 5 6 3 2; 2 4 4 5 3; 4 3 6 5 1];
S_i = [65 50 40 20 25]; D_j = [60 60 30 40 10].
Costs: VAM 420; NWCM 760; LCM 420; New Algorithm 420.

Example 2 (4 × 4, [29]):
C_ij = [20 16 14 20; 9 15 16 10; 8 13 5 9; 9 6 5 11];
S_i = [9 8 7 5]; D_j = [5 10 5 9].
Costs: VAM 308; NWCM 392; LCM 308; New Algorithm 308.

Example 3 (6 × 6, [29]):
C_ij = [5 1 2 3 4 7; 7 2 3 1 5 6; 9 1 9 5 2 3; 6 5 8 4 1 4; 8 7 11 6 4 5; 2 5 7 5 2 1];
S_i = [400 500 300 150 600 350]; D_j = [300 500 700 300 250 250].
Costs: VAM 8400; NWCM 9600; LCM 8600; New Algorithm 8400.

Example 4 (3 × 4, [30]):
C_ij = [7 3 8 2; 5 6 11 12; 10 4 7 6];
S_i = [100 200 300]; D_j = [80 170 190 160].
Costs: VAM 3210; NWCM 4010; LCM 4010; New Algorithm 3210.
Table 4. Comparison of the New Algorithm with the Existing Algorithms in Terms of Their Solutions (Examples 5–12).

Example 5 (5 × 5, [31]):
C_ij = [46 74 9 28 99; 12 75 6 36 48; 35 199 4 5 71; 61 81 44 88 9; 85 60 14 25 79];
S_i = [461 277 356 488 393]; D_j = [278 60 461 116 1060].
Costs: VAM 64,499; NWCM 68,969; LCM 72,174; New Algorithm 64,499.

Example 6 (3 × 3, [32]):
C_ij = [6 8 10; 7 11 11; 4 5 12];
S_i = [150 175 275]; D_j = [200 100 300].
Costs: VAM 5125; NWCM 5925; LCM 4550; New Algorithm 4525.

Example 7 (4 × 6, [25]):
C_ij = [9 12 9 6 9 10; 7 3 7 7 5 5; 6 5 9 11 3 11; 6 8 11 2 2 10];
S_i = [5 6 2 9]; D_j = [4 4 6 2 4 2].
Costs: VAM 112; NWCM 139; LCM 114; New Algorithm 112.

Example 8 (4 × 5, [26]):
C_ij = [4 3 1 2 6; 5 2 3 4 5; 3 5 6 3 2; 2 4 4 5 3];
S_i = [80 60 40 20]; D_j = [60 60 30 40 10].
Costs: VAM 450; NWCM 670; LCM 420; New Algorithm 420.

Example 9 (4 × 4, [11]):
C_ij = [10 30 25 15; 20 15 20 10; 10 30 20 20; 30 40 35 45];
S_i = [14 10 15 12]; D_j = [10 15 12 15].
Costs: VAM 1005; NWCM 1220; LCM 1075; New Algorithm 1000.

Example 10 (4 × 6, [30]):
C_ij = [1 2 1 4 5 2; 3 3 2 1 4 3; 4 2 5 9 6 2; 3 1 7 3 4 6];
S_i = [30 50 75 20]; D_j = [20 40 30 10 50 25].
Costs: VAM 450; NWCM 740; LCM 450; New Algorithm 450.

Example 11 (3 × 3, [31]):
C_ij = [11 9 6; 12 14 11; 10 8 10];
S_i = [40 50 40]; D_j = [55 45 30].
Costs: VAM 1200; NWCM 1490; LCM 1200; New Algorithm 1200.

Example 12 (4 × 5, [7]):
C_ij = [30 50 40 60 35; 65 35 45 30 25; 35 40 60 40 30; 20 30 50 45 35];
S_i = [20 15 25 20]; D_j = [15 18 10 17 20].
Costs: VAM 2600; NWCM 3105; LCM 2675; New Algorithm 2600.
Table 5. Comparison of the New Algorithm with the Existing Algorithms in Terms of Their Solutions (Examples 13–18).

Example 13 (3 × 4, [17]):
C_ij = [15 12 10 8; 17 18 21 14; 14 15 10 21];
S_i = [24 8 18]; D_j = [11 9 21 9].
Costs: VAM 571; NWCM 760; LCM 595; New Algorithm 571.

Example 14 (3 × 5, [32]):
C_ij = [5 8 6 6 3; 4 7 7 6 5; 8 4 6 6 4];
S_i = [800 500 900]; D_j = [400 400 500 400 800].
Costs: VAM 9800; NWCM 13,100; LCM 10,200; New Algorithm 9200.

Example 15 (3 × 4, [32]):
C_ij = [3 1 7 4; 2 6 5 9; 8 3 3 2];
S_i = [300 400 500]; D_j = [250 350 400 200].
Costs: VAM 2850; NWCM 4400; LCM 2900; New Algorithm 2850.

Example 16 (3 × 5, [26]):
C_ij = [4 1 2 4 4; 2 3 2 2 2; 3 5 2 4 4];
S_i = [60 35 40]; D_j = [22 45 20 18 30].
Costs: VAM 275; NWCM 363; LCM 305; New Algorithm 273.

Example 17 (2 × 7, [12]):
C_ij = [5 19 12 70 66 74 283; 103 89 81 26 23 62 97];
S_i = [4000 47,700]; D_j = [21,600 15,600 15,600 19,500 16,800 10,500 8100].
Costs: VAM 2,332,700; NWCM 2,336,000; LCM 2,348,300; New Algorithm 1,992,700.

Example 18 (3 × 4, [23]):
C_ij = [3 48 14 2; 4 2 30 10; 36 8 12 12];
S_i = [24 24 2]; D_j = [6 12 3 44].
Costs: VAM 224; NWCM 906; LCM 308; New Algorithm 180.
Table 6. Comparison of the New Algorithm with the Existing Algorithms in Terms of Their Solutions (Examples 19–24).

Example 19 (4 × 6, [21]):
C_ij = [9 12 9 6 9 10; 7 3 7 7 5 5; 6 5 9 11 3 11; 6 8 11 2 2 10];
S_i = [5 6 2 9]; D_j = [4 4 6 2 4 2].
Costs: VAM 112; NWCM 119; LCM 114; New Algorithm 112.

Example 20 (3 × 4, [21]):
C_ij = [3 5 7 6; 2 5 8 2; 3 6 9 2];
S_i = [50 75 25]; D_j = [20 20 50 60].
Costs: VAM 650; NWCM 670; LCM 650; New Algorithm 650.

Example 21 (4 × 5, [20]):
C_ij = [10 8 9 5 13; 7 9 8 10 4; 9 3 7 10 6; 11 4 8 3 9];
S_i = [100 80 70 90]; D_j = [60 40 100 50 90].
Costs: VAM 2130; NWCM 3010; LCM 2070; New Algorithm 2070.

Example 22 (4 × 6, [8]):
C_ij = [9 12 9 6 9 10; 7 3 7 7 5 5; 6 5 9 11 3 11; 6 8 11 2 2 10];
S_i = [2 5 6 9]; D_j = [2 2 4 4 4 6].
Costs: VAM 109; NWCM 167; LCM 122; New Algorithm 109.

Example 23 (4 × 3, [22]):
C_ij = [3 4 6; 7 3 8; 6 4 5; 7 5 2];
S_i = [100 80 90 120]; D_j = [110 110 60].
Costs: VAM 840; NWCM 1010; LCM 1210; New Algorithm 840.

Example 24 (5 × 4, [16]):
C_ij = [60 120 75 180; 58 100 60 165; 62 110 65 170; 65 115 80 175; 70 135 85 195];
S_i = [8000 9200 6250 4900 6100]; D_j = [5000 2000 10,000 6000].
Costs: VAM 2,164,000; NWCM 2,398,000; LCM 2,383,000; New Algorithm 2,160,000.
Table 7. Comparison of the New Algorithm with the Existing Algorithms in Terms of Their Solutions (Examples 25–31).

Example 25 (3 × 5, [19]):
C_ij = [4 1 3 4 4; 2 3 2 2 3; 3 5 2 4 4];
S_i = [60 35 40]; D_j = [22 45 20 18 30].
Costs: VAM 305; NWCM 363; LCM 307; New Algorithm 290.

Example 26 (3 × 3, [19]):
C_ij = [6 4 1; 3 8 7; 4 4 2];
S_i = [50 40 60]; D_j = [20 95 35].
Costs: VAM 555; NWCM 710; LCM 555; New Algorithm 555.

Example 27 (5 × 5, [24]):
C_ij = [73 40 9 79 20; 62 93 96 8 13; 96 65 80 50 65; 57 58 29 12 87; 56 23 87 18 12];
S_i = [8 7 9 3 5]; D_j = [6 8 10 4 4].
Costs: VAM 1128; NWCM 1994; LCM 1123; New Algorithm 1104.

Example 28 (4 × 6, [26]):
C_ij = [25 30 20 40 45 37; 30 25 20 30 40 20; 40 20 40 35 45 22; 25 24 50 27 30 25];
S_i = [37 22 32 14]; D_j = [15 20 15 25 20 10].
Costs: VAM 2850; NWCM 3195; LCM 2878; New Algorithm 2785.

Example 29 (3 × 4, [10]):
C_ij = [8 6 10 9; 9 12 13 7; 14 9 16 5];
S_i = [35 50 40]; D_j = [45 20 30 30].
Costs: VAM 1020; NWCM 1180; LCM 1080; New Algorithm 1020.

Example 30 (2 × 3, [6]):
C_ij = [7 8 10; 9 7 8];
S_i = [50 50]; D_j = [40 40 40].
Costs: VAM 730; NWCM 730; LCM 800; New Algorithm 730.

Example 31 (5 × 10, self-constructed):
C_ij = [6 9 5 7 8 6 4 9 3 7; 7 5 8 6 9 7 5 8 6 4; 8 6 9 7 5 8 6 9 7 5; 5 8 6 9 7 5 8 6 9 7; 9 7 5 8 6 9 7 5 8 6];
S_i = [35 50 45 40 30]; D_j = [15 20 25 10 30 18 12 22 20 28].
Costs: VAM 973; NWCM 1434; LCM 973; New Algorithm 973.
Table 8. Comparison of the New Algorithm with the Existing Algorithms in Terms of Their Solutions (Examples 32–34).

Example 32 (10 × 10, self-constructed):
C_ij = [4 6 9 7 8 5 7 6 8 9; 7 4 6 9 7 8 5 7 6 8; 8 7 4 6 9 7 8 5 7 6; 6 8 7 4 6 9 7 8 5 7; 5 6 8 7 4 6 9 7 8 5; 9 5 6 8 7 4 6 9 7 8; 7 9 5 6 8 7 4 6 9 7; 6 7 9 5 6 8 7 4 6 9; 8 6 7 9 5 6 8 7 4 6; 6 8 6 7 9 5 6 8 7 4];
S_i = [30 25 40 35 20 50 25 30 20 25]; D_j = [35 20 25 30 15 40 25 35 30 45].
Costs: VAM 1285; NWCM 1590; LCM 1285; New Algorithm 1285.

Example 33 (10 × 15, self-constructed):
C_ij = [4 7 5 8 6 4 7 5 8 6 4 7 5 8 6; 6 4 7 5 8 6 4 7 5 8 6 4 7 5 8; 8 6 4 7 5 8 6 4 7 5 8 6 4 7 5; 5 8 6 4 7 5 8 6 4 7 5 8 6 4 7; 7 5 8 6 4 7 5 8 6 4 7 5 8 6 4; 9 7 5 8 6 9 7 5 8 6 9 7 5 8 6; 6 9 7 5 8 6 9 7 5 8 6 9 7 5 8; 8 6 9 7 5 8 6 9 7 5 8 6 9 7 5; 5 8 6 9 7 5 8 6 9 7 5 8 6 9 7; 7 5 8 6 9 7 5 8 6 9 7 5 8 6 9];
S_i = [28 42 33 47 22 36 28 50 44 50]; D_j = [22 27 32 36 18 42 30 24 28 20 26 38 34 28 15].
Costs: VAM 1819; NWCM 2383; LCM 1819; New Algorithm 1763.

Example 34 (10 × 20, self-constructed):
C_ij = [6 9 4 7 5 8 6 5 7 9 4 8 6 7 5 9 6 4 8 5; 7 6 8 5 9 4 7 6 8 5 9 6 5 8 7 4 6 9 5 7; 5 8 6 9 4 7 5 8 6 7 5 9 4 6 8 5 7 4 6 9; 8 5 7 6 9 5 8 4 6 9 7 5 8 6 4 7 5 8 6 7; 9 4 6 8 5 7 3 6 9 5 8 4 7 6 5 8 5 8 6 7; 4 7 5 8 6 9 7 5 8 6 9 7 4 6 9 5 8 7 5 6; 6 5 9 4 7 6 8 5 7 4 6 8 7 5 9 6 4 7 5 8; 5 7 4 6 9 7 5 7 4 6 9 7 5 7 4 6 9 7 5 7; 7 4 6 9 5 8 6 9 5 7 6 4 8 5 7 6 9 5 8 6; 6 8 5 7 4 6 9 7 5 7 4 6 9 7 5 7 4 6 9 7];
S_i = [48 37 55 62 41 59 46 68 44 50]; D_j = [22 22 24 28 26 20 34 25 21 27 23 29 35 23 24 26 28 24 24 25].
Costs: VAM 2146; NWCM 3434; LCM 2235; New Algorithm 2130.
Table 9. Aggregated Optimality Gap Statistics Across 34 Instances.

Method              Mean Gap (%)   Median Gap (%)   Std. Dev. (%)   Max Gap (%)
NWCM                    45.78           24.89            73.05          403.33
LCM                     14.97            2.57            29.07           71.11
VAM                      5.22            0.00             7.91           42.50
Proposed Variant         2.78            0.00             5.89           31.25
Table 10. Comparison of the Computational Speed of the New Algorithm with VAM (Examples 1–6). Cost matrices, supplies, and demands are as listed in Tables 3 and 4.

Example 1 (5 × 5, [28]): VAM 0.040992 s; New Algorithm 0.038122 s.
Example 2 (4 × 4, [29]): VAM 0.060489 s; New Algorithm 0.055432 s.
Example 3 (6 × 6, [29]): VAM 0.071573 s; New Algorithm 0.051505 s.
Example 4 (3 × 4, [30]): VAM 0.048811 s; New Algorithm 0.045997 s.
Example 5 (5 × 5, [31]): VAM 0.072355 s; New Algorithm 0.054465 s.
Example 6 (3 × 3, [32]): VAM 0.066016 s; New Algorithm 0.051732 s.
Table 11. Comparison of the Computational Speed of the New Algorithm with VAM (Examples 7–14). Cost matrices, supplies, and demands are as listed in Tables 4 and 5.

Example 7 (4 × 6, [25]): VAM 0.053182 s; New Algorithm 0.048898 s.
Example 8 (4 × 5, [26]): VAM 0.045334 s; New Algorithm 0.041321 s.
Example 9 (4 × 4, [11]): VAM 0.063444 s; New Algorithm 0.057456 s.
Example 10 (4 × 6, [30]): VAM 0.052848 s; New Algorithm 0.050397 s.
Example 11 (3 × 3, [31]): VAM 0.055022 s; New Algorithm 0.052754 s.
Example 12 (4 × 5, [7]): VAM 0.066100 s; New Algorithm 0.066043 s.
Example 13 (3 × 4, [17]): VAM 0.061222 s; New Algorithm 0.060881 s.
Example 14 (3 × 5, [32]): VAM 0.071123 s; New Algorithm 0.068456 s.
Table 12. Comparison of the Computational Speed of the New Algorithm with VAM (Examples 15–23). Cost matrices, supplies, and demands are as listed in Tables 5 and 6.

Example 15 (3 × 4, [32]): VAM 0.028123 s; New Algorithm 0.020764 s.
Example 16 (3 × 5, [26]): VAM 0.049365 s; New Algorithm 0.049004 s.
Example 17 (2 × 7, [12]): VAM 0.059800 s; New Algorithm 0.055632 s.
Example 18 (3 × 4, [23]): VAM 0.067088 s; New Algorithm 0.051764 s.
Example 19 (4 × 6, [21]): VAM 0.053237 s; New Algorithm 0.055003 s.
Example 20 (3 × 4, [21]): VAM 0.048811 s; New Algorithm 0.045997 s.
Example 21 (4 × 5, [20]): VAM 0.047493 s; New Algorithm 0.046144 s.
Example 22 (4 × 6, [8]): VAM 0.054543 s; New Algorithm 0.054323 s.
Example 23 (4 × 3, [22]): VAM 0.051605 s; New Algorithm 0.038987 s.
Table 13. Comparison of the Computational Speed of the New Algorithm with VAM (Examples 24–31). Cost matrices, supplies, and demands are as listed in Tables 6 and 7.

Example 24 (5 × 4, [16]): VAM 0.051246 s; New Algorithm 0.048190 s.
Example 25 (3 × 5, [19]): VAM 0.056765 s; New Algorithm 0.054321 s.
Example 26 (3 × 3, [19]): VAM 0.045671 s; New Algorithm 0.0430081 s.
Example 27 (5 × 5, [24]): VAM 0.050447 s; New Algorithm 0.048615 s.
Example 28 (4 × 6, [26]): VAM 0.052754 s; New Algorithm 0.048678 s.
Example 29 (3 × 4, [10]): VAM 0.052999 s; New Algorithm 0.047592 s.
Example 30 (2 × 3, [10]): VAM 0.054106 s; New Algorithm 0.050773 s.
Example 31 (5 × 10, self-constructed): VAM 0.079673 s; New Algorithm 0.064325 s.
Table 14. Comparison of the Computational Speed of the New Algorithm with VAM (Examples 32–34). Cost matrices, supplies, and demands are as listed in Table 8.

Example 32 (10 × 10, self-constructed): VAM 0.042330 s; New Algorithm 0.037643 s.
Example 33 (10 × 15, self-constructed): VAM 0.034432 s; New Algorithm 0.032304 s.
Example 34 (10 × 20, self-constructed): VAM 0.064502 s; New Algorithm 0.057820 s.

Share and Cite

MDPI and ACS Style

Boah, D.K.; Fiele, S.A.; Etwire, C.J. A New Algorithm for Finding Initial Basic Feasible Solutions of Transportation Problems. AppliedMath 2026, 6, 58. https://doi.org/10.3390/appliedmath6040058
