Evaluation of TOPSIS Algorithm for Multi-Criteria Handover in LEO Satellite Networks: A Sensitivity Analysis

by Pascal Buhinyori Ngango 1, Marie-Line Lufua Binda 1, Michel Matalatala Tamasala 1, Pierre Sedi Nzakuna 2,*, Vincenzo Paciello 2 and Angelo Kuti Lusala 1,*

1 Department of Electrical and Computer Engineering, Polytechnic Faculty, University of Kinshasa, Kinshasa XI BP 127, Democratic Republic of the Congo
2 Department of Industrial Engineering, University of Salerno, 84084 Fisciano, SA, Italy
* Authors to whom correspondence should be addressed.
Network 2025, 5(2), 15; https://doi.org/10.3390/network5020015
Submission received: 26 February 2025 / Revised: 14 April 2025 / Accepted: 29 April 2025 / Published: 2 May 2025

Abstract:
The Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is widely recognized as an effective multi-criteria decision-making algorithm for handover management in terrestrial cellular networks, especially in scenarios involving dynamic and multi-faceted criteria. While TOPSIS is widely adopted in terrestrial cellular networks for handover management, its application in satellite networks, particularly in Low Earth Orbit (LEO) constellations, remains limited and underexplored. In this work, the performance of three TOPSIS algorithms is evaluated for handover management in LEO satellite networks, where efficient handover management is crucial due to rapid changes in satellite positions and network conditions. Sensitivity analysis is conducted on Standard Deviation TOPSIS (SD-TOPSIS), Entropy-TOPSIS, and Importance-TOPSIS in the context of LEO satellite networks, assessing their responsiveness to small variations in key performance metrics such as upload speed, download speed, ping, and packet loss. This study uses real-world data from the “Starlink-on-the-Road-Dataset”. Results show that SD-TOPSIS effectively optimizes handover management in dynamic LEO satellite networks thanks to its lower standard deviation scores and reduced score variation rate, thus demonstrating superior stability and lower sensitivity to small variations in performance metric values compared to both Entropy-TOPSIS and Importance-TOPSIS. This ensures more consistent decision-making, avoidance of unnecessary handovers, and enhanced robustness in rapidly changing network conditions, making it particularly suitable for real-time services that require stable, low-latency, and reliable connectivity.

1. Introduction

Satellite networks, particularly those based on Low Earth Orbit (LEO) satellite constellations, are revolutionizing global Internet access by enabling low latency and high-speed services through satellite links. Major initiatives such as SpaceX Starlink, OneWeb, and Amazon Kuiper are deploying thousands of LEO satellites to provide coverage to remote and underserved regions [1,2]. These initiatives are transforming the way the world connects, offering the potential for seamless Internet access even in the most isolated areas. These satellites, operating between 300 and 1500 km in altitude, demonstrate better performance than Geostationary (GEO) ones in terms of latency and throughput. By reducing the required time for data to travel to and from Earth, LEO satellites significantly improve the quality of communication and enable real-time applications that require low latency, such as video calls, online gaming, and autonomous vehicle communication. However, their proximity to Earth presents complex challenges, mainly in handover management, which is crucial to maintaining continuous and optimal connectivity [2,3].
Handover management in LEO networks is particularly challenging due to the rapid movement of satellites, which require constant relay switches between each other to avoid service interruptions [4,5]. These satellites move at speeds of up to 7.5 km/s, meaning that handover decisions must be made in real-time to avoid drops in connectivity.
Since performance parameters such as upload speed, download speed, ping, and packet loss vary in real-time, traditional handover management techniques exhibit critical limitations. Instead, multi-criteria decision-making handovers are required, such as the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), which has proven useful and has been widely adopted for handover management in terrestrial cellular networks [6]. In this work, we explore the extension of TOPSIS to LEO satellite networks, where decision criteria are highly dynamic. This choice does not exclude other existing techniques but rather aims to assess the applicability of TOPSIS in a context where handover conditions differ significantly from traditional cellular networks. However, while its application in terrestrial networks is well established, its use in satellite networks, particularly in dynamic LEO constellations, remains largely unexplored. This work provides a study in this direction by evaluating the effectiveness of TOPSIS in a heterogeneous LEO satellite network. Considering multiple factors simultaneously, TOPSIS offers a comprehensive approach to managing handovers in environments where traditional metrics fall short [6,7,8,9].
Among the variants of the TOPSIS algorithm, Standard Deviation TOPSIS (SD-TOPSIS), Entropy-TOPSIS, and Importance-TOPSIS stand out for their ability to handle multiple performance criteria simultaneously [10]. Each of these variants introduces unique methods for assigning weights to different performance criteria, offering different strengths depending on the specific needs of the network. However, to the best of our knowledge, the sensitivity and robustness of these variants in response to rapid fluctuations in performance criteria remain underexplored in LEO satellite networks. Given the rapidly changing nature of satellite positions, it is essential to explore how these algorithms perform when faced with sudden changes in satellite performance parameters. We also note that, to the best of our knowledge, SD-TOPSIS has never been used in the context of handover management in LEO satellite networks.
This study provides a comprehensive sensitivity analysis of these three TOPSIS variants in the context of LEO satellite networks. By focusing on the sensitivity to variations in key criteria such as upload speed, download speed, ping, and packet loss, this study aims to fill a gap in understanding how these algorithms can be adapted to the dynamic environment of LEO satellites. To ensure that the analysis accurately reflects real-world conditions, the dataset used in this study was derived from the “Starlink-on-the-Road-Dataset”, previously used in [11]. Since Starlink is not yet operational in our study area, we opted for this dataset because it provides a reliable basis for evaluating the effectiveness of our approach in a dynamic satellite environment. The results of this study show, through analysis of standard deviations and score variation rates, that SD-TOPSIS is more stable than the other algorithms under study. SD-TOPSIS stands out for its ability to maintain stable performance even when faced with fluctuations in performance criteria and ensures better connection stability.
The remainder of this paper is organized as follows: Section 2 reviews related works on heterogeneous satellite networks and handover management strategies using TOPSIS and its variants. Section 3 outlines the TOPSIS variants under study in the context of LEO networks and the methodology used. Section 4 discusses the simulation results and Section 5 concludes this work and provides insights for future research.

2. Related Work

The handover process in LEO satellite networks represents a major challenge due to the dynamic nature of satellite movements, varying communication links, and changing traffic conditions. Several techniques, including multi-criteria decision algorithms, have recently been proposed to address this challenge, with TOPSIS being one of the most commonly used methods for ranking alternatives based on multiple performance criteria.
Handover management in heterogeneous cellular networks is addressed in [10] using Entropy-TOPSIS and SD-TOPSIS. The methodology used helped to reduce the number of handovers, improve the average user throughput, and decrease radio link failures, with SD-TOPSIS performing better than Entropy-TOPSIS. However, these methods were validated and limited to relatively stable cellular networks in terrestrial environments over a long period. They may not yield the same results in the unstable environments of LEO satellite networks, where criteria vary rapidly due to satellite mobility. Authors in [6] focused on vertical handover in heterogeneous 4G cellular networks, using Multiple Attribute Decision-Making (MADM) algorithms combined with a utility function to determine the best access network based on Quality of Service (QoS) criteria. The proposed approach calculates performance for each network according to traffic classes and uses the utility function to reflect user preferences, improving key parameters such as the reversal phenomenon, the ping-pong effect, and handoff failures. However, while this study demonstrates the effectiveness of this approach in cellular networks, it does not account for the rapid variations in performance criteria typical of LEO satellite networks, such as satellite mobility and fluctuating link quality. A sensitivity analysis considering these dynamic factors is crucial for adapting this approach to satellite environments.
Network selection in heterogeneous wireless networks was addressed in [12] through two Multi-Attribute Decision-Making (MADM) methods: Multiple Analytic Network Process (M-ANP) for weighting criteria, and TOPSIS for ranking alternatives. The aim was to allow user devices to continuously choose the most appropriate access network during communication. Simulation results demonstrated the effectiveness of the proposed approach in reducing the reversal and ping-pong phenomena. While this approach proved effective in heterogeneous terrestrial networks, a sensitivity analysis considering these variations is necessary for adapting the approach to satellite networks, where factors like signal strength, bandwidth, and latency fluctuate rapidly due to satellite movement. The inclusion of TOPSIS, as mentioned in the study, could be expanded to incorporate these dynamic variations and provide a more robust solution in the context of satellite handovers. In [13], the authors propose TOPSIS-based Subset Replica (TS-REPLICA), an algorithm based on Entropy-TOPSIS to optimize replica placement in Hadoop Distributed File System (HDFS). By employing a multi-attribute matrix reflecting node performance and load, combined with entropy-based weighting, the algorithm significantly enhances performance by balancing loads in a Spark cluster. This method is limited to relatively stable terrestrial settings and does not assess the robustness of algorithms against rapid variations in performance criteria as is the case for LEO satellite networks.
A two-step handover decision and access strategy in heterogeneous GEO/LEO networks was proposed in [7]. An essential component of this approach is the use of the Importance-TOPSIS variant to rank LEO satellites when a user chooses to connect to a LEO network. Importance-TOPSIS has been compared with Entropy-TOPSIS and has proven to be the superior method in reducing the number of handovers and decreasing the probability of forced connection termination while improving the overall system throughput. Despite this, it requires further refinement. Specifically, Importance-TOPSIS adaptation to dynamic variations in criteria remains limited in a highly variable environment like satellite networks, where factors such as upload speed, download speed, and latency fluctuate rapidly.
A multi-attribute decision handover scheme is proposed in [14] for LEO mobile satellite networks, aiming to reduce handover frequency and enhance data stream stability. The scheme considers three factors: receiving signal strength, remaining service time, and satellite idle channels, using a multi-attribute decision algorithm to make handover decisions. Simulations on the Iridium satellite network show that the scheme reduces handover frequency, lowers channel utilization variance, and improves signal strength compared to other schemes. However, the study does not address the rapid fluctuations in key performance criteria, such as signal strength and satellite availability, which are characteristic of LEO satellite networks. The approach would benefit from a sensitivity analysis to account for these dynamic variations. Additionally, integrating a decision-making method like TOPSIS could provide a more adaptable solution, better handling the dynamic nature of LEO satellite networks and ensuring more robust performance in varying conditions.
Authors in [4] address network congestion in LEO satellite networks by proposing a Parameter-Adaptive Signal-to-Interference-plus-Noise Ratio (SINR)-based Multi-Attribute Decision (PASMAD) algorithm for access and handover in GEO/LEO heterogeneous satellite networks. The algorithm considers SINR, required bandwidth, traffic cost, and satellite load to ensure QoS for multimedia services. It adapts parameters based on traffic distribution predictions to balance network loads, outperforming traditional Received Signal Strength (RSS)-based algorithms in terms of QoS and data rates, as well as improving system performance compared to fixed-parameter algorithms. However, the paper does not account for the dynamic variations in performance criteria, such as SINR or bandwidth, that are inherent in rapidly moving LEO satellites. It also lacks a sensitivity analysis of how fluctuations in these factors might impact the handover process.
The work in [5] proposes a user-centric handover scheme for ultra-dense LEO satellite networks that buffers downlink data across multiple satellites, ensuring seamless handover and improved communication quality. While the approach outperforms traditional schemes in throughput, delay, and latency, it does not address the dynamic fluctuations in key performance metrics like bandwidth and latency due to satellite movement. Additionally, there is no sensitivity analysis of how these variations affect handover performance. Integrating a method like TOPSIS could improve decision-making by accounting for the dynamic nature of LEO satellite networks.
Table 1 summarizes the related literature, the methodology applied in these works, the main advantages and limitations of the technique used, and the domain of application.
In this paper, we investigate the use of TOPSIS-based handover algorithms in LEO satellite networks, an environment where decision-making criteria fluctuate rapidly with the movement of satellites. We analyze their robustness to dynamic variations in performance criteria by conducting a sensitivity analysis on the following criteria: upload speed, download speed, ping, and packet loss. To the best of our knowledge, at the time of writing, the use of TOPSIS-based handover algorithms in the context of sensitivity analysis in LEO satellite networks remains underexplored in the current literature.

3. Methodology

This section presents the methodology followed to derive our results. It provides an overview of the TOPSIS algorithm and its variants, and describes the sensitivity analysis conducted to measure the robustness of the three TOPSIS variants against rapid variations in criteria.

3.1. The TOPSIS Algorithm

The TOPSIS algorithm is based on the idea that the best alternative (in this case, the best satellite for handling communication after handover) should be the one closest to the positive ideal solution—which is a combination of optimal values for each criterion—and the farthest from the negative ideal solution which represents the worst performance for each criterion. The steps of the TOPSIS algorithm are shown in Figure 1.

3.1.1. Input of the Decision Matrix

The decision matrix $D$ consists of $m$ rows representing the alternatives among the LEO satellites of interest, and $n$ columns representing the criteria used in this study: upload speed, download speed, ping, and packet loss. An element $a_{ij}$ of the matrix represents the value of criterion $j$ for alternative $i$ [14,15,16,17]:

$$D = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ a_{31} & a_{32} & \cdots & a_{3n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}$$

3.1.2. Normalization of the Decision Matrix

The normalized matrix $D^{\text{norm}}$ is obtained by dividing each element of the decision matrix by the square root of the sum of the squares of the elements in its column [12,18,19,20]:

$$a_{ij}^{\text{norm}} = \frac{a_{ij}}{\sqrt{\sum_{i=1}^{m} a_{ij}^{2}}}, \quad i = 1, \ldots, m, \quad j = 1, \ldots, n$$
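As a minimal sketch of this normalization step, assuming a small illustrative decision matrix (the values below are invented for demonstration and are not taken from the dataset used in this paper):

```python
import numpy as np

# Illustrative decision matrix (assumed values, not from the dataset):
# rows = candidate satellites; columns = upload speed (Mbps),
# download speed (Mbps), ping (ms), packet loss (%).
D = np.array([
    [12.0, 150.0, 45.0, 1.2],
    [10.5, 120.0, 60.0, 0.8],
    [14.0, 180.0, 40.0, 2.0],
])

# Divide each column by the square root of the sum of its squared entries.
D_norm = D / np.sqrt((D ** 2).sum(axis=0))

# Every normalized column now has unit Euclidean norm.
print(np.linalg.norm(D_norm, axis=0))
```

This vector normalization makes criteria with different units (Mbps, ms, %) comparable before any weighting is applied.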

3.1.3. Criteria Weighting

There are several TOPSIS variants or techniques for weighting the criteria. Three variants are of interest in this study, namely SD-TOPSIS, Entropy-TOPSIS, and Importance-TOPSIS. These variants were developed to address the subjectivity and issues of the Standard-TOPSIS, where all criteria have equal weight, and of other existing weighting methods.
The sum of the weights of all criteria must be equal to 1. The goal is to obtain a weight vector that satisfies the following relationship:

$$w = (w_1, w_2, \ldots, w_n), \qquad \sum_{j=1}^{n} w_j = 1$$
If the alternatives have similar performance rates for a given criterion, that criterion will have a lower influence on the handover decision. In other words, if a criterion j for all the alternatives in the decision matrix is identical, then this attribute is not useful for the handover decision.
  • SD-TOPSIS: The standard deviation measures the dispersion of values around the mean and allows for evaluating the variability of a criterion [10]. A criterion with a larger standard deviation receives a higher weight, and vice versa. SD-TOPSIS is particularly useful when criteria exhibit different levels of variability. It is more practical and objective for handover management in satellite networks, as several parameters fluctuate rapidly due to the fast movement of LEO satellites. It removes the subjectivity in defining the weights of criteria by assigning moderate and precise weights based on real data. Considering $m$ as the number of alternatives and $j$ as the index of the criterion, the standard deviation of each criterion is calculated as follows:

    $$\sigma_j = \sqrt{\frac{1}{m} \sum_{i=1}^{m} \left( a_{ij}^{\text{norm}} - \mu_j \right)^2}$$

    where $a_{ij}^{\text{norm}}$ is an element of $D^{\text{norm}}$, and $\mu_j$ is the average value of criterion $j$. The weight of each criterion $j$ among the $n$ criteria is then given by:

    $$w_j^{\text{sd}} = \frac{\sigma_j}{\sum_{k=1}^{n} \sigma_k}$$
  • Entropy-TOPSIS: The entropy measures data uncertainty by using probability theory [13]. The wider the spread of the data, the higher the uncertainty. Entropy-TOPSIS is an objective weighting technique that measures the weights of criteria based on their relative differences. Its main limitation is that it requires the data to exhibit good diversity: homogeneous data can lead to less effective results, as the method tends to overweight the criterion with the highest entropy. Considering $j$ as the index of the criterion, the entropy divergence degree coefficient $e_j$ can be measured using the normalized decision matrix, with $0 \le c_j \le 1$:

    $$e_j = 1 - c_j$$

    with

    $$c_j = -\frac{1}{\ln(m)} \sum_{i=1}^{m} a_{ij}^{\text{norm}} \ln\left(a_{ij}^{\text{norm}}\right)$$

    The weight of each criterion is then given by:

    $$w_j^{\text{en}} = \frac{e_j}{\sum_{j=1}^{n} e_j}$$
  • Importance-TOPSIS: This technique uses continuous replacement to calculate the impact of individual changes of each criterion value on the target value (the score in our case), and gives a higher weight to the criteria that have more impact on the target [7]. The steps of the continuous replacement method are as follows:
    • Step 1. Let $Q(\alpha, \beta)$ and $Q'(\alpha', \beta')$ be such that their difference is $\Delta = Q' - Q$. Suppose the order of substitution is to change $\alpha$ first, then $\beta$.
    • Step 2. Replace $\alpha$ only and find $Q_1(\alpha', \beta)$. $Q_1$ is obtained by changing $\alpha$ on the basis of $Q$. Calculate the influence of $\alpha$:

      $$\Delta Q_\alpha = Q_1 - Q$$

    • Step 3. Replace $\beta$ only and find $Q'(\alpha', \beta')$. $Q'$ is obtained by changing $\beta$ on the basis of $Q_1$. Calculate the influence of $\beta$:

      $$\Delta Q_\beta = Q' - Q_1$$

      It can be observed that:

      $$\Delta = Q' - Q = \Delta Q_\alpha + \Delta Q_\beta$$

    • Step 4. Calculate the weight:

      $$w_i^{\text{imp}} = \frac{\Delta Q_i}{Q' - Q}, \qquad i \in \{\alpha, \beta\}$$
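The two objective weighting rules (SD and entropy) can be sketched as follows, again on an invented illustrative matrix rather than the paper's dataset; the Importance-TOPSIS substitution procedure is omitted here because it requires a full score evaluation per criterion:

```python
import numpy as np

def sd_weights(D_norm):
    """SD-TOPSIS weighting: each criterion is weighted by the standard
    deviation of its normalized column, rescaled to sum to 1."""
    sigma = D_norm.std(axis=0)  # population standard deviation (1/m)
    return sigma / sigma.sum()

def entropy_weights(D_norm, eps=1e-12):
    """Entropy-TOPSIS weighting: the divergence degree e_j = 1 - c_j
    rewards criteria whose values differ more across alternatives."""
    m = D_norm.shape[0]
    p = np.clip(D_norm, eps, None)  # guard against log(0)
    c = -(p * np.log(p)).sum(axis=0) / np.log(m)
    e = 1.0 - c
    return e / e.sum()

# Illustrative matrix (assumed values); columns are vector-normalized.
D = np.array([[12.0, 150.0, 45.0, 1.2],
              [10.5, 120.0, 60.0, 0.8],
              [14.0, 180.0, 40.0, 2.0]])
D_norm = D / np.sqrt((D ** 2).sum(axis=0))

print(sd_weights(D_norm))       # both weight vectors sum to 1
print(entropy_weights(D_norm))
```

Both functions return a weight vector summing to 1, matching the constraint on $w$ above.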

3.1.4. Weighted Normalized Decision Matrix

After obtaining the weights, each column of the normalized decision matrix is multiplied by the weight associated with it, as follows [21,22,23,24]:

$$a_{ij}^{\text{norm},w} = a_{ij}^{\text{norm}} \cdot w_j, \quad i = 1, \ldots, m, \quad j = 1, \ldots, n$$

3.1.5. Determination of the Positive Ideal Solution (PIS) and Negative Ideal Solution (NIS)

The Positive Ideal Solution (PIS) $a^+$ takes, for each criterion, the best value (the maximum for benefit criteria, the minimum for cost criteria), while the Negative Ideal Solution (NIS) $a^-$ takes the worst value [12,14,15,16,17,18,19,25,26,27,28]. Denoting by $J^+$ the set of benefit criteria (e.g., upload and download speed) and by $J^-$ the set of cost criteria (e.g., ping and packet loss), they are calculated as follows:

$$a^+ = \left\{ \left( \max_{i \le m} D_{ij}^{\text{norm},w} \,\middle|\, j \in J^+ \right), \left( \min_{i \le m} D_{ij}^{\text{norm},w} \,\middle|\, j \in J^- \right) \right\} = \left\{ d_1^+, d_2^+, \ldots, d_n^+ \right\}$$

$$a^- = \left\{ \left( \min_{i \le m} D_{ij}^{\text{norm},w} \,\middle|\, j \in J^+ \right), \left( \max_{i \le m} D_{ij}^{\text{norm},w} \,\middle|\, j \in J^- \right) \right\} = \left\{ d_1^-, d_2^-, \ldots, d_n^- \right\}$$
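With a benefit/cost mask, the PIS and NIS extraction is a one-liner each. The weighted normalized matrix below is an assumed illustration, not computed from the paper's data:

```python
import numpy as np

# Illustrative weighted normalized matrix (assumed values):
# rows = satellites; columns = upload, download, ping, packet loss.
V = np.array([
    [0.20, 0.15, 0.10, 0.05],
    [0.18, 0.12, 0.14, 0.03],
    [0.25, 0.18, 0.09, 0.08],
])

# True for benefit criteria (higher is better), False for cost criteria.
benefit = np.array([True, True, False, False])

pis = np.where(benefit, V.max(axis=0), V.min(axis=0))  # a+
nis = np.where(benefit, V.min(axis=0), V.max(axis=0))  # a-

print(pis)  # → [0.25 0.18 0.09 0.03]
print(nis)  # → [0.18 0.12 0.14 0.08]
```

Note that ping and packet loss contribute their minimum to the PIS, since lower values are better for those criteria.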

3.1.6. Calculation of Distances to PIS and NIS

This step consists of calculating the Euclidean distances of each alternative from the PIS and the NIS, as follows [12,14,16,18,20,21,22,29,30]:

$$dist_i^+ = \sqrt{\sum_{j=1}^{n} \left( D_{ij}^{\text{norm},w} - d_j^+ \right)^2}, \quad i = 1, \ldots, m$$

$$dist_i^- = \sqrt{\sum_{j=1}^{n} \left( D_{ij}^{\text{norm},w} - d_j^- \right)^2}, \quad i = 1, \ldots, m$$

where $dist_i^+$ is the Euclidean distance of alternative $i$ to the PIS, and $dist_i^-$ its distance to the NIS.

3.1.7. Calculation of the Relative Proximity Score

The relative proximity score $r_i$ for each satellite lies within the range $0 \le r_i \le 1$ and is calculated as [23,24,25,26,27,31,32,33,34,35]:

$$r_i = \frac{dist_i^-}{dist_i^+ + dist_i^-}, \quad i = 1, \ldots, m$$

The satellite with the highest $r$ score is the optimal candidate for handover, as it is the closest to the PIS and the farthest from the NIS. The method used by each TOPSIS variant influences the way $r$ is calculated, and these differences in scoring ultimately guide the handover decision.
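The last three steps (ideal solutions, distances, proximity score) can be sketched end-to-end on an assumed illustrative weighted matrix:

```python
import numpy as np

# Illustrative weighted normalized matrix (assumed values):
# rows = satellites; columns = upload, download, ping, packet loss.
V = np.array([
    [0.20, 0.15, 0.10, 0.05],
    [0.18, 0.12, 0.14, 0.03],
    [0.25, 0.18, 0.09, 0.08],
])
benefit = np.array([True, True, False, False])

pis = np.where(benefit, V.max(axis=0), V.min(axis=0))
nis = np.where(benefit, V.min(axis=0), V.max(axis=0))

# Euclidean distances of each alternative to the PIS and the NIS.
dist_pos = np.sqrt(((V - pis) ** 2).sum(axis=1))
dist_neg = np.sqrt(((V - nis) ** 2).sum(axis=1))

# Relative proximity: r = 1 at the PIS, r = 0 at the NIS.
r = dist_neg / (dist_pos + dist_neg)
best = int(np.argmax(r))  # index of the handover target satellite
print(np.round(r, 3), best)
```

In this toy example, the third satellite matches the PIS on every criterion except packet loss, so it receives the highest score and would be selected as the handover target.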

3.2. Sensitivity Analysis

To perform the comparison between the three TOPSIS variants mentioned above, we use the sensitivity analysis approach [36,37,38,39,40,41,42,43,44,45,46]. The aim is to assess the robustness of the three TOPSIS variants in response to dynamic changes observed in a criterion value for each of the alternatives, due to satellites’ mobility in LEO networks. Sensitivity analysis consists of evaluating the impact of changes in criteria values on the final score attributed by handover algorithms to each alternative. Scores should not change significantly after minor changes in criteria values, to avoid unnecessary handovers. Sensitivity s is given by:
$$s_i = \frac{r_i^{\text{modified}} - r_i^{\text{initial}}}{r_i^{\text{initial}}}, \quad i = 1, \ldots, m$$
Here, $r_i^{\text{modified}}$ is the score obtained after modifying the value of a criterion for alternative $i$. We focus on the four following criteria: upload speed, download speed, ping, and packet loss. The choice of these criteria is crucial in the study of satellite handover, particularly in LEO networks. The upload speed is essential for bidirectional communication, directly impacting applications such as video calls or data uploads. The download speed determines the ability of the satellite to deliver data to the user, a key factor in ensuring a smooth user experience. The ping measures the latency between a request sent from the user terminal and the response received from the satellite network. It is a critical parameter for real-time applications such as VoIP and online gaming, where a lower latency ensures a more responsive connection. The packet loss refers to the percentage of data packets that do not reach their destination, often due to network congestion, signal degradation, or interference. High packet loss can significantly degrade service quality, leading to interruptions in video streaming, dropped calls, or reduced throughput in data transmissions. These four criteria are interdependent and directly influence the overall quality of the connection, making their optimal management essential during satellite handovers to maintain uninterrupted and high-quality service.
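The per-alternative sensitivity is a simple relative change; a sketch with invented score vectors (the values are assumptions, not the paper's results):

```python
import numpy as np

def sensitivity(r_initial, r_modified):
    """Relative score change after a criterion perturbation:
    s_i = (r_i_modified - r_i_initial) / r_i_initial."""
    r_initial = np.asarray(r_initial, dtype=float)
    r_modified = np.asarray(r_modified, dtype=float)
    return (r_modified - r_initial) / r_initial

# Illustrative scores before and after a small perturbation (assumed values).
s = sensitivity([0.50, 0.32, 0.68], [0.52, 0.30, 0.67])
print(np.round(100 * np.abs(s), 2))  # per-satellite score variation rate in %
```

Averaging these absolute percentages over all satellites yields a single score variation rate per algorithm and scenario, which is the quantity compared in the results section.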
Our analysis involves evaluating, through four scenarios, the sensitivity of each TOPSIS algorithm in response to an $x\%$ random change in the initial criteria values for each alternative. To enhance the depth of our sensitivity analysis, we first apply variations within the range $-5 \le x \le 5$, followed by a broader range of $-10 \le x \le 10$, with $x \in \mathbb{Z}$. This progressive approach allows us to assess the robustness of each algorithm under different levels of perturbation, providing a more comprehensive evaluation of their stability and adaptability in dynamic environments. The four scenarios analyzed in this study are as follows:
  • First scenario: For each satellite, apply an $x\%$ random change to the upload speed value only, while keeping the other criteria values unchanged.
  • Second scenario: For each satellite, apply an $x\%$ random change to the download speed value in addition to the previously changed upload speed value, while keeping the ping and packet loss values unchanged.
  • Third scenario: For each satellite, apply an $x\%$ random change to the ping value in addition to the previously changed upload speed and download speed values, while keeping the packet loss value unchanged.
  • Fourth scenario: For each satellite, apply an $x\%$ random change to the packet loss value in addition to the previously changed upload speed, download speed, and ping values.
We evaluate the robustness of the three TOPSIS variants by observing the impact of these random changes on the scores of the selected alternatives. By closely examining how the score assigned to a satellite by each TOPSIS variant changes with variations in upload speed, download speed, ping, and packet loss, we assess which TOPSIS variant maintains consistent performance and ensures high service quality despite these fluctuations. This approach enables the identification of the TOPSIS variant that is more capable of handling dynamic conditions in satellite networks, ensuring reliable and efficient handover decisions even when network performance metrics experience variations. The TOPSIS algorithm with the smallest rate of change in its scores with respect to the initial score, i.e., the lower sensitivity, is considered the most robust.
In addition to sensitivity analysis, we compare the standard deviation of the scores obtained by each of the TOPSIS variants under study. The objective of this analysis is to assess the stability of score distributions for each of the TOPSIS variants across different scenarios. A lower standard deviation indicates higher stability of the handover algorithm, whereas a higher standard deviation reflects greater sensitivity to small changes in criteria values. The algorithm whose standard deviation remains relatively constant in comparison to its counterparts despite small changes in criteria values is therefore considered more robust and reliable for managing satellite handover. The block diagram of the methodology is shown in Figure 2.
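The cumulative perturbation of the four scenarios can be sketched as below. This is an assumption-laden illustration: the matrix values are invented, and a continuous uniform perturbation is used for simplicity where the study draws integer percentages:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(D, cols, x_max):
    """Apply an independent random change in [-x_max%, +x_max%] to every
    entry of the given criterion columns, leaving the others unchanged."""
    D = D.copy()
    factors = 1 + rng.uniform(-x_max, x_max, size=(D.shape[0], len(cols))) / 100
    D[:, cols] *= factors
    return D

# Illustrative decision matrix (assumed values): columns are
# upload speed, download speed, ping, packet loss.
D = np.array([[12.0, 150.0, 45.0, 1.2],
              [10.5, 120.0, 60.0, 0.8],
              [14.0, 180.0, 40.0, 2.0]])

# Cumulative Scenarios 1-4: perturb upload only, then + download,
# then + ping, then + packet loss, with x_max = 5 (%).
scenarios = [perturb(D, cols, x_max=5)
             for cols in ([0], [0, 1], [0, 1, 2], [0, 1, 2, 3])]
print(len(scenarios))  # four perturbed decision matrices
```

Each perturbed matrix is then re-scored by every TOPSIS variant, and the resulting score distributions are compared to the initial case via the sensitivity measure and the standard deviation of scores.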

3.3. Experimental Setup

To ensure that our analysis accurately reflects real-world conditions and in the absence of an experimental framework for Starlink measurement campaigns, we relied on the “Starlink-on-the-Road-Dataset”, previously used in [11]. This dataset serves as a reliable benchmark for assessing the performance of various handover strategies in LEO satellite networks, providing valuable insight into the challenges and behaviors observed in satellite communication systems.
The dataset, openly available through the Starlink-on-the-Road platform (Starlink-on-the-Road dataset, available at: https://github.com/sys-uos/Starlink-on-the-Road (accessed on 29 March 2025)), provides a rich and diverse source of real-world high-fidelity measurements, covering critical performance indicators such as upload speed (Mbps), download speed (Mbps), ping (ms), and packet loss (%).
Our algorithms were executed on a Lenovo ThinkPad E15 Gen 4 laptop equipped with an Intel Core i5-1235U processor (12th generation, up to 4.4 GHz), 8 GB of DDR4 RAM, and a 512 GB NVMe SSD. The system runs on Windows 10. This hardware configuration provided sufficient computational resources for simulating the algorithms and performing the required data processing tasks efficiently.

4. Simulation Results and Discussion

In this section, we evaluate the performance of the three TOPSIS variants in response to rapid change in the satellite criteria values including upload speed, download speed, ping, and packet loss. The goal is to determine which of the algorithms studied is the most robust in handling variations in these performance criteria across different satellites.

4.1. Initial Case

We generated the initial decision matrix shown in Table 2, considering ten satellites of the Starlink constellation by SpaceX. Based on these data, we computed the initial criteria weights shown in Table 3, with each algorithm weighting the criteria according to its weighting technique. For each algorithm, the sum of the criteria weights is equal to 1. These weights influence the subsequent process of selecting the target satellite for the handover. Figure 3 graphically compares the weights obtained. We observe that, for all three TOPSIS variants, upload speed has the highest weight among the criteria, making it the most important criterion in our case based on the given data. It is followed by ping, then download speed, and finally packet loss, which has a slightly lower weight.
Table 4 presents the initial score values obtained with each TOPSIS variant under study, and Figure 4 shows a visual comparison of these values. The scoring process, following the assignment of initial weights, is crucial in selecting the target satellite for handover. Each algorithm calculates satellite scores on a scale from 0 to 1 based on criteria weights. From Table 4 and Figure 4, it can be seen that Satellite 5 is given the highest score by the three TOPSIS variants, which means that they all rank it as the best alternative for handover.

4.2. Case 1: Sensitivity Analysis for Criterion Variation Within ± 5 %

Here, we consider four scenarios in which the criterion values vary within the range $-5 \le x \le 5$ percent.

4.2.1. Scenario 1: Variation in Upload Speed

In this scenario, we randomly vary the initial upload speed values while keeping the values of other criteria unchanged. Table 5 shows the random variation rates applied to the initial upload speed values of each satellite. The resulting decision matrix is shown in Table 6.

4.2.2. Scenario 2: Variation in Upload Speed and Download Speed

In this scenario, in addition to the previous variation in upload speed, we randomly vary the initial download speed values while keeping the values of other criteria unchanged. Table 7 shows the random variation rates applied to the initial download speed values of each satellite. The resulting decision matrix is shown in Table 8.

4.2.3. Scenario 3: Variation in Upload Speed, Download Speed, and Ping

In this scenario, in addition to the previous variation in upload speed and download speed, we randomly vary the initial ping values while keeping the values of packet loss unchanged. Table 9 shows the random variation rates applied to the initial ping values of each satellite. The resulting decision matrix is shown in Table 10.

4.2.4. Scenario 4: Variation in Upload Speed, Download Speed, Ping, and Packet Loss

In this last scenario, we randomly vary the initial packet loss values, in addition to the previous variation in upload speed, download speed, and ping. Table 11 shows the random variation rates applied to the initial packet loss values of each satellite. The resulting decision matrix is shown in Table 12.

4.2.5. Scores Variation

Table 13 presents the score values obtained for the four scenarios under consideration. Figure 5, Figure 6 and Figure 7 visually compare these satellite scores with those of the initial case, respectively, for SD-TOPSIS, Entropy-TOPSIS, and Importance-TOPSIS. As the number of varied parameters increases from Scenario 1 to Scenario 4, significant variations in the scores appear, at different rates for each scenario and each TOPSIS algorithm. This indicates that the sensitivity of these algorithms to parameter changes is not uniform, with some algorithms demonstrating greater stability across scenarios while others show more pronounced fluctuations.

4.2.6. Sensitivity Analysis

Figure 8 shows the variation rates of the scores for Scenario 1, in comparison with the initial case. The SD-TOPSIS algorithm is the most robust, with the lowest score variation rate of 5.18%, whereas Entropy-TOPSIS and Importance-TOPSIS exhibit variation rates of 6.85% and 8.06%, respectively. Figure 9 shows the variation rates for Scenario 2. SD-TOPSIS is again the most robust, with the lowest score variation rate of 7.79%, whereas Entropy-TOPSIS and Importance-TOPSIS exhibit variation rates of 8.9% and 9.5%, respectively. Figure 10 shows the variation rates for Scenario 3. SD-TOPSIS remains the most robust, with the lowest score variation rate of 13.88%, whereas Entropy-TOPSIS and Importance-TOPSIS exhibit variation rates of 16.20% and 14.11%, respectively. Figure 11 shows the variation rates for Scenario 4. SD-TOPSIS again shows the lowest score variation rate, 14.54%, whereas Entropy-TOPSIS and Importance-TOPSIS exhibit variation rates of 15.07% and 16%, respectively. SD-TOPSIS therefore stands out as the most robust TOPSIS variant, since fluctuations in criteria values have the lowest impact on the variation rate of its satellite scores, demonstrating its greater stability.
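The section reports a single score variation rate per algorithm and scenario; one plausible formulation, assumed here for illustration since the exact formula is not restated in this section, is the mean absolute relative change of the scores with respect to the initial case:

```python
import numpy as np

def variation_rate(initial, perturbed):
    """Mean absolute relative change of the satellite scores, in percent.
    An assumed reading of the per-algorithm 'score variation rate'."""
    s0 = np.asarray(initial, dtype=float)
    s1 = np.asarray(perturbed, dtype=float)
    return float(np.mean(np.abs(s1 - s0) / s0) * 100.0)
```

Under this reading, a lower value means the algorithm's ranking scores moved less in response to the perturbed criteria.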

4.3. Case 2: Sensitivity Analysis with Criterion Variation Within ±10%

Here, we consider four scenarios in which the criterion values vary by a random percentage x within the range −10 ≤ x ≤ 10.

4.3.1. Scenario 1: Variation in Upload Speed

In this scenario, we randomly vary the initial upload speed values while keeping the values of other criteria unchanged. Table 14 shows the random variation rates applied to the initial upload speed values of each satellite. The resulting decision matrix is shown in Table 15.

4.3.2. Scenario 2: Variation in Upload Speed and Download Speed

In this scenario, in addition to the previous variation in upload speed, we randomly vary the initial download speed values while keeping the values of other criteria unchanged. Table 16 shows the random variation rates applied to the initial download speed values of each satellite. The resulting decision matrix is shown in Table 17.

4.3.3. Scenario 3: Variation in Upload Speed, Download Speed, and Ping

In this scenario, in addition to the previous variation in upload speed and download speed, we randomly vary the initial ping values while keeping the values of packet loss unchanged. Table 18 shows the random variation rates applied to the initial ping values of each satellite. The resulting decision matrix is shown in Table 19.

4.3.4. Scenario 4: Variation in Upload Speed, Download Speed, Ping, and Packet Loss

In this last scenario, we randomly vary the initial packet loss values, in addition to the previous variation in upload speed, download speed, and ping. Table 20 shows the random variation rates applied to the initial packet loss values of each satellite. The resulting decision matrix is shown in Table 21.

4.3.5. Scores Variation

Table 22 presents the score values obtained for the four scenarios under consideration. Figure 12, Figure 13 and Figure 14 visually compare these satellite scores with those of the initial case, respectively, for SD-TOPSIS, Entropy-TOPSIS, and Importance-TOPSIS. As the number of varied parameters increases from Scenario 1 to Scenario 4, significant variations in the scores appear, at different rates for each scenario and each TOPSIS algorithm. This indicates that the sensitivity of these algorithms to parameter changes is not uniform, with some algorithms demonstrating greater stability across scenarios while others show more pronounced fluctuations.

4.3.6. Sensitivity Analysis

Figure 15 shows the variation rates of the scores for Scenario 1, in comparison with the initial case. The SD-TOPSIS algorithm is the most robust, with the lowest score variation rate of 9.87%, whereas Entropy-TOPSIS and Importance-TOPSIS exhibit variation rates of 11.44% and 12.1%, respectively. Figure 16 shows the variation rates for Scenario 2. SD-TOPSIS is again the most robust, with the lowest score variation rate of 14.54%, whereas Entropy-TOPSIS and Importance-TOPSIS exhibit variation rates of 15.07% and 16%, respectively. Figure 17 shows the variation rates for Scenario 3. SD-TOPSIS remains the most robust, with the lowest score variation rate of 27.85%, whereas Entropy-TOPSIS and Importance-TOPSIS exhibit variation rates of 32.39% and 28.84%, respectively. Figure 18 shows the variation rates for Scenario 4. SD-TOPSIS again shows the lowest score variation rate, 29.69%, whereas Entropy-TOPSIS and Importance-TOPSIS exhibit variation rates of 31.85% and 31.2%, respectively. SD-TOPSIS therefore stands out as the most robust TOPSIS variant, since fluctuations in criteria values have the lowest impact on the variation rate of its satellite scores, demonstrating its greater stability.
Figure 19 presents the standard deviation of the score values for the variation range −5 ≤ x ≤ 5, while Figure 20 illustrates the standard deviation for the wider variation range −10 ≤ x ≤ 10. A low standard deviation indicates that the scores remain relatively stable, implying greater robustness, whereas a high standard deviation reflects significant variability in the scores, indicating higher sensitivity to changes in the criteria values.
From Figure 19, it can be observed that:
  • Scenario 1: After variations in upload speed values, the standard deviations for SD-TOPSIS, Entropy-TOPSIS and Importance-TOPSIS are, respectively, 0.1078, 0.1213 and 0.1233.
  • Scenario 2: After variations in download speed values in addition to those in upload speed, SD-TOPSIS has a standard deviation of 0.1141, Entropy-TOPSIS exhibits a standard deviation of 0.128, and Importance-TOPSIS reaches a standard deviation of 0.1284.
  • Scenario 3: After variations in ping values in addition to those in upload speed and download speed, SD-TOPSIS has a standard deviation of 0.1291, Entropy-TOPSIS exhibits a standard deviation of 0.1474, and Importance-TOPSIS reaches a standard deviation of 0.1397.
  • Scenario 4: Following the variation in packet loss values in addition to upload speed, download speed, and ping, SD-TOPSIS remains the most stable algorithm, with a standard deviation of 0.1325, whereas Entropy-TOPSIS reaches 0.1474 and Importance-TOPSIS, with a standard deviation of 0.1408, maintains an intermediate position.
From Figure 20, it can be seen that:
  • Scenario 1: After variations in upload speed values, the standard deviations for SD-TOPSIS, Entropy-TOPSIS and Importance-TOPSIS are, respectively, 0.1177, 0.1404 and 0.1345.
  • Scenario 2: After variations in download speed values in addition to those in upload speed, SD-TOPSIS has a standard deviation of 0.1329, Entropy-TOPSIS exhibits a standard deviation of 0.155, and Importance-TOPSIS reaches a standard deviation of 0.148.
  • Scenario 3: After variations in ping values in addition to those in upload speed and download speed, SD-TOPSIS has a standard deviation of 0.1723, Entropy-TOPSIS exhibits a standard deviation of 0.1995, and Importance-TOPSIS reaches a standard deviation of 0.1881.
  • Scenario 4: Following the variation in packet loss values in addition to upload speed, download speed, and ping, SD-TOPSIS remains the most stable algorithm, with a standard deviation of 0.1797, whereas Entropy-TOPSIS reaches 0.1971 and Importance-TOPSIS, with a standard deviation of 0.1936, maintains an intermediate position.
The analysis of the standard deviation evolution across the four scenarios highlights the following key observations:
  • SD-TOPSIS is the most stable algorithm, as its standard deviation remains the lowest in all four scenarios, meaning that it is less affected by small changes in criteria values.
  • Entropy-TOPSIS is the most sensitive, exhibiting a continuous increase in standard deviation, which indicates a high variability in scores in response to small changes in criteria.
  • Importance-TOPSIS falls in between SD-TOPSIS and Entropy-TOPSIS, with moderate variations in standard deviation, meaning intermediate stability.
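Assuming the reported standard deviation is taken over one algorithm's satellite scores in a given scenario (an interpretation, since the section does not restate the formula), the robustness comparison above can be sketched as:

```python
import numpy as np

def most_robust(score_sets):
    """Rank algorithms by the standard deviation of their scores.
    score_sets: {algorithm_name: list of satellite scores}.
    Returns the name with the lowest spread (read here as the most
    robust) and the per-algorithm standard deviations."""
    sds = {name: float(np.std(np.asarray(s, dtype=float)))
           for name, s in score_sets.items()}
    return min(sds, key=sds.get), sds
```

Feeding in the three algorithms' score vectors per scenario yields the kind of comparison summarized in Figures 19 and 20.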
While SD-TOPSIS has been successfully applied in terrestrial cellular networks [10], our results clearly demonstrate its suitability for handover management in LEO satellite networks. They confirm that, for small x% variations in criteria values within the ranges −5 ≤ x ≤ 5 and −10 ≤ x ≤ 10, SD-TOPSIS stands out as a better handover management algorithm, outperforming Entropy-TOPSIS and Importance-TOPSIS, previously presented in [7] as efficient TOPSIS-based handover management algorithms. Our study reveals that SD-TOPSIS is more stable and effective in the dynamic environment of LEO satellite networks. This stability, combined with its ability to manage the complexities and variability inherent in LEO satellite communication systems, makes SD-TOPSIS a suitable choice for handover management in dynamic LEO satellite-based networks.

5. Conclusions

This work presented a performance evaluation of handover management using three variants of the TOPSIS algorithm, namely SD-TOPSIS, Entropy-TOPSIS, and Importance-TOPSIS, in the context of dynamic LEO satellite environments. To ensure a realistic analysis, real-world data from the “Starlink-on-the-Road” dataset were used to assess the robustness and sensitivity of these algorithms during LEO satellite handovers.
The simulation results showed that the SD-TOPSIS algorithm consistently outperforms the other algorithms under study in terms of robustness, as it exhibits the lowest sensitivity and the lowest standard deviation of scores across the four scenarios in response to small variations in the criteria values. This reduced sensitivity compared to both Entropy-TOPSIS and Importance-TOPSIS makes it particularly well suited for handover in satellite communication systems, where criteria such as upload speed, download speed, ping, and packet loss vary rapidly due to the mobility of LEO satellites.
Future work will explore hybrid algorithms that combine SD-TOPSIS with advanced methods such as machine learning. This integration could enhance adaptability and enable real-time decision-making, addressing the dynamic nature of LEO satellite networks while improving their handover performance and resilience.

Author Contributions

Conceptualization, P.B.N. and M.M.T.; Methodology, P.B.N. and A.K.L.; Software, P.B.N.; Validation, A.K.L., M.M.T. and P.S.N.; Formal analysis, P.B.N.; Resources, A.K.L. and V.P.; Data curation, P.B.N.; Writing—original draft preparation, P.B.N.; Writing—review and editing, M.M.T., M.-L.L.B., P.S.N. and V.P.; Visualization, P.B.N.; Supervision, A.K.L. and M.M.T.; Project administration, A.K.L. and M.M.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used in this study are openly available at https://github.com/sys-uos/Starlink-on-the-Road (accessed on 29 March 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Jiang, W. Software defined satellite networks: A survey. Digit. Commun. Netw. 2023, 9, 1243–1264. [Google Scholar] [CrossRef]
  2. Baltaci, A.; Dinc, E.; Ozger, M.; Alabbasi, A.; Cavdar, C.; Schupke, D. A Survey of Wireless Networks for Future Aerial Communications (FACOM). IEEE Commun. Surv. Tutor. 2021, 23, 2833–2884. [Google Scholar] [CrossRef]
  3. Li, Y.; Wei, H.; Li, L.; Han, Y.; Zhou, J.; Zhou, W. An Extensible Multi-Layer Architecture Model Based on LEO-MSS and Performance Analysis. In Proceedings of the 2019 IEEE 90th Vehicular Technology Conference (VTC2019-Fall), Honolulu, HI, USA, 22–25 September 2019; pp. 1–6. [Google Scholar]
  4. Song, H.; Liu, S.; Hu, X.; Li, X.; Wang, W. Load Balancing and QoS Supporting Access and Handover Decision algorithm for GEO/LEO Heterogeneous Satellite Networks. In Proceedings of the 2018 IEEE 4th International Conference on Computer and Communications (ICCC), Chengdu, China, 7–10 December 2018; pp. 640–645. [Google Scholar] [CrossRef]
  5. Li, J.; Xue, K.; Liu, J.; Zhang, Y. A User-Centric Handover Scheme for Ultra-Dense LEO Satellite Networks. IEEE Wirel. Commun. Lett. 2020, 9, 1904–1908. [Google Scholar] [CrossRef]
  6. Lahby, M.; Sekkaki, A. Optimal vertical handover based on TOPSIS algorithm and utility function in heterogeneous wireless networks. In Proceedings of the 2017 International Symposium on Networks, Computers and Communications (ISNCC), Marrakech, Morocco, 16–18 May 2017; pp. 1–6. [Google Scholar] [CrossRef]
  7. Zhang, L.; Wu, S.; Lv, X.; Jiao, J. A Two-Step Handover Strategy for GEO/LEO Heterogeneous Satellite Networks Based on Multi-Attribute Decision Making. Electronics 2022, 11, 795. [Google Scholar] [CrossRef]
  8. Radulescu, C.Z.; Radulescu, M. A Hybrid Group Multi-Criteria Approach Based on SAW, TOPSIS, VIKOR, and COPRAS Methods for Complex IoT Selection Problems. Electronics 2024, 13, 789. [Google Scholar] [CrossRef]
  9. Stanic, I.; Drajic, D.; Cica, Z. Overview of Network Selection and Vertical Handover Approaches and Simulation Tools in Heterogeneous Wireless Networks. In Proceedings of the 2023 16th International Conference on Advanced Technologies, Systems and Services in Telecommunications (TELSIKS), Nis, Serbia, 25–27 October 2023; pp. 133–142. [Google Scholar] [CrossRef]
  10. Alhabo, M.; Zhang, L. Multi-Criteria Handover Using Modified Weighted TOPSIS Methods for Heterogeneous Networks. IEEE Access 2018, 6, 40547–40558. [Google Scholar] [CrossRef]
  11. Laniewski, D.; Lanfer, E.; Beginn, S.; Dunker, J.; Dückers, M.; Aschenbruck, N. Starlink on the Road: A First Look at Mobile Starlink Performance in Central Europe. In Proceedings of the 2024 8th Network Traffic Measurement and Analysis Conference (TMA), Dresden, Germany, 21–24 May 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–8. [Google Scholar] [CrossRef]
  12. Lahby, M.; Attioui, A.; Sekkaki, A. An Optimized Vertical Handover Approach Based on M-ANP and TOPSIS in Heterogeneous Wireless Networks; Springer: Singapore, 2017; Volume 397, pp. 15–29. [Google Scholar] [CrossRef]
  13. Liu, J.; Xie, M.; Chen, S.; Xu, G.; Wu, T.; Li, W. TS-REPLICA: A novel replica placement algorithm based on the entropy weight TOPSIS method in spark for multimedia data analysis. Inf. Sci. 2023, 626, 133–148. [Google Scholar] [CrossRef]
  14. Miao, J.; Wang, P.; Yin, H.; Chen, N.; Wang, X. A Multi-attribute Decision Handover Scheme for LEO Mobile Satellite Networks. In Proceedings of the 2019 IEEE 5th International Conference on Computer and Communications (ICCC), Chengdu, China, 6–9 December 2019; pp. 938–942. [Google Scholar] [CrossRef]
  15. Goutam, S.; Unnikrishnan, S.; Karandikar, A. Algorithm for Vertical Handover using Multi Attribute Decision Making Techniques. In Proceedings of the 2020 IEEE International Conference on Communication, Networks and Satellite (Comnetsat), Batam, Indonesia, 17–18 December 2020; pp. 306–313. [Google Scholar] [CrossRef]
  16. Adiat, K.A.N.; Kolawole, A.O.; Adeyemo, I.A.; Akinlalu, A.A.; Afolabi, D.O. Assessment of groundwater resources from geophysical and remote sensing data in a basement complex environment using fuzzy-topsis algorithm. Results Earth Sci. 2024, 2, 100034. [Google Scholar] [CrossRef]
  17. Tabatabaei, S. A new model for evaluating the impact of organizational culture variables on the success of knowledge management in organizations using the TOPSIS multi-criteria algorithm: Case study. Comput. Hum. Behav. Rep. 2024, 14, 100417. [Google Scholar] [CrossRef]
  18. Kumar, K.; Prakash, A.; Tripathi, R. Spectrum handoff scheme with multiple attributes decision making for optimal network selection in cognitive radio networks. Digit. Commun. Netw. 2017, 3, 164–175. [Google Scholar] [CrossRef]
  19. Bhatia, M.; Kumar, K. Network selection in cognitive radio enabled Wireless Body Area Networks. Digit. Commun. Netw. 2020, 6, 75–85. [Google Scholar] [CrossRef]
  20. Ezhilarasan, N.; Vijayalakshmi, C. Optimization of Fuzzy programming with TOPSIS algorithm. Procedia Comput. Sci. 2020, 172, 473–479. [Google Scholar] [CrossRef]
  21. Hajduk, S.; Jelonek, D. A Decision-Making Approach Based on TOPSIS Method for Ranking Smart Cities in the Context of Urban Energy. Energies 2021, 14, 2691. [Google Scholar] [CrossRef]
  22. Ismail, A.; Hee Roh, B. Adaptive Handovers in heterogeneous networks using fuzzy MADM. In Proceedings of the International Conference on Mobile IT Convergence, Gumi, Republic of Korea, 26–28 September 2011; pp. 99–104. [Google Scholar]
  23. Mansouri, M.; Leghris, C.; Bekkhoucha, A. Towards a better combination of the MADM algorithms for the Vertical Handover optimization in a mobile network multi-access environment. In Proceedings of the 2015 10th International Conference on Intelligent Systems: Theories and Applications (SITA), Rabat, Morocco, 20–21 October 2015; pp. 1–5. [Google Scholar] [CrossRef]
  24. Mansouri, M.; Leghris, C. The use of MADM methods in the vertical handover decision making context. In Proceedings of the 2017 International Conference on Wireless Networks and Mobile Communications (WINCOM), Rabat, Morocco, 1–4 November 2017; pp. 1–6. [Google Scholar] [CrossRef]
  25. Almutairi, A.F.; Landolsi, M.A.; Al-Hawaj, A.O. Weighting Selection in GRA-based MADM for Vertical Handover in Wireless Networks. In Proceedings of the 2016 UKSim-AMSS 18th International Conference on Computer Modelling and Simulation (UKSim), Cambridge, UK, 6–8 April 2016; pp. 331–336. [Google Scholar] [CrossRef]
  26. Ipaye, A.A.A.; Ibrahim, A.M.A.; Ahmed, I.I.O.; Nagar, S.A.; Mitropoulos, D.N.G. Mathematical Model Implementation of Vertical Handover Network Decision Algorithms in Heterogeneous Network Based On QoS Parameters. In Proceedings of the 2018 International Conference on Computer, Control, Electrical, and Electronics Engineering (ICCCEEE), Khartoum, Sudan, 12–14 August 2018; pp. 1–6. [Google Scholar] [CrossRef]
  27. Madi, E.N.; Zakaria, Z.A.; Sambas, A.; Sukono. Toward Effective Uncertainty Management in Decision-Making Models Based on Type-2 Fuzzy TOPSIS. Mathematics 2023, 11, 3512. [Google Scholar] [CrossRef]
  28. Wang, A.; Sun, L.; Liu, J. An Innovative TOPSIS–Mahalanobis Distance Approach to Comprehensive Spatial Prioritization Based on Multi-Dimensional Drought Indicators. Atmosphere 2024, 15, 1347. [Google Scholar] [CrossRef]
  29. Mansouri, M.; Leghris, C. A battery level aware MADM combination for the vertical handover decision making. In Proceedings of the 2017 13th International Wireless Communications and Mobile Computing Conference (IWCMC), Valencia, Spain, 26–30 June 2017; pp. 1448–1452. [Google Scholar] [CrossRef]
  30. Shih, H.S. TOPSIS Variants. In TOPSIS and Its Extensions: A Distance-Based MCDM Approach; Springer International Publishing: Cham, Switzerland, 2022; pp. 33–79. [Google Scholar] [CrossRef]
  31. Zhang, W. Handover decision using fuzzy MADM in heterogeneous networks. In Proceedings of the 2004 IEEE Wireless Communications and Networking Conference (IEEE Cat. No. 04TH8733), Atlanta, GA, USA, 21–25 March 2004; Volume 2, pp. 653–658. [Google Scholar] [CrossRef]
  32. Ben Zineb, A.; Ayadi, M.; Tabbane, S. Fuzzy MADM based vertical handover algorithm for enhancing network performances. In Proceedings of the 2015 23rd International Conference on Software, Telecommunications and Computer Networks (SoftCOM), Split, Croatia, 16–18 September 2015; pp. 153–159. [Google Scholar] [CrossRef]
  33. Rahman, S.; Alali, A.S.; Baro, N.; Ali, S.; Kakati, P. A Novel TOPSIS Framework for Multi-Criteria Decision Making with Random Hypergraphs: Enhancing Decision Processes. Symmetry 2024, 16, 1602. [Google Scholar] [CrossRef]
  34. Nilashi, M.; Mardani, A.; Liao, H.; Ahmadi, H.; Manaf, A.A.; Almukadi, W. A Hybrid Method with TOPSIS and Machine Learning Techniques for Sustainable Development of Green Hotels Considering Online Reviews. Sustainability 2019, 11, 6013. [Google Scholar] [CrossRef]
  35. Corrente, S.; Tasiou, M. A robust TOPSIS method for decision making problems with hierarchical and non-monotonic criteria. Expert Syst. Appl. 2023, 214, 119045. [Google Scholar] [CrossRef]
  36. Bazrafkan, A.; Pakravan, M.R. An MADM network selection approach for next generation heterogeneous networks. In Proceedings of the 2017 Iranian Conference on Electrical Engineering (ICEE), Tehran, Iran, 2–4 May 2017; pp. 1884–1890. [Google Scholar] [CrossRef]
  37. Alinezhad, A.; Amini, A. Sensitivity Analysis of TOPSIS Technique: The Results of Change in the Weight of One Attribute on the Final Ranking of Alternatives. J. Optim. Ind. Eng. 2011, 4, 23–28. [Google Scholar]
  38. Jiří, M. The Robustness of TOPSIS Results Using Sensitivity Analysis Based on Weight Tuning. In Proceedings of the World Congress on Medical Physics and Biomedical Engineering 2018, Prague, Czech Republic, 3–8 June 2018; Lhotska, L., Sukupova, L., Lacković, I., Ibbott, G.S., Eds.; Springer: Singapore, 2019; pp. 83–86. [Google Scholar]
  39. Alinezhad, A.; Sarrafha, K.; Amini, A. Sensitivity Analysis of SAW Technique: The Impact of Changing the Decision Making Matrix Elements on the Final Ranking of Alternatives. Iran. J. Oper. Res. 2014, 5, 82–94. [Google Scholar]
  40. Simanaviciene, R.; Ustinovichius, L. Sensitivity Analysis for Multiple Criteria Decision Making Methods: TOPSIS and SAW. Procedia Soc. Behav. Sci. 2010, 2, 7743–7744. [Google Scholar] [CrossRef]
  41. Dewangan, S.; Gangopadhyay, S.; Biswas, C. Study of surface integrity and dimensional accuracy in EDM using Fuzzy TOPSIS and sensitivity analysis. Measurement 2015, 63, 364–376. [Google Scholar] [CrossRef]
  42. Hidayat, T.; Kurniawan, H.; Albar, A.V.; Istiqomah, D.A. Sensitivity Analysis of Decision Support Systems for Selection of Achievement Students Using the TOPSIS Method. In Proceedings of the 2023 6th International Conference of Computer and Informatics Engineering (IC2IE), Lombok, Indonesia, 14–15 September 2023; pp. 1–6. [Google Scholar] [CrossRef]
  43. Nibrad, G.M.; Khot, P. A sensitivity analysis approach for deterministic multi-criteria decision making methods. Int. J. Manag. IT Eng. 2013, 3, 140–177. [Google Scholar]
  44. Saltelli, A.; Ratto, M.; Andres, T.; Campolongo, F.; Cariboni, J.; Gatelli, D.; Saisana, M.; Tarantola, S. Global Sensitivity Analysis: The Primer; John Wiley & Sons: Hoboken, NJ, USA, 2008. [Google Scholar]
  45. Seo, M.; Ryu, N.; Min, S. Sensitivity Analysis for Multi-Objective Optimization of the Benchmark TEAM Problem. IEEE Trans. Magn. 2020, 56, 1–4. [Google Scholar] [CrossRef]
  46. Hidayat, T.; Kurniawan, H.; Astuti, I.A.; Pravitasari, R.; Kristyawan, Y.; Syahadiyanti, L. The Effect and Impact of the Electre Method for Sensitivity Testing Based on the Case Study Selection of Outstanding Students. In Proceedings of the 2022 5th International Conference of Computer and Informatics Engineering (IC2IE), Jakarta, Indonesia, 13–14 September 2022; pp. 118–122. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the TOPSIS algorithm.
Figure 2. Block diagram of the methodology.
Figure 3. Criteria weights for the initial decision matrix.
Figure 4. Satellite scores for the initial decision matrix.
Figure 5. Satellite scores for the initial case and the four scenarios using SD-TOPSIS, for criterion variation within ±5%.
Figure 6. Satellite scores for the initial case and the four scenarios using Entropy-TOPSIS, for criterion variation within ±5%.
Figure 7. Satellite scores for the initial case and the four scenarios using Importance-TOPSIS, for criterion variation within ±5%.
Figure 8. Variation rates of the scores for Scenario 1, for criterion variation within ±5%.
Figure 9. Variation rates of the scores for Scenario 2, for criterion variation within ±5%.
Figure 10. Variation rates of the scores for Scenario 3, for criterion variation within ±5%.
Figure 11. Variation rates of the scores for Scenario 4, for criterion variation within ±5%.
Figure 12. Satellite scores for the initial case and the four scenarios using SD-TOPSIS, for criterion variation within ±10%.
Figure 13. Satellite scores for the initial case and the four scenarios using Entropy-TOPSIS, for criterion variation within ±10%.
Figure 14. Satellite scores for the initial case and the four scenarios using Importance-TOPSIS, for criterion variation within ±10%.
Figure 15. Variation rates of the scores for Scenario 1, for criterion variation within ±10%.
Figure 16. Variation rates of the scores for Scenario 2, for criterion variation within ±10%.
Figure 17. Variation rates of the scores for Scenario 3, for criterion variation within ±10%.
Figure 18. Variation rates of the scores for Scenario 4, for criterion variation within ±10%.
Figure 19. Standard deviation of scores for the variation range −5 ≤ x ≤ 5 percent.
Figure 20. Standard deviation of scores for the variation range −10 ≤ x ≤ 10 percent.
Table 1. Summary of related work.
Reference | Methodology Used | Advantages | Limitations | Application Domain
[10] | Entropy-TOPSIS, SD-TOPSIS | Reduces handovers, improves user throughput | Limited to stable cellular networks | Heterogeneous cellular networks
[6] | MADM algorithms with utility functions | Reduces ping-pong effect and handoff failures | Lacks sensitivity analysis | Heterogeneous 4G cellular networks
[12] | M-ANP for weighting, TOPSIS for ranking | Reduces reversal and ping-pong effects | Lacks sensitivity analysis | Heterogeneous wireless networks
[13] | TS-REPLICA (Entropy-TOPSIS) | Improves load balancing and system performance | Not tested in dynamic LEO environments | Distributed computing (Hadoop)
[7] | Importance-TOPSIS for ranking LEO satellites | Reduces handovers, decreases forced connection terminations | Needs adaptation to dynamic satellite environments | GEO/LEO heterogeneous satellite networks
[14] | Multi-attribute decision handover scheme | Reduces handover frequency, improves data stream stability | Does not address rapid performance fluctuations in LEO networks | LEO mobile satellite networks
[4] | PASMAD (SINR-based MADM) for access/handover | Enhances QoS, balances network loads, outperforms RSS-based methods | Lacks sensitivity analysis for SINR and bandwidth fluctuations | GEO/LEO heterogeneous satellite networks
[5] | User-centric handover scheme with data buffering | Improves throughput, delay, and latency | Does not address bandwidth and latency fluctuations, lacks sensitivity analysis | Ultra-dense LEO satellite networks
Table 2. Initial decision matrix.
Satellite ID | Upload Speed [Mbps] | Download Speed [Mbps] | Ping [ms] | Packet Loss [%]
1 | 10.45 | 188.94 | 49.50 | 0.01
2 | 12.72 | 220.21 | 62.67 | 0.16
3 | 13.37 | 213.23 | 62.34 | 2.88
4 | 16.04 | 221.39 | 64.16 | 19.46
5 | 14.65 | 218.55 | 47.31 | 0.01
6 | 13.57 | 226.767 | 57.13 | 3.52
7 | 14.24 | 174.59 | 60.84 | 1.28
8 | 13.39 | 226.22 | 53.19 | 13.28
9 | 16.85 | 179.44 | 65.85 | 0.48
10 | 15.36 | 208.83 | 68.96 | 19.04
Table 3. Criteria weights values for the initial decision matrix.
Criteria | SD-TOPSIS | Entropy-TOPSIS | Importance-TOPSIS
Upload speed | 0.2943 | 0.3412 | 0.3493
Download speed | 0.2151 | 0.1812 | 0.1935
Ping | 0.2954 | 0.3282 | 0.2974
Packet loss | 0.195 | 0.1492 | 0.1597
Table 4. Satellite scores values for the initial decision matrix.
Algorithm | Sat 1 | Sat 2 | Sat 3 | Sat 4 | Sat 5 | Sat 6 | Sat 7 | Sat 8 | Sat 9 | Sat 10
SD-TOPSIS | 0.43 | 0.39 | 0.42 | 0.54 | 0.77 | 0.53 | 0.45 | 0.56 | 0.55 | 0.46
Entropy-TOPSIS | 0.41 | 0.36 | 0.40 | 0.56 | 0.76 | 0.50 | 0.46 | 0.54 | 0.56 | 0.47
Importance-TOPSIS | 0.39 | 0.37 | 0.41 | 0.58 | 0.74 | 0.51 | 0.47 | 0.54 | 0.59 | 0.49
Table 5. Random variation rates applied to upload speed, for criterion variation within ±5%.

| Satellite ID | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Variation rate (%) | 5 | −2 | −3 | −1 | 2 | 3 | 5 | −5 | 1 | 2 |
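Each scenario matrix is obtained by applying these percentage rates to one criterion column of the initial decision matrix, one criterion per scenario. A minimal sketch of that perturbation step, using the Table 5 rates on the upload-speed column of Table 2 (values rounded to two decimals; a few entries in the published matrices deviate marginally from this rule, presumably due to rounding conventions):

```python
# Variation rates (%) from Table 5 and the upload-speed column of Table 2.
rates = [5, -2, -3, -1, 2, 3, 5, -5, 1, 2]
upload = [10.45, 12.72, 13.37, 16.04, 14.65, 13.57, 14.24, 13.39, 16.85, 15.36]

# Apply v_new = v * (1 + rate/100) per satellite, as in Table 6.
perturbed = [round(u * (1 + r / 100), 2) for u, r in zip(upload, rates)]
print(perturbed)  # satellite 1: 10.45 * 1.05 -> 10.97
```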
Table 6. Decision matrix of Scenario 1, for criterion variation within ±5%.

| Satellite ID | Upload Speed [Mbps] | Download Speed [Mbps] | Ping [ms] | Packet Loss [%] |
|---|---|---|---|---|
| 1 | 10.97 | 188.94 | 49.50 | 0.01 |
| 2 | 12.46 | 220.21 | 62.67 | 0.16 |
| 3 | 13.31 | 213.23 | 62.34 | 2.88 |
| 4 | 15.88 | 221.39 | 64.16 | 19.46 |
| 5 | 14.95 | 218.55 | 47.31 | 0.01 |
| 6 | 13.98 | 226.767 | 57.13 | 3.52 |
| 7 | 14.95 | 174.59 | 60.84 | 1.28 |
| 8 | 12.72 | 226.22 | 53.19 | 13.28 |
| 9 | 17.01 | 179.44 | 65.85 | 0.48 |
| 10 | 15.67 | 208.83 | 68.96 | 19.04 |
Table 7. Random variation rates applied to download speed, for criterion variation within ±5%.

| Satellite ID | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Variation rate (%) | 4 | −3 | −3 | −1 | 2 | 3 | 5 | −4 | 1 | 2 |
Table 8. Decision matrix of Scenario 2, for criterion variation within ±5%.

| Satellite ID | Upload Speed [Mbps] | Download Speed [Mbps] | Ping [ms] | Packet Loss [%] |
|---|---|---|---|---|
| 1 | 10.97 | 198.39 | 49.50 | 0.01 |
| 2 | 12.46 | 215.81 | 62.67 | 0.16 |
| 3 | 13.31 | 206.83 | 62.34 | 2.88 |
| 4 | 15.88 | 219.18 | 64.16 | 19.46 |
| 5 | 14.95 | 222.93 | 47.31 | 0.01 |
| 6 | 13.98 | 233.57 | 57.13 | 3.52 |
| 7 | 14.95 | 183.32 | 60.84 | 1.28 |
| 8 | 12.72 | 214.90 | 53.19 | 13.28 |
| 9 | 17.01 | 181.23 | 65.85 | 0.48 |
| 10 | 15.67 | 213.01 | 68.96 | 19.04 |
Table 9. Random variation rates applied to ping, for criterion variation within ±5%.

| Satellite ID | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Variation rate (%) | −4 | 2 | 3 | 1 | −2 | −3 | −5 | 4 | −1 | −2 |
Table 10. Decision matrix of Scenario 3, for criterion variation within ±5%.

| Satellite ID | Upload Speed [Mbps] | Download Speed [Mbps] | Ping [ms] | Packet Loss [%] |
|---|---|---|---|---|
| 1 | 10.97 | 198.39 | 47.52 | 0.01 |
| 2 | 12.46 | 215.81 | 63.92 | 0.16 |
| 3 | 13.31 | 206.83 | 64.21 | 2.88 |
| 4 | 15.88 | 219.18 | 64.80 | 19.46 |
| 5 | 14.95 | 222.93 | 46.36 | 0.01 |
| 6 | 13.98 | 233.57 | 55.41 | 3.52 |
| 7 | 14.95 | 183.32 | 57.79 | 1.28 |
| 8 | 12.72 | 214.90 | 55.31 | 13.28 |
| 9 | 17.01 | 181.23 | 65.19 | 0.48 |
| 10 | 15.67 | 213.01 | 67.58 | 19.04 |
Table 11. Random variation rates applied to packet loss, for criterion variation within ±5%.

| Satellite ID | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Variation rate (%) | −4 | 2 | 3 | 1 | −2 | −3 | −5 | 4 | −1 | −2 |
Table 12. Decision matrix of Scenario 4, for criterion variation within ±5%.

| Satellite ID | Upload Speed [Mbps] | Download Speed [Mbps] | Ping [ms] | Packet Loss [%] |
|---|---|---|---|---|
| 1 | 10.97 | 198.39 | 47.52 | 0.0096 |
| 2 | 12.46 | 215.81 | 63.92 | 0.1632 |
| 3 | 13.31 | 206.83 | 64.21 | 2.97 |
| 4 | 15.88 | 219.18 | 64.80 | 19.65 |
| 5 | 14.95 | 222.93 | 46.36 | 0.0098 |
| 6 | 13.98 | 233.57 | 55.41 | 3.4144 |
| 7 | 14.95 | 183.32 | 57.79 | 1.216 |
| 8 | 12.72 | 214.90 | 55.31 | 13.81 |
| 9 | 17.01 | 181.23 | 65.19 | 0.4752 |
| 10 | 15.67 | 213.01 | 67.58 | 18.66 |
Table 13. Summary of the scores of the ten satellites in all four scenarios, for criterion variation within ±5%.

| Scenario | TOPSIS Variant | Sat 1 | Sat 2 | Sat 3 | Sat 4 | Sat 5 | Sat 6 | Sat 7 | Sat 8 | Sat 9 | Sat 10 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Scenario 1 | SD-TOPSIS | 0.44 | 0.36 | 0.39 | 0.51 | 0.77 | 0.53 | 0.47 | 0.49 | 0.53 | 0.45 |
| | Entropy-TOPSIS | 0.43 | 0.31 | 0.36 | 0.52 | 0.76 | 0.51 | 0.48 | 0.47 | 0.55 | 0.46 |
| | Importance-TOPSIS | 0.42 | 0.33 | 0.37 | 0.53 | 0.76 | 0.52 | 0.49 | 0.47 | 0.56 | 0.47 |
| Scenario 2 | SD-TOPSIS | 0.45 | 0.32 | 0.36 | 0.50 | 0.77 | 0.52 | 0.48 | 0.46 | 0.54 | 0.44 |
| | Entropy-TOPSIS | 0.43 | 0.28 | 0.34 | 0.51 | 0.76 | 0.50 | 0.49 | 0.45 | 0.55 | 0.45 |
| | Importance-TOPSIS | 0.42 | 0.31 | 0.35 | 0.53 | 0.76 | 0.52 | 0.49 | 0.45 | 0.55 | 0.47 |
| Scenario 3 | SD-TOPSIS | 0.49 | 0.29 | 0.32 | 0.46 | 0.78 | 0.53 | 0.50 | 0.39 | 0.51 | 0.42 |
| | Entropy-TOPSIS | 0.50 | 0.24 | 0.29 | 0.45 | 0.78 | 0.50 | 0.50 | 0.38 | 0.50 | 0.41 |
| | Importance-TOPSIS | 0.42 | 0.31 | 0.35 | 0.53 | 0.76 | 0.52 | 0.49 | 0.45 | 0.55 | 0.47 |
| Scenario 4 | SD-TOPSIS | 0.51 | 0.30 | 0.31 | 0.45 | 0.78 | 0.54 | 0.52 | 0.36 | 0.52 | 0.42 |
| | Entropy-TOPSIS | 0.51 | 0.26 | 0.29 | 0.44 | 0.78 | 0.52 | 0.52 | 0.37 | 0.51 | 0.41 |
| | Importance-TOPSIS | 0.46 | 0.29 | 0.32 | 0.49 | 0.76 | 0.54 | 0.53 | 0.36 | 0.55 | 0.45 |
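The stability comparison drawn from these results can be reproduced directly from the published scores: for each variant, compare the four scenario score vectors of Table 13 against the baseline scores of Table 4 and average the absolute deviations. The sketch below does this for SD-TOPSIS and Entropy-TOPSIS with values transcribed from those two tables; the lower mean deviation of SD-TOPSIS reflects the reduced score variation rate claimed for it.

```python
import numpy as np

# Baseline closeness scores per satellite (Table 4).
baseline = {
    "SD":      [0.43, 0.39, 0.42, 0.54, 0.77, 0.53, 0.45, 0.56, 0.55, 0.46],
    "Entropy": [0.41, 0.36, 0.40, 0.56, 0.76, 0.50, 0.46, 0.54, 0.56, 0.47],
}

# Scores under Scenarios 1-4 with +/-5% variations (Table 13).
scenarios = {
    "SD": [
        [0.44, 0.36, 0.39, 0.51, 0.77, 0.53, 0.47, 0.49, 0.53, 0.45],
        [0.45, 0.32, 0.36, 0.50, 0.77, 0.52, 0.48, 0.46, 0.54, 0.44],
        [0.49, 0.29, 0.32, 0.46, 0.78, 0.53, 0.50, 0.39, 0.51, 0.42],
        [0.51, 0.30, 0.31, 0.45, 0.78, 0.54, 0.52, 0.36, 0.52, 0.42],
    ],
    "Entropy": [
        [0.43, 0.31, 0.36, 0.52, 0.76, 0.51, 0.48, 0.47, 0.55, 0.46],
        [0.43, 0.28, 0.34, 0.51, 0.76, 0.50, 0.49, 0.45, 0.55, 0.45],
        [0.50, 0.24, 0.29, 0.45, 0.78, 0.50, 0.50, 0.38, 0.50, 0.41],
        [0.51, 0.26, 0.29, 0.44, 0.78, 0.52, 0.52, 0.37, 0.51, 0.41],
    ],
}

def mean_abs_deviation(variant):
    # Average absolute score change from baseline across scenarios and satellites.
    base = np.array(baseline[variant])
    return np.mean([np.abs(np.array(s) - base).mean() for s in scenarios[variant]])

for v in ("SD", "Entropy"):
    print(f"{v}-TOPSIS mean |score change|: {mean_abs_deviation(v):.4f}")
```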
Table 14. Random variation rates applied to upload speed, for criterion variation within ±10%.

| Satellite ID | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Variation rate (%) | 10 | −5 | −6 | −1 | 2 | 3 | 10 | −10 | 1 | 2 |
Table 15. Decision matrix of Scenario 1, for criterion variation within ±10%.

| Satellite ID | Upload Speed [Mbps] | Download Speed [Mbps] | Ping [ms] | Packet Loss [%] |
|---|---|---|---|---|
| 1 | 11.49 | 188.94 | 49.50 | 0.01 |
| 2 | 12.08 | 220.21 | 62.67 | 0.16 |
| 3 | 12.56 | 213.23 | 62.34 | 2.88 |
| 4 | 15.88 | 221.39 | 64.16 | 19.46 |
| 5 | 14.94 | 218.55 | 47.31 | 0.01 |
| 6 | 13.97 | 226.767 | 57.13 | 3.52 |
| 7 | 15.66 | 174.59 | 60.84 | 1.28 |
| 8 | 12.05 | 226.22 | 53.19 | 13.28 |
| 9 | 17.02 | 179.44 | 65.85 | 0.48 |
| 10 | 15.66 | 208.83 | 68.96 | 19.04 |
Table 16. Random variation rates applied to download speed, for criterion variation within ±10%.

| Satellite ID | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Variation rate (%) | 10 | −5 | −6 | −1 | 2 | 3 | 10 | −10 | 1 | 2 |
Table 17. Decision matrix of Scenario 2, for criterion variation within ±10%.

| Satellite ID | Upload Speed [Mbps] | Download Speed [Mbps] | Ping [ms] | Packet Loss [%] |
|---|---|---|---|---|
| 1 | 11.49 | 207.84 | 49.50 | 0.01 |
| 2 | 12.08 | 209.20 | 62.67 | 0.16 |
| 3 | 12.56 | 200.44 | 62.34 | 2.88 |
| 4 | 15.88 | 219.18 | 64.16 | 19.46 |
| 5 | 14.94 | 222.93 | 47.31 | 0.01 |
| 6 | 13.97 | 233.57 | 57.13 | 3.52 |
| 7 | 15.66 | 192.05 | 60.84 | 1.28 |
| 8 | 12.05 | 203.59 | 53.19 | 13.28 |
| 9 | 17.02 | 181.23 | 65.85 | 0.48 |
| 10 | 15.66 | 213.01 | 68.96 | 19.04 |
Table 18. Random variation rates applied to ping, for criterion variation within ±10%.

| Satellite ID | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Variation rate (%) | −10 | 5 | 6 | 1 | −2 | −3 | −10 | 10 | −1 | −2 |
Table 19. Decision matrix of Scenario 3, for criterion variation within ±10%.

| Satellite ID | Upload Speed [Mbps] | Download Speed [Mbps] | Ping [ms] | Packet Loss [%] |
|---|---|---|---|---|
| 1 | 11.49 | 207.84 | 44.55 | 0.01 |
| 2 | 12.08 | 209.20 | 65.80 | 0.16 |
| 3 | 12.56 | 200.44 | 66.08 | 2.88 |
| 4 | 15.88 | 219.18 | 64.80 | 19.46 |
| 5 | 14.94 | 222.93 | 46.36 | 0.01 |
| 6 | 13.97 | 233.57 | 55.41 | 3.52 |
| 7 | 15.66 | 192.05 | 54.75 | 1.28 |
| 8 | 12.05 | 203.59 | 58.51 | 13.28 |
| 9 | 17.02 | 181.23 | 65.19 | 0.48 |
| 10 | 15.66 | 213.01 | 67.58 | 19.04 |
Table 20. Random variation rates applied to packet loss, for criterion variation within ±10%.

| Satellite ID | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Variation rate (%) | −10 | 5 | 6 | 1 | −2 | −3 | −10 | 10 | −1 | −2 |
Table 21. Decision matrix of Scenario 4, for criterion variation within ±10%.

| Satellite ID | Upload Speed [Mbps] | Download Speed [Mbps] | Ping [ms] | Packet Loss [%] |
|---|---|---|---|---|
| 1 | 11.49 | 207.84 | 44.55 | 0.009 |
| 2 | 12.08 | 209.20 | 65.80 | 0.168 |
| 3 | 12.56 | 200.44 | 66.08 | 3.0528 |
| 4 | 15.88 | 219.18 | 64.80 | 19.65 |
| 5 | 14.94 | 222.93 | 46.36 | 0.0098 |
| 6 | 13.97 | 233.57 | 55.41 | 3.4744 |
| 7 | 15.66 | 192.05 | 54.75 | 1.152 |
| 8 | 12.05 | 203.59 | 58.51 | 14.61 |
| 9 | 17.02 | 181.23 | 65.19 | 0.4752 |
| 10 | 15.66 | 213.01 | 67.58 | 18.66 |
Table 22. Summary of the scores of the ten satellites in all four scenarios, for criterion variation within ±10%.

| Scenario | TOPSIS Variant | Sat 1 | Sat 2 | Sat 3 | Sat 4 | Sat 5 | Sat 6 | Sat 7 | Sat 8 | Sat 9 | Sat 10 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Scenario 1 | SD-TOPSIS | 0.45 | 0.32 | 0.32 | 0.50 | 0.76 | 0.51 | 0.51 | 0.43 | 0.53 | 0.44 |
| | Entropy-TOPSIS | 0.42 | 0.26 | 0.27 | 0.52 | 0.74 | 0.48 | 0.53 | 0.39 | 0.56 | 0.46 |
| | Importance-TOPSIS | 0.46 | 0.30 | 0.30 | 0.49 | 0.76 | 0.50 | 0.50 | 0.44 | 0.52 | 0.43 |
| Scenario 2 | SD-TOPSIS | 0.46 | 0.27 | 0.28 | 0.49 | 0.75 | 0.50 | 0.53 | 0.38 | 0.53 | 0.43 |
| | Entropy-TOPSIS | 0.42 | 0.21 | 0.24 | 0.51 | 0.74 | 0.47 | 0.55 | 0.36 | 0.56 | 0.46 |
| | Importance-TOPSIS | 0.47 | 0.26 | 0.26 | 0.48 | 0.76 | 0.50 | 0.51 | 0.40 | 0.51 | 0.42 |
| Scenario 3 | SD-TOPSIS | 0.56 | 0.20 | 0.20 | 0.41 | 0.77 | 0.48 | 0.55 | 0.24 | 0.46 | 0.37 |
| | Entropy-TOPSIS | 0.57 | 0.14 | 0.15 | 0.38 | 0.78 | 0.45 | 0.54 | 0.23 | 0.44 | 0.35 |
| | Importance-TOPSIS | 0.55 | 0.20 | 0.19 | 0.42 | 0.77 | 0.49 | 0.55 | 0.24 | 0.46 | 0.38 |
| Scenario 4 | SD-TOPSIS | 0.59 | 0.22 | 0.20 | 0.38 | 0.76 | 0.50 | 0.60 | 0.21 | 0.48 | 0.35 |
| | Entropy-TOPSIS | 0.60 | 0.19 | 0.18 | 0.36 | 0.77 | 0.47 | 0.58 | 0.21 | 0.46 | 0.33 |
| | Importance-TOPSIS | 0.58 | 0.23 | 0.21 | 0.39 | 0.76 | 0.51 | 0.59 | 0.21 | 0.48 | 0.36 |

Share and Cite

MDPI and ACS Style

Buhinyori Ngango, P.; Lufua Binda, M.-L.; Matalatala Tamasala, M.; Sedi Nzakuna, P.; Paciello, V.; Kuti Lusala, A. Evaluation of TOPSIS Algorithm for Multi-Criteria Handover in LEO Satellite Networks: A Sensitivity Analysis. Network 2025, 5, 15. https://doi.org/10.3390/network5020015
