Article

Fuzzy Multi-Criteria Decision Framework for Asteroid Selection in Boulder Capture Missions

by Nelson Ramírez 1,2, Juan Miguel Sánchez-Lozano 2,* and Eloy Peña-Asensio 3

1 Department of Military Information and Communications Technologies, School of Command, Control and Telecommunications Techniques of Spanish Air Force, A-5 Highway km 10.5, 28054 Madrid, Spain
2 Department of Electronics, Technology of Computers and Projects, Universidad Politécnica de Cartagena, Plaza del Hospital 1, 30202 Cartagena, Spain
3 Department of Aerospace Science and Technology, Politecnico di Milano, Via La Masa 34, 20156 Milano, Italy
* Author to whom correspondence should be addressed.
Aerospace 2025, 12(9), 800; https://doi.org/10.3390/aerospace12090800
Submission received: 26 June 2025 / Revised: 13 August 2025 / Accepted: 3 September 2025 / Published: 4 September 2025

Abstract

A systematic fuzzy multi-criteria decision making (MCDM) framework is proposed to prioritize near-Earth asteroids (NEAs) for a boulder capture mission, addressing the need for rigorous prioritization of asteroid candidates under data uncertainty. Twenty-eight NEA candidates were first selected through filtering based on physical and orbital properties. Objective fuzzy weighting MCDM methods (statistical variance, CRITIC, and MEREC) were then applied to determine the importance of criteria such as capture cost, synodic period, rotation rate, orbit determination accuracy, and similarity to other candidates. Subsequent fuzzy ranking MCDM techniques (WASPAS, TOPSIS, MARCOS) generated nine prioritization schemes whose coherence was assessed via correlation analysis. An innovative sensitivity analysis employing Dirichlet-distributed random sampling around reference weights quantified ranking robustness. All methodology combinations consistently identified the same top four asteroids, with 2013 NJ ranked first in every scenario, and stability metrics confirmed resilience to plausible weight variations. The proposed modular MCDM methodology provides mission planners with a reliable, adaptable decision support tool for asteroid selection, demonstrably narrowing broad candidate pools to robust targets while accommodating future data updates.

1. Introduction

Asteroids are remnants of early Solar System formation; they are small bodies larger than one meter in size, composed of rocky, icy, or metallic materials, that orbit the Sun without qualifying as planets or satellites [1]. A particular subset of them, near-Earth asteroids (NEAs), follows trajectories that bring them into Earth’s neighborhood—by definition, their perihelion distance is less than 1.3 AU—making them especially important for scientific investigation, resource prospecting, and planetary defense (PD) [2,3,4]. NEAs preserve primitive materials and record processes from the protoplanetary disk, offering windows into planetary origins and Solar System evolution [5]. At the same time, NEAs hold dual significance: many contain water, volatiles, and metals suitable for in situ resource utilization (ISRU) in future missions, while others pose impact hazards that drive PD mitigation efforts [6,7,8,9,10].
Several space missions have been carried out over the past decades to investigate asteroids and advance scientific understanding of their composition, geology, and potential as strategic resources. One of the most ambitious initiatives in this context was NASA’s Asteroid Redirect Mission (ARM), which proposed capturing an asteroid and relocating it to a stable orbit within the Earth–Moon system. Although ARM was ultimately canceled, it marked a significant shift in the objectives of asteroid missions by emphasizing the demonstration of capture and redirection technologies. Unlike previous sample return missions—such as NEAR Shoemaker, Hayabusa, Hayabusa2, and OSIRIS-REx—ARM aimed not only to collect small surface samples but also to enable the retrieval of a substantial mass of asteroid material for in-depth, direct analysis.
At the beginning of the ARM program, two mission concepts were considered: capturing an entire small asteroid or extracting a large boulder from its surface [11]. The present work focuses on the latter—a boulder capture mission—which was found to be more relevant from the perspective of planetary defense system development and the advancement of proximity operation techniques [12]. While ARM defined a reference mission targeting a specific asteroid with predetermined physical and orbital parameters, the present study generalizes the problem by identifying and prioritizing all viable NEAs from an extended catalog that meet a defined set of requirements.
Despite extensive surveys cataloging thousands of NEAs, only a subset meets the combination of requirements on size, orbit geometry, and capture viability needed for a boulder capture mission [13]. As detailed in this study, an initial screening process illustrates how proper filtering yields a manageable candidate pool yet still requires rigorous prioritization to guide mission planning. To address this, a structured decision framework is essential.
Building on this motivation, multi-criteria decision making (MCDM) methodologies provide systematic means with which to evaluate alternatives across different criteria [14]. These criteria may include quantitative parameters—such as capture cost estimates, synodic periods, or rotation rates—and others exhibiting high uncertainty—such as orbit determination accuracy under limited observations or the number of similar candidates based on orbital similarity. For this reason, fuzzy logic, as a branch of artificial intelligence, is applied here to rigorously model parameters with elevated uncertainty through triangular fuzzy numbers (TFNs), enabling the decision process to incorporate uncertain or imprecise data in a principled way [15].
It is important to note that the application of artificial intelligence techniques to MCDM extends beyond fuzzy logic for representing vague or uncertain information. Machine learning methodologies such as the adaptive-network-based fuzzy inference system (ANFIS) [16,17] have been implemented successfully in MCDM problems [18,19,20,21,22,23,24]. This kind of system automatically derives appropriate rules from input–output data and leverages neural learning to optimize criterion combinations with minimal error [25]. Additionally, machine learning techniques also allow the evaluation of new alternatives without re-executing the original ranking algorithm [26], and they eliminate the need for experts to specify exact criterion weights [27], which is an advantage already offered by the objective weight determination methods proposed in this work. However, neuro-fuzzy methods require historical data for model training and depend critically on selecting an optimal network architecture—input variables, hidden layers and nodes, transfer functions, learning rules, initialization, stopping criteria, and more—without which models could not converge or generalize effectively [28,29]. The training process can be slow, may fail to converge without sufficient data, and becomes increasingly complex as criteria and decision rules proliferate, leading to exponential growth in rule sets that undermines interpretability and incurs costly rule codification efforts [26,29]. Moreover, reliance on historical data reduces flexibility when criterion structures evolve, and studies have shown that limited sample sizes further constrain generalization [29].
For these reasons, for the purposes of this analysis, a deterministic fuzzy MCDM framework is preferred: it is simple, quick, and cost-effective to construct; it requires neither extensive data collection nor expert-driven weight assignment; and its outputs are transparent, observable, and easily audited, avoiding the “grey-box” limitations of trained neuro-fuzzy systems [25,28]. The proposed framework provides alternative rankings without historical data requirements or architectural overhead. Importantly, the resulting framework establishes a transparent foundation for future machine-learning integration, as clearly defined criteria weights and outcomes will facilitate machine learning adoption once new data becomes available [25,27].
Therefore, the aim of this study is to analyze in detail potential asteroid candidates for an asteroid boulder capture mission and to compare outcomes obtained through combinations of both classical and innovative fuzzy MCDM techniques. By examining how different weighting and ranking methods handle uncertain criteria, decision makers can gain insight into robust top candidates and understand the influence of weight assumptions on final priorities.
This paper is organized as follows. Section 2 details the materials and methods; it introduces the fundamentals of fuzzy logic and presents the MCDM methodologies including the fuzzy weighting methods and fuzzy ranking. Section 3 introduces the decision problem of prioritization of NEAs for boulder capture missions; it describes the initial requirements for candidate selection, outlines the evaluation criteria, and assesses the potential candidates. Section 4 presents the results and discussion, including the weight sets from the three methods, the alternative rankings from the nine MCDM combinations, a comparative analysis of rankings, and the sensitivity analysis findings that quantify ranking robustness. Finally, Section 5 provides the conclusions, discussing implications for future asteroid mission planning, lessons on decision support under uncertainty, and potential extensions as observational data improve.

2. Materials and Methods

This study implements three fuzzy weighting MCDM methods, i.e., fuzzy statistical variance (Section 2.2.1), fuzzy CRITIC (Section 2.2.2), and fuzzy MEREC (Section 2.2.3), to derive criterion importance from a decision matrix constructed from filtered NEA parameters obtained from NASA JPL’s Small-Body Database System (SBDS) [30]. Alternatives are then ranked using three fuzzy MCDM techniques: fuzzy WASPAS (Section 2.3.1), fuzzy TOPSIS (Section 2.3.2), and fuzzy MARCOS (Section 2.3.3), facilitating a comparative analysis of classical and innovative fuzzy-based MCDM approaches. In addition, an innovative sensitivity analysis system based on random sampling of weight assignments using a Dirichlet distribution centered on reference weights is introduced to rigorously analyze the results.
It is worth noting that the modular structure of this MCDM framework allows seamless incorporation of new data or criteria, ensuring that prioritization remains scientifically rigorous, transparent, and adaptable as mission requirements or observational inputs evolve. A high-level overview of the workflow adopted in this study to address the asteroid selection decision problem is presented in Figure 1.
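To make the sampling scheme concrete, the sketch below draws perturbed weight vectors from a Dirichlet distribution centered on a set of reference weights, built from normalized Gamma draws (Python's random module has no Dirichlet sampler). The function name and the concentration value are illustrative assumptions, not the exact settings used in this study:

```python
import random

def sample_weights_around(reference, concentration=200.0, rng=None):
    """Draw one perturbed weight vector from a Dirichlet distribution centered
    on the reference weights (alpha_j = concentration * w_j). A larger
    `concentration` keeps samples closer to the reference vector. A Dirichlet
    sample is obtained by normalizing independent Gamma(alpha_j, 1) draws."""
    rng = rng or random.Random()
    gammas = [rng.gammavariate(concentration * w, 1.0) for w in reference]
    total = sum(gammas)
    return [g / total for g in gammas]
```

Repeating this draw many times and re-running the ranking methods on each sampled weight vector yields the distribution of rank positions used to quantify robustness.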

2.1. Fuzzy Logic and Fuzzy Sets

Fuzzy logic is an artificial intelligence technique that enables rigorous handling of variables whose values involve uncertainty or are vaguely defined [31]. This capability has made the application of fuzzy logic to MCDM problems a growing research topic [32].
Fuzzy sets provide the foundational concept for fuzzy logic [33]. Formally, given a universe of discourse $X$, a fuzzy set $A$ is defined by a membership function $f_A : X \to [0, 1]$, where $f_A(x)$ indicates the degree to which $x$ belongs to $A$. In other words, as $f_A(x)$ approaches 1, the degree to which $x$ belongs to $A$ increases. Thus, $A$ can be expressed as $A = \{ f_A(x) \mid x \in X \}$ [34]. This mechanism allows gradual boundaries and a formal representation of uncertainty or vagueness. Building on this concept, fuzzy variables are defined by associating each possible value of a quantity with a corresponding fuzzy set, so that an uncertain parameter is represented not by a single crisp value but by a membership distribution over a range.
Fuzzy sets can be defined by different membership functions [32,35]: triangular, trapezoidal, sigmoid, Gaussian, etc. Among the multiple ways to represent fuzzy sets, triangular fuzzy numbers are particularly popular in MCDM. A TFN $\tilde z$ is represented by the tuple $\tilde z = (z^l, z^m, z^u)$, where $z^l \le z^m \le z^u$ correspond to the minimum, modal, and maximum values of the variable, respectively. This representation is notably advantageous: it captures uncertainty rigorously while allowing direct and efficient computation, given that all fuzzy arithmetic operations (addition, subtraction, multiplication, division) can be performed via simple formulae on the three parameters (for details on TFNs and their arithmetic operations, see [36]). In addition, numerous empirical MCDM applications [32,34,35,37,38,39] demonstrate that TFNs deliver robust decision outcomes without excessive modeling burden [15,40]. The consensus in the literature is that TFNs offer the most practical trade-off between interpretability, computational complexity, and fidelity in representing uncertainty [41,42]. In this study, a combination of real (crisp) variables and fuzzy variables represented as TFNs is considered. This combination has proven to be an effective approach for handling MCDM problems involving heterogeneous data types [34,37,43,44]. Consequently, the mathematical foundations of the MCDM algorithms for the determination of the weights of the criteria and the assessment of the alternatives must be adapted to their fuzzy counterparts.
Finally, it is worth noting that fuzzy variables sometimes need to be converted into a representative crisp value z. Although this process, called defuzzification, can be performed using various methods, this study employs the formula proposed in [45]:
$$z = \frac{z^l + 4 z^m + z^u}{6} \tag{1}$$
Although the defuzzification method presented in Equation (1) is well established in the literature, an alternative method based on the best non-fuzzy performance approach [44] has been implemented to ensure the robustness of the results. In this way, the influence of the defuzzification method on the final outcomes will be analyzed in Section 4.2.
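As a minimal illustration, both defuzzification rules can be written in a few lines of Python. The function names are ours, and the plain three-parameter average shown for the best non-fuzzy performance variant is a commonly used form, assumed here for illustration:

```python
def defuzzify(tfn):
    """Equation (1): weighted average that emphasizes the modal value."""
    l, m, u = tfn
    return (l + 4 * m + u) / 6

def defuzzify_bnp(tfn):
    """Best non-fuzzy performance value; a common center-of-area variant is
    the plain average of the three TFN parameters (assumed form)."""
    l, m, u = tfn
    return (l + m + u) / 3
```

For a symmetric TFN such as (1, 2, 3) both rules agree; for a skewed TFN such as (0, 1, 5) they differ, which is precisely the effect examined in the robustness check of Section 4.2.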

2.2. Weight Determination for Fuzzy MCDM

Weighting methods can be classified as objective, subjective, or hybrid approaches [46]. Objective methods determine the weight of each criterion by mathematically analyzing the intrinsic information contained within the decision matrix, which represents the evaluation of each alternative with respect to each criterion. Subjective methods derive the weights based on expert judgments regarding the relative importance of the criteria. Hybrid methods combine elements of both objective and subjective approaches to establish weights.
Although subjective methods are widely used in MCDM problems, they are not well suited to scenarios involving large volumes of data or situations in which experts lack sufficient information to make reliable experience-based judgments. In such contexts, objective weighting methods are becoming increasingly popular [47].
It should be emphasized that when fuzzy values are used in the decision matrix, either modifications to the traditional weighting methods or the use of their fuzzy versions are required in order to handle the data appropriately.
For these reasons, this study implements and analyzes several fuzzy objective weighting methods to enable comparison of the results produced by each approach.

2.2.1. Fuzzy Statistical Variance Method

The first method considered is the statistical variance method [48,49]. This straightforward technique prioritizes criteria exhibiting greater performance variance across alternatives, under the principle that higher dispersion implies greater discriminative power.
The application of this method involves several steps.
  • The elements of the decision matrix $\tilde x_{ij} = (x_{ij}^l, x_{ij}^m, x_{ij}^u)$, where $i \in [1, n]$ indexes the alternatives and $j \in [1, m]$ the criteria, are first normalized using Equation (2) for benefit-type criteria—where higher values are preferred—and Equation (3) for cost-type criteria—where lower values are more desirable:

    $$\tilde r_{ij} = \left( \frac{x_{ij}^l}{x_j^{*u}}, \frac{x_{ij}^m}{x_j^{*u}}, \frac{x_{ij}^u}{x_j^{*u}} \right), \quad \text{where} \quad x_j^{*u} = \max_i x_{ij}^u \tag{2}$$

    $$\tilde r_{ij} = \left( \frac{x_j^{-l}}{x_{ij}^u}, \frac{x_j^{-l}}{x_{ij}^m}, \frac{x_j^{-l}}{x_{ij}^l} \right), \quad \text{where} \quad x_j^{-l} = \min_i x_{ij}^l \tag{3}$$
  • Next, the statistical variance of each normalized criterion is calculated using the mean value of each component, $\bar r_j^k$, and Equation (4):

    $$V_j^k = \frac{1}{n} \sum_{i=1}^{n} \left( r_{ij}^k - \bar r_j^k \right)^2, \quad \text{for } k = l, m, u \tag{4}$$
  • Finally, the fuzzy weight of each criterion is obtained through

    $$\tilde w_j = (w_j^l, w_j^m, w_j^u), \quad \text{where} \quad w_j^k = \frac{V_j^k}{\sum_{j=1}^{m} V_j^k} \quad \text{for } k = l, m, u \tag{5}$$
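The steps above can be sketched in Python, representing each matrix entry as an (l, m, u) tuple; the function signature is illustrative:

```python
def fuzzy_variance_weights(matrix, is_cost):
    """Fuzzy statistical variance sketch (Eqs. 2-5). matrix[i][j] is a TFN
    (l, m, u); is_cost[j] flags cost-type criteria."""
    n, m = len(matrix), len(matrix[0])
    # Step 1: normalization, Eq. (2) for benefit and Eq. (3) for cost criteria
    norm = [[None] * m for _ in range(n)]
    for j in range(m):
        if is_cost[j]:
            lo = min(matrix[i][j][0] for i in range(n))
            for i in range(n):
                l, md, u = matrix[i][j]
                norm[i][j] = (lo / u, lo / md, lo / l)
        else:
            hi = max(matrix[i][j][2] for i in range(n))
            for i in range(n):
                norm[i][j] = tuple(v / hi for v in matrix[i][j])
    # Step 2: population variance of each TFN component (Eq. 4)
    V = [[0.0] * 3 for _ in range(m)]
    for j in range(m):
        for k in range(3):
            col = [norm[i][j][k] for i in range(n)]
            mean = sum(col) / n
            V[j][k] = sum((v - mean) ** 2 for v in col) / n
    # Step 3: normalize variances into fuzzy weights (Eq. 5)
    totals = [sum(V[j][k] for j in range(m)) for k in range(3)]
    return [tuple(V[j][k] / totals[k] for k in range(3)) for j in range(m)]
```

By construction, the weights of each TFN component sum to one across criteria.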

2.2.2. Fuzzy CRITIC Method

The CRITIC (Criteria Importance Through Intercriteria Correlation) method is a robust and well-established approach for determining weights that expands upon the statistical deviation concept by incorporating inter-criteria correlation analysis [50,51]. This method assigns greater information content—and consequently higher weight—to criteria exhibiting larger standard deviations in alternative performance scores. Conversely, when two criteria show strong performance correlation, they likely contain redundant information, and thus each should receive reduced weight to account for their diminished unique informational contribution.
The application of this method involves several steps [51]:
  • The decision matrix is first normalized using Equation (6) for benefit-type criteria and Equation (7) for cost-type criteria:

    $$\tilde r_{ij} = \left( \frac{x_{ij}^l - x_j^{-l}}{x_j^{*u} - x_j^{-l}}, \frac{x_{ij}^m - x_j^{-l}}{x_j^{*u} - x_j^{-l}}, \frac{x_{ij}^u - x_j^{-l}}{x_j^{*u} - x_j^{-l}} \right) \tag{6}$$

    $$\tilde r_{ij} = \left( \frac{x_j^{*u} - x_{ij}^u}{x_j^{*u} - x_j^{-l}}, \frac{x_j^{*u} - x_{ij}^m}{x_j^{*u} - x_j^{-l}}, \frac{x_j^{*u} - x_{ij}^l}{x_j^{*u} - x_j^{-l}} \right) \tag{7}$$

    where

    $$x_j^{-l} = \min_i x_{ij}^l, \quad x_j^{*u} = \max_i x_{ij}^u \tag{8}$$
  • The correlation coefficient between each pair of criteria, $\rho_{jj'}^k$, and the standard deviation of each criterion, $\sigma_j^k$, are computed.
  • The weights of the criteria are calculated based on the standard deviation and its correlation with the remaining criteria. The objective fuzzy weights are determined as follows:
    $$\tilde w_j = (w_j^l, w_j^m, w_j^u), \quad \text{where} \quad w_j^k = \frac{C_j^k}{\sum_{j=1}^{m} C_j^k} \quad \text{for } k = l, m, u \tag{9}$$

    and

    $$C_j^k = \sigma_j^k \cdot \sum_{j'=1}^{m} \left( 1 - \rho_{jj'}^k \right) \tag{10}$$
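A compact Python sketch of the fuzzy CRITIC steps, with Pearson correlation computed independently for each TFN component (an implementation assumption), might look like this:

```python
import math

def fuzzy_critic_weights(matrix, is_cost):
    """Fuzzy CRITIC sketch (Eqs. 6-10). matrix[i][j] is a TFN (l, m, u);
    is_cost[j] flags cost-type criteria."""
    n, m = len(matrix), len(matrix[0])
    # Step 1: min-max normalization (Eqs. 6-8)
    norm = [[None] * m for _ in range(n)]
    for j in range(m):
        lo = min(matrix[i][j][0] for i in range(n))
        hi = max(matrix[i][j][2] for i in range(n))
        span = hi - lo
        for i in range(n):
            l, md, u = matrix[i][j]
            if is_cost[j]:
                norm[i][j] = ((hi - u) / span, (hi - md) / span, (hi - l) / span)
            else:
                norm[i][j] = ((l - lo) / span, (md - lo) / span, (u - lo) / span)
    weights = [[0.0] * 3 for _ in range(m)]
    for k in range(3):
        cols = [[norm[i][j][k] for i in range(n)] for j in range(m)]
        means = [sum(c) / n for c in cols]
        stds = [math.sqrt(sum((v - mu) ** 2 for v in c) / n)
                for c, mu in zip(cols, means)]
        # Steps 2-3: information content C_j (Eq. 10), then weights (Eq. 9)
        C = []
        for j in range(m):
            acc = 0.0
            for jj in range(m):
                cov = sum((cols[j][i] - means[j]) * (cols[jj][i] - means[jj])
                          for i in range(n)) / n
                denom = stds[j] * stds[jj]
                rho = cov / denom if denom > 0 else 1.0  # degenerate column
                acc += 1.0 - rho
            C.append(stds[j] * acc)
        total = sum(C)
        for j in range(m):
            weights[j][k] = C[j] / total
    return [tuple(w) for w in weights]
```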

2.2.3. Fuzzy MEREC Method

The MEREC (Method based on the Removal Effects of Criteria) [52] is an innovative approach gaining recognition in the MCDM literature [46]. This method determines weights by analyzing how the removal of each criterion affects the alternatives’ overall performance. Criteria whose removal causes significant performance changes receive higher weights, while those with minimal impact receive lower weights.
The application of this method involves several steps:
  • Normalization of the decision matrix using Equation (11) for benefit-type criteria and Equation (12) for cost-type criteria [53]:
    $$\tilde r_{ij} = \left( \frac{x_j^{-l}}{x_{ij}^u}, \frac{x_j^{-l}}{x_{ij}^m}, \frac{x_j^{-l}}{x_{ij}^l} \right), \quad \text{where} \quad x_j^{-l} = \min_i x_{ij}^l \tag{11}$$

    $$\tilde r_{ij} = \left( \frac{x_{ij}^l}{x_j^{*u}}, \frac{x_{ij}^m}{x_j^{*u}}, \frac{x_{ij}^u}{x_j^{*u}} \right), \quad \text{where} \quad x_j^{*u} = \max_i x_{ij}^u \tag{12}$$
  • We calculate the overall performance $S_i^k$ of each alternative, component by component, using

    $$S_i^k = \ln\left( 1 + \frac{1}{m} \sum_{j=1}^{m} \left| \ln r_{ij}^k \right| \right), \quad \text{for } k = l, m, u \tag{13}$$
  • We then compute the performance of each alternative, $S_{ij}^{*k}$, when removing the $j$th criterion:

    $$S_{ij}^{*k} = \ln\left( 1 + \frac{1}{m} \sum_{j' \ne j} \left| \ln r_{ij'}^k \right| \right), \quad \text{for } k = l, m, u \tag{14}$$
  • We compute the effect of removing the $j$th criterion as the sum of absolute deviations:

    $$E_j^k = \sum_{i=1}^{n} \left| S_{ij}^{*k} - S_i^k \right|, \quad \text{for } k = l, m, u \tag{15}$$
  • Finally, the fuzzy weight of each criterion is obtained through

    $$\tilde w_j = (w_j^l, w_j^m, w_j^u), \quad \text{where} \quad w_j^k = \frac{E_j^k}{\sum_{j=1}^{m} E_j^k} \quad \text{for } k = l, m, u \tag{16}$$
It is important to note that, while some of the normalization techniques discussed earlier are linked to the weight determination procedure—such as in the case of MEREC [54]—for methods like fuzzy statistical variance and fuzzy CRITIC, normalization techniques are interchangeable [55]. In order to ensure independence and a more transparent assessment of the objective weighting methods implemented in this study, the normalization techniques commonly associated with each method have been retained. Nonetheless, Section 4.2 provides an analysis of how alternative normalization choices may affect the results.
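A possible Python implementation of the fuzzy MEREC steps, using the absolute value of the logarithms as in the standard MEREC formulation, is sketched below; the signature is illustrative:

```python
import math

def fuzzy_merec_weights(matrix, is_cost):
    """Fuzzy MEREC sketch (Eqs. 11-16). matrix[i][j] is a TFN (l, m, u);
    is_cost[j] flags cost-type criteria."""
    n, m = len(matrix), len(matrix[0])
    # Step 1: normalization, Eq. (11) for benefit and Eq. (12) for cost
    # criteria; all normalized values fall in (0, 1].
    norm = [[None] * m for _ in range(n)]
    for j in range(m):
        if is_cost[j]:
            hi = max(matrix[i][j][2] for i in range(n))
            for i in range(n):
                norm[i][j] = tuple(v / hi for v in matrix[i][j])
        else:
            lo = min(matrix[i][j][0] for i in range(n))
            for i in range(n):
                l, md, u = matrix[i][j]
                norm[i][j] = (lo / u, lo / md, lo / l)
    weights = [[0.0] * 3 for _ in range(m)]
    for k in range(3):
        # Step 2: overall performance with all criteria present (Eq. 13)
        S = [math.log(1 + sum(abs(math.log(norm[i][j][k]))
                              for j in range(m)) / m) for i in range(n)]
        # Steps 3-4: removal effect of each criterion (Eqs. 14-15)
        E = []
        for j in range(m):
            Sj = [math.log(1 + sum(abs(math.log(norm[i][jj][k]))
                                   for jj in range(m) if jj != j) / m)
                  for i in range(n)]
            E.append(sum(abs(Sj[i] - S[i]) for i in range(n)))
        # Step 5: normalize removal effects into weights (Eq. 16)
        total = sum(E)
        for j in range(m):
            weights[j][k] = E[j] / total
    return [tuple(w) for w in weights]
```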

2.3. MCDM Fuzzy Ranking Methods

Once the decision matrix and criterion weights are computed, ranking methods must be employed to prioritize the proposed alternatives. Although numerous ranking methods exist in the MCDM literature [15], this study applies three distinct approaches: Fuzzy-WASPAS, Fuzzy-TOPSIS, and Fuzzy-MARCOS.
The selection of these ranking methods is primarily based on the fact that each employs a fundamentally different prioritization strategy, ensuring independence among the resulting solutions. Additionally, the chosen methods represent a balanced and representative sample, allowing for comparison between classical and straightforward approaches such as WASPAS, widely recognized and established techniques like TOPSIS, and more recent and innovative methods such as MARCOS. Furthermore, all selected methods have well-tested fuzzy extensions, enabling efficient and rigorous handling of uncertainty expressed through triangular fuzzy numbers.
In conclusion, this multi-method implementation will enable a robust solution analysis through comparative ranking evaluation.

2.3.1. Fuzzy WASPAS

WASPAS (Weighted Aggregated Sum Product Assessment) is an efficient yet straightforward MCDM technique widely used for alternative ranking. This method prioritizes candidates through a weighted combination of their performance scores calculated via two approaches: the Weighted Sum Method (WSM) and Weighted Product Method (WPM).
The WASPAS implementation procedure consists of five sequential steps [51,56].
  • The decision matrix is normalized using Equations (2) and (3), following the same normalization scheme as in the fuzzy statistical variance weighting method.
  • The global performance of each alternative is computed through the WSM, $\tilde Q_i$, and WPM, $\tilde G_i$, aggregations:

    $$\tilde Q_i = \sum_{j=1}^{m} \tilde r_{ij} \tilde w_j \quad \text{and} \quad \tilde G_i = \prod_{j=1}^{m} \tilde r_{ij}^{\,\tilde w_j} \tag{17}$$
  • Then, WSM and WPM performance scores are defuzzified using Equation (1).
  • The performance measures are combined using a weighting parameter λ :
    $$K_i = \lambda Q_i + (1 - \lambda) G_i, \quad \text{where} \quad \lambda = \frac{\sum_{i=1}^{n} G_i}{\sum_{i=1}^{n} Q_i + \sum_{i=1}^{n} G_i} \tag{18}$$
  • Finally, alternatives are ranked in descending order of their combined performance values, with the highest scoring alternative representing the best choice.
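The WASPAS steps can be sketched as follows, assuming component-wise TFN arithmetic for the weighted sum and product aggregations and the Equation (1) defuzzification rule:

```python
def fuzzy_waspas(matrix, weights, is_cost):
    """Fuzzy WASPAS sketch (Eqs. 17-18). matrix[i][j] and weights[j] are
    TFNs (l, m, u); returns (ranking, combined scores)."""
    n, m = len(matrix), len(matrix[0])
    defuzz = lambda t: (t[0] + 4 * t[1] + t[2]) / 6
    # Normalize as in Eqs. (2)-(3)
    norm = [[None] * m for _ in range(n)]
    for j in range(m):
        if is_cost[j]:
            lo = min(matrix[i][j][0] for i in range(n))
            for i in range(n):
                l, md, u = matrix[i][j]
                norm[i][j] = (lo / u, lo / md, lo / l)
        else:
            hi = max(matrix[i][j][2] for i in range(n))
            for i in range(n):
                norm[i][j] = tuple(v / hi for v in matrix[i][j])
    # WSM and WPM aggregations (Eq. 17), defuzzified with Eq. (1)
    Q, G = [], []
    for i in range(n):
        q, g = [0.0] * 3, [1.0] * 3
        for j in range(m):
            for k in range(3):
                q[k] += norm[i][j][k] * weights[j][k]
                g[k] *= norm[i][j][k] ** weights[j][k]
        Q.append(defuzz(q))
        G.append(defuzz(g))
    # Combined score (Eq. 18): lambda balances the two aggregations
    lam = sum(G) / (sum(Q) + sum(G))
    K = [lam * Q[i] + (1 - lam) * G[i] for i in range(n)]
    return sorted(range(n), key=lambda i: -K[i]), K
```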

2.3.2. Fuzzy TOPSIS

TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) is a widely used MCDM ranking method valued for its versatility and robustness [57]. TOPSIS ranks the alternatives by simultaneously evaluating their distance to a theoretically ideal and anti-ideal solution. The best alternatives demonstrate geometric closeness to the ideal reference point while remaining distant from the anti-ideal solution.
The application of this method involves several steps [58]:
  • The decision matrix is normalized following the same procedure as in the WASPAS method.
  • The elements $\tilde l_{ij}$ of the weighted normalized fuzzy decision matrix are computed:

    $$\tilde l_{ij} = \tilde r_{ij} \cdot \tilde w_j \tag{19}$$
  • The ideal solution, $A^*$, and anti-ideal solution, $A^-$, are identified by selecting the most extreme values across all alternatives for each criterion:

    $$A^* = ( \tilde l_1^*, \tilde l_2^*, \ldots, \tilde l_m^* ), \quad \text{where} \quad \tilde l_j^* = \max_i l_{ij}^u \tag{20}$$

    $$A^- = ( \tilde l_1^-, \tilde l_2^-, \ldots, \tilde l_m^- ), \quad \text{where} \quad \tilde l_j^- = \min_i l_{ij}^l \tag{21}$$
  • The distance between each alternative and the ideal solution, $d_i^*$, and the anti-ideal solution, $d_i^-$, is computed:

    $$d_i^* = \sum_{j=1}^{m} d( \tilde l_{ij}, \tilde l_j^* ), \quad d_i^- = \sum_{j=1}^{m} d( \tilde l_{ij}, \tilde l_j^- ) \tag{22}$$

    where the distance between two TFNs $\tilde y$ and $\tilde z$ can be calculated by

    $$d(\tilde y, \tilde z) = \sqrt{ \frac{1}{3} \left[ \left( y^l - z^l \right)^2 + \left( y^m - z^m \right)^2 + \left( y^u - z^u \right)^2 \right] } \tag{23}$$
    It is worth noting that this distance metric is the most commonly applied in TOPSIS [59]; however, alternative metrics, such as the one proposed by Tran and Duckstein [60], can also be used. The effect of changing the distance metric on the obtained results will be analyzed in Section 4.2.
  • A closeness coefficient, $CC_i$, is calculated for each alternative using

    $$CC_i = \frac{d_i^-}{d_i^- + d_i^*} \tag{24}$$

  • Alternatives are ranked in descending order of their $CC_i$ values, with higher values indicating better performance.
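A minimal Python sketch of the fuzzy TOPSIS procedure, treating the ideal and anti-ideal reference values of Equations (20) and (21) as degenerate (crisp) TFNs (an implementation assumption), could be:

```python
import math

def fuzzy_topsis(matrix, weights, is_cost):
    """Fuzzy TOPSIS sketch (Eqs. 19-24). matrix[i][j] and weights[j] are
    TFNs (l, m, u); returns (ranking, closeness coefficients)."""
    n, m = len(matrix), len(matrix[0])
    # Normalize as in Eqs. (2)-(3), then weight component-wise (Eq. 19)
    L = [[None] * m for _ in range(n)]
    for j in range(m):
        if is_cost[j]:
            lo = min(matrix[i][j][0] for i in range(n))
            for i in range(n):
                l, md, u = matrix[i][j]
                r = (lo / u, lo / md, lo / l)
                L[i][j] = tuple(r[k] * weights[j][k] for k in range(3))
        else:
            hi = max(matrix[i][j][2] for i in range(n))
            for i in range(n):
                r = tuple(v / hi for v in matrix[i][j])
                L[i][j] = tuple(r[k] * weights[j][k] for k in range(3))
    # Ideal / anti-ideal reference values (Eqs. 20-21)
    ideal = [max(L[i][j][2] for i in range(n)) for j in range(m)]
    anti = [min(L[i][j][0] for i in range(n)) for j in range(m)]

    def dist(t, c):  # Eq. (23) against a degenerate TFN (c, c, c)
        return math.sqrt(sum((t[k] - c) ** 2 for k in range(3)) / 3)

    # Distances (Eq. 22) and closeness coefficients (Eq. 24)
    cc = []
    for i in range(n):
        d_pos = sum(dist(L[i][j], ideal[j]) for j in range(m))
        d_neg = sum(dist(L[i][j], anti[j]) for j in range(m))
        cc.append(d_neg / (d_neg + d_pos))
    return sorted(range(n), key=lambda i: -cc[i]), cc
```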

2.3.3. Fuzzy MARCOS

MARCOS (Measurement of Alternatives and Ranking according to Compromise Solution) is a state-of-the-art ranking methodology that addresses two critical aspects of MCDM problems: the need for robust reference-based evaluation and the requirement for balanced compromise solutions [61]. This technique evaluates alternatives by establishing their utility functions relative to both reference solutions—that is, ideal and anti-ideal—ultimately generating a compromise ranking that balances these benchmarks.
The application of this method involves several steps [62].
  • We determine the ideal, $\tilde A(ID)$, and anti-ideal, $\tilde A(AI)$, values from the decision matrix for each criterion according to whether it is a cost, $C$, or benefit, $B$, criterion:

    $$\tilde A(AI) = \min_i x_{ij}^k \ \text{if } j \in B \quad \text{and} \quad \max_i x_{ij}^k \ \text{if } j \in C, \quad \text{for } k = l, m, u \tag{25}$$

    $$\tilde A(ID) = \max_i x_{ij}^k \ \text{if } j \in B \quad \text{and} \quad \min_i x_{ij}^k \ \text{if } j \in C, \quad \text{for } k = l, m, u \tag{26}$$
  • We then normalize the decision matrix using Equations (27) and (28):
    $$\tilde r_{ij} = \left( \frac{x_{ID}^l}{x_{ij}^u}, \frac{x_{ID}^l}{x_{ij}^m}, \frac{x_{ID}^l}{x_{ij}^l} \right) \quad \text{if } j \in C \tag{27}$$

    $$\tilde r_{ij} = \left( \frac{x_{ij}^l}{x_{ID}^u}, \frac{x_{ij}^m}{x_{ID}^u}, \frac{x_{ij}^u}{x_{ID}^u} \right) \quad \text{if } j \in B \tag{28}$$
  • We compute the elements, $\tilde l_{ij}$, of the weighted decision matrix following the same procedure as in the TOPSIS method.
  • The ideal, $\tilde K_i^+$, and anti-ideal, $\tilde K_i^-$, utility degrees for each alternative are determined:

    $$\tilde K_i^+ = \frac{\tilde S_i}{\tilde S_{ID}} = \left( \frac{s_i^l}{s_{ID}^u}, \frac{s_i^m}{s_{ID}^m}, \frac{s_i^u}{s_{ID}^l} \right) \tag{29}$$

    $$\tilde K_i^- = \frac{\tilde S_i}{\tilde S_{AI}} = \left( \frac{s_i^l}{s_{AI}^u}, \frac{s_i^m}{s_{AI}^m}, \frac{s_i^u}{s_{AI}^l} \right) \tag{30}$$

    where

    $$\tilde S_i = \sum_{j=1}^{m} \tilde l_{ij} \tag{31}$$
  • We compute the elements $\tilde t_i$ of the aggregated utility matrix:

    $$\tilde t_i = \tilde K_i^+ + \tilde K_i^- \tag{32}$$
  • We then compute the fuzzy number $\tilde H$ and its defuzzified value, $h_{crisp}$, using Equation (1):

    $$\tilde H = (h^l, h^m, h^u), \quad \text{where} \quad h^k = \max_i t_i^k \quad \text{for } k = l, m, u \tag{33}$$
  • Then, the utility function of each alternative in relation to the ideal, $f(\tilde K_i^+)$, and anti-ideal, $f(\tilde K_i^-)$, solutions is determined:

    $$\tilde f(\tilde K_i^+) = \frac{\tilde K_i^-}{h_{crisp}} = \left( \frac{k_i^{-l}}{h_{crisp}}, \frac{k_i^{-m}}{h_{crisp}}, \frac{k_i^{-u}}{h_{crisp}} \right) \tag{34}$$

    $$\tilde f(\tilde K_i^-) = \frac{\tilde K_i^+}{h_{crisp}} = \left( \frac{k_i^{+l}}{h_{crisp}}, \frac{k_i^{+m}}{h_{crisp}}, \frac{k_i^{+u}}{h_{crisp}} \right) \tag{35}$$
  • We compute the crisp values of $\tilde K_i^+$, $\tilde K_i^-$, $\tilde f(\tilde K_i^+)$, and $\tilde f(\tilde K_i^-)$ using Equation (1).
  • We then calculate the utility function value for each alternative by applying Equation (36):
    $$f(K_i) = \frac{K_i^+ + K_i^-}{1 + \dfrac{1 - f(K_i^+)}{f(K_i^+)} + \dfrac{1 - f(K_i^-)}{f(K_i^-)}} \tag{36}$$
  • The alternatives are then ranked such that the highest utility function value corresponds to the best choice.
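The MARCOS steps can be sketched as below, appending the ideal and anti-ideal rows to the decision matrix before normalization (a standard MARCOS device) and assuming component-wise TFN arithmetic with the Equation (1) defuzzification rule:

```python
def fuzzy_marcos(matrix, weights, is_cost):
    """Fuzzy MARCOS sketch (Eqs. 25-36). matrix[i][j] and weights[j] are
    TFNs (l, m, u); returns (ranking, utility function values)."""
    n, m = len(matrix), len(matrix[0])
    defuzz = lambda t: (t[0] + 4 * t[1] + t[2]) / 6
    # Eqs. (25)-(26): component-wise ideal (ID) and anti-ideal (AI) values
    ID, AI = [], []
    for j in range(m):
        cols = [matrix[i][j] for i in range(n)]
        mins = tuple(min(c[k] for c in cols) for k in range(3))
        maxs = tuple(max(c[k] for c in cols) for k in range(3))
        ID.append(mins if is_cost[j] else maxs)
        AI.append(maxs if is_cost[j] else mins)
    # Eqs. (27)-(28): normalize against the ideal, weight, and sum (Eq. 31);
    # the last two rows are the ideal and anti-ideal "alternatives".
    S = []
    for row in matrix + [ID, AI]:
        s = [0.0, 0.0, 0.0]
        for j in range(m):
            l, md, u = row[j]
            if is_cost[j]:
                r = (ID[j][0] / u, ID[j][0] / md, ID[j][0] / l)
            else:
                r = (l / ID[j][2], md / ID[j][2], u / ID[j][2])
            for k in range(3):
                s[k] += r[k] * weights[j][k]
        S.append(s)
    S_ID, S_AI = S[n], S[n + 1]
    # Eqs. (29)-(30): fuzzy utility degrees via fuzzy division
    Kp = [(S[i][0] / S_ID[2], S[i][1] / S_ID[1], S[i][2] / S_ID[0])
          for i in range(n)]
    Km = [(S[i][0] / S_AI[2], S[i][1] / S_AI[1], S[i][2] / S_AI[0])
          for i in range(n)]
    # Eqs. (32)-(33): aggregated utilities and the crisp reference h
    T = [tuple(Kp[i][k] + Km[i][k] for k in range(3)) for i in range(n)]
    h = defuzz(tuple(max(T[i][k] for i in range(n)) for k in range(3)))
    # Eqs. (34)-(36): utility functions and final compromise score
    scores = []
    for i in range(n):
        f_plus = defuzz(tuple(Km[i][k] / h for k in range(3)))   # f(K+) uses K-
        f_minus = defuzz(tuple(Kp[i][k] / h for k in range(3)))  # f(K-) uses K+
        kp, km = defuzz(Kp[i]), defuzz(Km[i])
        scores.append((kp + km) /
                      (1 + (1 - f_plus) / f_plus + (1 - f_minus) / f_minus))
    return sorted(range(n), key=lambda i: -scores[i]), scores
```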

3. Decision Problem: Prioritization of NEAs for a Boulder Capture Mission

3.1. Statement of the Decision Problem

In this study, the decision making problem to be addressed involves prioritizing a number of NEAs that could potentially serve as targets for a space mission dedicated to the scientific study of asteroids and the technological development of PD systems and ISRU techniques [6].
To achieve this, certain initial requirements that all candidate asteroids must meet will be established. First, NEAs with a diameter between 100 and 350 m will be selected. This size range is of particular interest because such asteroids are massive enough to cause significant damage if they impact Earth, yet they remain small enough that current observation campaigns have only detected a small fraction of their estimated population. This combination makes them a greater impact threat than larger asteroids; while the latter are more destructive, their population is both more limited and nearly completely catalogued [63]. Additionally, asteroids within this size range often exhibit surface boulders suitable for collection and further analysis [12].
Furthermore, to ensure the mission can be executed within a reasonable time frame using available technological capabilities and Δ V constraints, only asteroids that have at least one natural close approach to Earth between 2030 and 2045 are considered [13,64].

3.2. Criteria Selection

Following the establishment of initial requirements, a set of evaluation criteria must be proposed to prioritize the list of candidates. These criteria are classified as either benefit criteria, where higher performance of an alternative in the criterion improves its overall ranking, or cost criteria, where higher performance worsens its ranking. In addition, the criteria may be classified as crisp when their values are well defined or fuzzy when their values fluctuate within a reference interval or when it becomes necessary to represent the uncertainty associated with a particular value.
Therefore, to establish a suitable set of evaluation criteria for the decision making problem, it is necessary to analyze the key parameters that govern the design of asteroid mission architectures. The first of these is the Δ V required to perform the mission [64,65], which has a direct impact on the trajectory design as well as on the propulsion system requirements and fuel consumption. This parameter is represented in the model by the capture cost evaluation criterion.
Another desirable characteristic in mission design relates to operational flexibility. Targets with a very high synodic period present the disadvantage of offering infrequent opportunities to conduct missions and gather Earth-based observational data [65]. This feature is addressed in the model through the synodic period evaluation criterion.
The asteroid’s rotational dynamics, namely its spin rate and spin axis, also constitute an important factor in mission design [66]. The spin rate is incorporated as an evaluation criterion due to its strong influence on the operational cost of synchronizing spacecraft maneuvers during proximity operations. On the other hand, although most asteroids exhibit a stable spin axis, tumbling bodies should be avoided. Since tumbling motion cannot be reliably assessed without detailed observation, it is considered a constraint to be verified in a prior observational campaign before progressing to subsequent phases of mission development [6].
From a preliminary mission design perspective, another relevant aspect is the quality of orbital knowledge of the target, as it affects the possibility of defining precise and robust mission trajectories [65,67]. This characteristic is considered through the orbit determination accuracy criterion.
Additionally, it is advantageous for the selected target to have a group of asteroids with similar orbits [67], as this enables the preliminary trajectory design to be adapted for future missions or backup targets [68]. This feature is reflected in the number of similar candidates criterion.
From a scientific interest point of view, asteroid composition is also a relevant feature. However, current Earth-based observations provide taxonomic classification for only 0.29% of the asteroids within the studied size range [30]. This limitation, combined with the fact that asteroid boulder capture and redirection missions were primarily designed as technology demonstrators rather than for scientific exploration purposes [13], has led to the exclusion of composition as an evaluation criterion.
Therefore, although this study focuses on the analysis of the criteria capture cost, synodic period, rotation rate, orbit determination accuracy, and number of similar candidates, the preliminary results obtained may serve as a basis for organizing follow-up observation campaigns for the most promising candidates. These campaigns would aim to gather additional information on other relevant characteristics, such as taxonomic classification, approximate shape models, or thermal properties, which could be incorporated in a more detailed assessment using the MCDM framework proposed here.
  • Capture Cost: This cost-type, crisp criterion estimates the $\Delta V$ required to capture the asteroid or a portion of it into a closed Earth orbit. It is directly derived from the relative orbital geometry of the asteroid and Earth. For each candidate, the capture cost is computed following the methodology developed in [69,70] with orbital data extracted from the SBDS [30].
  • Synodic Period: This cost-type, crisp criterion defines the time required for the asteroid to return to the same relative position with respect to Earth. A low synodic period enables multiple observation and launch windows under similar conditions, whereas a high synodic period implies low repeatability of favorable conditions. The synodic period of an asteroid, $P_{syn}$, can be calculated from the orbital periods of Earth, $P_{Earth}$, and the NEA, $P_{NEA}$, using the following formula:
    $$P_{syn} = \frac{P_{Earth} \cdot P_{NEA}}{\left| P_{Earth} - P_{NEA} \right|}.$$
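This formula translates directly into code; the function below is a minimal sketch (the two input periods may be in any common unit, e.g., years):

```python
def synodic_period(p_earth: float, p_nea: float) -> float:
    """Synodic period from two orbital periods (same units in, same units out)."""
    return (p_earth * p_nea) / abs(p_earth - p_nea)

# A hypothetical NEA with a 1.3-year orbital period repeats its geometry
# with Earth roughly every 4.33 years.
print(synodic_period(1.0, 1.3))
```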
  • Rotation Rate: This criterion significantly affects mission operational costs, as higher rotation rates increase both mission complexity and propellant requirements for spacecraft–NEA synchronization during proximity operations. Consequently, rotation rate is classified as a cost-type criterion. When available, crisp rotation rate values for candidate asteroids may be retrieved from SBDS. To ensure reasonable operational costs for spacecraft–asteroid synchronization, only asteroids with known rotation rates below 0.5 revolutions per hour will be considered [6]. It is important to note that applying this filter will significantly reduce the number of potential candidates. One possible way to incorporate this evaluation criterion without excluding candidates would be to estimate the rotation rate using models that relate rotation rate to asteroid’s size [71,72]. However, available data [73] show variations of up to two orders of magnitude in the observed rotation periods for asteroids of the same size within the studied range. Hence, applying such estimations could result in inaccurate performance assessments and, consequently, distort the ranking outcomes. For this reason, this study has opted to exclude candidates with unknown rotation rates, allowing their inclusion in the proposed MCDM framework once reliable data become available.
  • Orbit Determination Accuracy: This benefit-type criterion evaluates the quality of the NEA’s orbital determination and its evolution. It is derived from the condition code parameter in SBDS, which quantifies orbital uncertainty on a scale from 0 (low uncertainty) to 9 (very high uncertainty). In practice, orbits of NEAs with condition codes exceeding 6 are considered too uncertain for mission targeting and are excluded as potential candidates unless additional observations improve their orbital determination [67,74]. It is important to note that condition code does not represent an exact and unambiguous value for orbital uncertainty but rather defines an interval containing the potential uncertainty range. Such uncertainty-bearing data can be rigorously handled through fuzzy logic.
    The fuzzy orbit determination accuracy parameter will be calculated through a linear transformation of the condition code. This transformation assigns maximum scores to asteroids with condition code 0, decreasing linearly until reaching null values for condition codes above 6 (see Table 1). This transformation simultaneously accomplishes two critical functions: it enables rigorous uncertainty quantification through fuzzy logic while preventing potential distortions in MCDM outcomes that may arise from zero-value criterion assignments [46,52].
    Table 1. Correspondence between the fuzzy orbit determination accuracy criterion and condition code.
    Orbit Determination Accuracy    Condition Code
    (0.000, 0.143, 0.286)           6
    (0.143, 0.286, 0.429)           5
    (0.286, 0.429, 0.571)           4
    (0.429, 0.571, 0.714)           3
    (0.571, 0.714, 0.857)           2
    (0.714, 0.857, 1.000)           1
    (0.857, 1.000, 1.000)           0
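Assuming Table 1 is generated by the linear mapping described above (maximum score at condition code 0, decreasing linearly toward code 6, with the upper bound clipped at 1), the transformation can be sketched as:

```python
def orbit_accuracy_tfn(condition_code: int) -> tuple:
    """Triangular fuzzy number (a, b, c) for the orbit determination accuracy
    criterion, reproducing the linear transformation of Table 1.
    Condition codes above 6 are excluded during the earlier screening."""
    if not 0 <= condition_code <= 6:
        raise ValueError("condition code must be in [0, 6]")
    a = (6 - condition_code) / 7
    b = (7 - condition_code) / 7
    c = min((8 - condition_code) / 7, 1.0)  # clip the upper bound at 1
    return (a, b, c)
```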
  • Number of Similar Candidates: This benefit-type criterion quantifies the number of potential candidates (those meeting all previous requirements) with orbits similar to the NEA under evaluation. The existence of a number of NEAs with similar orbital parameters enables the development of versatile mission architectures. In such scenarios, navigation systems, propulsion profiles, and design strategies can be adapted to analogous targets with minimal modifications. This approach not only optimizes development resources but also expands the range of potential destinations for a single mission, thereby enhancing both technical feasibility and scientific return. For this reason, NEAs lacking any similar orbital candidates will be excluded from further consideration.
    Orbital similarity between two asteroids could be determined through their semimajor axis, $a$, semi-latus rectum, $p$, eccentricity, $e$, inclination, $i$, argument of the perihelion, $\omega$, and longitude of the ascending node, $\Omega$, using the metric $\varrho_2$ [75]:
    $$\varrho_2^2 = (1 + e_1^2)\,p_1 + (1 + e_2^2)\,p_2 - 2\sqrt{p_1 p_2}\,(\cos I + e_1 e_2 \cos P),$$
    where subscripts 1 and 2 denote the orbital elements of the first and second NEA being compared, respectively, and
    $$\cos I = c_1 c_2 + s_1 s_2 \cos(\Omega_1 - \Omega_2),$$
    $$\cos P = s_1 s_2 \sin\omega_1 \sin\omega_2 + (\cos\omega_1 \cos\omega_2 + c_1 c_2 \sin\omega_1 \sin\omega_2)\cos(\Omega_1 - \Omega_2) + (c_2 \cos\omega_1 \sin\omega_2 - c_1 \sin\omega_1 \cos\omega_2)\sin(\Omega_1 - \Omega_2),$$
    where $c_k = \cos i_k$ and $s_k = \sin i_k$ for $k = 1, 2$.
    It should be emphasized that no exact threshold exists for the maximum $\varrho_2$ value that defines sufficient orbital similarity between two bodies [76]. To address this uncertainty rigorously, fuzzy logic is employed by representing the number of similar candidates as a TFN bounded between two characteristic $\varrho_2$ values, $\varrho_2 = 0.16$ and $\varrho_2 = 0.2$. The number $\tilde{N}_{sim}$ of similar candidates for a given NEA is then computed as follows:
    $$\tilde{N}_{sim} = \left( N_{sim}\big|_{\varrho_2 = 0.16},\; \frac{N_{sim}\big|_{\varrho_2 = 0.16} + N_{sim}\big|_{\varrho_2 = 0.2}}{2},\; N_{sim}\big|_{\varrho_2 = 0.2} \right),$$
    where $N_{sim}\big|_{\varrho_2 = 0.16}$ and $N_{sim}\big|_{\varrho_2 = 0.2}$ represent the number of similar candidates when applying the threshold conditions $\varrho_2 = 0.16$ and $\varrho_2 = 0.2$, respectively.
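As an illustrative sketch (not the authors' implementation), the $\varrho_2$ metric and the TFN for the number of similar candidates can be coded as follows; angles are assumed in radians and the semi-latus rectum in AU:

```python
import math

def rho2(p1, e1, i1, om1, w1, p2, e2, i2, om2, w2):
    """Kholshevnikov-style distance between two osculating orbits.
    p: semi-latus rectum [AU]; e: eccentricity; i, om, w: inclination,
    longitude of the ascending node, argument of perihelion [rad]."""
    c1, s1 = math.cos(i1), math.sin(i1)
    c2, s2 = math.cos(i2), math.sin(i2)
    d_om = om1 - om2
    cos_I = c1 * c2 + s1 * s2 * math.cos(d_om)
    cos_P = (s1 * s2 * math.sin(w1) * math.sin(w2)
             + (math.cos(w1) * math.cos(w2)
                + c1 * c2 * math.sin(w1) * math.sin(w2)) * math.cos(d_om)
             + (c2 * math.cos(w1) * math.sin(w2)
                - c1 * math.sin(w1) * math.cos(w2)) * math.sin(d_om))
    rho2_sq = ((1 + e1 ** 2) * p1 + (1 + e2 ** 2) * p2
               - 2 * math.sqrt(p1 * p2) * (cos_I + e1 * e2 * cos_P))
    return math.sqrt(max(rho2_sq, 0.0))  # guard against tiny negative round-off

def n_sim_tfn(n_at_016: int, n_at_020: int) -> tuple:
    """TFN for the number of similar candidates from the counts obtained
    at the two characteristic thresholds, rho2 = 0.16 and rho2 = 0.2."""
    return (n_at_016, (n_at_016 + n_at_020) / 2, n_at_020)
```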
Once these criteria and their limitations have been defined, the available data from the SBDS will be analyzed by performing a screening of the potential candidates. Table 2 provides a summary of the filters applied to the database, along with the corresponding implications for mission design.
It is important to emphasize that this criterion list could potentially be expanded by incorporating additional factors such as an asteroid’s scientific interest based on spectral composition or taxonomic classification [79]. However, current observational data reveal that only 0.29% of NEAs within the target size range have confirmed taxonomic classifications, which would severely constrain the candidate pool if such a criterion were implemented.
Consequently, this study should be interpreted as a foundational framework for prioritizing boulder capture mission candidates. The methodology permits future expansion as observational campaigns yield additional data on candidate asteroids, particularly regarding their compositional properties.

3.3. Proposal and Assessment of Alternatives

Once the requirements and evaluation criteria were established for the asteroid population under study, candidate asteroid data were systematically extracted and filtered using a custom script interfacing with the SBDS API. This screening process yielded a refined set of 28 asteroids meeting all mission constraints as potential targets.
The orbital and physical parameters of these candidates were processed to evaluate their performance across each defined criterion. These results were structured into a comprehensive decision matrix (see Table 3), which serves as the foundation for the subsequent multi-criteria analysis.

4. Results and Discussion

Once the methodological framework is established, the MCDM problem can be addressed. Following alternative evaluation, a set of fuzzy weights will be generated for each weighting mechanism. These weight sets, along with the decision matrix, will serve as inputs to the proposed ranking algorithms. The combination of different weighting methods and ranking techniques will produce a total of nine distinct alternative prioritization rankings, which will require comprehensive comparative analysis.
Upon obtaining the MCDM-derived rankings, an independent sensitivity analysis will be performed to assess how potential deviations in the determined weights could affect the resulting prioritized lists. This sensitivity analysis will serve two critical purposes: verifying the stability of top-ranked asteroid positions and validating that the results derived from both classical and innovative MCDM algorithm combinations are robust and reliable.

4.1. Weight Determination

The proposed weighting methods (fuzzy statistical variance, fuzzy CRITIC, and fuzzy MEREC) were applied to the decision matrix presented in Table 3. Since each weighting method assesses criterion importance through distinct normalization and information processing techniques, the resulting fuzzy weight sets are expected to differ from one another.
Consequently, Table 4 presents the fuzzy weight evaluation results for each criterion, while Table 5 and Figure 2 display the defuzzified weight values using Equation (1).
It is worth highlighting that although the CRITIC method already incorporates correlation control within its weighting algorithm, it is important to assess the correlation among criteria to ensure the robustness of the results. A straightforward way to do so is to calculate the Variance Inflation Factor (VIF) [80]. A high VIF value for a given criterion indicates that its behavior may be explained by the remaining criteria. Accordingly, the VIF was calculated for all criteria; in all cases the values were below 1.2, well under the commonly accepted threshold of 5 used in the literature [81]. This confirms that the correlation among criteria in this problem is adequately controlled.
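A minimal VIF computation consistent with this check can be sketched as follows (rows are alternatives, columns are criteria; the function is generic, not tied to the values of Table 3):

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """Variance Inflation Factor of each column of a criteria matrix X.
    VIF_j = 1 / (1 - R²_j), where R²_j comes from regressing column j
    on the remaining columns (with an intercept)."""
    n, m = X.shape
    out = np.empty(m)
    for j in range(m):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])  # intercept + other criteria
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        out[j] = 1.0 / (1.0 - r2)
    return out
```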
To compare the results calculated by different weighting MCDM methods, a common approach involves computing the Pearson correlation coefficient and constructing a correlation matrix. Each entry in this matrix represents the Pearson correlation coefficient between the weights obtained from two different methods. A coefficient close to unity indicates a strong positive correlation, meaning that if a criterion’s weight is high in one method, it tends to be high in the other as well. A negative coefficient suggests an inverse relationship, where a relatively high weight in one method corresponds to a low weight in the other. Finally, a coefficient near zero implies no clear relationship between the weight tendencies of the two methods.
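This comparison reduces to a standard Pearson correlation matrix over the defuzzified weight vectors; the values below are illustrative placeholders, not those of Table 5:

```python
import numpy as np

# Illustrative defuzzified weight vectors from three weighting methods
# (one entry per criterion; placeholder values, not those of Table 5).
w_variance = np.array([0.30, 0.20, 0.24, 0.05, 0.21])
w_critic   = np.array([0.28, 0.22, 0.21, 0.09, 0.20])
w_merec    = np.array([0.31, 0.19, 0.22, 0.15, 0.13])

# Entry (i, j) is the Pearson correlation between the weights of methods i and j.
corr = np.corrcoef(np.vstack([w_variance, w_critic, w_merec]))
```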
The correlation matrix comparing the three weighting methods is shown in Figure 3.
As observed, all correlation indices are positive and close to 1. The lowest correlation occurs between fuzzy statistical variance and fuzzy MEREC, which can be attributed to their fundamentally different weighting procedures [49]. Notably, the largest discrepancies lie in the orbit determination accuracy criterion, where MEREC assigns a weight of 0.153 compared to statistical variance’s 0.0521, and in the number of similar candidates criterion, where the distribution is reversed (0.11 according to MEREC and 0.206 according to the statistical variance method). Such discrepancies are inherent to MCDM weighting techniques, yet the correlation coefficient of 0.638 between fuzzy statistical variance and fuzzy MEREC remains sufficiently high to confirm that all methods produce highly correlated, coherent results [82].
Finally, it is important to note that while the defuzzified weights in Table 5 and Figure 2 were used for comparison purposes, the fuzzy weights will be employed to compute the alternative rankings.

4.2. Alternative Rankings

Each of the three fuzzy weight sets calculated in the previous section will be used, together with the decision matrix, as input to the fuzzy ranking methods F-WASPAS, F-TOPSIS, and F-MARCOS, resulting in a total of nine alternative rankings. As discussed in Section 2.3, the diverse normalization approaches and data-processing techniques will naturally lead to variations between the different rankings, which is a common outcome when applying MCDM methods [14,34], though these differences must be systematically analyzed and controlled.
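The 3 × 3 cross of weighting and ranking methods can be sketched as below; the callables are toy placeholders for the actual fuzzy methods, shown only to illustrate how the nine prioritization schemes are generated:

```python
from itertools import product

import numpy as np

rng = np.random.default_rng(1)
X = rng.random((28, 5))  # toy stand-in for the defuzzified decision matrix

# Toy placeholders: each weighting callable returns a normalized weight vector,
# each ranking callable returns candidate indices ordered best-first.
weightings = {
    "variance": lambda X: X.var(axis=0) / X.var(axis=0).sum(),
    "critic": lambda X: np.full(X.shape[1], 1.0 / X.shape[1]),
    "merec": lambda X: (X.max(0) - X.min(0)) / (X.max(0) - X.min(0)).sum(),
}
rankers = {
    "waspas": lambda X, w: np.argsort(-(X @ w)),
    "topsis": lambda X, w: np.argsort(-(X @ w)),
    "marcos": lambda X, w: np.argsort(-(X @ w)),
}

# One ranking per (weighting, ranking) pair: 3 x 3 = 9 schemes.
rankings = {
    (wn, rn): rank(X, weigh(X))
    for (wn, weigh), (rn, rank) in product(weightings.items(), rankers.items())
}
```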
Following this approach, Table 6 presents the alternative rankings in descending order for all nine combinations of weighting and ranking methods.
As evidenced by the results, while the rankings differ, a consistent trend emerges where the top performing asteroids in one ranking maintain strong positions across all others and vice versa. Notably, the four highest-ranked candidates remain identical in all rankings, with 2013 NJ consistently outperforming all other alternatives.
Interestingly, 2013 NJ is a small Apollo-class NEO with an estimated diameter on the order of 0.1–0.2 km. Its orbit (semimajor axis $a \approx 1.286$ AU, eccentricity $e \approx 0.283$, inclination $i \approx 3.941^{\circ}$, longitude of the ascending node $\Omega \approx 238.54^{\circ}$, and argument of perihelion $\omega \approx 140.4^{\circ}$) has been well determined through multiple apparitions and radar astrometry. Photometric and spectroscopic observations show a relatively low rotation rate (0.004 revolutions per hour) and a Q-type (high-albedo, olivine/pyroxene-rich) surface [83], analogous to ordinary chondritic or ureilite meteorites with minimal space weathering.
It is worth noting that the four main candidates for the boulder capture option of the ARM mission—Itokawa (1998 SF36), Bennu (1999 RQ36), 1999 JU3, and 2008 EV5—do not appear in the rankings, as their sizes exceed the limits defined in this study and were therefore excluded during the preliminary screening phase. However, other asteroids proposed by ARM as potential targets are included, such as 2006 CT, which was ranked between positions 10 and 15 across the different methods, and 2000 AC6, which was ranked between positions 24 and 25 due to its higher rotation rate and capture cost.
Although the agreement in the top four ranking positions suggests robust and coherent solutions across methodologies, a correlation analysis was conducted to quantitatively validate this observed consistency. Accordingly, Figure 4 presents the ranking correlation matrix for all proposed method combinations.
As shown in Figure 4, all correlation coefficients are positive and close to one, indicating strong relationships between the rankings. The lowest value, 0.814, corresponds to the MEREC–WASPAS and CRITIC–TOPSIS combinations, which still reflects a strong correlation. It is worth noting that the high correlation values observed among the rankings suggest a notable robustness of the obtained solutions. On average, these correlations exceed those observed between the weight distributions, indicating that the resulting rankings remain stable under small variations in the weighting of criteria.
To provide preliminary evidence of robustness prior to the dedicated sensitivity study, a set of targeted methodological tests addressing three key choices (defuzzification, normalization, and the TOPSIS distance metric) was performed. First, the defuzzification method proposed in Equation (1) was replaced by the best non-fuzzy performance-based defuzzification approach [44]; the resulting rankings proved robust, with correlation coefficients between the rankings obtained through the different defuzzification methods exceeding 0.99 in all cases. Second, to test sensitivity to normalization, the original procedures for fuzzy statistical variance and fuzzy CRITIC were replaced by linear max–min normalization and simple linear normalization [55], respectively; the simulations using these alternative procedures yield correlation coefficients greater than 0.97 in all cases when compared with the original normalization techniques. Third, to assess sensitivity to the distance metric, the standard TOPSIS implementation using Euclidean distance [59] was compared with the alternative formulation proposed by Tran and Duckstein [60]; the resulting rankings were remarkably consistent, with correlation coefficients exceeding 0.99 across the different weighting methods, confirming the robustness of the implementation to this methodological variation. In addition, the ranking of the top performing candidates remained invariant across all tested methodological variations.

4.3. Sensitivity Analysis

To conclude the analysis of the prioritized results obtained through the combination of MCDM methods, it is advisable to perform a sensitivity analysis to evaluate how slight deviations in the calculated weights may affect the ranking of alternatives. Traditionally, such analyses involve a series of simulations in which the weight of the most influential criterion is modified while adjusting the remaining weights proportionally [84]. However, this approach leads to a constrained weight distribution that does not comprehensively explore the space of possible weight combinations. To address this limitation, this study introduces a novel sensitivity analysis technique based on random sampling of weight combinations in the vicinity of a reference weight vector. This sampling is conducted using a Dirichlet distribution centered on the defuzzified average weights of each criterion shown in Table 5.
Although readers interested in the mathematical formulation of the Dirichlet distribution can refer to the specialized literature [85], in essence, the Dirichlet distribution generates the components of the weight vector $(w_{dir,1}, w_{dir,2}, \ldots, w_{dir,m})$ according to a set of parameters $(\alpha_{dir,1}, \alpha_{dir,2}, \ldots, \alpha_{dir,m})$. By setting $\alpha_{dir,j} = k_{dir}\, w_{ref,j}$ for $j \in [1, m]$, a distribution centered around the reference weights $(w_{ref,1}, w_{ref,2}, \ldots, w_{ref,m})$ is obtained, with the dispersion controlled by the parameter $k_{dir}$.
It should be emphasized that this approach not only ensures that the sampled weight vectors remain focused within the region of interest defined by the reference weights but also guarantees that the sum of the weights in each simulation equals one, maintaining consistency in the decision making framework.
Thus, to conduct a robust sensitivity analysis, different values of the dispersion parameter $k_{dir}$ were evaluated. For each value, a set of 10,000 simulations was performed, generating different weight combinations around the reference weight vector. The distribution of simulated weights around the reference values for each $k_{dir}$ is shown in Table 7.
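A minimal sketch of this sampling step using NumPy's Dirichlet generator (the reference weights below are illustrative, not those of Table 5):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative reference weights for the five criteria (not those of Table 5).
w_ref = np.array([0.30, 0.20, 0.24, 0.05, 0.21])
k_dir = 40  # dispersion parameter: larger values concentrate samples near w_ref

# Each of the 10,000 sampled weight vectors is non-negative and sums to one,
# scattering around w_ref with spread controlled by k_dir.
samples = rng.dirichlet(k_dir * w_ref, size=10_000)
```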
In each simulation, the sampled weight vector was combined with the defuzzified decision matrix using a normalized additive weighting model. Based on the outcomes of all simulations for each $k_{dir}$, two metrics were derived: the random sampling average performance and the dominance percentage [86].
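Assuming a defuzzified, benefit-oriented, normalized decision matrix, both metrics can be sketched as:

```python
import numpy as np

def sensitivity_metrics(perf: np.ndarray, weight_samples: np.ndarray):
    """perf: (n_alternatives, n_criteria) defuzzified, benefit-oriented scores;
    weight_samples: (n_simulations, n_criteria) weight vectors summing to one.
    Returns the average global performance of each alternative and its
    dominance percentage (share of simulations in which it ranks first)."""
    scores = weight_samples @ perf.T              # (n_sims, n_alternatives)
    avg_performance = scores.mean(axis=0)
    winners = scores.argmax(axis=1)               # top-ranked alternative per simulation
    dominance = np.bincount(winners, minlength=perf.shape[0]) / len(scores) * 100
    return avg_performance, dominance
```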
The first metric, the random sampling average global performance across the simulations, serves as a validation measure for the rankings produced by the different MCDM method combinations. Candidates exhibiting high average performance values across simulations should correspond to those achieving top rankings in the method evaluations, while low average performance values align with poorly ranked alternatives. Figure 5 presents these results for the various $k_{dir}$ parameter values tested.
The results presented in Figure 5 yield several key observations. First, the trend in random sampling average performance closely aligns with the average results from the proposed MCDM methods, confirming the robustness of the MCDM-derived rankings. Second, the slope of the random sampling average performance curve flattens for lower-ranked asteroids, indicating that the top performing candidates demonstrate markedly superior performance compared to the remainder. Third, as expected, the dispersion parameter $k_{dir}$ shows no significant influence on the random sampling average performance metric.
The second metric, the dominance percentage, quantifies the frequency with which each alternative attained the top ranking position across all simulated scenarios. A higher dominance percentage directly correlates with greater ranking stability for the top performing candidate around the reference weights, indicating stronger robustness to weight variations. Table 8 presents this metric for the five asteroids exhibiting the highest average performance values.
Based on the data presented in Table 8, the following conclusions can be drawn. First, the dispersion parameter $k_{dir}$ has a significant influence on the dominance percentage metric. For small values of $k_{dir}$, there is a certain proportion of weight combinations under which alternatives with a lower random sampling average performance than 2013 NJ are able to dominate the ranking. As expected, as $k_{dir}$ increases, the weights become more concentrated around the reference weights, and dominance tends to converge toward the best performing candidate. However, it is worth noting that the leading position of the candidate 2013 NJ is remarkably stable, as it consistently ranks highest in both average performance and dominance percentage across the different values of $k_{dir}$.
Second, it is noteworthy that asteroids such as 1999 FN19 and 1994 CJ1 exhibit a higher dominance percentage than 2001 EC16, despite receiving a lower overall ranking based on the combined results of the MCDM methods and the random sampling average performance. This phenomenon should not be interpreted as a lack of robustness in the ranking obtained, but rather as an indication that if decision makers were to shift the weight allocation in specific directions, the ranking would change in favor of the candidates with higher dominance percentages.
Thus, although 2001 EC16 shows high overall performance on average, the individual criterion-specific performances of other candidates tend to be stronger, often relegating 2001 EC16 to second place. For instance, if decision makers were to assign greater importance to the capture cost criterion, the ranking would likely be dominated by 1999 FN19. Conversely, if more weight were placed on the number of similar candidates, 1994 CJ1 would tend to dominate, overshadowing 2001 EC16 despite its high global performance. However, when 2013 NJ is removed from the ranking and the dominance percentages are recalculated, 2001 EC16 achieves dominance percentages of 65.81%, 95.39%, and 100% for $k_{dir}$ = 10, 100, and 1000, respectively. This reinforces the idea that the random sampling average performance remains a suitable metric for validating the obtained rankings, while the dominance percentage provides a useful indication of which candidates are most likely to take the top position if deviations from the reference weights occur.
Finally, it is worth noting that an alternative way of selecting $k_{dir}$ involves progressively increasing its value and simulating the resulting rankings until they stabilize, particularly with regard to the top performing candidates. Once such a value is identified, it can be accepted by the analyst if the trade-off between the dominance percentage of the top-ranked alternative and the average percentage variation of the simulated weights is deemed satisfactory. As an example, in this study, all integer values of $k_{dir}$ within the interval [1, 1000] were explored, and it was found that the rankings stabilized from $k_{dir} = 40$ onwards. For this value, the variation introduced by the simulations was sufficient, as the mean variation of the simulated criterion weights exceeded the average variation observed among the original weighting methods in all cases and, furthermore, 2013 NJ exhibited an acceptably high dominance (77.67%). Therefore, selecting a $k_{dir}$ value around 40 may be considered acceptable for the purpose of sensitivity analysis in this work.
As an additional robustness check, an alternative sensitivity analysis was performed using normal distributions centered on the mean weight [86], with a standard deviation equal to that observed across the three weighting methods. The results of this analysis were consistent with those obtained in the primary evaluation, further reinforcing the stability of the proposed approach.
In conclusion, the sensitivity analysis applied to this problem indicates not only that the ranking trends produced by the proposed combination of MCDM methodologies are coherent and satisfactory but also that the resulting classification is highly robust with respect to deviations around the calculated weights.

4.4. Limitations and Future Work

To conclude this section, it is important to remark that future work should consider extending the framework as new data become available. For example, a key limitation of this study is that rotation rates are currently known for only 4.46% of candidate asteroids, significantly restricting the final candidate selection. As observational campaigns progressively expand this dataset, the methodological framework presented here could be applied to a larger, more comprehensive pool of potential targets.
Furthermore, future research could enhance this decision making framework by integrating spectral or taxonomic classifications as data coverage improves, or by developing hybrid approaches that combine expert elicitation (subjective weights) with existing objective fuzzy methods.
Although the proposed fuzzy multi-criteria decision making approach provides a robust framework for preliminary asteroid prioritization using established accessibility metrics, we acknowledge that the current model’s scope does not fully capture constraints that emerge during more detailed mission planning [87,88]. Specifically, Earth-departure energy limits ($C_3$), launch-window dynamics, mass-return efficiency, and rotation axis orientation were not incorporated into this model. The absence of a launch-window criterion may underestimate the importance of timing in mission design, while excluding payload mass could mask important trade-offs between sample return mass and trajectory design. Furthermore, while the rotation rate provides a useful preliminary metric, it neglects factors such as spin-axis orientation and libration modes that significantly affect the cost and complexity of proximity operations. These omissions stem from the focus on high-level assessment during the initial screening phase but warrant further investigation to enhance practical applicability. For these reasons, future work will implement trajectory simulation tools for launch-window and $C_3$ analysis, as well as a mass-return metric, to support more detailed operational feasibility assessments. By coupling the systematic MCDM framework with these mission-design considerations, it would be possible to deliver a comprehensive selection tool that bridges theoretical accessibility with practical implementation requirements.
Moreover, it is worth highlighting that the ranking methods used in this study belong to the class of compensatory methods [14]. These methods offer the advantage of being more intuitive, as they combine the performance across different criteria into a single representative index used to produce the final ranking. However, they present certain limitations. Specifically, they allow a poor result in one evaluation criterion to be offset by an excellent result in another, which may mask critical weaknesses or yield rankings that are less driven by the criteria with greater weights [44]. Moreover, to generate an aggregated metric that produces a final ranking, these methods require a defuzzification procedure in the final stages to obtain a crisp ranking.
These limitations suggest that future work should explore a comparison between the results obtained using compensatory methods and those derived from the implementation of fuzzy outranking methods such as ELECTRE [89] and PROMETHEE [90], which allow the uncertainty contained in the weights and performance values to be preserved until the final stages of the classification process [91]. Nevertheless, the use of such methods also entails a series of disadvantages. First, the results produced by these approaches are not always conclusive, often requiring additional techniques to derive a complete ranking of the candidates. Second, it is necessary to define a set of preference or veto thresholds for each criterion, which would require expert input and add subjectivity to the decision model. Finally, these methods tend to be less interpretable, due to the complex interactions formulated between the criteria [14].
Finally, the modular design of this MCDM system will allow decision makers and designers to expand and adapt this methodology to support broader aerospace decision making under uncertainty or apply this fuzzy MCDM framework to other NEA mission types (e.g., observation campaign target selection, or complete capture missions) or to different small-body populations (e.g., main-belt asteroids and comets). In addition, this modularity also opens up the opportunity for future works to build upon the results obtained within the proposed framework in order to integrate machine learning-based MCDM methods, enabling the development of classification models capable of predicting the evaluation of new candidates as more asteroid data become available.

5. Conclusions

This study has fulfilled its two main objectives; the first was to analyze in detail potential near-Earth asteroid candidates for a boulder capture mission, and the second was to compare the outcomes obtained through combinations of both classical and innovative fuzzy MCDM techniques. The methodology herein involved the application of three objective fuzzy weighting methods—fuzzy statistical variance, fuzzy CRITIC, and fuzzy MEREC—combined with three fuzzy ranking techniques—fuzzy WASPAS, fuzzy TOPSIS, and fuzzy MARCOS—resulting in nine distinct prioritization schemes applied to a filtered set of 28 NEAs.
The results show a high degree of consistency and robustness across methods. Despite small differences in how each weighting approach quantifies criterion importance, the top-ranked asteroids remain largely stable. NEA 2013 NJ was ranked first in all nine method combinations, while the same four asteroids consistently appeared in the top positions. Correlation analysis between weight sets and rankings confirmed strong coherence, reinforcing the reliability of the framework regardless of the weighting method used. From a mission planning perspective, the results confirm that a structured fuzzy MCDM approach can effectively narrow a large NEA candidate pool to a robust set of priorities, guiding resource allocation and reducing risk.
To further assess robustness, a statistical sensitivity analysis based on random sampling of weight assignments was implemented using a Dirichlet distribution centered on the reference weights. This innovative approach confirmed that the rankings are resilient to plausible variations in criterion importance, providing mission planners with confidence in the prioritization outcome.
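A Dirichlet distribution centered on the reference weights can be sampled with standard gamma draws: setting each concentration parameter to k_dir times the reference weight yields weight vectors whose mean is the reference vector and whose spread shrinks as k_dir grows. The sketch below (standard library only) is a simplified stand-in for the paper's pipeline: the scoring model is a plain weighted sum over three hypothetical alternatives, not the fuzzy MCDM chain, and the function and variable names are illustrative. Only the reference weights are taken from Table 5.

```python
import random

def sample_dirichlet(w_ref, k_dir, rng):
    """Draw one weight vector from Dirichlet(k_dir * w_ref)
    via normalized Gamma(alpha, 1) variates."""
    draws = [rng.gammavariate(k_dir * w, 1.0) for w in w_ref]
    total = sum(draws)
    return [d / total for d in draws]

def dominance(scores, w_ref, k_dir, n_sim=2000, seed=1):
    """Percentage of simulations in which each alternative ranks first.
    'scores' maps name -> per-criterion scores (higher = better)."""
    rng = random.Random(seed)
    wins = {name: 0 for name in scores}
    for _ in range(n_sim):
        w = sample_dirichlet(w_ref, k_dir, rng)
        best = max(scores, key=lambda n: sum(wi * si for wi, si in zip(w, scores[n])))
        wins[best] += 1
    return {name: 100 * c / n_sim for name, c in wins.items()}

# Reference weights from Table 5 (average column) and hypothetical
# normalized scores for three illustrative alternatives.
w_ref = [0.175, 0.259, 0.280, 0.118, 0.169]
scores = {
    "A": [0.9, 0.8, 0.9, 0.7, 0.8],
    "B": [0.8, 0.9, 0.7, 0.9, 0.7],
    "C": [0.5, 0.6, 0.6, 0.5, 0.6],
}
for k in (10, 100, 1000):
    print(k, dominance(scores, w_ref, k))
```

As k_dir increases, the sampled weights cluster ever more tightly around the reference vector, so the dominance percentage of the reference winner approaches 100%—the same trend Table 8 reports for 2013 NJ.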
Beyond ranking stability, the integration of fuzzy logic has proven essential for addressing parameters with high uncertainty, such as orbit determination accuracy or similarity among asteroid orbits. The use of triangular fuzzy numbers enabled rigorous treatment of imprecise data without compromising methodological transparency.
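As a minimal illustration of the arithmetic involved, a triangular fuzzy number (a, b, c) can be represented and reduced to a crisp value as follows. The centroid defuzzification shown here is one common choice, assumed for illustration; it is not necessarily the specific defuzzification of the paper's Equation (1).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TFN:
    """Triangular fuzzy number (a, b, c) with a <= b <= c."""
    a: float
    b: float
    c: float

    def __add__(self, other):
        # Component-wise addition of two TFNs.
        return TFN(self.a + other.a, self.b + other.b, self.c + other.c)

    def scale(self, k: float):
        # Multiplication by a non-negative crisp scalar.
        return TFN(k * self.a, k * self.b, k * self.c)

    def defuzzify(self) -> float:
        # Centroid method (illustrative choice of defuzzification).
        return (self.a + self.b + self.c) / 3

# Orbit determination accuracy of 2017 AG5 from Table 3.
x = TFN(0.571, 0.714, 0.857)
print(round(x.defuzzify(), 3))  # 0.714
```

The spread c − a encodes the imprecision of the underlying datum, so uncertain criteria such as orbit determination accuracy propagate their uncertainty through the aggregation instead of being collapsed to a single point estimate too early.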
Moreover, the framework’s modularity allows incorporation of additional criteria such as compositional data or updated observational inputs without altering the core methodology. This flexibility ensures that as new measurements become available or mission requirements evolve, the same decision support structure can be readily adapted.
In summary, this work demonstrates the viability and value of combining classical and novel fuzzy MCDM techniques to support asteroid target selection in the presence of uncertainty. The proposed framework not only narrows down a broad set of potential candidates into a scientifically grounded priority list but also offers insights into the influence of each method and the sensitivity of the decision space.
The results herein support further investigation of the highest-ranked targets—particularly 2013 NJ—for mission design studies and offer a flexible toolset that is adaptable to future data updates or additional mission constraints. This methodology may also be extended to other small-body exploration scenarios, promoting robust, data-informed decision making in aerospace applications.

Author Contributions

Conceptualization, N.R., J.M.S.-L. and E.P.-A.; methodology, N.R., J.M.S.-L. and E.P.-A.; software, N.R.; validation, N.R., J.M.S.-L. and E.P.-A.; formal analysis, N.R., J.M.S.-L. and E.P.-A.; investigation, N.R.; resources, N.R., J.M.S.-L. and E.P.-A.; data curation, N.R.; writing—original draft preparation, N.R.; writing—review and editing, N.R., J.M.S.-L. and E.P.-A.; visualization, N.R.; supervision, J.M.S.-L. and E.P.-A.; project administration, J.M.S.-L. and E.P.-A. All authors have read and agreed to the published version of the manuscript.

Funding

N.R., J.M.S.-L. and E.P.-A. carried out this work within the framework of the project Fundación Seneca (22069/PI/22), Spain. E.P.-A. received support from the LUMIO project funded by the Agenzia Spaziale Italiana (2024-6-HH.0). J.M.S.-L. acknowledges funding from the Ministerio de Ciencia e Innovación of Spain (grant PID2021-128062NBI00).

Data Availability Statement

Asteroid data can be found at https://ssd.jpl.nasa.gov/tools/sbdb_query.html (accessed on 17 December 2024). Computing scripts can be found at https://github.com/NelsonRaLop/MCDM-Asteroid-Mission (accessed on 1 February 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Koschny, D.; Borovička, J. Definitions of Terms in Meteor Astronomy. WGN J. Int. Meteor Organ. 2017, 45, 91–92. [Google Scholar]
  2. Barbee, B.; Esposito, T.; Piñon, E.; Hur-Diaz, S.; Mink, R.; Adamo, D. A Comprehensive Ongoing Survey of the Near-Earth Asteroid Population for Human Mission Accessibility. In Proceedings of the AIAA/AAS Astrodynamics Specialist Conference, Toronto, ON, Canada, 2–5 August 2010. [Google Scholar] [CrossRef][Green Version]
  3. Sánchez-Lozano, J.M.; Fernández-Martínez, M. Near-Earth object hazardous impact: A Multi-Criteria Decision Making approach. Sci. Rep. 2016, 6, 37055. [Google Scholar] [CrossRef]
  4. Sánchez-Lozano, J.M.; Fernández-Martínez, M.; Lamata, M.T. Near-Earth Asteroid impact dates: A Reference Ideal Method (RIM) approach. Eng. Appl. Artif. Intell. 2019, 81, 157–168. [Google Scholar] [CrossRef]
  5. Michel, P.; DeMeo, F.E.; Bottke, W.F. Asteroids IV; University of Arizona Press: Tucson, AZ, USA, 2015. [Google Scholar] [CrossRef]
  6. Small Bodies Assessment Group (SBAG). Goals and Objectives for the Exploration and Investigation of the Solar System’s Small Bodies, Version 1.2.2016. Technical Report, Small Bodies Assessment Group. 2016. Available online: https://discovery.larc.nasa.gov/PDF_FILES/05_SBAG_GoalsDoc_ver.1.2.2016.pdf (accessed on 13 July 2024).
  7. National Academies (Ed.) Defending Planet Earth: Near-Earth Object Surveys and Hazard Mitigation Strategies; National Academies Press: Washington, DC, USA, 2010; p. 12842. [Google Scholar] [CrossRef]
  8. Bazzocchi, M.C.F.; Emami, M.R. Comparative Analysis of Redirection Methods for Asteroid Resource Exploitation. Acta Astronaut. 2016, 120, 1–19. [Google Scholar] [CrossRef]
  9. Mazanek, D.D.; Merrill, R.G.; Brophy, J.R.; Mueller, R.P. Asteroid Redirect Mission Concept: A Bold Approach for Utilizing Space Resources. Acta Astronaut. 2015, 117, 163–171. [Google Scholar] [CrossRef]
  10. Simó-Soler, E.; Peña-Asensio, E. From impact refugees to deterritorialized states: Foresighting extreme legal-policy cases in asteroid impact scenarios. Acta Astronaut. 2022, 192, 402–408. [Google Scholar] [CrossRef]
  11. Tomblin, D.; Pirtle, Z.; Farooque, M.; Sittenfeld, D.; Mahoney, E.; Worthington, R.; Gano, G.; Gates, M.; Bennett, I.; Kessler, J.; et al. Integrating Public Deliberation into Engineering Systems: Participatory Technology Assessment of NASA’s Asteroid Redirect Mission. Astropolitics 2017, 15, 141–166. [Google Scholar] [CrossRef]
  12. Gates, M.; Muirhead, B.; Naasz, B.; McDonald, M.; Mazanek, D.; Stich, S.; Chodas, P.; Reuter, J. NASA’s Asteroid Redirect Mission Concept Development Summary. In Proceedings of the IEEE Aerospace Conference, Big Sky, MT, USA, 7–14 March 2015. [Google Scholar] [CrossRef]
  13. Muirhead, B.K.; Brophy, J.R. Asteroid Redirect Robotic Mission Feasibility Study. In Proceedings of the IEEE Aerospace Conference, Big Sky, MT, USA, 1–8 March 2014. [Google Scholar] [CrossRef]
  14. Thakkar, J.J. Multi-Criteria Decision Making; Studies in Systems, Decision and Control; Springer: Singapore, 2021; Volume 336. [Google Scholar] [CrossRef]
  15. Mardani, A.; Jusoh, A.; Zavadskas, E.K. Fuzzy Multiple Criteria Decision-Making Techniques and Applications—Two Decades Review from 1994 to 2014. Expert Syst. Appl. 2015, 42, 4126–4148. [Google Scholar] [CrossRef]
  16. Jang, J.S.R. ANFIS: Adaptive-network-based fuzzy inference system. IEEE Trans. Syst. Man, Cybern. 1993, 23, 665–685. [Google Scholar] [CrossRef]
  17. Liao, H.; He, Y.; Wu, X.; Wu, Z.; Bausys, R. Reimagining multi-criterion decision making by data-driven methods based on machine learning: A literature review. Inf. Fusion 2023, 100, 101970. [Google Scholar] [CrossRef]
  18. Suresh, K.; Dillibabu, R. A novel fuzzy mechanism for risk assessment in software projects. Soft Comput. 2020, 24, 1683–1705. [Google Scholar] [CrossRef]
  19. Sheu, J.B. A hybrid neuro-fuzzy analytical approach to mode choice of global logistics management. Eur. J. Oper. Res. 2008, 189, 971–986. [Google Scholar] [CrossRef]
  20. Kolvir, H.R.; Madadi, A.; Safarianzengir, V.; Sobhani, B. Monitoring and analysis of the effects of atmospheric temperature and heat extreme of the environment on human health in Central Iran, located in southwest Asia. Air Qual. Atmos. Health 2020, 13, 1179–1191. [Google Scholar] [CrossRef]
  21. Nilashi, M.; Minaei-Bidgoli, B.; Alrizq, M.; Alghamdi, A.; Alsulami, A.A.; Samad, S.; Mohd, S. An analytical approach for big social data analysis for customer decision-making in eco-friendly hotels. Expert Syst. Appl. 2021, 186, 115722. [Google Scholar] [CrossRef]
  22. Tavana, M.; Fallahpour, A.; Caprio, D.D.; Santos-Arteaga, F.J. A hybrid intelligent fuzzy predictive model with simulation for supplier evaluation and selection. Expert Syst. Appl. 2016, 61, 129–144. [Google Scholar] [CrossRef]
  23. Paryani, S.; Neshat, A.; Pradhan, B. Spatial landslide susceptibility mapping using integrating an adaptive neuro-fuzzy inference system (ANFIS) with two multi-criteria decision-making approaches. Theor. Appl. Climatol. 2021, 146, 489–509. [Google Scholar] [CrossRef]
  24. Shao, Q.; Rowe, R.C.; York, P. Comparison of neurofuzzy logic and neural networks in modelling experimental data of an immediate release tablet formulation. Eur. J. Pharm. Sci. 2006, 28, 394–404. [Google Scholar] [CrossRef] [PubMed]
  25. Özkan, G.; İnal, M. Comparison of neural network application for fuzzy and ANFIS approaches for multi-criteria decision making problems. Appl. Soft Comput. 2014, 24, 232–238. [Google Scholar] [CrossRef]
  26. Güneri, A.F.; Ertay, T.; Yücel, A. An approach based on ANFIS input selection and modeling for supplier selection problem. Expert Syst. Appl. 2011, 38, 14907–14917. [Google Scholar] [CrossRef]
  27. Saghaei, A.; Didehkhani, H. Developing an integrated model for the evaluation and selection of six sigma projects based on ANFIS and fuzzy goal programming. Expert Syst. Appl. 2011, 38, 721–728. [Google Scholar] [CrossRef]
  28. Golmohammadi, D. Neural network application for fuzzy multi-criteria decision making problems. Int. J. Prod. Econ. 2011, 131, 490–504. [Google Scholar] [CrossRef]
  29. Nassimbeni, G.; Battain, F. Evaluation of supplier contribution to product development: Fuzzy and neuro-fuzzy based approaches. Int. J. Prod. Res. 2003, 41, 2933–2956. [Google Scholar] [CrossRef]
  30. Small-Body Database Query. Available online: https://ssd.jpl.nasa.gov/tools/sbdb_query.html (accessed on 11 June 2025).
  31. Herrera, F.; Alonso, S.; Chiclana, F.; Herrera-Viedma, E. Computing with Words in Decision Making: Foundations, Trends and Prospects. Fuzzy Optim. Decis. Mak. 2009, 8, 337–364. [Google Scholar] [CrossRef]
  32. Wu, H.; Xu, Z. Fuzzy Logic in Decision Support: Methods, Applications and Future Trends. Int. J. Comput. Commun. Control 2020, 16. [Google Scholar] [CrossRef]
  33. Zadeh, L. Fuzzy sets. Inf. Control 1965, 8, 338–353. [Google Scholar] [CrossRef]
  34. Sánchez-Lozano, J.M.; Serna, J.; Dolón-Payán, A. Evaluating Military Training Aircrafts through the Combination of Multi-Criteria Decision Making Processes with Fuzzy Logic: A Case Study in the Spanish Air Force Academy. Aerosp. Sci. Technol. 2015, 42, 58–65. [Google Scholar] [CrossRef]
  35. Bazzocchi, M.C.F.; Sánchez-Lozano, J.M.; Hakima, H. Fuzzy multi-criteria decision-making approach to prioritization of space debris for removal. Adv. Space Res. 2021, 67, 1155–1173. [Google Scholar] [CrossRef]
  36. Klir, G.J.; Yuan, B.F. Fuzzy Sets and Fuzzy Logic: Theory and Applications; Prentice Hall PTR: Upper Saddle River, NJ, USA, 1995. [Google Scholar]
  37. Sánchez-Lozano, J.M.; Moya, A.; Rodríguez-Mozos, J.M. A fuzzy Multi-Criteria Decision Making approach for Exo-Planetary Habitability. Astron. Comput. 2021, 36, 100471. [Google Scholar] [CrossRef]
  38. Sánchez-Lozano, J.M.; Fernández-Martínez, M.; Saucedo-Fernández, A.A.; Trigo-Rodriguez, J.M. Evaluation of NEA deflection techniques. A fuzzy Multi-Criteria Decision Making analysis for planetary defense. Acta Astronaut. 2020, 176, 383–397. [Google Scholar] [CrossRef]
  39. Sánchez Lozano, J.M. Decisiones cruciales: Metodologías multicriterio aplicadas a la defensa planetaria. In Defensa Planetaria; Simó Soler, E., Peña Asensio, E., Eds.; Dykinson: Madrid, Spain, 2023. [Google Scholar] [CrossRef]
  40. Dağdeviren, M.; Yavuz, S.; Kılınç, N. Weapon selection using the AHP and TOPSIS methods under fuzzy environment. Expert Syst. Appl. 2009, 36, 8143–8151. [Google Scholar] [CrossRef]
  41. Pedrycz, W.; Gomide, F. Operations and Aggregations of Fuzzy Sets. In Fuzzy Systems Engineering: Toward Human-Centric Computing; John Wiley & Sons: Hoboken, NJ, USA, 2007. [Google Scholar] [CrossRef]
  42. Fernández-Martínez, M.; Sánchez-Lozano, J.M. Assessment of Near-Earth Asteroid Deflection Techniques via Spherical Fuzzy Sets. Adv. Astron. 2021, 2021, 6678056. [Google Scholar] [CrossRef]
  43. Sánchez-Lozano, J.M.; García-Cascales, M.S.; Lamata, M.T. GIS-based onshore wind farm site selection using Fuzzy Multi-Criteria Decision Making methods. Evaluating the case of Southeastern Spain. Appl. Energy 2016, 171, 86–102. [Google Scholar] [CrossRef]
  44. Sánchez-Lozano, J.; Fernández-Martínez, M.; Larraz, M. A comparison between fuzzy TOPSIS and VIKOR to the selection of aircraft for airspace defense. Int. J. Inf. Technol. Decis. Mak. 2025. Online Ready. [Google Scholar] [CrossRef]
  45. García-Cascales, M.S.; Lamata, M.A. A Modification of the Index of Liou and Wang for Ranking Fuzzy Number. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2007, 15, 411–424. [Google Scholar] [CrossRef]
  46. Ayan, B.; Abacıoğlu, S.; Basilio, M.P. A Comprehensive Review of the Novel Weighting Methods for Multi-Criteria Decision-Making. Information 2023, 14, 285. [Google Scholar] [CrossRef]
  47. Valentinas, P.; Kazimieras, Z.E.; Askoldas, P. An Extension of the New Objective Weight Assessment Methods CILOS and IDOCRIW to Fuzzy MCDM. ECECSR 2020, 54, 59–75. [Google Scholar] [CrossRef]
  48. Odu, G.O. Weighting Methods for Multi-Criteria Decision Making Technique. J. Appl. Sci. Environ. Manag. 2019, 23, 1449–1457. [Google Scholar] [CrossRef]
  49. Paradowski, B.; Shekhovtsov, A.; Bączkiewicz, A.; Kizielewicz, B.; Sałabun, W. Similarity Analysis of Methods for Objective Determination of Weights in Multi-Criteria Decision Support Systems. Symmetry 2021, 13, 1874. [Google Scholar] [CrossRef]
  50. Diakoulaki, D.; Mavrotas, G.; Papayannakis, L. Determining Objective Weights in Multiple Criteria Problems: The CRITIC Method. Comput. Oper. Res. 1995, 22, 763–770. [Google Scholar] [CrossRef]
  51. Trivedi, P.; Vansjalia, R.; Erra, S.; Narayanan, S.; Nagaraju, D. A Fuzzy CRITIC and Fuzzy WASPAS-Based Integrated Approach for Wire Arc Additive Manufacturing (WAAM) Technique Selection. Arab. J. Sci. Eng. 2023, 48, 3269–3288. [Google Scholar] [CrossRef]
  52. Keshavarz-Ghorabaee, M.; Amiri, M.; Zavadskas, E.K.; Turskis, Z.; Antucheviciene, J. Determination of Objective Weights Using a New Method Based on the Removal Effects of Criteria (MEREC). Symmetry 2021, 13, 525. [Google Scholar] [CrossRef]
  53. Hasnan, Q.H.; Rodzi, Z.; Kamis, N.H.; AlSharqi, F.; AlQuran, A.; Romdhini, M.U. Triangular Fuzzy MEREC (TFMEREC) and Its Applications in Multi Criteria Decision Making. J. Fuzzy Ext. Appl. 2024, 5, 505–532. [Google Scholar] [CrossRef]
  54. Narang, M.; Kumar, A.; Dhawan, R. A fuzzy extension of MEREC method using parabolic measure and its applications. J. Decis. Anal. Intell. Comput. 2023, 3, 33–46. [Google Scholar] [CrossRef]
  55. Jahan, A.; Edwards, K.L. A state-of-the-art survey on the influence of normalization techniques in ranking: Improving the materials selection process in engineering design. Mater. Des. (1980–2015) 2015, 65, 335–342. [Google Scholar] [CrossRef]
  56. Turskis, Z.; Goranin, N.; Nurusheva, A.; Boranbayev, S. A Fuzzy WASPAS-Based Approach to Determine Critical Information Infrastructures of EU Sustainable Development. Sustainability 2019, 11, 424. [Google Scholar] [CrossRef]
  57. Nădăban, S.; Dzitac, S.; Dzitac, I. Fuzzy TOPSIS: A General View. Procedia Comput. Sci. 2016, 91, 823–831. [Google Scholar] [CrossRef]
  58. Sodhi, B.; V, P.T. A Simplified Description of Fuzzy TOPSIS. Technical Report. arXiv 2017, arXiv:1205.5098. [Google Scholar] [CrossRef]
  59. Chen, C.T. Extensions of the TOPSIS for group decision-making under fuzzy environment. Fuzzy Sets Syst. 2000, 114, 1–9. [Google Scholar] [CrossRef]
  60. Tran, L.; Duckstein, L. Comparison of fuzzy numbers using a fuzzy distance measure. Fuzzy Sets Syst. 2002, 130, 331–341. [Google Scholar] [CrossRef]
  61. Stević, Ž.; Pamučar, D.; Puška, A.; Chatterjee, P. Sustainable supplier selection in healthcare industries using a new MCDM method: Measurement of alternatives and ranking according to COmpromise solution (MARCOS). Comput. Ind. Eng. 2020, 140, 106231. [Google Scholar] [CrossRef]
  62. Stanković, M.; Stević, Ž.; Das, D.K.; Subotić, M.; Pamučar, D. A New Fuzzy MARCOS Method for Road Traffic Risk Analysis. Mathematics 2020, 8, 457. [Google Scholar] [CrossRef]
  63. National Science and Technology Council. National Preparedness Strategy and Action Plan for Near-Earth Object Hazards and Planetary Defense; Technical report; National Science and Technology Council: Washington, DC, USA, 2023.
  64. Strange, N.; Landau, D.; McElrath, T.; Lantoine, G.; Lam, T.; McGuire, M.; Laura, B.; Michael, M.; Dankanich, J. Overview of Mission Design for NASA Asteroid Redirect Robotic Mission Concept. In Proceedings of the International Electric Propulsion Conference (IEPC), Washington, DC, USA, 6–10 October 2013. [Google Scholar]
  65. Drake, B.G. Strategic Implications of Human Exploration of Near-Earth Asteroids. In Proceedings of the IEEE 2012 Aerospace Conference, Big Sky, MT, USA, 3–10 March 2012. [Google Scholar]
  66. Barbee, B.W. Mission Planning for the Mitigation of Hazardous Near Earth Objects. Master’s Thesis, University of Texas, Austin, TX, USA, 2005. [Google Scholar]
  67. Hirabayashi, M.; Mimasu, Y.; Sakatani, N.; Watanabe, S.; Tsuda, Y.; Saiki, T.; Kikuchi, S.; Kouyama, T.; Yoshikawa, M.; Tanaka, S.; et al. Hayabusa2 Extended Mission: New Voyage to Rendezvous with a Small Asteroid Rotating with a Short Period. Adv. Space Res. 2021, 68, 1533–1555. [Google Scholar] [CrossRef]
  68. Michel, P.; Barucci, M.A.; Cheng, A.F.; Böhnhardt, H.; Brucato, J.R.; Dotto, E.; Ehrenfreund, P.; Franchi, I.A.; Green, S.F.; Lara, L.-M.; et al. MarcoPolo-R: Near-Earth Asteroid sample return mission selected for the assessment study phase of the ESA program cosmic vision. Acta Astronaut. 2014, 93, 530–538. [Google Scholar] [CrossRef]
  69. Sanchez, J.P.; Garcia-Yarnoz, D.; McInnes, C.R. Near-Earth Asteroid Resource Accessibility and Future Capture Mission. In Proceedings of the Global Space Exploration Conference, Washington, DC, USA, 22–24 May 2012. [Google Scholar]
  70. García Yárnoz, D.; Sanchez, J.P.; McInnes, C.R. Easily Retrievable Objects among the NEO Population. Celest. Mech. Dyn. Astron. 2013, 116, 367–388. [Google Scholar] [CrossRef]
  71. Persson, B.N.J.; Biele, J. On the Stability of Spinning Asteroids. Tribol. Lett. 2022, 70, 34. [Google Scholar] [CrossRef]
  72. Asphaug, E. Growth and Evolution of Asteroids. Annu. Rev. Earth Planet. Sci. 2009, 37, 413–448. [Google Scholar] [CrossRef]
  73. Warner, B.; Pravec, P.; Harris, A. Asteroid Lightcurve Database (LCDB) Bundle V4.0; NASA Planetary Data System; NASA: Washington, DC, USA, 2021. [CrossRef]
  74. Minor Planet Center. Uncertainty Parameter U and Orbit Quality Codes. Available online: http://www.minorplanetcenter.org/iau/info/UValue.html (accessed on 17 November 2024).
  75. Kholshevnikov, K.V.; Kokhirova, G.I.; Babadzhanov, P.B.; Khamroev, U.H. Metrics in the Space of Orbits and Their Application to Searching for Celestial Objects of Common Origin. Mon. Not. R. Astron. Soc. 2016, 462, 2275–2283. [Google Scholar] [CrossRef]
  76. Peña-Asensio, E.; Sánchez-Lozano, J.M. Statistical Equivalence of Metrics for Meteor Dynamical Association. Adv. Space Res. 2024, 74, 1073–1089. [Google Scholar] [CrossRef]
  77. Michikami, T.; Hagermann, A. Boulder sizes and shapes on asteroids: A comparative study of Eros, Itokawa and Ryugu. Icarus 2021, 357, 114282. [Google Scholar] [CrossRef]
  78. Sanchez, J.P.; McInnes, C.R. Assessment on the feasibility of future shepherding of asteroid resources. Acta Astronaut. 2012, 73, 49–66. [Google Scholar] [CrossRef]
  79. Tholen, D.J. Asteroid Taxonomic Classifications. In Proceedings of the Conference Asteroids II, Tucson, AZ, USA, 8–11 March 1988. [Google Scholar]
  80. Sohil, F.; Sohali, M.U.; Shabbir, J. An introduction to statistical learning with applications in R. Stat. Theory Relat. Fields 2022, 6, 87. [Google Scholar] [CrossRef]
  81. Akinwande, M.O.; Dikko, H.G.; Samson, A. Variance Inflation Factor: As a Condition for the Inclusion of Suppressor Variable(s) in Regression Analysis. Open J. Stat. 2015, 5, 754–767. [Google Scholar] [CrossRef]
  82. Keshavarz-Ghorabaee, M.; Amiri, M.; Zavadskas, E.K.; Turskis, Z.; Antucheviciene, J. Simultaneous Evaluation of Criteria and Alternatives (SECA) for Multi-Criteria Decision-Making. Informatica 2018, 29, 265–280. [Google Scholar] [CrossRef]
  83. Hasegawa, S.; Kuroda, D.; Kitazato, K.; Kasuga, T.; Sekiguchi, T.; Takato, N.; Aoki, K.; Arai, A.; Choi, Y.J.; Fuse, T.; et al. Physical properties of near-Earth asteroids with a low delta-v: Survey of target candidates for the Hayabusa2 mission. Publ. Astron. Soc. Jpn. 2018, 70, 114. Available online: https://academic.oup.com/pasj/article-pdf/70/6/114/54674317/pasj_70_6_114.pdf (accessed on 4 March 2025). [CrossRef]
  84. Kahraman, Y.R. Robust Sensitivity Analysis for Multi-Attribute Deterministic Hierarchical Value Models. Theses and Dissertations, 4516. Available online: https://scholar.afit.edu/etd/4516 (accessed on 25 June 2025).
  85. Dunn, W.L.; Shultis, J.K. Appendix A – Some Common Probability Distributions. In Exploring Monte Carlo Methods, 2nd ed.; Dunn, W.L., Shultis, J.K., Eds.; Elsevier: Amsterdam, The Netherlands, 2023; pp. 447–495. [Google Scholar] [CrossRef]
  86. Mazurek, J.; Strzałka, D. On the Monte Carlo Weights in Multiple Criteria Decision Analysis. PLoS ONE 2022, 17, e0268950. [Google Scholar] [CrossRef]
  87. Wessen, R.R.; Borden, C.; Ziemer, J.; Kwok, J. Space Mission Concept Development Using Concept Maturity Levels. In Proceedings of the American Institute of Aeronautics and Astronautics, Reston, VA, USA, 10–12 September 2013. [Google Scholar]
  88. Tantardini, M.; Parki, K.; Schweickart, R.; Singh, G.; Strang, N.; Wilcox, B.; Williams, C.; Williams, W.; Yeomans, D.; Nut, J.; et al. Asteroid Retrieval Feasibility Study. Technical Report, Keck Institute for Space Studies California Institute of Technology Jet Propulsion Laboratory. 2012. Available online: https://www.kiss.caltech.edu/final_reports/Asteroid_final_report.pdf (accessed on 2 December 2024).
  89. Chu, T.C.; Nghiem, T.B.H. Extension of Fuzzy ELECTRE I for Evaluating Demand Forecasting Methods in Sustainable Manufacturing. Axioms 2023, 12, 926. [Google Scholar] [CrossRef]
  90. Gul, M.; Celik, E.; Gumus, A.T.; Guneri, A.F. A fuzzy logic based PROMETHEE method for material selection problems. Beni-Suef Univ. J. Basic Appl. Sci. 2018, 7, 68–79. [Google Scholar] [CrossRef]
  91. Akkaya, B.; Kahraman, C. A Literature Review on Fuzzy ELECTRE Methods. In LNNS 758; Springer: Berlin/Heidelberg, Germany, 2023. [Google Scholar] [CrossRef]
Figure 1. Overview of the asteroid decision problem workflow.
Figure 2. Comparison diagram of defuzzified weights.
Figure 3. Correlation matrix between weighting methods.
Figure 4. Correlation matrix between rankings.
Figure 5. Comparison between average rankings across nine MCDM method combinations (left) and random sampling average performance trends (right).
Table 2. Screening criteria applied to the SBDS and their mission design implications.

Filter Criterion | Mission Design Rationale
Asteroid size [100–350 m] | Represents objects of significant planetary defense interest due to their impact threat [63]; implies a high likelihood of surface boulder presence [77].
Natural Earth close approach [2030–2045] | Ensures mission feasibility within practical ΔV budget constraints [13,69,78]; provides lead time to gather additional Earth-based observational data and to mature the mission design [72,73].
Rotation rate < 0.5 rph | Minimizes spacecraft synchronization complexity and propellant consumption during proximity operations [6].
Condition code ≤ 6 | Guarantees adequate orbital knowledge for precise and robust trajectory planning [65,67].
Table 3. Decision matrix for the filtered asteroid candidate list. Alternative assessments with respect to fuzzy criteria are expressed as triangular fuzzy numbers.

Asteroid | Capture Cost [m/s] | Synodic Period [y] | Rotation Rate [rph] | Orbit Det. Accuracy | No. of Similar Candidates
1998 FG2 | 2754.678 | 3.091 | 0.0862 | (0.857, 1, 1) | (1, 1, 1)
2000 EW70 | 1857.441 | 9.85 | 0.0303 | (0.857, 1, 1) | (2, 3, 4)
2002 TC70 | 786.246 | 2.664 | 0.0381 | (0.857, 1, 1) | (1, 1.5, 2)
2005 EE | 3106.648 | 6.026 | 0.3769 | (0.857, 1, 1) | (2, 2.5, 3)
2005 YU55 | 1542.138 | 5.093 | 0.0556 | (0.857, 1, 1) | (1, 1.5, 2)
2000 AC6 | 2387.101 | 3.736 | 0.4092 | (0.857, 1, 1) | (1, 1.5, 2)
2010 LN14 | 2209.589 | 5.176 | 0.2603 | (0.857, 1, 1) | (1, 1, 1)
2003 GA | 1598.294 | 3.218 | 0.173 | (0.857, 1, 1) | (1, 2.5, 4)
2011 DU | 1963.789 | 4.724 | 0.0972 | (0.857, 1, 1) | (2, 3, 4)
2002 UK11 | 4474.08 | 2.913 | 0.125 | (0.857, 1, 1) | (1, 1.5, 2)
2010 XC15 | 1456.013 | 1.679 | 0.3741 | (0.857, 1, 1) | (1, 1, 1)
2018 DH1 | 2463.14 | 1.492 | 0.2 | (0.857, 1, 1) | (1, 1.5, 2)
1994 CJ1 | 1976.091 | 2.22 | 0.0333 | (0.857, 1, 1) | (4, 4.5, 5)
1999 FN19 | 842.875 | 1.898 | 0.0373 | (0.857, 1, 1) | (2, 3.5, 5)
2000 KA | 1675.282 | 2.861 | 0.0873 | (0.857, 1, 1) | (2, 2.5, 3)
2001 EC16 | 2100.837 | 2.783 | 0.005 | (0.857, 1, 1) | (1, 3, 5)
2002 LY1 | 2307.202 | 14.163 | 0.3121 | (0.857, 1, 1) | (1, 3, 5)
2006 CT | 645.828 | 7.738 | 0.0599 | (0.857, 1, 1) | (1, 1.5, 2)
2010 AF30 | 1727.076 | 2.903 | 0.3846 | (0.857, 1, 1) | (2, 3, 4)
2012 XS111 | 3986.225 | 2.127 | 0.4673 | (0.714, 0.857, 1.0) | (1, 1, 1)
2013 NJ | 1757.751 | 3.181 | 0.004 | (0.857, 1, 1) | (1, 2.5, 4)
2014 HK129 | 2789.751 | 1.822 | 0.0661 | (0.857, 1, 1) | (1, 1.5, 2)
2015 FO124 | 3280.651 | 2.784 | 0.1278 | (0.857, 1, 1) | (2, 2.5, 3)
2017 AG5 | 1233.459 | 2.775 | 0.1361 | (0.571, 0.714, 0.857) | (2, 2.5, 3)
2019 CD5 | 2707.153 | 1.395 | 0.033 | (0.571, 0.714, 0.857) | (1, 1, 1)
2022 BX1 | 1512.455 | 2.163 | 0.3923 | (0.857, 1, 1) | (3, 4, 5)
2022 HB | 4161.511 | 1.21 | 0.0115 | (0.857, 1, 1) | (1, 1, 1)
2022 UD9 | 2533.317 | 2.309 | 0.0797 | (0.429, 0.571, 0.714) | (2, 2, 2)
Table 4. Fuzzy weights calculated by the F-statistical variance, F-CRITIC and F-MEREC methods.

Criterion | F-Statistical Variance | F-CRITIC | F-MEREC
Capture cost | (0.171, 0.202, 0.202) | (0.174, 0.176, 0.176) | (0.139, 0.145, 0.161)
Synodic period | (0.236, 0.279, 0.279) | (0.211, 0.217, 0.217) | (0.267, 0.279, 0.307)
Rotation rate | (0.226, 0.268, 0.268) | (0.261, 0.263, 0.263) | (0.292, 0.306, 0.340)
Orbit det. accuracy | (0.017, 0.059, 0.059) | (0.096, 0.164, 0.164) | (0.130, 0.149, 0.149)
Similar candidates | (0.120, 0.191, 0.349) | (0.150, 0.179, 0.256) | (0.061, 0.120, 0.120)
Table 5. Defuzzified weights calculated by the F-statistical variance, F-CRITIC and F-MEREC methods and Equation (1).

Criterion | F-Statistical Variance | F-CRITIC | F-MEREC | Average
Capture cost | 0.197 | 0.176 | 0.147 | 0.175
Synodic period | 0.272 | 0.216 | 0.282 | 0.259
Rotation rate | 0.261 | 0.263 | 0.310 | 0.280
Orbit det. acc. | 0.052 | 0.153 | 0.146 | 0.118
Sim. candidates | 0.206 | 0.187 | 0.110 | 0.169
Table 6. Asteroid rankings according to nine different combinations of MCDM weighting and sorting methods.

Asteroid | St. Variance WASPAS | CRITIC WASPAS | MEREC WASPAS | St. Variance TOPSIS | CRITIC TOPSIS | MEREC TOPSIS | St. Variance MARCOS | CRITIC MARCOS | MEREC MARCOS
2013 NJ | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
2001 EC16 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2
1999 FN19 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3
1994 CJ1 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4
2002 TC70 | 5 | 5 | 5 | 6 | 6 | 7 | 6 | 6 | 8
2019 CD5 | 6 | 6 | 6 | 8 | 9 | 5 | 8 | 11 | 7
2022 BX1 | 7 | 8 | 10 | 5 | 5 | 8 | 5 | 5 | 6
2014 HK129 | 8 | 9 | 7 | 13 | 14 | 10 | 12 | 13 | 10
2000 KA | 9 | 7 | 9 | 14 | 13 | 11 | 13 | 12 | 11
2017 AG5 | 10 | 12 | 12 | 10 | 15 | 14 | 11 | 15 | 15
2018 DH1 | 11 | 13 | 8 | 7 | 7 | 6 | 7 | 7 | 5
2006 CT | 12 | 10 | 11 | 15 | 10 | 13 | 14 | 10 | 13
2022 UD9 | 13 | 19 | 16 | 19 | 21 | 18 | 18 | 20 | 19
2000 EW70 | 14 | 11 | 14 | 18 | 17 | 21 | 19 | 18 | 20
2003 GA | 15 | 15 | 19 | 12 | 12 | 15 | 15 | 14 | 14
2010 XC15 | 16 | 20 | 13 | 11 | 11 | 9 | 9 | 8 | 9
2011 DU | 17 | 14 | 20 | 16 | 16 | 19 | 16 | 16 | 18
2010 AF30 | 18 | 18 | 22 | 9 | 8 | 12 | 10 | 9 | 12
2015 FO124 | 19 | 17 | 18 | 17 | 18 | 16 | 17 | 17 | 16
2005 YU55 | 20 | 16 | 17 | 21 | 20 | 23 | 21 | 19 | 23
2022 HB | 21 | 21 | 15 | 27 | 27 | 24 | 27 | 27 | 24
1998 FG2 | 22 | 22 | 21 | 26 | 26 | 22 | 25 | 25 | 22
2002 UK11 | 23 | 23 | 23 | 23 | 22 | 20 | 23 | 22 | 21
2012 XS111 | 24 | 27 | 24 | 22 | 23 | 17 | 20 | 23 | 17
2000 AC6 | 25 | 24 | 25 | 24 | 24 | 25 | 24 | 24 | 25
2005 EE | 26 | 25 | 27 | 25 | 25 | 26 | 26 | 26 | 26
2002 LY1 | 27 | 26 | 28 | 20 | 19 | 27 | 22 | 21 | 28
2010 LN14 | 28 | 28 | 26 | 28 | 28 | 28 | 28 | 28 | 27
Table 7. Simulation's random weight distribution parameters for different k_dir values.

Criterion | Sim. Parameter | k_dir = 10 | k_dir = 100 | k_dir = 1000
Capture cost | Ref. weight | 0.1747 | 0.1747 | 0.1747
Capture cost | Full range | [0.0002, 0.7235] | [0.0662, 0.3417] | [0.1353, 0.2242]
Capture cost | 75% range | [0.0540, 0.3162] | [0.1318, 0.2193] | [0.1610, 0.1889]
Capture cost | Mean % dev. | 52.98% | 17.30% | 5.56%
Synodic period | Ref. weight | 0.2585 | 0.2585 | 0.2585
Synodic period | Full range | [0.0057, 0.8386] | [0.1137, 0.4514] | [0.2068, 0.3126]
Synodic period | 75% range | [0.1102, 0.4175] | [0.2084, 0.3099] | [0.2427, 0.2740]
Synodic period | Mean % dev. | 41.17% | 13.56% | 4.23%
Rotation rate | Ref. weight | 0.2800 | 0.2800 | 0.2800
Rotation rate | Full range | [0.0100, 0.8666] | [0.1406, 0.4656] | [0.2322, 0.3434]
Rotation rate | 75% range | [0.1248, 0.4449] | [0.2289, 0.3322] | [0.2637, 0.2964]
Rotation rate | Mean % dev. | 39.39% | 12.85% | 4.07%
Orbit det. accuracy | Ref. weight | 0.1179 | 0.1179 | 0.1179
Orbit det. accuracy | Full range | [0.0000, 0.6909] | [0.0256, 0.2521] | [0.0785, 0.1582]
Orbit det. accuracy | 75% range | [0.0226, 0.2304] | [0.0820, 0.1557] | [0.1064, 0.1299]
Orbit det. accuracy | Mean % dev. | 63.90% | 21.56% | 6.89%
Sim. candidates | Ref. weight | 0.1689 | 0.1689 | 0.1689
Sim. candidates | Full range | [0.0008, 0.7177] | [0.0575, 0.3434] | [0.1253, 0.2134]
Sim. candidates | 75% range | [0.0504, 0.3032] | [0.1263, 0.2123] | [0.1551, 0.1826]
Sim. candidates | Mean % dev. | 52.71% | 17.62% | 5.64%
Table 8. Percentage of simulations, over 10,000 Dirichlet-sampled weight vectors centered on the reference values, in which the top five candidates achieved the highest rank, evaluated for dispersion parameters k_dir = 10, 100, and 1000. Larger k_dir values indicate tighter sampling distributions around the reference weights.

Asteroid | Av. Random Sampling Performance | % Dominance k_dir = 10 | % Dominance k_dir = 100 | % Dominance k_dir = 1000
2013 NJ | 0.671 | 60.44 | 89.87 | 100
2001 EC16 | 0.640 | 1.06 | 0 | 0
1999 FN19 | 0.602 | 29.07 | 10.13 | 0
1994 CJ1 | 0.540 | 4.85 | 0 | 0
2022 BX1 | 0.512 | 0 | 0 | 0
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
