Article
Peer-Review Record

Fuzzy Multi-Criteria Decision Framework for Asteroid Selection in Boulder Capture Missions

Aerospace 2025, 12(9), 800; https://doi.org/10.3390/aerospace12090800
by Nelson Ramírez 1,2, Juan Miguel Sánchez-Lozano 2,* and Eloy Peña-Asensio 3
Reviewer 1:
Reviewer 2: Anonymous
Submission received: 26 June 2025 / Revised: 13 August 2025 / Accepted: 3 September 2025 / Published: 4 September 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The manuscript proposes a modular fuzzy multi-criteria decision-making (MCDM) framework for ranking near-Earth asteroids (NEAs) as candidates for a boulder-capture mission. The method uses three objective fuzzy weighting techniques (Statistical Variance, CRITIC, and MEREC) and three fuzzy ranking algorithms (WASPAS, TOPSIS, MARCOS), generating nine prioritized rankings of 28 filtered NEAs. To test robustness, the authors employ Dirichlet-distributed sampling of weights and conduct a sensitivity analysis. The work aims to provide a robust, extensible decision-support tool for future asteroid mission planning.

  • The paper relies on traditional fuzzy MCDM methods, but does not justify why more modern neuro-fuzzy or machine learning-based approaches (such as ANFIS) are not used, especially since computational cost is not a limitation for offline mission design.
  • Discuss the rationale for not employing neural-fuzzy methods, and clearly articulate the benefits of the chosen approach.
  • The manuscript does not sufficiently justify the selection of the five evaluation criteria (ΔV, synodic period, rotation rate, orbit determination accuracy, number of similar candidates). There is little discussion about whether these features fully cover all relevant mission aspects or how omissions may impact results.
  • Please justify feature selection with references to the asteroid mission literature. Discuss any critical criteria that may have been omitted (e.g., albedo, spectral class, surface composition, spin axis/orientation, shape complexity, thermal properties) and how their inclusion/exclusion might affect the rankings. Consider sensitivity analysis by removing/adding features and measuring the impact on rankings.
  • The process for filtering NEA candidates is described, but it is unclear if the resulting sample (28 NEAs) is representative of all feasible mission targets. There is insufficient discussion on how the filtering criteria align with real mission requirements. Explicitly tie each filtering rule (size, encounter window, etc.) to real mission design constraints (e.g., launch vehicle capabilities, ΔV budgets, operational timelines). If feasible, validate the sample by showing that detailed trajectory simulations or mission design studies confirm the feasibility for at least a subset of the chosen NEAs.
  • Rotation rates are only known for a small fraction of NEAs. Treating unknowns as exclusions could bias the results. Discuss methods for handling missing data, such as imputation (using population statistics or Bayesian priors) rather than exclusion.
  • Only two criteria are modeled as triangular fuzzy numbers (TFNs), while the other three remain crisp. This heterogeneity can create scale imbalances. For example, wide numeric ranges in ΔV may dominate weight calculations over narrow fuzzy intervals. Furthermore, centroid defuzzification collapses all uncertainty into a single crisp score prior to ranking, negating much of the benefit of fuzzy modeling. The authors should justify the choice of TFNs over other membership-function shapes (e.g., trapezoidal or Gaussian), explain why mixing crisp and fuzzy inputs does not distort results, and discuss the potential use of fuzzy outranking methods (such as fuzzy ELECTRE) to preserve uncertainty through to the final ranking.
  • The three objective weighting methods employ different normalization formulas (Eqs. 2–3, 6–7, and 11–12), which implicitly define different decision spaces and may drive weight differences independently of information content. The authors should clarify whether observed variations in weights stem from true data-driven distinctions or are artifacts of normalization. Similarly, fuzzy TOPSIS and MARCOS use Euclidean distance in ℝ³ to compare TFNs; a brief robustness check using at least one alternative distance metric would add confidence.
  • The sensitivity analysis samples weight vectors from Dirichlet distributions with k = 10, 100, 1000, but no rationale is given for these choices. To inform practitioners how to set k in real mission planning, the authors should link k to estimated uncertainties in expert weights or derive it from statistical error bounds. Extending the analysis to sweep k continuously over [1, 1000] and showing how ranking stability evolves would offer practical guidance.
  • ΔV and synodic period are inherently correlated through orbital mechanics. While the CRITIC method down-weights correlated criteria, the combined average weights still place them as the two most influential factors. A variance-inflation-factor or principal component analysis would quantify residual redundancy and help interpret why certain criteria dominate despite correlation control.
  • ΔV estimates follow established formulas, but the analysis omits key operational constraints such as launch-window durations, Earth-departure C₃ limits, and return-payload mass. Additionally, the rotation-rate cutoff (<0.5 rph) does not consider spin-axis orientation or libration modes, which critically affect boulder capture and spacecraft hovering. The authors should at least discuss how these factors might reorder the ranking or propose them as extensions.

Author Response

Response to Reviewer 1 Comments
1. Summary
We sincerely thank you for the time and effort you dedicated to reviewing our manuscript.
Your constructive comments and suggestions have been invaluable in improving the clarity,
rigor, and overall quality of the work. Below, we provide detailed responses to each point,
with the corresponding revisions highlighted in red in the re-submitted files.
2. Point-by-point response to Comments and Suggestions for Authors
Comment 1: The paper relies on traditional fuzzy MCDM methods, but does not justify why more
modern neuro-fuzzy or machine learning-based approaches (such as ANFIS) are not used, especially
since computational cost is not a limitation for offline mission design. Discuss the rationale for not
employing neural-fuzzy methods, and clearly articulate the benefits of the chosen approach.
Response 1: Thank you for this insightful comment. In response, we have incorporated a
detailed discussion comparing traditional fuzzy MCDM methods with machine learning-based approaches. This includes a review of related literature to guide interested readers, an
outline of the advantages and limitations of each approach, and a rationale for selecting the
proposed deterministic fuzzy MCDM framework for the present study. We also highlight that
the proposed modular framework is compatible with future integration of artificial intelligence methods.
These additions can be found in the revised manuscript on lines 64-91 (Introduction) and 735-
739 (Limitations and Future Work):
“It is important to note that the application of artificial intelligence techniques to MCDM
extends beyond fuzzy logic for representing vague or uncertain information. Machine
learning methodologies such as adaptive-network-based fuzzy inference system (ANFIS) [1],
[2] have been implemented successfully in MCDM problems [3], [4], [5], [6], [7], [8], [9]. Such systems automatically derive appropriate rules from input–output data and leverage neural learning to optimize criterion combinations with minimal error [10]. Additionally,
machine-learning techniques also allow the evaluation of new alternatives without re-executing the original ranking algorithm [11], and they eliminate the need for experts to
specify exact criterion weights [12]—which is an advantage already offered by the objective
weight determination methods proposed in this work.
However, neuro-fuzzy methods require historical data for model training and depend
critically on selecting an optimal network architecture—input variables, hidden layers and
nodes, transfer functions, learning rules, initialization, stopping criteria, and more— without
which models could not converge or generalize effectively [13], [14]. The training process can
be slow, may fail to converge without sufficient data, and becomes increasingly complex as
criteria and decision rules proliferate, leading to exponential growth in rule sets that
undermines interpretability and incurs costly rule-codification efforts [11], [14]. Moreover,
reliance on historical data reduces flexibility when criterion structures evolve, and studies
have shown that limited sample sizes further constrain generalization [14].
For these reasons, for the purposes of this analysis a deterministic fuzzy MCDM framework is preferred: it is simple, quick, and cost-effective to construct; it requires neither extensive data collection nor expert-driven weight assignment; and its outputs are transparent, observable, and easily audited, avoiding the "grey-box" limitations of trained neuro-fuzzy systems [10],
[13]. The proposed framework provides alternative rankings without historical data
requirements or architectural overhead. Importantly, the resulting framework establishes a
transparent foundation for future machine-learning integration, as clearly defined criteria
weights and outcomes will facilitate machine learning adoption once new data becomes
available [10], [12].”
“In addition, this modularity also opens the possibility for future works to build upon the
results obtained within the proposed framework to integrate machine learning-based MCDM
methods, enabling the development of classification models capable of predicting the
evaluation of new candidates as more asteroid data becomes available.”
Comment 2: The manuscript does not sufficiently justify the selection of the five evaluation criteria.
There is little discussion about whether these features fully cover all relevant mission aspects or how
omissions may impact results. Please justify feature selection with references to the asteroid mission
literature. Discuss any critical criteria that may have been omitted (e.g., albedo, spectral class, surface
composition, shape complexity, thermal properties) and how their inclusion/exclusion might affect the
rankings. Consider sensitivity analysis by removing/adding features and measuring the impact on
rankings.
Response 2: Thank you for this valuable observation. In the revised manuscript, we have
added a detailed justification for the selection of the five evaluation criteria, including
references to asteroid mission literature and a discussion of relevant parameters not included
in this study. The new text can be found on lines 64–91:
“Therefore, to establish a suitable set of evaluation criteria for the decision-making problem,
it is necessary to analyze the key parameters that govern the design of asteroid mission
architectures. The first of these is the ΔV required to perform the mission [15], [16], which has
a direct impact on the trajectory design as well as on the propulsion system requirements and
fuel consumption. This parameter is represented in the model by the capture cost evaluation
criterion.
Another desirable characteristic in mission design relates to operational flexibility. Targets
with a very high synodic period present the disadvantage of offering infrequent opportunities
to conduct missions and gather Earth-based observational data [15]. This feature is addressed
in the model through the synodic period evaluation criterion.
The asteroid’s rotational dynamics, namely its spin rate and spin axis, also constitute an important factor in the mission design [17]. The spin rate is incorporated as an evaluation criterion due to its
strong influence on the operational cost of synchronizing spacecraft maneuvers during
proximity operations. On the other hand, although most asteroids exhibit a stable spin axis,
tumbling bodies should be avoided. Since tumbling motion cannot be reliably assessed
without detailed observation, it is considered a constraint to be verified in a prior
observational campaign before progressing to subsequent phases of mission development
[18].
From a preliminary mission design perspective, another relevant aspect is the quality of the orbital knowledge of the target, as it affects the ability to define precise and robust mission trajectories [15], [19]. This characteristic is considered through the orbit determination
accuracy criterion.
Additionally, it is advantageous for the selected target to have a group of asteroids with
similar orbits [19], as this enables the preliminary trajectory design to be adapted for future
missions or backup targets [20]. This feature is reflected in the number of similar candidates
criterion.
From a scientific interest point of view, asteroid composition is also a relevant feature.
However, current Earth-based observations provide taxonomic classification for only 0.29%
of the asteroids within the studied size range [21]. This limitation, combined with the fact that
asteroid boulder capture and redirection missions were primarily designed as technology
demonstrators rather than for scientific exploration purposes [22], has led to the exclusion of
composition as an evaluation criterion.
Therefore, although this study focuses on the analysis of the criteria capture cost, synodic
period, rotation rate, orbit determination accuracy, and number of similar candidates, the
preliminary results obtained may serve as a basis for organizing follow-up observation
campaigns for the most promising candidates. These campaigns would aim to gather
additional information on other relevant characteristics, such as taxonomic classification,
approximate shape models, or thermal properties, which could be incorporated in a more
detailed assessment using the MCDM framework proposed here.”
Comment 3: The process for filtering NEA candidates is described, but it is unclear if the resulting
sample (28 NEAs) is representative of all feasible mission targets. There is insufficient discussion on
how the filtering criteria align with real mission requirements. Explicitly tie each filtering rule (size,
encounter window, etc.) to real mission design constraints (e.g., launch vehicle capabilities, ΔV budgets,
operational timelines). If feasible, validate the sample by showing that detailed trajectory simulations or
mission design studies confirm the feasibility for at least a subset of the chosen NEAs.
Response 3: Thank you for this comment. While detailed trajectory simulations are beyond
the scope of the present study, we recognize that they will be fundamental elements of our
future work. However, in the current revision, we have strengthened the link between each
filtering rule and real mission design constraints by adding a table summarizing the applied
filters and their mission design rationale. This addition can be found in Table 2 of the revised manuscript:
Filter criterion | Mission design rationale
Asteroid size [100–350 m] | Represents objects of significant planetary defense interest due to their impact threat [23]; implies a high likelihood of surface boulder presence [24].
Natural Earth close approach [2030–2045] | Ensures mission feasibility within practical ΔV budget constraints [22], [25], [26]; provides lead time to gather additional Earth-based observational data and to mature the mission design [27], [28].
Rotation rate < 0.5 rph | Minimizes spacecraft synchronization complexity and propellant consumption during proximity operations [18].
Condition code ≤ 6 | Guarantees adequate orbital knowledge for precise and robust trajectory planning [15], [19].
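
For readers who want to reproduce the screening step, the combined effect of these four filters can be expressed as a single mask over a catalog table. The following is a minimal illustrative sketch, assuming a hypothetical pandas DataFrame with SBDB-style columns; all column names are illustrative assumptions, not the authors' actual pipeline:

```python
import pandas as pd

def filter_candidates(nea: pd.DataFrame) -> pd.DataFrame:
    """Apply the four screening rules summarized in Table 2 (column names hypothetical)."""
    mask = (
        nea["diameter_m"].between(100, 350)               # size of planetary-defense interest
        & nea["close_approach_year"].between(2030, 2045)  # practical encounter window
        & (nea["rotation_rate_rph"] < 0.5)                # slow rotators only; NaN (unknown) fails this test
        & (nea["condition_code"] <= 6)                    # adequate orbital knowledge
    )
    return nea[mask]
```

Note that the rotation-rate comparison automatically excludes candidates with unknown (NaN) spin rates, matching the exclusion policy discussed in Response 4 below.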
Comment 4: Rotation rates are only known for a small fraction of NEAs. Treating unknowns as
exclusions could bias the results. Discuss methods for handling missing data, such as imputation (using
population statistics or Bayesian priors) rather than exclusion.
Response 4: Thank you for this observation. We agree that excluding NEAs with unknown
rotation rates may introduce bias, and we have added a discussion on possible methods for
handling missing data, including imputation approaches based on asteroid size. We also
explain the limitations of such estimations given the large observed variability in rotation
periods, and justify our choice to exclude these candidates in the present study. This addition
can be found after line 394 in the revised manuscript:
“It is important to note that applying this filter will significantly reduce the number of
potential candidates. One possible way to incorporate this evaluation criterion without
excluding candidates would be to estimate the rotation rate using models that relate rotation rate to the asteroid's size [27], [29]. However, available data [28] show variations of up to two
orders of magnitude in the observed rotation periods for asteroids of the same size within the
studied ranges. Hence, applying such estimations could result in inaccurate performance
assessments and, consequently, distort the ranking outcomes. For this reason, this study has
opted to exclude candidates with unknown rotation rates, allowing their inclusion in the
proposed MCDM framework once reliable data become available.”
Comment 5: Only two criteria are modeled as triangular fuzzy numbers (TFNs), while the other three
remain crisp. This heterogeneity can create scale imbalances. For example, wide numeric ranges in ΔV
may dominate weight calculations over narrow fuzzy intervals. Furthermore, centroid defuzzification
collapses all uncertainty into a single crisp score prior to ranking, negating much of the benefit of fuzzy
modeling. The authors should justify the choice of TFNs over other membership-function shapes (e.g.,
trapezoidal or Gaussian), explain why mixing crisp and fuzzy inputs does not distort results, and
discuss the potential use of fuzzy outranking methods (such as fuzzy ELECTRE) to preserve
uncertainty through to the final ranking.
Response 5: Thank you for this important comment. We have substantially expanded the
manuscript to address these points:
 • We have replaced the original text with an extended discussion justifying the choice of TFNs over other membership-function shapes, supported by relevant literature.
 • We summarized the results of methodological tests covering defuzzification, normalization, and TOPSIS distance metric variations (see the next comment).
 • We discuss the limitations of compensatory methods and outline the potential benefits and drawbacks of exploring fuzzy outranking methods (e.g., fuzzy ELECTRE, PROMETHEE) in future work.
In section Materials and Methods (lines 137-139 and 158-161):
“Fuzzy sets can be defined by different membership functions [30], [31]: triangular,
trapezoidal, sigmoid, gaussian, etc. Among the multiple ways to represent fuzzy sets,
triangular fuzzy numbers are particularly popular in MCDM. A TFN ã is represented by the tuple ã = (a_l, a_m, a_u), where a_l ≤ a_m ≤ a_u correspond to the minimum, modal, and
maximum values of the variable, respectively. This representation is notably advantageous: it
captures uncertainty rigorously while allowing direct and efficient computation, given that all
fuzzy arithmetic operations (addition, subtraction, multiplication, division), can be performed
via simple formulae on the three parameters (for details on TFNs and their arithmetic
operations, see [32]).
In addition, numerous empirical MCDM applications [30], [31], [33], [34], [35] demonstrate
that TFNs deliver robust decision outcomes without excessive modeling burden [36], [37]. The
consensus in the literature is that TFNs offer the most practical trade-off between
interpretability, computational complexity, and fidelity in representing uncertainty [38], [39].
In this study, a combination of real (crisp) variables and fuzzy variables represented as TFNs
is considered. This combination has proven to be an effective approach for handling MCDM
problems involving heterogeneous data types [33], [34], [40], [41]. Consequently, the
mathematical foundations of the MCDM algorithms for the determination of the weights of
the criteria and the assessment of the alternatives must be adapted to their fuzzy
counterparts.”
“Although the defuzzification method presented in Equation 1 is well established in the
literature, an alternative method based on the Best Non-Fuzzy Performance approach [41] has
been implemented to ensure the robustness of the results. In this way, the influence of the
defuzzification method on the final outcomes will be analyzed in Section 4.2.”
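
As a concrete illustration of the TFN representation and arithmetic described above, the following minimal sketch implements triangular fuzzy numbers with centroid defuzzification. The centroid formula (l + m + u)/3 is the standard one for triangular membership functions and is shown here for illustration only, not as a reproduction of the authors' Equation (1):

```python
from dataclasses import dataclass

@dataclass
class TFN:
    l: float  # minimum value
    m: float  # modal value
    u: float  # maximum value

    def __add__(self, other: "TFN") -> "TFN":
        # Component-wise addition of two TFNs
        return TFN(self.l + other.l, self.m + other.m, self.u + other.u)

    def scale(self, k: float) -> "TFN":
        # Multiplication by a positive crisp scalar, e.g. a criterion weight
        return TFN(k * self.l, k * self.m, k * self.u)

    def centroid(self) -> float:
        # Center-of-area defuzzification of a triangular membership function
        return (self.l + self.m + self.u) / 3.0

# Example: weighted aggregation of two fuzzy performance values
score = TFN(2.0, 3.0, 4.0).scale(0.6) + TFN(1.0, 2.0, 5.0).scale(0.4)
print(score, score.centroid())
```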
In section Limitations and Future Work (lines 710-730):
“Moreover, it is worth highlighting that the ranking methods used in this study belong to the
class of compensatory methods [42]. These methods offer the advantage of being more
intuitive, as they combine the performance across different criteria into a single representative
index used to produce the final ranking. However, they present certain limitations.
Specifically, they do not adequately account for situations where a poor result in one evaluation criterion can be offset by an excellent result in another, which may mask critical weaknesses or yield rankings that are less influenced by the criteria with greater weights [41]. Moreover, to
generate an aggregated metric that produces a final ranking, these methods require a
defuzzification procedure in the final stages to obtain a crisp ranking.
These limitations suggest that future work should explore a comparison between the results
obtained using compensatory methods and those derived from the implementation of fuzzy
outranking methods such as ELECTRE [43] and PROMETHEE [44], which allow the
uncertainty contained in the weights and performance values to be preserved until the final
stages of the classification process [45]. Nevertheless, the use of such methods also entails a
series of disadvantages. First, the results produced by these approaches are not always
conclusive, often requiring additional techniques to derive a complete ranking of the
candidates. Second, it is necessary to define a set of preference or veto thresholds for each
criterion, which would require expert input and adds subjectivity to the decision model.
Finally, these methods tend to be less interpretable, due to the complex interactions
formulated between the criteria [42].”
Comment 6: The three objective weighting methods employ different normalization formulas (Eqs. 2–
3, 6–7, and 11–12), which implicitly define different decision spaces and may drive weight differences
independently of information content. The authors should clarify whether observed variations in
weights stem from true data-driven distinctions or are artifacts of normalization. Similarly, fuzzy
TOPSIS and MARCOS use Euclidean distance in ℝ³ to compare TFNs; A brief robustness check using
at least one alternative distance metric would add confidence.
Response 6: Thank you for this helpful and precise comment. We have revised the manuscript
to clarify these points:
 • We implemented an alternative TFN distance metric for TOPSIS based on the Tran and Duckstein approach [46].
 • On the other hand, MARCOS should not be considered a distance-based method [47], as it does not rely on a conventional distance metric but rather on the comparison of the utility degree of each alternative with respect to the ideal and anti-ideal alternatives. To avoid potential confusion, it is proposed that the nomenclature used in Equations (33–35) be revised to avoid the use of “D”.
 • We summarized the results of methodological tests covering defuzzification, normalization, and TOPSIS distance metric variations.
In section Materials and Methods (lines 226-232 and 278-281):
“It is important to note that, while some of the normalization techniques discussed earlier are
linked to the weight determination procedure—such as in the case of MEREC [48]—for
methods like Fuzzy Statistical Variance and fuzzy CRITIC, normalization techniques are
interchangeable [49]. In order to ensure independence and a more transparent assessment of
the objective weighting methods implemented in this study, the normalization techniques
commonly associated with each method have been retained. Nonetheless, Section 4.2 provides
an analysis of how alternative normalization choices may affect the results.”
“It is important to note that this distance metric is the most commonly applied in TOPSIS [50];
however, alternative metrics, such as the one proposed by Tran and Duckstein [46] can also
be used. The effect of changing the distance metric on the obtained results will be analyzed in
Section 4.2.”
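
For reference, the Euclidean (vertex) distance between TFNs used in standard fuzzy TOPSIS [50] has a simple closed form; the sketch below shows it under the (l, m, u) convention used in this work. The Tran and Duckstein metric [46] has a different closed form and is not reproduced here:

```python
import math

def vertex_distance(a: tuple, b: tuple) -> float:
    """Vertex distance between two TFNs a = (l, m, u) and b = (l, m, u)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / 3.0)

# Example: distance of a fuzzy score to the fuzzy positive ideal solution
print(vertex_distance((0.6, 0.8, 1.0), (1.0, 1.0, 1.0)))
```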
In section Results and Discussion (lines 561-580):
“To provide preliminary evidence on robustness prior to the dedicated sensitivity study, a set of targeted methodological tests was performed, addressing three key choices: defuzzification, normalization, and the TOPSIS distance metric. First, the defuzzification method proposed in Equation (1) was replaced by the Best Non-Fuzzy Performance-based defuzzification approach [41]. After conducting the simulations, it was found that the resulting rankings are
robust, with correlation coefficients between the rankings obtained through the different
defuzzification methods exceeding 0.99 in all cases. Second, to test sensitivity to
normalization, the original procedures for Fuzzy Statistical Variance and fuzzy CRITIC were replaced by linear max–min normalization and simple linear normalization [49], respectively, to test the robustness of the prioritization results. The simulation results using
these alternative procedures yield correlation coefficients greater than 0.97 in all cases when
compared to those obtained with the original normalization techniques. Third, to assess the
sensitivity of the results to distance metric selection, the standard TOPSIS implementation
using Euclidean distance [50] was compared with an alternative formulation proposed by
Tran and Duckstein [46]. The resulting rankings demonstrated remarkable consistency, with
correlation coefficients between the rankings obtained using the original TOPSIS
implementation and those obtained with the alternative metric across the different weighting
methods exceeding 0.99, confirming the robustness of the implementation to this
methodological variation. In addition, the ranking of the top-performing candidates remained
invariant across all tested methodological variations.”
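
The rank-correlation checks reported above can be reproduced with any standard coefficient; the response does not state which one was used, so the sketch below uses Spearman's rho as one common choice, with illustrative rankings:

```python
from scipy.stats import spearmanr

# Positions of the same candidates under two method variants (illustrative data)
ranking_original = [1, 2, 3, 4, 5, 6]
ranking_variant  = [1, 2, 4, 3, 5, 6]

rho, _ = spearmanr(ranking_original, ranking_variant)
print(f"Spearman correlation between rankings: {rho:.3f}")
```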
Comment 7: The sensitivity analysis samples weight vectors from Dirichlet distributions with k = 10,
100, 1000, but no rationale is given for these choices. To inform practitioners how to set k in real mission
planning, the authors should link k to estimated uncertainties in expert weights or derive it from
statistical error bounds. Extending the analysis to sweep k continuously over [1, 1000] and showing
how ranking stability evolves would offer practical guidance.
Response 7: Thank you for this valuable suggestion. We have extended the sensitivity analysis to sweep k_dir over the interval [1, 1000] and included guidance on selecting k_dir based on ranking stability and weight variation, noting that stabilization occurs from k_dir = 40 onwards.
An additional robustness check using normally distributed weights yielded consistent results.
See lines 661-677 of the revised manuscript:
“Finally, it is worth noting that an alternative way to select k_dir involves progressively increasing its value and simulating the resulting rankings until they stabilize, particularly with regard to the top-performing candidates. Once such a value is identified, it can be accepted by the analyst if the trade-off between the dominance percentage of the top-ranked alternative and the average percentage variation of the simulated weights is deemed satisfactory. As an example, in this study, all integer values of k_dir within the interval [1, 1000] were explored, and it was found that the rankings stabilized from k_dir = 40 onwards. For this value the variation introduced by the simulations was sufficient, as the mean variation of the simulated criterion weights exceeded the average variation observed among the original weighting methods in all cases and, furthermore, 2013 NJ exhibited an acceptably high dominance (77.67%). Therefore, selecting a k_dir value around 40 may be considered acceptable for sensitivity analysis purposes in this work.
As an additional robustness check, an alternative sensitivity analysis was performed using
normal distributions centered on the mean weight [51], with a standard deviation equal to
that observed across the three weighting methods. The results of this analysis were consistent
with those obtained in the primary evaluation, further reinforcing the stability of the proposed
approach.”
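
One natural way to implement the described sampling, assuming the Dirichlet concentration vector is k_dir times the nominal weights (so that larger k_dir concentrates samples around the nominal vector), is sketched below; the parameterization and all names are our assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
w_nominal = np.array([0.30, 0.25, 0.20, 0.15, 0.10])  # illustrative mean weights (sum to 1)

def sample_weights(k_dir: float, n: int = 10_000) -> np.ndarray:
    """Draw n weight vectors from Dirichlet(k_dir * w_nominal); each row sums to 1."""
    return rng.dirichlet(k_dir * w_nominal, size=n)

# Larger k_dir tightens the samples around the nominal weights
for k in (1, 40, 1000):
    spread = sample_weights(k).std(axis=0).mean()
    print(f"k_dir = {k:>4}: mean std of sampled weights = {spread:.4f}")
```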
Comment 8: ΔV and synodic period are inherently correlated through orbital mechanics. While the
CRITIC method down-weights correlated criteria, the combined average weights still place them as the
two most influential factors. A variance-inflation-factor or principal component analysis would
quantify residual redundancy and help interpret why certain criteria dominate despite correlation
control.
Response 8: Thank you for this valuable comment. Following your advice, we have computed
the Variance Inflation Factor (VIF) for all criteria to quantify residual redundancy and confirm
that correlations are adequately controlled. The results (all VIF values < 1.2) are well below the
common threshold of 5, supporting the robustness of the weighting process. This information has been added on lines 491–498 of the revised manuscript:
“It is worth highlighting that although the CRITIC method already incorporates correlation
control within its weighting algorithm, it is important to assess the correlation among criteria
to ensure the robustness of the results. A straightforward way to do so is through the
calculation of the Variance Inflation Factor (VIF) [52]. A high VIF value for a given criterion
indicates that its behavior may be explained by the remaining criteria. Accordingly, the VIF
was calculated for all criteria, and in all cases the values were found to be below 1.2, which is under the commonly accepted threshold of 5 used in the literature [53]. This confirms that
the correlation among criteria in this problem is adequately controlled.”
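
The VIF check described above amounts to regressing each criterion column on the remaining ones and reporting 1/(1 − R²); a minimal sketch, assuming a crisp (defuzzified) decision matrix X of shape (alternatives × criteria), is:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def vif(X: np.ndarray) -> np.ndarray:
    """VIF_j = 1 / (1 - R_j^2), with R_j^2 from regressing column j on the others."""
    vifs = []
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        r2 = LinearRegression().fit(others, X[:, j]).score(others, X[:, j])
        vifs.append(1.0 / (1.0 - r2))
    return np.array(vifs)

# Values near 1 indicate negligible redundancy; the common acceptance threshold is 5.
```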
Comment 9: ΔV estimates follow established formulas, but the analysis omits key operational
constraints such as launch-window durations, Earth-departure C₃ limits, and return-payload mass.
Additionally, the rotation-rate cutoff (<0.5 rph) does not consider spin-axis orientation or libration
modes, which critically affect boulder capture and spacecraft hovering. The authors should at least
discuss how these factors might reorder the ranking or propose them as extensions.
Response 9: Thank you for this valuable comment. We have added a paragraph in lines 693-
710 acknowledging that certain operational factors, such as launch-window dynamics, Earth-departure energy limits (C₃), payload return mass, and spin-axis orientation, are not included
in the current model. These omissions are due to the preliminary nature of the analysis, and
we outline plans to address them in future work to enhance the framework’s practical
applicability.
“Although the proposed fuzzy multi-criteria decision-making approach provides a robust
framework for preliminary asteroid prioritization using established accessibility metrics, we
acknowledge that the current model’s scope does not fully capture potential mission design
constraints that emerge during more detailed mission planning [54], [55]. Specifically, Earth-departure energy limits (C₃), launch-window dynamics, mass-return efficiency, and rotation-axis orientation were not incorporated in this model. The absence of a launch-window criterion
may underestimate the importance of timeframe in mission design, while payload mass
exclusions could mask important trade-offs between sample return mass and trajectory
design. Furthermore, while rotation rate provides a useful preliminary metric, it neglects
important factors like spin-axis orientation and libration modes that significantly impact
proximity operations cost and complexity.
These omissions stem from the focus on high-level assessment during the initial screening phase but warrant further investigation to enhance practical applicability. For these reasons, in future work, trajectory simulation tools for launch-window and C₃ analysis, as well as the
development of a mass-return metric will be implemented to support more detailed
operational feasibility assessments. By coupling our systematic MCDM framework with these
mission-design considerations, it would be possible to deliver a comprehensive selection tool
that bridges theoretical accessibility with practical implementation requirements.”
References:
[1] J.-S. R. Jang, “ANFIS: adaptive-network-based fuzzy inference system,” IEEE Transactions on
Systems, Man, and Cybernetics, vol. 23, no. 3, pp. 665–685, May 1993, doi: 10.1109/21.256541.
[2] H. Liao, Y. He, X. Wu, Z. Wu, and R. Bausys, “Reimagining multi-criterion decision making by
data-driven methods based on machine learning: A literature review,” Information Fusion, vol. 100,
p. 101970, Dec. 2023, doi: 10.1016/j.inffus.2023.101970.
[3] K. Suresh and R. Dillibabu, “A novel fuzzy mechanism for risk assessment in software projects,”
Soft Comput, vol. 24, no. 3, pp. 1683–1705, Feb. 2020, doi: 10.1007/s00500-019-03997-2.
[4] J.-B. Sheu, “A hybrid neuro-fuzzy analytical approach to mode choice of global logistics
management,” European Journal of Operational Research, vol. 189, no. 3, pp. 971–986, Sep. 2008,
doi: 10.1016/j.ejor.2006.06.082.
[5] H. R. Kolvir, A. Madadi, V. Safarianzengir, and B. Sobhani, “Monitoring and analysis of the effects
of atmospheric temperature and heat extreme of the environment on human health in Central Iran,
located in southwest Asia,” Air Qual Atmos Health, vol. 13, no. 10, pp. 1179–1191, Oct. 2020, doi:
10.1007/s11869-020-00843-5.
[6] M. Nilashi et al., “An analytical approach for big social data analysis for customer decision-making
in eco-friendly hotels,” Expert Systems with Applications, vol. 186, p. 115722, Dec. 2021, doi:
10.1016/j.eswa.2021.115722.
[7] M. Tavana, A. Fallahpour, D. Di Caprio, and F. J. Santos-Arteaga, “A hybrid intelligent fuzzy
predictive model with simulation for supplier evaluation and selection,” Expert Systems with
Applications, vol. 61, pp. 129–144, Nov. 2016, doi: 10.1016/j.eswa.2016.05.027.
[8] S. Paryani, A. Neshat, and B. Pradhan, “Spatial landslide susceptibility mapping using integrating
an adaptive neuro-fuzzy inference system (ANFIS) with two multi-criteria decision-making
approaches,” Theor Appl Climatol, vol. 146, no. 1, pp. 489–509, Oct. 2021, doi: 10.1007/s00704-
021-03695-w.
[9] Q. Shao, R. C. Rowe, and P. York, “Comparison of neurofuzzy logic and neural networks in
modelling experimental data of an immediate release tablet formulation,” European Journal of
Pharmaceutical Sciences, vol. 28, no. 5, pp. 394–404, Aug. 2006, doi: 10.1016/j.ejps.2006.04.007.
[10] G. Özkan and M. İnal, “Comparison of neural network application for fuzzy and ANFIS approaches
for multi-criteria decision making problems,” Applied Soft Computing, vol. 24, pp. 232–238, Nov.
2014, doi: 10.1016/j.asoc.2014.06.032.
[11] A. F. Güneri, T. Ertay, and A. Yücel, “An approach based on ANFIS input selection and modeling
for supplier selection problem,” Expert Systems with Applications, vol. 38, no. 12, pp. 14907–14917,
Nov. 2011, doi: 10.1016/j.eswa.2011.05.056.
[12] A. Saghaei and H. Didehkhani, “Developing an integrated model for the evaluation and selection of
six sigma projects based on ANFIS and fuzzy goal programming,” Expert Systems with Applications,
vol. 38, no. 1, pp. 721–728, Jan. 2011, doi: 10.1016/j.eswa.2010.07.024.
[13] D. Golmohammadi, “Neural network application for fuzzy multi-criteria decision making problems,”
International Journal of Production Economics, vol. 131, no. 2, pp. 490–504, Jun. 2011, doi:
10.1016/j.ijpe.2011.01.015.
[14] G. Nassimbeni and F. Battain, “Evaluation of supplier contribution to product development: Fuzzy
and neuro-fuzzy based approaches,” International Journal of Production Research, vol. 41, no. 13,
pp. 2933–2956, Jan. 2003, doi: 10.1080/0020754021000043967.
[15] B. G. Drake, “Strategic Implications of Human Exploration of Near-Earth Asteroids”.
[16] N. Strange et al., “Overview of Mission Design for NASA Asteroid Redirect Robotic Mission
Concept,” presented at the International Electric Propulsion Conference (IEPC2013), Washington,
D. C., Oct. 2013. Accessed: Jul. 13, 2024. [Online]. Available:
https://ntrs.nasa.gov/citations/20150007879
[17] B. W. Barbee, “Mission Planning for the Mitigation of Hazardous Near Earth Objects”.
[18] SBAG, “Goals and Objectives for the Exploration and Investigation of the Solar System’s Small
Bodies. ver. 1.2.2016,” Small Bodies Assessment Group (SBAG). Accessed: Jul. 13, 2024. [Online].
Available: https://www.lpi.usra.edu/sbag
[19] M. Hirabayashi et al., “Hayabusa2 Extended Mission: New Voyage to Rendezvous with a Small
Asteroid Rotating with a Short Period,” Advances in Space Research, vol. 68, no. 3, pp. 1533–1555,
Aug. 2021, doi: 10.1016/j.asr.2021.03.030.
[20] P. Michel et al., “MarcoPolo-R: Near-Earth Asteroid sample return mission selected for the
assessment study phase of the ESA program cosmic vision,” Acta Astronautica, vol. 93, pp. 530–
538, Jan. 2014, doi: 10.1016/j.actaastro.2012.05.030.
[21] “Small-Body Database Query.” Accessed: Jun. 11, 2025. [Online]. Available:
https://ssd.jpl.nasa.gov/tools/sbdb_query.html
[22] B. K. Muirhead and J. R. Brophy, “Asteroid Redirect Robotic Mission feasibility study,” in 2014
IEEE Aerospace Conference, Big Sky, MT, USA: IEEE, Mar. 2014, pp. 1–14. doi:
10.1109/AERO.2014.6836358.
[23] National Science and Technology Council, “National Preparedness Strategy and Action Plan for
Near-Earth Object Hazards and Planetary Defense,” 2023.
[24] T. Michikami and A. Hagermann, “Boulder sizes and shapes on asteroids: A comparative study of
Eros, Itokawa and Ryugu,” Icarus, vol. 357, p. 114282, Mar. 2021, doi:
10.1016/j.icarus.2020.114282.
[25] J. P. Sanchez, “Near-Earth asteroid resource accessibility and future capture mission opportunities,” 2012.
[26] J. P. Sanchez and C. R. McInnes, “Assessment on the feasibility of future shepherding of asteroid
resources,” Acta Astronautica, vol. 73, pp. 49–66, Apr. 2012, doi: 10.1016/j.actaastro.2011.12.010.
[27] E. Asphaug, “Growth and Evolution of Asteroids,” Annu. Rev. Earth Planet. Sci., vol. 37, no. 1, pp.
413–448, May 2009, doi: 10.1146/annurev.earth.36.031207.124214.
[28] B. D. Warner, A. W. Harris, and P. Pravec, “Asteroid Lightcurve Database (LCDB) Bundle V4.0.”
NASA Planetary Data System, 2021. doi: 10.26033/J3XC-3359.
[29] B. N. J. Persson and J. Biele, “On the Stability of Spinning Asteroids,” Tribol Lett, vol. 70, no. 2, p.
34, Jun. 2022, doi: 10.1007/s11249-022-01570-x.
[30] M. C. F. Bazzocchi, J. M. Sánchez-Lozano, and H. Hakima, “Fuzzy multi-criteria decision-making
approach to prioritization of space debris for removal,” Advances in Space Research, vol. 67, no. 3,
pp. 1155–1173, Feb. 2021, doi: 10.1016/j.asr.2020.11.006.
[31] H. Wu and Z. Xu, “Fuzzy Logic in Decision Support: Methods, Applications and Future Trends,”
INT J COMPUT COMMUN, Int. J. Comput. Commun. Control, vol. 16, no. 1, Sep. 2020, doi:
10.15837/ijccc.2021.1.4044.
[32] G. J. Klir and B. Yuan, Fuzzy sets and fuzzy logic: theory and applications. Upper Saddle River,
New Jersey: Prentice Hall PTR, 1995.
[33] J. M. Sánchez-Lozano, A. Moya, and J. M. Rodríguez-Mozos, “A fuzzy Multi-Criteria Decision
Making approach for Exo-Planetary Habitability,” Astronomy and Computing, vol. 36, p. 100471,
Jul. 2021, doi: 10.1016/j.ascom.2021.100471.
[34] J. M. Sánchez-Lozano, J. Serna, and A. Dolón-Payán, “Evaluating military training aircrafts through
the combination of multi-criteria decision making processes with fuzzy logic. A case study in the
Spanish Air Force Academy,” Aerospace Science and Technology, vol. 42, pp. 58–65, Apr. 2015, doi:
10.1016/j.ast.2014.12.028.
[35] J. M. Sánchez-Lozano, M. Fernández-Martínez, A. A. Saucedo-Fernández, and J. M. Trigo-
Rodriguez, “Evaluation of NEA deflection techniques. A fuzzy Multi-Criteria Decision Making
analysis for planetary defense,” Acta Astronautica, vol. 176, pp. 383–397, Nov. 2020, doi:
10.1016/j.actaastro.2020.06.043.
[36] A. Mardani, A. Jusoh, and E. K. Zavadskas, “Fuzzy multiple criteria decision-making techniques
and applications – Two decades review from 1994 to 2014,” Expert Systems with Applications, vol.
42, no. 8, pp. 4126–4148, May 2015, doi: 10.1016/j.eswa.2015.01.003.
[37] M. Dağdeviren, S. Yavuz, and N. Kılınç, “Weapon selection using the AHP and TOPSIS methods
under fuzzy environment,” Expert Systems with Applications, vol. 36, no. 4, pp. 8143–8151, May
2009, doi: 10.1016/j.eswa.2008.10.016.
[38] W. Pedrycz and F. Gomide, Fuzzy Systems Engineering: Toward Human-Centric Computing. Hoboken, NJ: Wiley-IEEE Press, 2007. [Online]. Available: https://ieeexplore.ieee.org/book/5201525
[39] M. Fernández-Martínez and J. M. Sánchez-Lozano, “Assessment of Near-Earth Asteroid Deflection
Techniques via Spherical Fuzzy Sets,” Advances in Astronomy, vol. 2021, no. 1, p. 6678056, 2021,
doi: 10.1155/2021/6678056.
[40] J. M. Sánchez-Lozano, M. S. García-Cascales, and M. T. Lamata, “GIS-based onshore wind farm
site selection using Fuzzy Multi-Criteria Decision Making methods. Evaluating the case of
Southeastern Spain,” Applied Energy, vol. 171, pp. 86–102, Jun. 2016, doi:
10.1016/j.apenergy.2016.03.030.
[41] “A comparison between fuzzy TOPSIS and VIKOR to the selection of aircraft for airspace defense,” Aug. 2025, doi: 10.1142/S0219622025500348.
[42] J. J. Thakkar, Multi-Criteria Decision Making, vol. 336. in Studies in Systems, Decision and Control,
vol. 336. Singapore: Springer Singapore, 2021. doi: 10.1007/978-981-33-4745-8.
[43] T.-C. Chu and T. B. H. Nghiem, “Extension of Fuzzy ELECTRE I for Evaluating Demand
Forecasting Methods in Sustainable Manufacturing,” Axioms, vol. 12, no. 10, Art. no. 10, Oct. 2023,
doi: 10.3390/axioms12100926.
[44] M. Gul, E. Celik, A. T. Gumus, and A. F. Guneri, “A fuzzy logic based PROMETHEE method for
material selection problems,” Beni-Suef University Journal of Basic and Applied Sciences, vol. 7,
no. 1, pp. 68–79, Mar. 2018, doi: 10.1016/j.bjbas.2017.07.002.
[45] B. Akkaya and C. Kahraman, “A Literature Review on Fuzzy ELECTRE Methods,” in Lecture Notes in Networks and Systems, vol. 758, doi: 10.1007/978-3-031-39774-5_43.
[46] L. Tran and L. Duckstein, “Comparison of fuzzy numbers using a fuzzy distance measure,” Fuzzy
Sets and Systems, vol. 130, no. 3, pp. 331–341, Sep. 2002, doi: 10.1016/S0165-0114(01)00195-6.
[47] P. K. D. Pramanik, S. Biswas, S. Pal, D. Marinković, and P. Choudhury, “A Comparative Analysis
of Multi-Criteria Decision-Making Methods for Resource Selection in Mobile Crowd Computing,”
Symmetry, vol. 13, no. 9, Art. no. 9, Sep. 2021, doi: 10.3390/sym13091713.
[48] M. Narang, A. Kumar, and R. Dhawan, “A fuzzy extension of MEREC method using parabolic
measure and its applications,” J. Decis. Anal. Int. Comp., vol. 3, no. 1, pp. 33–46, Apr. 2023, doi:
10.31181/jdaic10020042023n.
[49] A. Jahan and K. L. Edwards, “A state-of-the-art survey on the influence of normalization techniques
in ranking: Improving the materials selection process in engineering design,” Materials & Design
(1980-2015), vol. 65, pp. 335–342, Jan. 2015, doi: 10.1016/j.matdes.2014.09.022.
[50] C.-T. Chen, “Extensions of the TOPSIS for group decision-making under fuzzy environment,” Fuzzy
Sets and Systems, vol. 114, no. 1, pp. 1–9, Aug. 2000, doi: 10.1016/S0165-0114(97)00377-1.
[51] J. Mazurek and D. Strzałka, “On the Monte Carlo weights in multiple criteria decision analysis,”
PLoS ONE, vol. 17, no. 10, p. e0268950, Oct. 2022, doi: 10.1371/journal.pone.0268950.
[52] F. Sohil, M. U. Sohail, and J. Shabbir, “An introduction to statistical learning with applications in R: by Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani,” Statistical Theory and Related Fields, vol. 6, no. 1, pp. 87–87, Jan. 2022, doi: 10.1080/24754269.2021.1980261.
[53] M. O. Akinwande, H. G. Dikko, and A. Samson, “Variance Inflation Factor: As a Condition for the
Inclusion of Suppressor Variable(s) in Regression Analysis,” Open Journal of Statistics, vol. 5, no.
7, Art. no. 7, Dec. 2015, doi: 10.4236/ojs.2015.57075.
[54] R. R. Wessen, C. Borden, J. Ziemer, and J. Kwok, “Space Mission Concept Development Using
Concept Maturity Levels,” presented at the American Inst. of Aeronautics and Astronautics, Reston,
VA, United States, Sep. 2013. Accessed: Aug. 01, 2025. [Online]. Available:
https://ntrs.nasa.gov/citations/20150007816
[55] M. Tantardini et al., “Asteroid Retrieval Feasibility Study,” Apr. 2012.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The term "boulder-capture mission" is occasionally referred to as "grab-a-boulder scenario" in the text. It is recommended to standardize the terminology throughout the manuscript to avoid reader confusion.

Section 2 already provides a detailed explanation of the methodology. However, some formulae, such as Equation (37), are repeated in Section 3. It is suggested to move such content earlier into the methods section, as Section 3 should primarily serve to describe the decision context and data sources.

Although the use of Triangular Fuzzy Numbers (TFNs) is well explained in the manuscript, the rationale for not adopting other fuzzy number types—such as trapezoidal fuzzy numbers—is not addressed. A brief discussion on the choice of TFNs over alternative representations would strengthen the methodological justification.

Author Response

1. Summary
We sincerely thank you for the time and effort you dedicated to reviewing our manuscript.
Your constructive comments and suggestions have been invaluable in improving the clarity,
rigor, and overall quality of the work. Below, we provide detailed responses to each point,
with the corresponding revisions highlighted in red in the re-submitted files.
2. Point-by-point response to Comments and Suggestions for Authors
Comment 1: The term "boulder-capture mission" is occasionally referred to as "grab-a-boulder
scenario" in the text. It is recommended to standardize the terminology throughout the manuscript to
avoid reader confusion.
Response 1: Thank you for pointing this out. We agree with this comment. Therefore, we have
standardized the terminology throughout the manuscript. Specifically, we have applied the
term "boulder-capture mission" consistently, replacing all occurrences of "grab-a-boulder
scenario". These changes can be found in the revised manuscript on lines 43 and 543:
“The present work focuses on the latter —a boulder-capture mission— which was found […]”
“It is important to note that the four main candidates for the boulder-capture option of the ARM mission […]”
Comment 2: Section 2 already provides a detailed explanation of the methodology. However, some
formulae, such as Equation (37), are repeated in Section 3. It is suggested to move such content earlier
into the methods section, as Section 3 should primarily serve to describe the decision context and data
sources.
Response 2: Thank you for your comment and suggestion regarding the manuscript structure.
We carefully considered your proposal; however, we believe that the current organization—
presenting the MCDM weighting and ranking methods in Section 2 and the decision problem
formulation in Section 3—best reflects the modularity of the proposed framework. This
structure ensures flexibility, allowing alternative weighting and ranking methods to be
applied to the decision problem defined in Section 3, or, conversely, the methods from Section
2 to be used for different decision problems.
While we did not identify a duplication of Equation (37), we acknowledge that its formulation,
as well as that of Equations (17) and (18), could be clarified to prevent potential
misunderstandings. Therefore, we have revised their notation accordingly. These
modifications can be found in the revised manuscript on lines 257, 259 and 387.
Comment 3: Although the use of Triangular Fuzzy Numbers (TFNs) is well explained in the
manuscript, the rationale for not adopting other fuzzy number types—such as trapezoidal fuzzy
numbers—is not addressed. A brief discussion on the choice of TFNs over alternative representations
would strengthen the methodological justification.
Response 3: Thank you for this valuable suggestion. In response, we have expanded the
methodological justification for the use of Triangular Fuzzy Numbers (TFNs) by explicitly
discussing the reasons for selecting TFNs over other fuzzy number types, such as trapezoidal
fuzzy numbers. In particular, we highlight aspects related to computational simplicity,
interpretability, and their suitability for the type of decision-making problem addressed in
this study. Furthermore, we have included several additional references supporting the
widespread use of TFNs in similar contexts. These additions can be found in the revised
manuscript on lines 137-154:
“Fuzzy sets can be defined by different membership functions [1], [2]: triangular, trapezoidal,
sigmoid, gaussian, etc. Among the multiple ways to represent fuzzy sets, triangular fuzzy
numbers are particularly popular in MCDM. A TFN ã is represented by the tuple ã = (a_l, a_m, a_u), where a_l ≤ a_m ≤ a_u correspond to the minimum, modal, and maximum values
of the variable, respectively. This representation is notably advantageous: it captures
uncertainty rigorously while allowing direct and efficient computation, given that all fuzzy
arithmetic operations (addition, subtraction, multiplication, division), can be performed via
simple formulae on the three parameters (for details on TFNs and their arithmetic operations,
see [3]).
In addition, numerous empirical MCDM applications [1], [2], [4], [5], [6] demonstrate that
TFNs deliver robust decision outcomes without excessive modeling burden [7], [8]. The
consensus in the literature is that TFNs offer the most practical trade-off between
interpretability, computational complexity, and fidelity in representing uncertainty [9], [10].
In this study, a combination of real (crisp) variables and fuzzy variables represented as TFNs
is considered. This combination has proven to be an effective approach for handling MCDM
problems involving heterogeneous data types [4], [5], [11], [12]. Consequently, the
mathematical foundations of the MCDM algorithms for the determination of the weights of
the criteria and the assessment of the alternatives must be adapted to their fuzzy
counterparts.”
References:
[1] M. C. F. Bazzocchi, J. M. Sánchez-Lozano, and H. Hakima, “Fuzzy multi-criteria decision-making approach to prioritization of space debris for removal,” Advances in Space
Research, vol. 67, no. 3, pp. 1155–1173, Feb. 2021, doi: 10.1016/j.asr.2020.11.006.
[2] H. Wu and Z. Xu, “Fuzzy Logic in Decision Support: Methods, Applications and Future
Trends,” INT J COMPUT COMMUN, Int. J. Comput. Commun. Control, vol. 16, no. 1, Sep.
2020, doi: 10.15837/ijccc.2021.1.4044.
[3] G. J. Klir and B. Yuan, Fuzzy sets and fuzzy logic: theory and applications. Upper Saddle
River, New Jersey: Prentice Hall PTR, 1995.
[4] J. M. Sánchez-Lozano, A. Moya, and J. M. Rodríguez-Mozos, “A fuzzy Multi-Criteria
Decision Making approach for Exo-Planetary Habitability,” Astronomy and Computing,
vol. 36, p. 100471, Jul. 2021, doi: 10.1016/j.ascom.2021.100471.
[5] J. M. Sánchez-Lozano, J. Serna, and A. Dolón-Payán, “Evaluating military training aircrafts
through the combination of multi-criteria decision making processes with fuzzy logic. A
case study in the Spanish Air Force Academy,” Aerospace Science and Technology, vol. 42,
pp. 58–65, Apr. 2015, doi: 10.1016/j.ast.2014.12.028.
[6] J. M. Sánchez-Lozano, M. Fernández-Martínez, A. A. Saucedo-Fernández, and J. M. Trigo-
Rodriguez, “Evaluation of NEA deflection techniques. A fuzzy Multi-Criteria Decision
Making analysis for planetary defense,” Acta Astronautica, vol. 176, pp. 383–397, Nov. 2020,
doi: 10.1016/j.actaastro.2020.06.043.
[7] A. Mardani, A. Jusoh, and E. K. Zavadskas, “Fuzzy multiple criteria decision-making
techniques and applications – Two decades review from 1994 to 2014,” Expert Systems with
Applications, vol. 42, no. 8, pp. 4126–4148, May 2015, doi: 10.1016/j.eswa.2015.01.003.
[8] M. Dağdeviren, S. Yavuz, and N. Kılınç, “Weapon selection using the AHP and TOPSIS
methods under fuzzy environment,” Expert Systems with Applications, vol. 36, no. 4, pp.
8143–8151, May 2009, doi: 10.1016/j.eswa.2008.10.016.
[9] W. Pedrycz and F. Gomide, Fuzzy Systems Engineering: Toward Human-Centric Computing. Hoboken, NJ: Wiley-IEEE Press, 2007. [Online]. Available: https://ieeexplore.ieee.org/book/5201525
[10] M. Fernández-Martínez and J. M. Sánchez-Lozano, “Assessment of Near-Earth Asteroid
Deflection Techniques via Spherical Fuzzy Sets,” Advances in Astronomy, vol. 2021, no. 1,
p. 6678056, 2021, doi: 10.1155/2021/6678056.
[11] J. M. Sánchez-Lozano, M. S. García-Cascales, and M. T. Lamata, “GIS-based onshore wind
farm site selection using Fuzzy Multi-Criteria Decision Making methods. Evaluating the
case of Southeastern Spain,” Applied Energy, vol. 171, pp. 86–102, Jun. 2016, doi:
10.1016/j.apenergy.2016.03.030.
[12] “A comparison between fuzzy TOPSIS and VIKOR to the selection of aircraft for airspace defense,” Aug. 2025, doi: 10.1142/S0219622025500348.

Author Response File: Author Response.pdf
