Abstract
Space technology, a frontier of global scientific innovation, is crucial for gaining competitive advantages and driving national technological innovation. Amid intensified international competition and rapid technological change, scientifically evaluating a country’s Scientific and Technological Strength in Space Technology (STSST) is vital. This study proposes a novel model, the Analytic Hierarchy Process-Maximum Entropy-Induced Ordered Weighted Average (AHP-ME-IOWA), for the assessment of STSST. First, an STSST assessment indicator system is developed with four sub-dimensions: scientific research, industrial operation, innovation output, and policy resources. Second, the AHP model is used to convert experts’ qualitative judgments on indicator importance into initial individual weight vectors. Subsequently, the IOWA operator is employed to aggregate these individual weight vectors, thereby mitigating the impact of outliers and enhancing the robustness of the weights. Specifically, the weights are reordered using the cosine similarity between each expert’s weight vector and the temporary group mean as the induced value. Position weights are then determined via the ME method, and consensus weights are derived through re-aggregation. A systematic evaluation of the United States’ STSST was conducted using this method. The results show that the United States achieved a comprehensive STSST score of 8.73 (out of 10), which is in line with the actual situation, thereby providing empirical validation for the proposed method.
1. Introduction
In the long journey of human exploration of the cosmos, the space endeavor has always played a pivotal role. As the forefront of global scientific innovation, space technology is key to gaining strategic advantages and serves as a powerful engine propelling national technological innovation and development []. With the intensification of international competition and accelerated technological change, countries’ investment in and dependence on space technology are constantly increasing. In this context, it has become particularly important to accurately measure and continuously monitor a country’s Scientific and Technological Strength in Space Technology (STSST).
In recent years, comprehensive research capacity assessments have garnered significant attention both domestically and internationally. Current research exhibits a trend toward diversification and specialization, with various institutions adopting tailored evaluation frameworks and indicator systems. Methodologically, these studies make extensive use of rigorous quantitative analysis tools, including open-source data mining and extraction [], bibliometric statistics [,], in-depth statistical analysis of patents [], and calculation of collaborative network tightness []. On this basis, global scientific and technological information resources, such as the Web of Science (WoS) database, the Derwent Innovations Index (DII) patent database, and comprehensive scientific and technical literature retrieval platforms, are fully utilized to continuously track the long-term trends and short-term dynamics of global scientific and technological development. These efforts provided substantial inspiration for our research.
Despite these advancements, existing methodologies present notable limitations. On one hand, some studies lean heavily toward qualitative assessments, lacking robust quantitative data support. For instance, reports such as the U.S. Government Accountability Office’s Technology Assessment Design Handbook: Handbook for Key Steps and Considerations in the Design of Technology Assessments [] and the RAND Corporation’s Disrupting Deterrence: Examining the Effects of Technologies on Strategic Deterrence in the 21st Century provide valuable insights but may compromise the objectivity and accuracy of their findings due to insufficient reliance on quantitative metrics []. On the other hand, certain studies overemphasize quantitative analysis, such as KISTEP’s 2019 study using bibliometrics and patent analysis to evaluate South Korea’s technological gaps relative to the U.S., the European Union, Japan, and China [], and the 2022 joint research by Alibaba Research Institute and Zhipu AI, which analyzed global digital technology trends through academic publications, patent filings, and innovation activity levels []. While these approaches offer precise data-driven insights, they often fail to incorporate expert judgment, which may lead to algorithmic bias. This gap leaves the field without a unified framework that balances qualitative expert knowledge with quantitative data rigor, a shortfall that hinders both academic progress in STSST assessment methodologies and practical support for policymakers seeking evidence-based insights into national space technology strength.
This study aims to develop an innovative assessment methodology that integrates the Analytic Hierarchy Process (AHP) model, Induced Ordered Weighted Average (IOWA) operator, Maximum Entropy (ME) method, and a quantified indicator system to systematically evaluate and monitor national STSST, as shown in Figure 1. Methodologically, this framework delivers three critical advances. First, its benchmark indicator system, covering four sub-dimensions (scientific research, industrial operation, innovation output, and policy resources) and 10 specific quantitative indicators, resolves the over-subjectivity of qualitative-only assessments by embedding data-driven precision. Second, the methodology transforms experts’ qualitative evaluations of indicator importance into initial individual weight vectors via AHP, then uses the IOWA operator to aggregate these weights, with cosine similarity (between each expert’s vector and the group mean) as the induced value for reordering. This step uniquely mitigates outlier expert judgments and ensures weight assignments reflect collective, reliable insights. Third, the integration of the ME method to determine position weights adds a layer of rigor missing in existing workflows: it optimizes weight distribution to balance consensus and diverse expert perspectives, further enhancing the robustness of final indicator weights. By integrating the AHP model, ME method, and IOWA operator into a unified analytical framework, the methodology balances the qualitative judgment of domain experts with data-driven rigor, which ensures the rationality of the evaluation outcomes to a certain extent.
Figure 1.
The research framework of the study.
This paper is structured as follows: Section 2 introduces the research methods used to assess STSST, comprising three parts: construction of the evaluation system, calculation of indicator weights based on AHP-ME-IOWA, and determination of evaluation standards. Section 3 presents the case study of the U.S., detailing the implementation steps, operational details, and data application of the methodology elaborated in Section 2. Section 4 summarizes the study, discusses current limitations, and outlines future directions.
2. Research Methods
This section explains the theoretical framework and methodological process of the STSST evaluation in our study. The practical procedures for expert scoring, data collection, and assessment calculation are set out in detail in Section 3.
2.1. Construction of the Evaluation System
When establishing the evaluation indicator system, the rationality of the indicators should be comprehensively considered, and the selection of indicators must be accurate and effective, following the principles of systematicity, simplicity, scientificity, and timeliness []. A thorough review of domestic and international literature on comprehensive research capacity assessments informed the process [,,,,,,,,,,,,,,,,,,,,,,,,,]. Additionally, we conducted adequacy assessments, redundancy analyses, and feasibility checks to confirm that each indicator is necessary. Indicators were carefully selected not only for their contribution to STSST but also for their ability to effectively cover all aspects of STSST.
Based on these foundations, a comprehensive evaluation indicator system was developed to operationalize STSST. Four sub-dimensions are defined at level 2: scientific research (B1), industrial operation (B2), innovation output (B3), and policy resources (B4). This structure is inspired by RAND’s 2022 quantum-technology benchmarking framework, which compares China and the U.S. across four metric categories (research metrics, government activity metrics, private industry metrics, and technical metrics) []; this study adopts the same conceptual logic while tailoring the dimensions to space technology. These sub-dimensions are further subdivided into 10 indicators, numbered accordingly in Table 1.
Table 1.
Evaluation indicator system for STSST.
Typical Laboratories (C1): The basic value is the number of typical laboratories, i.e., representative and advanced laboratories that reflect the level of research infrastructure in the field of space science.
Leading Research Institutions (C2): The basic value is the number of institutions among the top 50 by publication volume in the WoS database. Research institutions that hold a leading position in the field of space science represent the top research level in this area.
Key Industrial Enterprises (C3): The basic value is the number of key industrial enterprises, which directly affect the integrity and maturity of the space industry chain; they are evaluated based on their distribution and influence in the space technology field.
Space Launch Mission (C4): The basic value is the number of successful launches per year. The frequency of space launch missions reflects a country’s activity and capability in space technology applications and space activities.
Journal Publications (C5): The basic value is the number of journal publications according to WoS data, reflecting a country’s academic contribution and research activity in the field of space science.
Highly Cited Journal Publications (C6): The basic value is the number of highly cited papers among ESI core papers. The number of highly cited journal publications is an important criterion for measuring the international impact of research results.
Patents (C7): The basic value is the number of patent records from the DII, reflecting the technological innovation capability and the level of intellectual property protection in this field.
Core Patents (C8): The basic value is the number of patents identified by Price’s Law, reflecting the core technological innovation capability and the depth and breadth of scientific research in this field.
Strategic Policies (C9): The basic value is the number of strategic policies. The government’s emphasis on decision-making in space science development reflects the strategic deployment and policy support of a country or region for space science.
Research Funds (C10): The basic value is the annual total national funding amount, reflecting a country’s financial support and investment intensity in this field.
In summary, these indicators together constitute a comprehensive and detailed quantitative evaluation system, which is crucial for STSST. The calculation method and data sources for each indicator value will be discussed in detail in Section 2.3 and Section 3.1.
2.2. Calculation of Indicator Weights
The calculation of indicator weights is a core step in a multi-indicator evaluation system, as the weight coefficients directly reflect the relative importance of each indicator in the evaluation context. Indicators with higher importance should receive larger weight coefficients, which is crucial for ensuring the scientific rigor and reliability of the final evaluation results. To accurately determine the indicator weights, this section integrates two key methods with complementary advantages. First, the AHP model is adopted to convert a single expert’s qualitative assessments of indicator importance into that expert’s quantitative weight coefficients. Second, considering the complexity of practical multi-indicator evaluation problems and the differences in expert backgrounds, the ME-IOWA method is introduced for consensus adjustment across experts. This combination not only integrates multi-expert opinions but also automatically weakens the impact of outlier judgments, thereby improving the robustness of the final consensus weights.
2.2.1. AHP-Based Expert Weighting
The weight reflects the importance of a factor or indicator in a given context []. In this work, we employed the AHP model to determine the weights of each indicator through expert qualitative assessments. The AHP model, developed by Thomas L. Saaty in the early 1980s [], plays a significant role in many fields [,,]. It assigns weights to indicators using empirical data derived from pairwise comparisons of expert judgments, minimizing bias and including a consistency check.
- Step 1: Construction of the hierarchical model
Based on the AHP model, the STSST is designated as the target layer A. Scientific research, industrial operation, innovation output, and policy resources are designated as the criteria layer B. All 10 indicators (typical laboratories, leading research institutions, key industrial enterprises, space launch mission, journal publications, highly cited journal publications, patents, core patents, strategic policies, and research funds) are designated as the indicator layer C. This structure is used to construct a hierarchical model for the quantitative evaluation indicator system of STSST.
- Step 2: Construction of the judgment matrix
The judgment matrix compares the relative importance of all factors in the current layer with respect to a factor in the previous layer. According to the established hierarchical structure model, the judgment matrix for A-Bi (i = 1, 2, 3, 4) has been constructed. The relative importance ratings among the sub-dimensions were ascertained through a combination of questionnaire surveys and expert consultations, utilizing pairwise comparisons based on the 1–9 scale rating method. bij represents the quantitative value of the degree to which indicator Bi is more important than indicator Bj (j = 1, 2, 3, 4), as shown in Table 2.
Table 2.
AHP model importance scale table.
The judgment matrix for A-Bi is the criterion-layer judgment matrix for the target layer A, which quantifies the relative importance of each factor in the criterion layer (Scientific Research B1, Industrial Operation B2, Innovation Output B3, Policy Resources B4) with respect to the target layer STSST. Both the rows and columns of this matrix correspond to the four criterion-layer factors, as shown in Table 3. Specifically:
Table 3.
The criteria layer judgment matrix for the target A.
- The diagonal elements (e.g., b11 = 1, b22 = 1, etc.) indicate that a factor is equally important to itself, complying with the basic reciprocity rule of the judgment matrix;
- The off-diagonal elements (e.g., b12 = 1/b21, etc.) reflect the quantitative comparison of importance between the row factor and the column factor.
This matrix serves as a key quantitative basis for subsequently calculating the weight of each criterion-layer factor.
Similarly, the indicator layer judgment matrix B1-Ci (i = 1, 2), B2-Ci (i = 3, 4), B3-Ci (i = 5, 6, 7, 8), and B4-Ci (i = 9, 10) can be derived.
- Step 3: Computation of the weight vector
Based on the constructed judgment matrix A-Bi, the maximum eigenvalue λmax and its corresponding eigenvector WA = (ωA1, ωA2, ωA3, ωA4)T are calculated. In this step, each column vector of the judgment matrix is normalized. Subsequently, the normalized column vectors are summed on a row-by-row basis. Finally, the resulting row sums are normalized once more to yield an approximate eigenvector, which serves as the weight vector of the judgment matrix. Here, ωAi represents the weight of criterion Bi with respect to the overall target A, the quantitative evaluation indicator system for STSST. For the judgment matrix A-Bi (i = 1, 2, 3, 4), the maximum eigenvalue λA is then calculated:
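\[
\lambda_A \approx \frac{1}{n}\sum_{i=1}^{n}\frac{(A W_A)_i}{\omega_{Ai}},\qquad n=4
\]
Here, \((A W_A)_i\) denotes the i-th component of the product of the judgment matrix and the weight vector; this is the standard AHP estimate of the maximum eigenvalue, stated here for completeness.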
Similarly, for the judgment matrices B1-Ci (i = 1, 2), B2-Ci (i = 3, 4), B3-Ci (i = 5, 6, 7, 8), and B4-Ci (i = 9, 10), the maximum eigenvalues λB1, λB2, λB3, and λB4 are calculated in the same way.
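As an illustration, a minimal Python sketch of this weight computation is given below (column normalization, row summation, renormalization, and the λmax estimate). The judgment matrix shown is a hypothetical example, not one of the expert matrices in Section 3:

```python
import numpy as np

def ahp_weights(J):
    """Approximate the principal eigenvector of a judgment matrix J by
    column normalization followed by row summation and renormalization,
    then estimate the maximum eigenvalue lambda_max."""
    J = np.asarray(J, dtype=float)
    col_normalized = J / J.sum(axis=0)   # normalize each column
    w = col_normalized.sum(axis=1)       # sum the normalized columns row by row
    w /= w.sum()                         # renormalize to obtain the weight vector
    lam_max = np.mean((J @ w) / w)       # standard lambda_max estimate
    return w, lam_max

# Hypothetical 4x4 criteria-layer judgment matrix (1-9 scale, reciprocal)
J = [[1,   2,   1/2, 3],
     [1/2, 1,   1/3, 2],
     [2,   3,   1,   4],
     [1/3, 1/2, 1/4, 1]]
w, lam = ahp_weights(J)
print(w, lam)
```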
- Step 4: Consistency verification
In the AHP process, a consistent matrix represents the ideal state of a judgment matrix, reflecting complete rationality and consistency in the decision-maker’s judgments. In practical applications, constructing a fully consistent judgment matrix is generally impractical, due to the complexity of real-world scenarios and the inherent uncertainty in human judgments. Therefore, the consistency degree of judgments is evaluated by comparing the differences between the actual judgment matrix and the consistent matrix, so as to ensure that the judgment matrix remains within an acceptable range of inconsistency.
Theorem 1.
If matrix A is a consistent matrix, its maximum eigenvalue λmax = n, where n denotes the order of matrix A. All other eigenvalues of A are 0.
Theorem 2.
An n-order positive reciprocal matrix is a consistent matrix if and only if its maximum eigenvalue λmax = n. Moreover, if the positive reciprocal matrix is inconsistent, its maximum eigenvalue must satisfy λmax > n.
Based on the above theorems, after obtaining the calculated λmax, the value of the consistency indicator CI is defined as:
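\[
CI = \frac{\lambda_{\max}-n}{n-1}
\]
(This is Saaty’s standard definition, consistent with the interpretation below.)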
When CI equals 0, it indicates perfect consistency, while a larger CI value suggests greater inconsistency. During this process, we utilize the random consistency indicator RI, developed by Thomas L. Saaty, as shown in Table 4. The RI is obtained by randomly constructing 1000 positive reciprocal matrices and averaging their consistency indicators.
Table 4.
Table of RI values.
The ratio of CI to the RI is used as the criterion for assessing consistency, specifically:
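\[
CR = \frac{CI}{RI}
\]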
The CR, or consistency ratio, is used to evaluate the consistency of the judgment matrix. If the CR value is less than 0.1, the judgment matrix passes the consistency verification. If the CR value is 0.1 or higher, the judgment matrix fails the verification and must be revised to improve its consistency. Therefore, for the judgment matrix A-Bi (i = 1, 2, 3, 4), the consistency verification succeeds if the consistency ratio CRA is less than 0.1; the same applies to the judgment matrices B1-Ci (i = 1, 2), B2-Ci (i = 3, 4), B3-Ci (i = 5, 6, 7, 8), and B4-Ci (i = 9, 10). If the consistency verification is successful, the corresponding weights can be confirmed.
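A minimal sketch of this verification step is shown below. The RI values are the commonly cited Saaty values for orders 1–10 and should be read against Table 4; the example reproduces the CI and CR reported for the expert matrix in Table 8:

```python
# Commonly cited Saaty RI values for matrix orders 1-10
# (published tables differ slightly; verify against Table 4).
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_check(lam_max, n, threshold=0.1):
    """Return (CI, CR, passed); for n <= 2 consistency holds trivially."""
    ci = (lam_max - n) / (n - 1) if n > 2 else 0.0
    cr = ci / RI[n] if RI[n] > 0 else 0.0
    return ci, cr, cr < threshold

# Example values from Table 8: lambda_max = 4.021, n = 4 -> CI ~ 0.007, CR ~ 0.008
print(consistency_check(4.021, 4))
```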
2.2.2. ME-IOWA Consensus Adjustment
With the continuous increase in the complexity of practical multi-indicator evaluation problems, relying on only one expert in the evaluation process can affect the accuracy of the final weights. Moreover, experts differ in their background experience and research directions, so multiple opinions should be integrated. Therefore, in the evaluation process, a team composed of multiple experts is usually formed to evaluate decision-making problems. Our method introduces the IOWA operator [], which was proposed by Ronald R. Yager and is an extended form of the Ordered Weighted Averaging (OWA) operator. First, the cosine similarity between each expert’s weight vector and the group mean is used as the induced value, and the weights are reordered according to the level of similarity; then, the position weights are determined by the ME method, and the reordered weights are aggregated. Thus, while integrating multiple judgments, the method automatically weakens the impact of outliers and improves the robustness of the consensus weights.
- Step 1: Calculation of the temporary group weights
The temporary group weight vector is calculated by aggregating the individual weight vectors provided by all experts. Specifically, assuming there are n experts involved in the evaluation process, and each expert k (where k = 1, 2, …, n) provides a weight vector Wk = (ωk,1, ωk,2, …, ωk,m)T corresponding to m evaluation indicators, the temporary group weight vector is determined by computing the arithmetic mean of the individual weight vectors across the expert dimension, expressed as:
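\[
\bar{W} = \frac{1}{n}\sum_{k=1}^{n} W_k
\]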
This temporary group weight serves as an initial reference for subsequent consensus adjustment, reflecting the aggregated tendency of multi-expert judgments on indicator importance.
- Step 2: Calculation of the induced values
In our method, the cosine similarity s is introduced to measure the similarity between each expert’s weight vector and the temporary group weight vector. The cosine similarity is a widely used metric in vector space models, where a value closer to 1 indicates a higher degree of similarity between two vectors. The cosine similarity is selected to quantify the “distance” between individual expert vectors and the mean vector because it focuses on directional alignment rather than magnitude differences. This characteristic is particularly suitable for expert weight consensus measurement, as the core of weight assignment lies in the relative importance order of indicators (direction) rather than the absolute weight values (magnitude). The application of this approach is supported by existing literature. For instance, Ren et al. integrated cosine similarity into the normal cloud multi-criteria group decision-making problem [] and calculated the consensus degree of the group through cosine similarity to determine the degree of consensus among experts’ decision-making opinions. Mathematically, the cosine similarity sk between the k-th expert’s weight vector Wk and the temporary group weight vector is defined as:
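\[
s_k = \frac{W_k\cdot\bar{W}}{\lVert W_k\rVert\,\lVert\bar{W}\rVert}
= \frac{\sum_{j=1}^{m}\omega_{k,j}\,\bar{\omega}_j}
{\sqrt{\sum_{j=1}^{m}\omega_{k,j}^{2}}\;\sqrt{\sum_{j=1}^{m}\bar{\omega}_j^{2}}}
\]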
Here, ‖Wk‖ and ‖W̄‖ represent the Euclidean norms (L2 norms) of Wk and the temporary group weight vector W̄, respectively; each is calculated as the square root of the sum of the squares of the elements of the corresponding m-dimensional vector.
By calculating the cosine similarity s for each expert’s weight vector, we obtain a set of induced values that reflect the degree of alignment between individual expert judgments and the initial group consensus. These induced values will then be used in the subsequent step to reorder the expert weight vectors, which is a crucial part of the consensus adjustment process in the IOWA method.
- Step 3: Arrangement of the weights by similarity
The expert weight vectors are reordered based on the cosine similarity values {s1, s2, …, sn} calculated in Step 2. The core idea is that the higher the similarity between an expert’s weight vector and the temporary group weight vector, the further forward the position of the corresponding weight vector will be in the reordered sequence. This is because, in the subsequent OWA operation, positions closer to the front are associated with larger weight coefficients, meaning that expert judgments with higher similarity (and thus higher consistency with the group consensus) will have a greater influence on the final consensus weight.
Specifically, the reordering process is implemented as follows:
First, the expert weight vectors are sorted in descending order of their corresponding cosine similarity values s. That is, the weight vector with the highest s is placed at the first position p1, the one with the second-highest s is placed at p2, and so on, until the weight vector with the lowest s is placed at pn.
In cases where there is a tie in similarity values (for example, if the similarity values of expert 1 and expert 6 are both 0.9000), the order of these tied expert weight vectors can be random. In this specific context, to ensure a deterministic process, the tied weight vectors are ordered according to the order in which the experts were originally listed (i.e., the order of appearance in the expert group).
Through this reordering, the weight vectors are arranged in a sequence where those reflecting judgments more consistent with the group consensus are prioritized. This sequence {p1, p2, …, pn} will then be used in the next step to calculate the final consensus weight, leveraging the ordered structure to appropriately weight the expert judgments.
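Steps 1–3 can be summarized in a short Python sketch (function and variable names are illustrative):

```python
import numpy as np

def reorder_by_similarity(expert_weights):
    """Steps 1-3: compute the temporary group mean, the cosine similarity
    of each expert's weight vector to that mean, and reorder the vectors
    in descending similarity (ties broken by original expert order)."""
    W = np.asarray(expert_weights, dtype=float)   # shape: (n_experts, m_indicators)
    group_mean = W.mean(axis=0)                   # Step 1: temporary group weights
    sims = (W @ group_mean) / (                   # Step 2: cosine similarities
        np.linalg.norm(W, axis=1) * np.linalg.norm(group_mean))
    sims = np.round(sims, 4)                      # rounded to 4 decimals as in Sec. 3.2
    order = np.argsort(-sims, kind="stable")      # Step 3: stable sort keeps tie order
    return W[order], sims, order
```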
- Step 4: Calculation of the consensus weights by ME
In this step, the rank-based ME method is adopted to determine the position weight vector Wp = (ωp1, ωp2, …, ωpn)T, which is further used to aggregate the reordered expert weight vectors {p1, p2, …, pn} and obtain the final consensus weight vector. The core logic of this method is to assign larger position weights to the expert weight vectors that are more consistent with the group consensus (i.e., ranked earlier in Step 3), with weight values decreasing in reverse order of the ranking. This design ensures that judgments with higher similarity to the group opinion have a greater influence on the final consensus result, while still incorporating the information of all expert judgments.
The degree of orness associated with the Wp is defined as:
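\[
\mathrm{orness}(W_p) = \frac{1}{n-1}\sum_{i=1}^{n}(n-i)\,\omega_{pi}
\]
(This is Yager’s standard definition, reproduced here for reference.)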
where orness(Wp) = α ∈ [0, 1] is a measure introduced by Yager, which can also be interpreted as the mode of decision-making in the aggregation process for the weighting vector. The second characterizing measure introduced by Yager is the dispersion of the aggregation. The dispersion of Wp is defined as:
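\[
\mathrm{disp}(W_p) = -\sum_{i=1}^{n}\omega_{pi}\ln\omega_{pi}
\]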
The principle of ME is integrated with IOWA operators to obtain the Wp, which is designed to attain ME under a preset level of orness. The mathematical programming approach is described as:
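\[
\begin{aligned}
\max\;& -\sum_{i=1}^{n}\omega_{pi}\ln\omega_{pi}\\
\text{s.t.}\;& \frac{1}{n-1}\sum_{i=1}^{n}(n-i)\,\omega_{pi}=\alpha,\qquad
\sum_{i=1}^{n}\omega_{pi}=1,\qquad \omega_{pi}\ge 0
\end{aligned}
\]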
The Lagrange multiplier method is applied to the IOWA operator equation to derive a polynomial equation, which is the key tool for determining the Wp that satisfies the maximal entropy criterion. Wp can be obtained by:
and:
then:
Under this approach, the position weight Wp is calculated by assigning different values to the orness parameter (α = 0.5, 0.6, 0.7, 0.8, 0.9, 1.0). Table 5 illustrates the position weights under maximal entropy when n is 10.
Table 5.
The position weights under maximal entropy (n = 10).
When α is 0.5, the position weights tend to be more evenly distributed, which treats each expert’s input with roughly equal importance, weakening the distinction between higher-ranked and lower-ranked experts. As α increases, the weight for the top-ranked position (held by the expert most consistent with the group) becomes dominant, signifying that the scoring of this top-ranked expert carries much greater weight, while the inputs of lower-ranked experts are largely sidelined. The experts with consistent judgments will be prioritized to ensure the reliability of weights, while also retaining non-negligible weights for less consistent experts to avoid missing valuable niche insights [].
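For readers who wish to reproduce Table 5, the position weights can also be obtained numerically. The sketch below maximizes entropy under the orness constraint using scipy; it is a numerical stand-in for the analytic Lagrange-multiplier solution referenced above, not the derivation used in the original computation:

```python
import numpy as np
from scipy.optimize import minimize

def me_owa_weights(n, alpha):
    """Numerically solve the maximal-entropy OWA position weights for a
    given orness level alpha by maximizing entropy subject to the orness
    and normalization constraints."""
    pos = np.arange(n)                       # positions 1..n encoded as 0..n-1
    orness_coef = (n - 1 - pos) / (n - 1)    # coefficients (n - i) / (n - 1)

    def neg_entropy(w):
        w = np.clip(w, 1e-12, None)          # avoid log(0)
        return np.sum(w * np.log(w))         # minimizing this maximizes entropy

    constraints = [
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},
        {"type": "eq", "fun": lambda w: orness_coef @ w - alpha},
    ]
    w0 = np.full(n, 1.0 / n)                 # start from the uniform vector
    res = minimize(neg_entropy, w0, bounds=[(0, 1)] * n,
                   constraints=constraints, method="SLSQP")
    return res.x

# Approximately reproduce one column of Table 5 (n = 10, alpha = 0.7)
print(me_owa_weights(10, 0.7).round(4))
```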
2.3. Determination of Evaluation Standards
In the evaluation process of the indicators, different membership functions can lead to different results, which affects the credibility of the evaluation []. This study proposes a framework for constructing membership functions tailored to the distribution characteristics of space technology indicators. Notably, indicators such as leading research institutions, space launch missions, journal publications, highly cited journal publications, patents, and core patents exhibit a significantly right-skewed global distribution, with a small number of space-faring nations accounting for the majority of the top values. Under such distributions, traditional linear membership functions tend to cause “score saturation” for high-value indicators and “discrimination loss” for medium-to-low values. To address this, a power exponential membership function, inspired by the fuzzy comprehensive evaluation method, is adopted, leveraging its nonlinear mapping to preserve discriminability across all value ranges: it amplifies subtle differences among top performers while preventing undue compression of lower values, ensuring the evaluation remains objective, robust, and reflective of the actual global space technology hierarchy. The value f(i) of indicator i can be represented as:
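\[
f(i) = \left(\frac{a}{t}\right)^{k}
\]
(This form is reconstructed from the variable definitions below; the mapping onto the ten-point scale described in Section 3.3 is assumed to be a direct scaling by 10.)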
Here, for each indicator item, a represents the actual value of the evaluated country, t represents the total global actual value, and k is the exponent of the membership function. The theoretical basis of this formula is the power-law transformation method of probability distributions [], which flexibly controls the convexity and growth rate of the membership function by adjusting the parameter k. Experimental verification showed that k = 0.2 is the optimal value: the function then exhibits concave growth, which effectively compresses the marginal effects of high-value indicators and makes the evaluation results more realistic.
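A brief numerical illustration, using the U.S. journal-publication figures from Section 3.1 (the factor of 10 reflects the assumed ten-point scaling noted above):

```python
def membership(a, t, k=0.2):
    """Power exponential membership: the country's share of the global
    total raised to k. k = 0.2 is the experimentally verified value from
    the text; the ten-point scaling is an assumption based on Section 3.3."""
    return 10 * (a / t) ** k

# U.S. journal publications (Section 3.1): 16,608 of 54,406 worldwide
print(round(membership(16_608, 54_406), 2))   # ~7.89 on the ten-point scale
```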
For other indicators, a scoring standard was established by integrating quantitative data with expert judgment, as shown in Table 6. This standard converts precise values into fuzzy evaluation intervals by constructing a segmented mapping rule for indicator membership degrees.
Table 6.
Indicator scoring table.
Guided by Table 6, each indicator is scored and evaluated by industry experts, and then the evaluation results f(i) of indicator i can be represented as:
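\[
f(i) = \sum_{n=1}^{N}\omega_n\, f(i)_n
\]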
Here, f(i)n represents the score given by an expert for the indicator, and ωn represents the weight of that expert. In this study, experts were assigned equal weights during the scoring process for these indicators.
3. Case Study
As a traditional powerhouse in space technology, the U.S. is selected as a practical case to validate the proposed method through the evaluation of its STSST. There are three key reasons: (1) It is a long-standing global leader in space technology, with mature research infrastructure, a robust industrial chain, and sustained policy support, providing sufficient and high-quality data for empirical validation; (2) Its STSST has been widely discussed in existing literature, enabling cross-method comparisons to verify our model’s validity; (3) Insights from evaluating the U.S. can serve as a benchmark for other countries’ space technology development, enhancing the study’s practical value.
3.1. Data Description
To ensure the quality and accuracy of this work, the data on journal publications and patents used for this method were collected from the WoS database and the DII patent database. The WoS Core Collection is currently recognized as an authoritative citation indicator database, while the DII patent database is acknowledged worldwide for its high authority in patent literature. Based on a thorough investigation of the current status of scientific research and knowledge application in space technology [,,], we determined the search strategy for literature and patents. Data from the past decade are more current and representative, better reflecting the field’s development trends and hotspots []. Considering the latency of paper publication and patent authorization, journal publications and patents published between 1 January 2014 and 31 December 2023 were selected as the objects of analysis. The retrieval was conducted on 8 January 2025.
Typical laboratories: based on available information, 17 well-known typical laboratories can be identified in the U.S., which are primarily categorized into three types: nationally funded laboratories, such as the Air Force Research Laboratory under the Department of Defense; NASA laboratories, such as the Ames Research Center and the Armstrong Flight Research Center; and university laboratories, such as the Lincoln Laboratory at the Massachusetts Institute of Technology.
Leading research institutions: the top 50 institutions in terms of publication volume in the WoS database can be considered as research leaders in the field. This perspective is not solely based on the number of papers published but also on their profound influence in academia and industry and their contributions to technological advancements. When analyzing the top 50 institutions by publication count retrieved from the WoS, it becomes clear that the U.S. enjoys a significant numerical advantage in space technology, with 25 institutions making it into the top 50. These institutions encompass national-level space research organizations like NASA, leading universities, and research divisions of private enterprises.
Key industrial enterprises: As the only enterprise consortium of the U.S. Department of Defense in space technology, the enterprises covered by the Space Enterprise Consortium (SpEC) can be used for analyzing the strength of key industrial enterprises. The U.S. Space Force relies on the SpEC to accelerate the research and development process of space technologies and equipment []. As of now, the SpEC has more than 750 members, focusing on the development of space technology []. Among them, there are not only well-known large-scale space enterprises in the U.S., such as SpaceX, Boeing, and so on, but also a large number of early-stage innovative and start-up commercial space enterprises and academic research institutions.
Space launch mission: In 2024, there were a total of 259 orbital launch attempts, marking a 17% increase from the previous record of 221 attempts in 2023, according to open-source data. This figure excludes suborbital launches, such as the four test flights of SpaceX’s Starship/Super Heavy and the two suborbital launches of Rocket Lab’s Electron HASTE variant. Globally, 251 of these orbital launches were successful, and the U.S. accounted for 153 of them [].
Journal publications: According to WoS data, a total of 54,406 papers were published worldwide over the past decade, with 16,608 from the U.S., ranking first globally. The U.S. has demonstrated high levels of activity and productivity in academic output within space technology. From 1256 papers in 2014 to 1789 papers in 2023, the U.S. has shown a fluctuating but overall upward trend in paper publication numbers. The performance of the U.S. becomes even more prominent when considering key metrics such as average citations per paper (28.24) and H-index (203).
Highly cited journal publications: This indicator analyzes the highly cited paper data from ESI core papers in space technology as the research basis. Through manual cleaning and deduplication, a total of 594 highly cited papers were generated from 2014 to 2023, of which 314 were highly cited papers by the U.S. researchers.
Patents: The number of patents is a crucial indicator of a country’s technological innovation capability and industrial competitiveness. According to data from the DII, there are 142,140 records worldwide between 2014 and 2023. The U.S. has a total of 31,602 patent records in space technology, with 55% representing new patent applications and 45% successfully granted, which likely reflects the high efficiency and quality of U.S. space technology research and development. The U.S. patent count is not far ahead globally, which may be due to the limitations that the national defense intellectual property confidentiality management system places on patent applications by American space technology companies.
Core patents: Typically, if a patent document is frequently cited, it indicates that the patent has a significant influence on subsequent research and is likely to be a foundational or core patent in the field. This work employs Price’s Law to identify core patents. Price’s Law states that within a particular subject area, half of the papers are written by a group of highly productive authors, and the number of authors in this group is approximately equal to the square root of the total number of authors. In the context of patent research, we can draw an analogy between authors and patents based on their shared adherence to the law of “a few core elements dominating overall contributions”. Specifically, the role of “number of papers” in measuring an author’s academic contribution in the academic field is equivalent to the role of “citation frequency” in evaluating a patent’s technical influence in the patent field. This analogy enables the rapid identification of core elements through quantitative thresholds, significantly reducing the cost of identifying core patented technologies. Defining Nmax as the maximum citation frequency, in the DII, the maximum citation frequency retrieved is 1759. By substituting this value into Price’s Law [], the threshold for the number of core patent citations M can be expressed as:
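\[
M = 0.749\sqrt{N_{\max}} = 0.749\times\sqrt{1759}\approx 31.4
\]
(The coefficient 0.749 is the standard Price’s Law threshold constant; the result matches the cutoff of 31 citations stated below.)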
As a result, patents with a citation frequency of 31 or more are considered core patents. In the DII, a total of 3676 core patents were identified, with U.S. institutions contributing 3280.
Strategic policies: The space strategy of the U.S. is an important component of its overall national security strategy and global strategy, and it plays a significant role in promoting the sustainable development of the space industry. Strategic thinking about the U.S. space industry began in the 1950s. At different historical stages, successive U.S. administrations have continuously revised and improved their space strategies and policies in response to changes in global politics, economics, military affairs, and other factors, providing top-level guidance for the development of the space industry at the national level and clarifying future development directions and priorities. As of 2024, there are over 145 relevant space strategies.
Research funds: The U.S. Fiscal Year 2024 Defense Authorization Act provides a total of $30.1 billion in funding for the Department of Defense’s Space Force []. In addition, NASA’s budget is $24.875 billion, the budget for the OSC under NOAA is $65 million, and the budget for the FAA Office of Commercial Space Transportation is $42 million. The total of the above is $55.082 billion [].
To ensure that the data is suitable for our evaluation framework, we have applied specific standards to transform the raw data, as detailed in Section 2.3. This transformation process is crucial for ensuring the accuracy and reliability of our subsequent analysis. The transformed data is then used in Section 3.3 for the calculations and evaluations that form the basis of our results.
3.2. Indicator Weights Based on AHP-ME-IOWA
The selection of experts is crucial to obtaining an accurate judgment matrix []. This study selected 10 experts from the China Aerospace Academy of Systems Science and Engineering to assign weights to STSST, all of whom have profound professional experience in space technology. The “Questionnaire on Indicator Weights of Scientific and Technological Strength in Space Technology”, provided in the Supplementary Materials, was distributed for this purpose. Details of the experts are shown in Table 7. These experts have the following characteristics: (1) working and researching in space technology for over 5 years; (2) having rich experience or knowledge of assessment in space technology. The extensive research and work experience of the experts helps them construct accurate judgment matrices. Each participant was asked to score the evaluation criteria using a 1-to-9 rating scale. The questionnaire included an importance scale table to guide respondents in providing consistent and rational pairwise comparisons across multiple criteria. In addition, a hierarchical structure of the STSST was provided to help experts accurately understand the analytical framework and evaluation logic. Ten questionnaires were distributed, and all were effectively recovered, achieving a 100% recovery rate, sufficient for analysis.
Table 7.
Details of the experts.
After aggregating all expert scores from the questionnaires, we strictly followed the procedure described in Section 2.2 to calculate the weights. First, ten pairwise comparison matrices were constructed from the valid questionnaires to derive each expert’s indicator weights. Then, the consistency test was conducted: the CR values of all experts’ matrices are below 0.1, satisfying the consistency requirement and ensuring the reliability of the subsequent calculations. Since the AHP calculation process is consistent across the 10 experts, only the complete AHP calculation of one expert is presented herein, to ensure conciseness while maintaining methodological transparency.
Table 8 presents the criteria layer judgment matrix for target A, which quantitatively reflects the pairwise comparison results of the importance among four aspects: scientific research B1, industrial operation B2, innovation output B3, and policy resources B4 in the STSST. With the maximum eigenvalue of 4.021, CI of 0.007, and CR of 0.008 (less than 0.1), the matrix passes the consistency test, indicating that the pairwise comparison results are reasonable and reliable.
Table 8.
The criteria layer judgment matrix for the target A with actual values.
Table 9, Table 10, Table 11 and Table 12 present the judgment matrices for scientific research B1, industrial operation B2, innovation output B3, and policy resources B4, along with their respective consistency verification results. All these judgment matrices have successfully passed the consistency test, as shown in the tables. This indicates that the pairwise comparison results within each matrix are logically consistent and reliable, thereby providing a solid foundation for subsequent analytical work.
Table 9.
Judgment matrix for scientific research B1 with the actual value.
Table 10.
Judgment matrix for industrial operation B2 with the actual value.
Table 11.
Judgment matrix for innovation output B3 with the actual value.
Table 12.
Judgment matrix for policy resources B4 with the actual value.
Next, using the group-mean weight vector as the reference, the cosine similarity between each expert’s weight vector and this reference is computed. With the results rounded to 4 decimal places, the weight vectors of the experts are reordered from high to low similarity. Experiments indicate that when α (orness) is set to 0.7, the position weights best meet the practical requirements: this setting preserves the IOWA ranking logic by giving more weight to higher-ranked (more consistent) experts, while still retaining non-negligible weights for lower-ranked experts to avoid missing valuable insights, thus striking a balanced integration of multi-expert judgments. The final consensus weight is calculated from the induced ordered weights and the position weights determined by the ME method. All the data in this process are listed in Table 13, Table 14, Table 15, Table 16 and Table 17.
Table 13.
Expert weights and consensus weight values of sub-dimensions.
Table 14.
Expert weights and consensus weight values of B1 indicators.
Table 15.
Expert weights and consensus weight values of B2 indicators.
Table 16.
Expert weights and consensus weight values of B3 indicators.
Table 17.
Expert weights and consensus weight values of B4 indicators.
Figure 2 presents the weights calculated by AHP-ME-IOWA for each indicator within the indicator system. These weights are indicative of the perceived importance of each indicator in the overall evaluation framework. In Figure 2a, the vertical axis quantifies the weight values, which range from 0 to 1, with higher values suggesting greater significance in the assessment process, and the horizontal axis lists the sub-dimensions. In Figure 2b–e, the weight distribution of the indicators within each sub-dimension is presented in percentage form.
Figure 2.
Consensus Weights calculated by AHP-ME-IOWA: (a) Weights of indicators in A; (b) Weights of indicators in B1; (c) Weights of indicators in B2; (d) Weights of indicators in B3; (e) Weights of indicators in B4.
3.3. Indicator Values and Aggregation
Based on the membership function Equation (15), Table 6, and Equation (16), all found in Section 2.3, the raw values of each indicator are mapped to the evaluation standard to obtain the corresponding calculated values. For leading research institutions, space launch missions, journal publications, highly cited journal publications, patents, and core patents, the calculated values are obtained by substituting U.S. data and global data into the membership function Equation (15); for typical laboratories, key industrial enterprises, strategic policies, and research funds, the calculated values are derived from the evaluation results given by 10 experts (the same experts who scored in the AHP) under the guidance of Table 6 and Equation (16). The indicator data were normalized to a scale of ten, with scores retained to two decimal places. These normalized scores were then multiplied by the weights of the respective evaluation indicators, and the final scores for each dimension were calculated using the weighted average method. The specific scores for the main dimension, sub-dimensions, and indicators are summarized in Table 18. Overall, the comprehensive scientific research capability value of the U.S. in space technology is 8.73.
Table 18.
Values of the quantitative evaluation indicator system for STSST.
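For illustration, the bottom-up weighted aggregation can be sketched as follows; the scores and weights in the example are hypothetical placeholders rather than the values in Table 18:

```python
import numpy as np

def aggregate(scores, indicator_weights, dimension_weights):
    """Weighted-average aggregation: indicator scores -> sub-dimension
    scores -> comprehensive STSST score."""
    dim_scores = {
        dim: float(np.dot(scores[dim], w)) for dim, w in indicator_weights.items()
    }
    total = sum(dim_scores[dim] * dimension_weights[dim] for dim in dim_scores)
    return dim_scores, total

# Hypothetical example with two sub-dimensions of two indicators each
scores = {"B1": [8.5, 9.0], "B2": [7.8, 8.2]}
indicator_weights = {"B1": [0.6, 0.4], "B2": [0.5, 0.5]}
dimension_weights = {"B1": 0.55, "B2": 0.45}
print(aggregate(scores, indicator_weights, dimension_weights))
```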
3.4. Comparison and Discussion
On one hand, to verify the sensitivity of the AHP-ME-IOWA model to the core parameter orness (α), this study selected a total of 6 sets of decision preference coefficients, namely α = 0.5, 0.6, 0.7, 0.8, 0.9, and 1.0, covering the entire range from “fully averaged decision (α = 0.5)” to “extremely consensus-based decision (α = 1.0)”. Based on the position weights corresponding to each set of α values, the weights of the secondary dimensions, the weights of the tertiary indicators, and the comprehensive score in the assessment of the United States’ STSST were recalculated as shown in Table 19, to analyze the impact of α value changes on the assessment results.
Table 19.
Weight values and STSST results under different α.
The results of the sensitivity analysis show that under the 6 sets of α values, the comprehensive score of the United States’ STSST ranges from 8.7182 to 8.7377 (out of 10 points), a range of only 0.0195 points. The weight ranking of the secondary dimensions remains stable without significant fluctuations, and the weight fluctuations of the core tertiary indicators are all less than 3%, indicating that the model has extremely low sensitivity to changes in the α value. This stability is partly due to the fact that the 10 experts all come from the same institution and have similar cognitive backgrounds and evaluation logics in the space field, resulting in small differences in the initial weight vectors (with cosine similarities all greater than 0.9775). In addition, the evaluation indicators are all core explicit indicators of Scientific and Technological Strength in Space Technology (such as the number of typical laboratories and the proportion of core patents), whose importance conforms to industry consensus. Moreover, the ME-IOWA operator balances consensus and diversity through the maximum entropy principle, the power exponential membership function compresses the marginal differences in high-value indicators, and the absolute advantage of the United States in the space field forms a “stable basic framework”. Together, these factors weaken the impact of α value changes on the results.
On the other hand, the STSST of the U.S., Russia, and Japan has been evaluated by our method as well as by the AHP-Delphi and AHP-CIE methods. The AHP-Delphi method [] integrates the AHP model with the Delphi method: it determines the weights of indicators through expert consultation and the evaluation criteria through bibliometric analysis to assess the STSST of various countries. The AHP-CIE method [] employs the AHP and the comprehensive indicator evaluation method, integrating qualitative methods based on expert experience with quantitative methods based on mathematical statistics, to quantify and assess STSST. In the method comparison experiment, all three methods used the same indicator system proposed in Table 1 as well as the same expert scores. The comparative results of the three methods for the three countries are shown in Table 20.
Table 20.
Scores and rankings of evaluation utility values.
Table 20 indicates that our method assigned scores of 8.73, 3.56, and 3.22 to the U.S., Russia, and Japan, respectively, which aligns with the actual global development situation [], suggesting a certain degree of feasibility in our approach. The AHP-Delphi method gave scores of 8.96, 2.92, and 3.46 to the U.S., Russia, and Japan, respectively, while the AHP-CIE method assigned scores of 9.05, 4.10, and 3.58. The difference in ranking produced by the AHP-Delphi method may be due to its greater reliance on expert consultation, where subjective judgments can influence the final rankings. In all methods, the U.S. ranks first, while Russia and Japan have relatively close scores. The general trend of STSST generated by our AHP-ME-IOWA model is consistent with the global space technology strength landscape reported in the existing literature [,], further validating the reliability of our results.
4. Conclusions
This paper presents an innovative model that combines the AHP model, IOWA operator, ME method, and a quantitative indicator system to systematically evaluate and continuously track STSST. The key strength of the model lies in its structured approach to weight determination: AHP first converts experts’ qualitative judgments on indicator importance into initial individual weight vectors, addressing the challenge of quantifying subjective professional insights that purely data-driven methods often overlook. The ME-IOWA operator then enhances this process with two critical improvements. First, the cosine similarity between each expert’s weight vector and the temporary group mean is used as an induced value to reorder individual weights, ensuring that the weights align with the collective expert consensus while preserving diverse but rational perspectives. Second, the ME method is employed to calculate position weights for the reordered expert weight vectors, a design that effectively weakens the distorting impact of outlier judgments under the given constraints, thus enhancing the robustness of the final composite weights. In addition, by adopting internationally recognized cross-border comparison indicators, the method provides a dynamic indicator system that can adapt to the rapid development of space technology. The overall research framework balances the objectivity of quantitative data with the depth of expert judgment. A systematic evaluation of the scientific research capabilities of the U.S. was conducted through this method, providing an initial empirical validation of the proposed approach.
The following discussion will explore the limitations of our method to provide a balanced perspective. Firstly, due to data lag—particularly for space technology patents and scientific publications sourced from the WoS and DII—the dataset may not fully capture the latest activities of emerging space-faring nations, whose rapid technological progress could be underestimated. Secondly, the 10 experts surveyed in this case study are all from the China Aerospace Academy of Systems Science and Engineering; while they hold rich professional knowledge in space technology, this single-institution selection may lead to homogeneous judgments on indicator weights, failing to incorporate diverse views from other sectors like universities or commercial space enterprises. Additionally, the use of a single aggregated value for each indicator overlooks the unique influences of specific entities within each indicator category, limiting the evaluation’s depth and granularity.
Future research could address these limitations by incorporating additional data sources and expanding the scope of experts and countries. To mitigate data lag, additional timely data sources will be integrated alongside traditional databases. The scope of expert selection will be expanded to include professionals from diverse backgrounds, such as researchers from universities, engineers from international space enterprises, and government policy analysts, to enhance the objectivity of weight assignments. Furthermore, recognizing the limitation of single aggregated indicator values, in-depth research will be conducted on individual entities within each indicator, to capture fine-grained impacts and improve the precision of STSST assessment.
Supplementary Materials
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/e27111141/s1, Questionnaire on Indicator Weights of National Space Technology Scientific and Technological Strength.
Author Contributions
Methodology, Y.C.; validation, Y.C.; investigation, Y.C.; writing—original draft, Y.C.; writing—review and editing, Z.Q., J.L. and Y.Z.; supervision, Z.Q.; data curation, Y.Z. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Data Availability Statement
The data are already in the graphs and references of the paper, and further inquiries can be made by contacting the corresponding author.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| STSST | Scientific and Technological Strength in Space Technology |
| AHP | Analytic Hierarchy Process |
| IOWA | Induced Ordered Weighted Average |
| ME | Maximum Entropy |
| WoS | Web of Science |
| DII | Derwent Innovations Index |
| OWA | Ordered Weighted Averaging |
| NASA | National Aeronautics and Space Administration |
| SpEC | Space Enterprise Consortium |
| OSC | Office of Space Commerce |
| NOAA | National Oceanic and Atmospheric Administration |
| FAA | Federal Aviation Administration |
| PCC | Pearson product–moment correlation coefficient |
References
- Ma, X.; Chen, X.; Liu, Y.; Hu, L.; Xu, Z.; Jiang, T. Research Hotspots and Trends in China’s Aerospace: Based on Bibliometric Analysis from 2016 to 2020. Sci. Focus 2023, 18, 57–66. [Google Scholar]
- Augustyn, J. Emerging Science and Technology Trends: 2017–2047; FutureScout: Providence, RI, USA, 2017. [Google Scholar]
- Allison, G.; Klyman, K.; Barbesino, K.; Yen, H. The great tech rivalry: China vs. the US. Sci. Dipl. 2021, 3, 73. [Google Scholar]
- Schmid, J. An Open-Source Method for Assessing National Scientific and Technological Standing: With Applications to Artificial Intelligence and Machine Learning; Rand Corporation: Santa Monica, CA, USA, 2021. [Google Scholar]
- Warnke, P.; Cuhls, K.; Schmoch, U.; Daniel, L.; Andreescu, L.; Dragomir, B.; Gheorghiu, R.; Baboschi, C.; Curaj, A.; Parkkinen, M.; et al. 100 Radical Innovation Breakthroughs for the Future; European Commission’s Foresight Group: Brussels, Belgium, 2019. [Google Scholar]
- KISTEP. The 5th Science and Technology Foresight (2016–2040). Available online: https://www.kistep.re.kr/board.es?mid=a20401000000&bid=0046&act=view&list_no=35988&nPage=1 (accessed on 2 November 2025).
- GAO. Technology Assessment Design Handbook; GAO: Washington, DC, USA, 2019. Available online: https://www.gao.gov/assets/gao-21-347g.pdf (accessed on 26 June 2025).
- Mazarr, M.J.; Rhoades, A.L.; Beauchamp-Mustafaga, N.; Blanc, A.A.; Eaton, D.; Feistel, K.; Geist, E.; Heath, T.R.; Johnson, C.; Langeland, K.; et al. Disrupting Deterrence: Examining the Effects of Technologies on Strategic Deterrence in the 21st Century; RAND: Santa Monica, CA, USA, 2022. [Google Scholar]
- KISTEP. The Evaluation of Science and Technology Innovation Capacity 2018. Available online: https://www.kistep.re.kr/board.es?mid=a20402000000&bid=0047&act=view&list_no=36538 (accessed on 2 November 2025).
- Alibaba Research Institute; Zhipu AI. Global Digital Technology Development Research Report—Revealing Global Digital Technology Research Strength, Talent Reserve, and Top Ten Development Trends. 2023. Available online: www.aminer.cn/research_report/645c5ada7cb68b460fcef714 (accessed on 6 February 2025).
- Xian, X.; Li, T.; Lu, J.-Z. Research on the evaluation index system for the ability of agricultural research institutes to transform scientific and technological achievements based on AHP. J. Agric. Sci. Technol. 2023, 25, 8–23. [Google Scholar]
- Parker, E.; Gonzales, D.; Kochhar, A.K.; Litterer, S.; O'Connor, K.; Schmid, J.; Scholl, K.; Silberglitt, R.; Chang, J.; Eusebi, C.A.; et al. An Assessment of the U.S. and Chinese Industrial Bases in Quantum Technology; RAND: Santa Monica, CA, USA, 2022. [Google Scholar]
- Gaida, J.; Wong-Leung, J.; Robin, S.; Cave, D. ASPI’s Critical Technology Tracker: The Global Race for Future Power; Australian Strategic Policy Institute: Canberra, Australia, 2023. [Google Scholar]
- Fan, Q.; Song, T.; Shi, P.; Wei, H.; Chen, X.; Wang, X.; Guo, S. On Index System of Space Science Strength and Its Enlightenment. Bull. Chin. Acad. Sci. (Chin. Version) 2022, 37, 1076–1087. [Google Scholar]
- Zhang, Y. U.S. Space Development Assessment. Space Int. 2018, 12, 12–17. [Google Scholar]
- Shen, Y.; Wang, K.; Ma, X.; Hu, L.; Xu, Y. Construction and Analysis of Science and Technology Power Evaluation System. Bull. Chin. Acad. Sci. 2020, 35, 593–601. [Google Scholar]
- Godinho, M.M.; Simões, V.C. The Tech Cold War: What can we learn from the most dynamic patent classes? Int. Bus. Rev. 2023, 32, 102140. [Google Scholar] [CrossRef]
- Yang, W.; Wang, X.; Zhou, D. Research on the Impact of Industrial Policy on the Innovation Behavior of Strategic Emerging Industries. Behav. Sci. 2024, 14, 346. [Google Scholar] [CrossRef]
- Wang, T.; Yu, C.; Huang, J.; Su, H.N. Robust Networks, Pivotal Patents: Identifying and Assessing Key Technological Influencers. IEEE Trans. Eng. Manag. 2024, 71, 15254–15277. [Google Scholar] [CrossRef]
- Ding, J.; Du, D.; Duan, D.; Xia, Q.; Zhang, Q. A Network Analysis of Global Competition in Photovoltaic Technologies: Evidence from Patent Data. Appl. Energy 2024, 375, 124010. [Google Scholar] [CrossRef]
- Ezugwu, A.E.; Greeff, J.; Ho, Y.-S. A Comprehensive Study of Groundbreaking Machine Learning Research: Analyzing Highly Cited and Impactful Publications across Six Decades. J. Eng. Res. 2025, 13, 371–383. [Google Scholar] [CrossRef]
- Link, A.N.; Scott, J.T. Technological Change in the Production of New Scientific Knowledge: A Second Look. Econ. Innov. New Technol. 2019, 30, 371–381. [Google Scholar] [CrossRef]
- Grafström, J.; Alm, C. Diverging or converging technology capabilities in the European Union? J. Technol. Transf. 2025, 50, 728–751. [Google Scholar] [CrossRef]
- Tijssen, R.J.W.; Winnink, J.J. Capturing ‘R&D excellence’: Indicators, international statistics, and innovative universities. Scientometrics 2018, 114, 687–699. [Google Scholar]
- Qu, T.; Zhang, Z.; Wang, Q. Research on the Indicator System of National Defense Science and Technology Development. In Proceedings of the 2021 7th International Conference on Information Management (ICIM), London, UK, 27–29 March 2021; pp. 135–139. [Google Scholar]
- Carley, S.F.; Newman, N.C.; Porter, A.L.; Garner, J.G. An indicator of technical emergence. Scientometrics 2018, 115, 35–49. [Google Scholar] [CrossRef]
- Kruss, G.; Sithole, M.; Buchana, Y. Towards an Indicator of R&D and Human Development. Dev. S. Afr. 2020, 38, 248–263. [Google Scholar]
- Ozkaya, G.; Timor, M.; Erdin, C. Science, Technology and Innovation Policy Indicators and Comparisons of Countries through a Hybrid Model of Data Mining and MCDM Methods. Sustainability 2021, 13, 694. [Google Scholar] [CrossRef]
- Zhang, L. Construction of evaluation indicator system of college physical education teaching environment based on analytic hierarchy process. Comput. Intell. Neurosci. 2022, 2022, 4148866. [Google Scholar] [CrossRef]
- Saaty, T.L. The Analytic Hierarchy Process; McGraw-Hill: New York, NY, USA, 1980. [Google Scholar]
- Milošević, D.M.; Milošević, M.R.; Simjanović, D.J. Implementation of Adjusted Fuzzy AHP Method in the Assessment for Reuse of Industrial Buildings. Mathematics 2020, 8, 1697. [Google Scholar] [CrossRef]
- Domínguez, S.; Carnero, M.C. Fuzzy Multicriteria Modelling of Decision Making in the Renewal of Healthcare Technologies. Mathematics 2020, 8, 944. [Google Scholar] [CrossRef]
- Simjanović, D.J.; Vesić, N.O.; Ignjatović, J.M.; Ranđelović, B.M. A Novel Surface Fuzzy Analytic Hierarchy Process. Filomat 2023, 37, 3357–3370. [Google Scholar] [CrossRef]
- Yager, R.R.; Filev, D.P. Induced ordered weighted averaging operators. IEEE Trans. Syst. Man Cybern. Part B Cybern. 1999, 29, 141–150. [Google Scholar] [CrossRef]
- Ren, J.; Wang, J.; Hu, C. Multi-criterion group decision-making method based on normal cloud by cosine close degree and group consensus degree. Control Decis. 2017, 32, 665–672. [Google Scholar]
- Liaw, C.-S.; Chang, Y.-C.; Chang, K.-H.; Chang, T.-Y. ME-OWA based DEMATEL reliability apportionment method. Expert Syst. Appl. 2011, 38, 9713–9723. [Google Scholar] [CrossRef]
- Xie, W.L.; Xie, P.C.; Yuan, W.D.; Li, S.H.; Ouyang, S.; Zeng, J. A Combined Membership Function and its Application on Fuzzy Evaluation of Power Quality. Appl. Mech. Mater. 2014, 543, 518–523. [Google Scholar] [CrossRef]
- Shama, M.S.; El Ktaibi, F.; Al Abbasi, J.N.; Chesneau, C.; Afify, A.Z. Complete Study of an Original Power-Exponential Transformation Approach for Generalizing Probability Distributions. Axioms 2023, 12, 67. [Google Scholar] [CrossRef]
- Ma, X.; Xue, H.; Hu, L.; Jiang, S. Study on development status of aerospace technology research field in major aerospace countries—Based on scientometrics analysis of Web of Science database. Technol. Intell. Eng. 2017, 3, 87–98. [Google Scholar]
- Wei, F.; Deng, A.; Gu, S.; Li, T. Research on the Distribution of Rare Earth Application Technologies in the Aerospace Field from a Patent Perspective. Chin. Rare Earths 2024, 45, 149–158. [Google Scholar]
- Xia, D. A Patent Analysis on the Technological Innovation of Chinese Aerospace. J. Nanchang Univ. 2010, 41, 82–85. [Google Scholar]
- Zhang, Z.; Deng, W.; Wang, Y.; Qi, C. Visual analysis of trustworthiness studies: Based on the Web of Science database. Front. Psychol. 2024, 15, 1351425. [Google Scholar] [CrossRef]
- Wang, Y.; Liu, C.; Xing, Y.; Zhu, T. Analysis of the Operation Mode and Development Practice of the American Space Enterprise Alliance. Dual Use Technol. Prod. 2022, 14–17. [Google Scholar] [CrossRef]
- Albon, C. Space Force Racing to Meet Training, Testing Demands. C4ISRNET. Available online: http://www.defensenews.com/space/2024/12/04/space-force-racing-to-meet-training-testing-demands/ (accessed on 26 March 2025).
- Foust, J. SpaceX Launch Surge Helps Set New Global Launch Record in 2024. SpaceNews. Available online: https://spacenews.com/spacex-launch-surge-helps-set-new-global-launch-record-in-2024/ (accessed on 2 April 2025).
- Price, D.J.S. Little Science, Big Science; Columbia University Press: New York, NY, USA, 1963. [Google Scholar]
- McGarry, B.W. FY 2024 NDAA: Summary of Funding Authorizations. Available online: https://www.congress.gov/crs_external_products/IN/PDF/IN12209/IN12209.5.pdf (accessed on 29 June 2025).
- NASA Fiscal Year 2024 Full Budget Request. Available online: https://www.nasa.gov/nasa-fiscal-year-2024-budget-request/ (accessed on 29 June 2025).
- Chen, X.; Liu, S.; Liu, R.W.; Wu, H.; Han, B.; Zhao, J. Quantifying Arctic oil spilling event risk by integrating an analytic network process and a fuzzy comprehensive evaluation model. Ocean Coast. Manag. 2022, 228, 106326. [Google Scholar] [CrossRef]
- Zhang, Z.C.; Wang, S. Global space development assessment. Space Int. 2018, 12, 4–10. [Google Scholar] [CrossRef]
- Fan, Q.L.; Bai, Q.J. American space science development and its enlightenment. Sci. Technol. Rev. 2019, 37, 73–87. [Google Scholar]
Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).