Article

Intelligent Fault Diagnosis Method for Spacecraft Fluid Loop Pumps Based on Multi-Neural Network Fusion Model †

1 National Key Laboratory of Science and Technology on Reliability and Environmental Engineering, Beijing 100094, China
2 Beijing Institute of Spacecraft Environment Engineering, Beijing 100094, China
3 Beijing Institute of Spacecraft System Engineering, Beijing 100094, China
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in Huang, S.; Yu, Y.; Liu, Q.; Wang, J. Intelligent Fault Diagnosis Method for Spacecraft Fluid Loop Pumps. In Proceedings of the 34th European Safety and Reliability Conference, Cracow, Poland, 23–27 June 2024; p. 205.
Aerospace 2025, 12(11), 1032; https://doi.org/10.3390/aerospace12111032
Submission received: 25 August 2025 / Revised: 1 November 2025 / Accepted: 18 November 2025 / Published: 20 November 2025
(This article belongs to the Section Astronautics & Space Science)

Abstract

Fluid loop pumps, critical to spacecraft thermal control subsystems, are more prone to failures than other spacecraft components. Timely fault diagnosis is therefore crucial to ensure operational reliability. This paper proposes a multi-neural network fusion model (MNN) to improve the fault diagnosis accuracy for spacecraft fluid loop pumps. The proposed model integrates four neural network algorithms—back propagation neural network (BPNN), particle swarm optimization-back propagation neural network (PSO-BPNN), genetic algorithm-back propagation neural network (GA-BPNN), and fuzzy neural network (FNN)—through a model scoring and weighting mechanism. Additionally, dedicated software has been developed and implemented for the intelligent fault diagnosis of fluid loop pumps in Chinese spacecraft. By analyzing a dataset derived from on-orbit telemetry and expert knowledge, the proposed model demonstrates superior performance over individual models, achieving significant improvements in key metrics such as mean squared error (MSE), prediction stability, correlation coefficient (R), Accuracy, Precision, Recall, and F1-score. Furthermore, validation using both on-orbit telemetry data and ground test data confirms that the model can accurately diagnose both normal operations and various types of faults, making it a reliable and practical tool for on-orbit fault detection. The study provides an efficient, stable, and practical solution for intelligent fault diagnosis of spacecraft fluid loop pumps, with significant engineering application value.

1. Introduction

On 3 November 2022, the successful completion of the “T” configuration on-orbit assembly of the China Space Station marked a major milestone in China's manned spaceflight program [1]. Following this achievement, the space station transitioned into a long-term phase of on-orbit operations with astronauts [2]. A key feature of the space station is its maintainability [3], with fault diagnosis technology serving as the fundamental basis for efficient on-orbit maintenance [4]. Given the increasing complexity of spacecraft systems, traditional fault diagnosis methods struggle to meet the demands for efficiency and adaptability [5]. As a result, intelligent fault diagnosis technologies are gradually replacing conventional approaches and have become a major research focus [6], as well as a core technology for the intelligent operation and maintenance of spacecraft [7]. Intelligent fault diagnosis, particularly on space stations, is expected to advance rapidly from theoretical research to practical engineering applications.
Based on the experiences of the Mir space station and the International Space Station [8], the thermal control subsystem, particularly the fluid loop pumps, has been identified as having a high failure rate [9]. These components are key targets for fault diagnosis and on-orbit maintenance in space stations. Fluid loop pumps function as the “heart” of the thermal control system in spacecraft; a failure in these pumps could result in a total system breakdown and loss of temperature regulation. Such failures would not only critically affect the safe and reliable operation of various spacecraft systems but also directly jeopardize the health and safety of astronauts [10]. Therefore, research into fault diagnosis technologies for spacecraft fluid loop pumps is crucial for ensuring spacecraft reliability and crew safety.
Research on spacecraft fluid loop pumps has primarily focused on design and heat transfer characteristics [11,12,13], with relatively few studies addressing fault diagnosis. However, there has been significant research on centrifugal pumps, which share similar structures with fluid loop pumps. These studies mainly use physics-model-driven and signal-processing-based approaches for fault diagnosis. For instance, Kallesoe et al. [14] proposed a physics-model-driven fault diagnosis method for centrifugal pumps that combines structural analysis, redundancy analysis, and observer design. This approach has proven effective in detecting and isolating five types of mechanical and hydraulic faults in centrifugal pumps. Beckerle et al. [15] discussed the use of a balanced filter in model-based fault diagnosis for centrifugal pumps, achieving accurate fault identification. Muralidharan and Sugumaran [16] employed wavelet transform techniques to extract multi-dimensional, multi-scale features from centrifugal pump operation signals, using a decision tree algorithm to swiftly and accurately identify various fault types. While physics-model-driven methods require detailed knowledge of pump structures, limiting their generalizability, signal-processing methods have yielded promising results for specific issues [17]. Nevertheless, both approaches face challenges in addressing the increasing diversity of fault types and achieving higher methodological intelligence.
In recent years, neural networks have become a mainstream approach for intelligent fault diagnosis. Zaman et al. [18] introduced a centrifugal pump fault diagnosis method using a SobelEdge spectrogram as input to a convolutional neural network (CNN). The results demonstrated that the SobelEdge spectrogram effectively enhanced the identification of fault-related information, while the CNN, with its strong feature extraction and classification capabilities, achieved accurate fault classification. AlTobi et al. [19] explored the combined application of multi-layer perceptron (MLP) neural networks and support vector machines (SVMs) for centrifugal pump fault diagnosis, providing a comprehensive evaluation of these methods in improving fault classification accuracy and efficiency. Ranawat et al. [20] proposed the use of SVM and artificial neural networks (ANNs) to diagnose centrifugal pump faults under various operating conditions. They extracted different statistical features from vibration signals in both time and frequency domains, utilizing various feature ranking methods to compare fault diagnosis efficiency. Yu et al. [21] investigated the application of four neural networks for the intelligent fault diagnosis of spacecraft fluid loop pumps, finding that fuzzy neural networks performed the best. Despite these advancements, several challenges remain. Factors such as randomness in training and test set divisions, variability in initial weight and threshold values, and hyperparameter variations—such as the number of hidden layer neurons and learning rates—can introduce significant randomness and instability into prediction results. These issues may hinder the practical application of these technologies in spacecraft.
For practical application of spacecraft fault diagnosis technology, software development is essential. NASA and ESA have conducted extensive research on on-orbit fault diagnosis technology for manned spacecraft, resulting in systems like the Advanced Caution and Warning System (ACAWS), which enables system-level fault diagnosis and health management [22,23]. NASA has also developed several fault diagnosis software tools, including TAMES-RT (Testability Engineering and Maintenance System Real-time Diagnostics Tool), based on graphical models [24], Livingstone, based on discrete models [25], and HyDE (Hybrid Diagnostic Engine), based on hybrid models [26]. These systems have been applied on the International Space Station. Nevertheless, while these software tools have significantly contributed to the initial development of on-orbit fault diagnosis technology, they were designed in an earlier era and primarily rely on traditional fault diagnosis methods. As a result, they may face challenges in achieving high diagnostic accuracy, simplifying operational procedures, and incorporating advanced intelligent features.
Looking ahead, fault diagnosis technology for spacecraft fluid loop pumps is expected to advance, becoming more capable of handling diverse faults, multi-dimensional data, and complex models [27]. Although neural networks are at the forefront of intelligent fault diagnosis due to their high accuracy, they continue to be challenged by issues of randomness and instability [28]. Therefore, enhancing the stability of neural networks and integrating them into practical software applications remains a central challenge in unlocking the engineering potential of intelligent fault diagnosis technologies for spacecraft.
In response to the urgent need for highly accurate and stable intelligent fault diagnosis in spacecraft, this paper proposes an MNN-based intelligent fault diagnosis method for spacecraft fluid loop pumps. The paper also designs software for practical applications and validates the method using both on-orbit telemetry data and ground-based test data. The remainder of this paper is organized as follows. Section 2 introduces the structure and operation principle of the spacecraft fluid loop pump, providing the physical foundation for subsequent analysis. Section 3 describes the proposed intelligent fault diagnosis methodology, including the four base neural network models and the construction of the MNN. Section 4 presents the development and implementation of the intelligent fault diagnosis software. Section 5 validates the proposed method using both on-orbit telemetry data and ground-based test data and includes a detailed parametric and performance analysis. Finally, Section 6 concludes the paper and discusses future research directions.

2. Introduction to Spacecraft Fluid Loop Pumps

Fluid loop pumps are critical components of the spacecraft thermal control system, responsible for maintaining efficient heat transfer within the cabin. The fluid loop system consists of two main circuits: the internal loop and the external loop. In the internal loop, the pump drives the circulation of the working fluid to collect and transport heat generated by onboard equipment. The heat is absorbed through the condenser dryer, which removes moisture and latent heat, and the cold plates, which directly extract heat from electronic devices and thermal loads. The warmed working fluid then passes through an intermediate heat exchanger, where the heat is transferred to the external loop fluid. The external loop includes the propulsion module cold plate and radiators, which ultimately dissipate the absorbed heat into outer space (shown in Figure 1). This closed-loop process enables continuous and precise thermal regulation within the spacecraft cabin, ensuring the stability and reliability of onboard systems [10].
As shown in Figure 2, a spacecraft fluid loop pump is connected to a loop controller, motor, and disconnector. To monitor the performance and health of the pump, it is equipped with outlet pressure sensors, rotational speed sensors, temperature sensors, and flow rate sensors. These sensors continuously provide data on four key status parameters of the fluid loop pump. These parameters serve as the foundational data for fault diagnosis.

3. Intelligent Fault Diagnosis Method

The intelligent fault diagnosis method utilizes the four status parameters of fluid loop pumps as inputs and their corresponding status labels as outputs. Four distinct neural network models are individually trained using this dataset. After the training phase is completed, each model is evaluated to generate a model score. These scores serve as the basis for weighting each model, and the weighted models are then integrated into an MNN. This fusion model is subsequently utilized to diagnose the system data with enhanced accuracy and stability.

3.1. Four Neural Network Models

The structures of these four neural networks are defined by specific model parameters, which are detailed in the “Model Parameter Settings” section. The dataset, comprising the four status parameters and their corresponding status labels for the fluid loop pump, is split into training and test sets. Each neural network is trained independently, taking the status parameters of fluid loop pumps as inputs and generating predicted status labels as outputs.

3.1.1. BPNN Model

The back propagation neural network (BPNN) is composed of an input layer, a hidden layer, and an output layer. During training, the network iteratively adjusts the weights and thresholds to minimize the discrepancy between the predicted outputs and the actual status labels. Key parameters for constructing a BPNN include the number of neurons in the hidden layer, error threshold, number of iterations, and learning rate. Figure 3 illustrates the BPNN structure used in this study, with the input layer comprising four neurons that represent the fluid loop pump status parameters and the output layer containing one neuron that represents the status label.
The mathematical relationships governing the connections between the input and hidden layers, and between the hidden and output layers, are described by Equations (1) and (2) [29].
$b_j = f\left(\left(\sum_{i=1}^{m} x_i w_{j,i}\right) + \theta_j\right)$ (1)
$y_c = f\left(\left(\sum_{j=1}^{h} b_j v_j\right) + \beta\right)$ (2)
where $x_i$ is the i-th input to the input layer (the i-th status parameter) and m is the number of input neurons; $b_j$ is the output of the j-th hidden layer neuron, h is the number of hidden layer neurons, and $\theta_j$ is the threshold of the j-th hidden layer neuron; $w_{j,i}$ is the connection weight between the j-th hidden layer neuron and the i-th input neuron; $f(\cdot)$ is the Sigmoid activation function; $y_c$ is the prediction value of the neural network, $\beta$ is the threshold of the output neuron, and $v_j$ is the connection weight between the j-th hidden layer neuron and the output neuron.
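As an illustration of Equations (1) and (2), the following is a minimal NumPy sketch of the forward pass of such a BPNN, assuming four status-parameter inputs and a single status-label output; the randomly initialized weights and the example sample are placeholders, not trained values from this study.

```python
import numpy as np

def sigmoid(z):
    # Sigmoid activation f(.) used in Equations (1) and (2)
    return 1.0 / (1.0 + np.exp(-z))

def bpnn_forward(x, W, theta, v, beta):
    """Forward pass of a single-hidden-layer BPNN.

    x     : (m,)   status parameters (temperature, flow rate, pressure, speed)
    W     : (h, m) hidden-layer connection weights w_{j,i}
    theta : (h,)   hidden-layer thresholds theta_j
    v     : (h,)   output-layer connection weights v_j
    beta  : scalar output threshold
    """
    b = sigmoid(W @ x + theta)   # Equation (1): hidden-layer outputs b_j
    y_c = sigmoid(v @ b + beta)  # Equation (2): network output y_c
    return y_c

# Untrained example: 4 input neurons, 8 hidden neurons (as in this study)
rng = np.random.default_rng(0)
m, h = 4, 8
W, theta = rng.normal(size=(h, m)), rng.normal(size=h)
v, beta = rng.normal(size=h), rng.normal()
x = np.array([5.30, 337.75, 339.18, 9399.99])  # one sample; inputs would be normalized in practice
print(bpnn_forward(x, W, theta, v, beta))
```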

3.1.2. PSO-BPNN Model

The particle swarm optimization-back propagation neural network (PSO-BPNN) is an enhanced version of the BPNN. It utilizes the particle swarm optimization (PSO) algorithm to optimize the weights and thresholds of the BPNN, thereby improving its performance. This process is illustrated in Figure 4. In the PSO algorithm, each particle represents a potential solution in the solution space for the weights and thresholds of the neural network. By simulating particle velocities and positions, the algorithm aims to find the global optimum while avoiding being trapped in local optima. Key parameters of the PSO algorithm include the learning factor, number of iterations, population size, and the maximum and minimum velocities and positions of particles. The positions and velocities of particles are updated according to Equations (3) and (4) [30].
$X_j^{\gamma+1} = X_j^{\gamma} + V_j^{\gamma+1}$ (3)
$V_j^{\gamma+1} = V_j^{\gamma} + c_1 r_1 \left(p_{best,j}^{\gamma} - X_j^{\gamma}\right) + c_2 r_2 \left(g_{best}^{\gamma} - X_j^{\gamma}\right)$ (4)
where $X_j^{\gamma}$ is the position of the j-th particle in the γ-th generation and $V_j^{\gamma}$ is the velocity of the j-th particle in the γ-th generation; $p_{best,j}^{\gamma}$ is the individual best of the j-th particle at the γ-th iteration and $g_{best}^{\gamma}$ is the global best among γ iterations; $c_1$ and $c_2$ are the learning factors, while $r_1$ and $r_2$ are random numbers.
During the iteration optimization process, the fitness of individual particles and the global fitness are assessed using Equations (5) and (6).
$f_p^{\gamma} = \frac{1}{n}\sum_{j=1}^{n}\left(y_{c,j}^{\gamma} - y_{d,j}^{\gamma}\right)^2$ (5)
$f_g = \min\left(f_p^{1}, f_p^{2}, \ldots, f_p^{\gamma}, \ldots, f_p^{s}\right)$ (6)
where $y_{c,j}^{\gamma}$ is the prediction value of the j-th particle in the γ-th generation and $y_{d,j}^{\gamma}$ is the actual value of the j-th particle in the γ-th generation; n is the population size; $f_p^{\gamma}$ is the fitness of the γ-th generation, s is the number of generations, and $f_g$ is the global fitness.
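A minimal sketch of one PSO update step in the sense of Equations (3)-(5) is given below; the `predict` argument is a hypothetical helper that decodes a particle's position vector into BPNN weights and thresholds and returns predictions, standing in for the actual implementation.

```python
import numpy as np

def pso_step(X, V, p_best, g_best, c1=1.494, c2=1.494, v_max=1.0, x_max=5.0):
    """One PSO iteration over a swarm of candidate weight/threshold vectors.

    X, V    : (n, d) particle positions and velocities
    p_best  : (n, d) individual best positions
    g_best  : (d,)   global best position
    The velocity/position limits mirror the settings listed in Table 3.
    """
    n, d = X.shape
    r1, r2 = np.random.rand(n, d), np.random.rand(n, d)
    V = V + c1 * r1 * (p_best - X) + c2 * r2 * (g_best - X)  # Equation (4)
    V = np.clip(V, -v_max, v_max)
    X = np.clip(X + V, -x_max, x_max)                        # Equation (3)
    return X, V

def swarm_fitness(X, predict, y_true):
    # Equation (5): MSE of the BPNN encoded by each particle (predict is a
    # placeholder that decodes a position vector and returns predictions)
    return np.array([np.mean((predict(x) - y_true) ** 2) for x in X])
```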

3.1.3. GA-BPNN Model

The genetic algorithm-back propagation neural network (GA-BPNN) is also an enhanced variant of the BPNN. It utilizes the genetic algorithm (GA) to optimize the thresholds and weights of the BPNN, as shown in Figure 5. The GA mimics the evolutionary processes of natural populations through selection, crossover, and mutation. It iteratively searches for the optimal initial weights and thresholds of the network, thereby achieving global optimization [31]. In this study, the GA’s performance is fine-tuned by configuring parameters such as the number of generations and population size.
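For comparison, the sketch below outlines a minimal real-coded GA of the kind described above, searching for BPNN initial weights and thresholds; the `fitness` callable (returning training MSE for a flat parameter vector) is a placeholder, and the specific selection, crossover, and mutation operators are illustrative assumptions rather than the exact ones used in the software.

```python
import numpy as np

def ga_optimize(fitness, d, pop_size=5, generations=50, p_mut=0.1, seed=0):
    """Minimal real-coded GA for choosing BPNN initial weights/thresholds.

    fitness(w): placeholder returning the training MSE of a BPNN initialized
    with the flat parameter vector w (lower is better); d is its length.
    """
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1, 1, size=(pop_size, d))
    for _ in range(generations):
        scores = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(scores)[: max(2, pop_size // 2)]]  # selection
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            alpha = rng.random()
            child = alpha * a + (1.0 - alpha) * b                   # crossover
            mask = rng.random(d) < p_mut
            child[mask] += rng.normal(scale=0.1, size=mask.sum())   # mutation
            children.append(child)
        pop = np.array(children)
    return pop[np.argmin([fitness(w) for w in pop])]
```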

3.1.4. Fuzzy Neural Network Model

The fuzzy neural network (FNN) integrates fuzzy logic with an artificial feedforward neural network, offering strong self-learning capabilities and efficient direct data processing. The construction process of the FNN is illustrated in Figure 6. Initially, the network structure is established based on the inputs and outputs of the training set. Subsequently, the membership function, network parameters, and the weights and thresholds of the output layer are initialized. The Gaussian function is used for the membership function [32]. The training process involves calculating errors and updating the weights and thresholds iteratively until the error or iteration count meets the predefined termination conditions.
$A_j^i = \exp\left[-\frac{(x_j - \mu_j^i)^2}{\sigma_j^i}\right], \quad j = 1, 2, \ldots, m, \quad i = 1, 2, \ldots, l$ (7)
where $A_j^i$ is the membership function of the j-th network input in the i-th fuzzy subset and l is the number of fuzzy subsets; $\mu_j^i$ is the center of the membership function of the j-th network input in the i-th fuzzy subset and $\sigma_j^i$ is its width.
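A short sketch of the Gaussian membership computation of Equation (7) follows; it evaluates the membership degrees of the four status parameters over l fuzzy subsets and assumes the centers and widths have already been initialized and trained elsewhere.

```python
import numpy as np

def gaussian_membership(x, mu, sigma):
    """Gaussian membership degrees in the spirit of Equation (7).

    x     : (m,)   network inputs (the four status parameters)
    mu    : (m, l) membership-function centers mu_j^i
    sigma : (m, l) membership-function widths sigma_j^i
    Returns an (m, l) array of membership degrees A_j^i.
    """
    return np.exp(-((x[:, None] - mu) ** 2) / sigma)
```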

3.2. Model Evaluation Method

After training each neural network, its performance is evaluated by calculating the mean squared error (MSE) Mtrain and the correlation coefficient (R) Rtrain between the actual status labels and the predicted values for the training set, and likewise Mtest and Rtest for the test set. The MSE and R, as defined in Equations (8) and (9), are chosen as evaluation metrics. Minimizing the prediction error, measured by the MSE, is crucial for accurately identifying the operational status of fluid loop pumps, while a high R ensures that the model's predictions are reliable and closely aligned with the actual status labels.
$M = \frac{1}{N}\sum_{i=1}^{N}\left(y_{d,i} - y_{c,i}\right)^2$ (8)
$R = \frac{\sum_{i=1}^{N}\left(y_{d,i} - \bar{y}_d\right)\left(y_{c,i} - \bar{y}_c\right)}{\sqrt{\sum_{i=1}^{N}\left(y_{d,i} - \bar{y}_d\right)^2 \sum_{i=1}^{N}\left(y_{c,i} - \bar{y}_c\right)^2}}$ (9)
where $y_{d,i}$ and $y_{c,i}$ are the actual and predicted status labels of the i-th sample, $\bar{y}_d$ and $\bar{y}_c$ are their mean values, and N is the number of samples.
To comprehensively evaluate the model’s performance, a scoring mechanism is introduced, which combines the MSE and R from both the training and test sets. Since a higher R and a lower MSE indicate better model performance, the R/MSE ratio is used to integrate these two metrics. Additionally, considering that the MSE is typically much smaller than one, a logarithmic form is adopted to mitigate the significant impact of its minor fluctuations on the results. The training set Rtrain and Mtrain are combined with the test sets Rtest and Mtest in a proportional manner, as shown in Equation (10).
$S = a \ln\left(\frac{R_{train}}{M_{train}}\right) + (1 - a)\ln\left(\frac{R_{test}}{M_{test}}\right)$ (10)
where a (0 ≤ a ≤ 1) represents the weight assigned to the training set performance (including both Rtrain and Mtrain) in the scoring formula. Meanwhile, (1 − a) represents the weight assigned to the test set performance (including both Rtest and Mtest) in the scoring formula. The optimal a value will be further determined through k-fold cross-validation to minimize the variance of the test set’s MSE and R, ensuring the model’s stability and generalization ability. A higher model score indicates better training and testing performance. Based on the model scores of the four neural networks, an MNN can be constructed using a weighting algorithm.
To provide a more comprehensive assessment of the model's diagnostic accuracy, we also calculate Accuracy, Precision, Recall, and F1-score for the test set. Accuracy measures the proportion of predictions that are correct, Precision measures the proportion of true positives among all positive predictions, Recall measures the proportion of true positives among all actual positive instances, and F1-score is the harmonic mean of Precision and Recall, providing a balanced measure of the model's performance.
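The evaluation metrics and the model score of Equation (10) can be computed directly; the sketch below is a straightforward NumPy rendering of Equations (8)-(10), offered for illustration rather than as the software's internal code.

```python
import numpy as np

def mse(y_d, y_c):
    # Equation (8): mean squared error between actual and predicted labels
    y_d, y_c = np.asarray(y_d, float), np.asarray(y_c, float)
    return np.mean((y_d - y_c) ** 2)

def corr(y_d, y_c):
    # Equation (9): correlation coefficient R
    y_d, y_c = np.asarray(y_d, float), np.asarray(y_c, float)
    num = np.sum((y_d - y_d.mean()) * (y_c - y_c.mean()))
    den = np.sqrt(np.sum((y_d - y_d.mean()) ** 2) * np.sum((y_c - y_c.mean()) ** 2))
    return num / den

def model_score(R_train, M_train, R_test, M_test, a=0.0):
    # Equation (10); with the optimal a = 0 found in Section 5.3 this reduces
    # to Equation (13), S = ln(R_test / M_test)
    return a * np.log(R_train / M_train) + (1.0 - a) * np.log(R_test / M_test)
```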

3.3. Multi-Neural Network Fusion Model Intelligent Fault Diagnosis Method

Referring to the flowchart shown in Figure 7, based on the model scores of the four neural networks, the weights for each neural network in the MNN are calculated using Equation (11). The weight Wi of each neural network is proportional to its model score Si, ensuring that neural networks with higher scores contribute more to the final prediction. The output of the MNN is obtained by taking a weighted sum of the predicted status labels from each neural network, as shown in Equation (12).
$W_i = \frac{S_i}{\sum_{i=1}^{4} S_i}$ (11)
$P_0 = \sum_{i=1}^{4} W_i y_{c,i}$ (12)
This weighted sum allows the fusion model to leverage the strengths of each individual neural network, thereby improving the overall prediction accuracy and stability. The integer closest to the predicted value P0 is taken as the final status label p. The status labels are as follows: 1 represents “normal,” 2 represents “slight rubbing between the impeller and pump casing,” 3 represents “severe rubbing between the impeller and pump casing,” 4 represents “fatigue spalling of bearing raceway,” 5 represents “severe fatigue spalling of bearing raceway,” and 6 represents “impeller jammed, pump functionality lost.”
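The fusion step of Equations (11) and (12) reduces to a score-weighted average followed by rounding; a minimal sketch follows, in which the example scores and predictions are hypothetical values for illustration, and clipping the rounded label to the range 1 to 6 is an added safeguard not stated in the text.

```python
import numpy as np

def fuse_predictions(scores, predictions):
    """Fuse the four base models according to Equations (11) and (12).

    scores      : (4,)   model scores S_i
    predictions : (4, N) predicted status labels y_c from each base model
    Returns the fused outputs P_0 and the rounded status labels p.
    """
    scores = np.asarray(scores, float)
    weights = scores / scores.sum()                  # Equation (11)
    P0 = weights @ np.asarray(predictions, float)    # Equation (12)
    labels = np.clip(np.rint(P0).astype(int), 1, 6)  # nearest label, kept in 1..6
    return P0, labels

# Hypothetical scores and predictions for two samples
P0, labels = fuse_predictions([6.2, 7.0, 6.8, 7.7],
                              [[1.1, 5.8], [0.9, 6.1], [1.0, 5.9], [1.0, 6.0]])
print(labels)  # -> [1 6]
```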

4. Intelligent Fault Diagnosis Software

4.1. Overview of Software Basic Modules

Using the multi-neural network fusion intelligent fault diagnosis method, we have developed intelligent fault diagnosis software for spacecraft fluid loop pumps. As shown in Figure 8, the software features six tabs: (1) Import Data, (2) Train BPNN, (3) Train PSO-BPNN, (4) Train GA-BPNN, (5) Train FNN, and (6) Fault Diagnosis. Each tab is designed to handle specific tasks in the fault diagnosis process, from data import to model training and final diagnosis.

4.2. Software Usage Process

The software utilizes a dataset to train the built-in MNN and diagnose faults. Its functionalities are organized in a sequential manner across the tabs from left to right, as illustrated in Figure 9. The usage procedure consists of three main parts: “Input”, “Training”, and “Diagnosis”.
The “Input” part involves three steps as shown in Figure 10. A dataset, which includes status parameters and status labels, is composed of on-orbit telemetry data and expert experience data. The status parameter table includes four fluid loop pump status parameters: temperature, outlet pressure, flow rate, and rotational speed. The corresponding status label table covers six operating statuses of the fluid loop pumps. Additionally, a status label-to-status type table pairs each status label with its actual status type, enhancing software user-friendliness.
As shown in Figure 11, the “Training” part involves four distinct steps: training the four neural network models and individually evaluating their performance. This process is facilitated through the Train BPNN, Train PSO-BPNN, Train GA-BPNN, and Train FNN tabs, each of which includes two main areas: one for setting model parameters and another for evaluating model performance. To make better use of the dataset and more accurately evaluate model performance, the software provides a k-fold cross-validation option. Based on the chosen k value, the dataset is randomly divided into k parts, with k iterations performed. In each iteration, k − 1 parts of the data are used as a training set to train the neural networks, while the remaining part serves as a test set. The MSE and R are calculated for each iteration, and the average values of these metrics after k iterations are adopted as the final evaluation metrics.
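The cross-validation procedure described above can be sketched as follows; `train_fn` is a placeholder for training any of the four networks and returning a prediction function, so this is an illustrative outline under that assumption rather than the software's actual routine.

```python
import numpy as np

def k_fold_evaluate(X, y, train_fn, k=6, seed=0):
    """k-fold cross-validation of one network type, as offered by the software.

    train_fn(X_tr, y_tr) is a placeholder that trains a network and returns a
    predict(X_te) callable; MSE and R are averaged over the k folds.
    """
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)  # randomized split
    mse_list, r_list = [], []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        predict = train_fn(X[train_idx], y[train_idx])
        y_pred = predict(X[test_idx])
        mse_list.append(np.mean((y[test_idx] - y_pred) ** 2))
        r_list.append(np.corrcoef(y[test_idx], y_pred)[0, 1])
    # mean metrics are reported; their variances are used for stability analysis
    return np.mean(mse_list), np.mean(r_list), np.var(mse_list), np.var(r_list)
```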
As shown in Figure 12, the “Diagnosis” part involves two key steps: selecting a model and importing diagnostic data, both of which are accomplished through the Fault Diagnosis tab. In addition to utilizing the MNN, the software also recommends the neural network with the highest model score among the four individual models to diagnose the data.

5. Case Analysis and Application Validation

5.1. Dataset

Neural network training is performed using a dataset comprising status parameters and status labels, sourced from on-orbit data and expert experience. The dataset contains 36 samples listed in Table 1, each including four status parameters and their corresponding labels, covering six fluid loop pump status types: “normal,” “slight rubbing between the impeller and pump casing,” “severe rubbing between the impeller and pump casing,” “fatigue spalling of bearing raceway,” “severe fatigue spalling of bearing raceway,” and “impeller jammed, pump functionality lost.” The inputs to the neural network are the temperature (°C), flow rate (L/h), outlet pressure (kPa), and rotational speed (r/min) of the fluid loop pump. The flow rate refers to the loop flow rate governed by multiple pumps. Each status label represents a specific status type, serving as the neural network output. The dataset is evaluated using k-fold cross-validation, in which each subset alternately serves as training and test data. Therefore, Table 1 presents the complete dataset, and model performance is reported based on the test folds.
The impeller and motor bearings are key components of the fluid loop pump. When faults occur, the status parameters change in specific ways. For example, improper assembly or an abnormal vibration environment can cause the impeller to rub against the pump casing. This reduces the motor’s rotational speed, leading to a drop in flow rate and outlet pressure. Factors like frictional heat can also cause the fluid temperature to rise. Similarly, an abnormal vibration environment or lubrication failure can lead to bearing raceway fatigue spalling. This increases the bearing’s resistance torque, causing slight reductions in motor speed, flow rate, and outlet pressure. The decreased flow rate also lowers the loop’s heat transfer efficiency, resulting in higher fluid temperatures. Analyzing these fault mechanisms and their impacts helps us understand how the physical failure processes affect pump operation. This, in turn, allows for a more in-depth understanding of the conclusions drawn from the dataset and the data-driven fault diagnosis method.
While we acknowledge that this dataset size is relatively small, it is important to note that the complexity of the fault types and the limited availability of real-world on-orbit telemetry data pose challenges in obtaining a larger dataset. These six types of faults are also recognized as common faults by frontline aerospace experts [32].

5.2. Model Parameter Settings

The dataset presented in Table 1 is imported into the spacecraft fluid loop pump intelligent fault diagnosis software, randomized, and used to train the four neural networks with six-fold cross-validation. According to experience and the relevant literature [32,33,34], the parameter settings for the four models are detailed in Table 2, Table 3, Table 4 and Table 5. These values are further verified through preliminary experiments to ensure stable convergence and reliable diagnostic performance. Given the large number of parameters and the diversity of network types, only the general parameter selection principles are presented here for clarity.

5.3. Discussion on Coefficient a

Referring to Table 6, the coefficient a has a significant impact on R, MSE, and their variances of the MNN. The R and MSE presented here are the mean values obtained from six-fold cross-validation, with the variances calculated from the six iterations. It can be observed that when a is 0, both Rtrain and Rtest reach their maximum values, while Mtrain and Mtest are minimized. Additionally, the variances of Rtest and Mtest are also the smallest. At this point, the model achieves optimal predictive performance and stability.
The correlation coefficient and mean squared error of the MNN as functions of a are shown in Figure 13 and Figure 14. As a increases from 0 to 0.6, the correlation coefficients Rtrain and Rtest gradually decrease, while Mtrain and Mtest increase approximately linearly. The closer a is to 0, the smaller the weight assigned to the training set and the larger the weight assigned to the test set in Equation (10), and the better the prediction performance of the MNN. Combining this with the analysis results in Table 6, the optimal model is achieved when a is 0. At this point, the model performance is entirely determined by the test set correlation coefficient and mean squared error, independent of the training set. Equation (10) can thus be simplified to the following:
$S = \ln\left(\frac{R_{test}}{M_{test}}\right)$ (13)

5.4. Performance and Stability Analysis of Neural Networks

As shown in Figure 15, the average values of the MSE and R for the test set are calculated after six-fold cross-validation. When the coefficient a = 0, the MNN demonstrates the best predictive performance. Specifically, it achieves a test set R of 0.9986, outperforming the four individual neural networks. Moreover, it has the lowest test set MSE of 0.0103 among the five models.
As illustrated in Table 7, the MNN significantly enhances performance compared to the four individual neural networks. Specifically, when compared with BPNN, PSO-BPNN, GA-BPNN, and FNN, its MSE is reduced by 95.84%, 88.37%, 83.75%, and 54.22%, respectively, while the R value remains relatively constant.
In terms of neural network stability, the variances of the MSE and R during six-fold cross-validation serve as the evaluation metrics. Figure 16 and Figure 17 show that the MNN experiences minimal fluctuations in both MSE and R across the six iterations, indicating its superior stability across diverse training and test sets. Notably, the variance of MSE in the test set is as low as 0.0002, while the variance of R is 0.00001. Table 8 displays a stability comparison of the MNN with the four neural networks. Specifically, the fusion model reduces the variance of MSE by 99.97%, 99.39%, 99.60%, and 89.47%, and the variance of R by 99.94%, 98.03%, 99.67%, and 50.00% when compared to the BPNN, PSO-BPNN, GA-BPNN, and FNN, respectively. These results confirm that the stability of the fusion model is significantly improved compared with the other four models.
In addition to MSE and R, we compared the Accuracy, Precision, Recall, and F1-score of the MNN against four individual neural networks on the test set. It is worth noting that while the MNN and FNN demonstrated identical performance in terms of Accuracy, Precision, Recall, and F1-score, the MNN exhibited superior performance in MSE and R. Specifically, the MNN and FNN both achieved perfect scores of 1.000 for Accuracy, Precision, Recall, and F1-score. In comparison, the BPNN model has an Accuracy of 0.954, Precision of 0.846, Recall of 0.862, and an F1-score of 0.851; the PSO-BPNN model has an Accuracy of 0.972, Precision of 0.938, Recall of 0.917, and an F1-score of 0.926; the GA-BPNN model has an Accuracy of 0.961, Precision of 0.914, Recall of 0.942, and an F1-score of 0.926. These results underscore the MNN’s diagnostic accuracy, which is further validated by the fact that the Accuracy, Precision, Recall, and F1-score values are obtained through k-fold cross-validation (k = 6), ensuring their reliability and generalizability. Equations (14)–(17) illustrate the formulas used to calculate these metrics on the test set.
$Accuracy_{test} = \frac{1}{k}\sum_{i=1}^{k}\frac{1}{6}\sum_{j=1}^{6}\frac{TP_{ij} + TN_{ij}}{TP_{ij} + FP_{ij} + FN_{ij} + TN_{ij}}$ (14)
$Precision_{test} = \frac{1}{k}\sum_{i=1}^{k}\frac{1}{6}\sum_{j=1}^{6}\frac{TP_{ij}}{TP_{ij} + FP_{ij}}$ (15)
$Recall_{test} = \frac{1}{k}\sum_{i=1}^{k}\frac{1}{6}\sum_{j=1}^{6}\frac{TP_{ij}}{TP_{ij} + FN_{ij}}$ (16)
$F1\text{-}score_{test} = \frac{1}{k}\sum_{i=1}^{k}\frac{2 \cdot Precision_i \cdot Recall_i}{Precision_i + Recall_i}$ (17)
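For one fold, these class-wise metrics can be computed as in the sketch below, which macro-averages over the six status labels in the spirit of Equations (14)-(17); averaging the per-fold results over the k = 6 folds then yields the reported values. This is an illustrative reading of the formulas, not the authors' implementation.

```python
import numpy as np

def macro_metrics(y_true, y_pred, classes=range(1, 7)):
    """Macro-averaged Accuracy, Precision, Recall, and F1 for one fold,
    following the per-class averaging over the six status labels in
    Equations (14)-(17); per-fold values are then averaged over k = 6 folds."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    acc, prec, rec = [], [], []
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        tn = np.sum((y_pred != c) & (y_true != c))
        acc.append((tp + tn) / (tp + fp + fn + tn))
        prec.append(tp / (tp + fp) if tp + fp else 0.0)
        rec.append(tp / (tp + fn) if tp + fn else 0.0)
    precision, recall = float(np.mean(prec)), float(np.mean(rec))
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return float(np.mean(acc)), precision, recall, f1
```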

5.5. Application Verification Based on On-Orbit Telemetry Data

To verify the functionality of the MNN, it is selected within the spacecraft fluid loop pump intelligent fault diagnosis software to perform fault diagnosis using on-orbit telemetry data under two different status types. The diagnosis is conducted based on two representative segments of steady-state telemetry data extracted from a large amount of on-orbit data to ensure reliability and typicality.
First, the on-orbit telemetry data for a fluid loop pump under normal status is imported, as shown in Table 9. As illustrated in Figure 18a, the software calculates the mean values of the four status parameters and performs a fault diagnosis. The result indicates “normal,” which aligns with the actual on-orbit status of the fluid loop pump.
Next, the on-orbit telemetry data for a fluid loop pump under fault status is imported, as shown in Table 10, with a specific fault “impeller jammed.” As illustrated in Figure 18b, the software calculates the mean values of the four status parameters and performs a fault diagnosis. The result indicates “impeller jammed, pump functionality lost,” which accurately reflects the actual on-orbit status of the fluid loop pump.

5.6. Application Verification Based on Ground Test Data

Referring to Figure 19, to further validate the MNN, a 6200 bearing test specimen with a simulated “fatigue spalling of bearing raceway” fault is manufactured. The simulated defect is located on the outer race of the fluid loop pump bearing, with dimensions of approximately 1.10 mm × 0.49 mm.
The 6200 faulty bearing is installed into the ground test facility for the fluid loop pump, as shown in Figure 20. The positions of the sensors in the ground test facility are consistent with those in the on-orbit configuration, including the outlet pressure sensor, rotational speed sensor, temperature sensor, and flow rate sensor. After stable operation for about 0.5 h, the mean values of the steady-state temperature, flow rate, outlet pressure, and rotational speed are measured to be 25.56 °C, 306.4 L/h, 300.9 kPa, and 8849 r/min, respectively.
As shown in Figure 21, when this data is imported into the software, the fault diagnosis result is identified as “fatigue spalling of bearing raceway,” which matches the actual fault type.

6. Conclusions

To address the high failure rate of the spacecraft thermal control subsystem, particularly fluid loop pumps, and the low stability of predictions from individual neural network models, this paper proposes an intelligent fault diagnosis method based on an MNN. This method provides an effective solution for diagnosing faults in spacecraft fluid loop pumps and is expected to play a crucial role in the intelligent maintenance of spacecraft. The main conclusions are as follows:
(1)
The MNN significantly outperforms the four individual neural networks it integrates. Using the fluid loop pump dataset, the MNN achieved a test set R of 0.9986 and an MSE of 0.0103, demonstrating excellent predictive performance. Compared to the BPNN, PSO-BPNN, GA-BPNN, and FNN, the MNN, respectively, reduced the MSE by 95.84%, 88.37%, 83.75%, and 54.22%.
(2)
The MNN demonstrates superior stability across various training and test sets compared to the individual models. Relative to the BPNN, PSO-BPNN, GA-BPNN, and FNN, it reduces the variance of the MSE by 99.97%, 99.39%, 99.60%, and 89.47% and the variance of R by 99.94%, 98.03%, 99.67%, and 50.00%, respectively.
(3)
In addition to MSE and R, the MNN also demonstrated superior diagnostic accuracy in terms of Accuracy, Precision, Recall, and F1-score, achieving perfect scores of 1.000 for these metrics. These results are obtained through six-fold cross-validation, ensuring their reliability and generalizability.
(4)
An intelligent fault diagnosis software for spacecraft fluid loop pumps has been developed based on the proposed method. Validation with both on-orbit telemetry data and ground test data demonstrates that the software accurately identifies both normal and faulty statuses. Specifically, the software not only correctly diagnosed the normal operation and impeller jammed faults in on-orbit telemetry data but also accurately identified a simulated “fatigue spalling of bearing raceway” fault in ground test data. The accurate diagnostic results obtained from these three datasets demonstrate the robustness and generalization capability of the proposed method under different operating conditions. While the proposed intelligent fault diagnosis method shows promise, it has certain limitations, such as a limited range of identifiable status types and insufficient integration of knowledge and space-ground data.

Author Contributions

Conceptualization, S.H., Y.Y. and H.Z.; data curation, H.Z. and F.Y.; formal analysis, S.H. and Y.Y.; funding acquisition, S.H., Y.Y. and H.W.; investigation, J.W.; methodology, S.H. and Y.Y.; project administration, J.W., H.Z. and F.Y.; resources, J.W., H.Z. and F.Y.; software, Y.Y.; supervision, S.H., J.W. and F.Y.; validation, S.H. and Y.Y.; visualization, Y.Y.; writing—original draft, S.H. and Y.Y.; writing—review and editing, S.H., Y.Y. and J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number U22B2082.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MNN: Multi-neural network fusion model
BPNN: Back propagation neural network
PSO-BPNN: Particle swarm optimization-back propagation neural network
GA-BPNN: Genetic algorithm-back propagation neural network
FNN: Fuzzy neural network
CNN: Convolutional neural network
MLP: Multi-layer perceptron
SVM: Support vector machine
ANN: Artificial neural network
ACAWS: Advanced Caution and Warning System
TEAMS-RT: Testability Engineering and Maintenance System Real-time Diagnostics Tool
HyDE: Hybrid Diagnostic Engine

References

1. Zhao, J. Chinese space station project overall vision. Manned Space 2013, 2, 1–10. (In Chinese)
2. Yang, H.; Zhang, H.; Zhou, H. Engineering technology and management innovation of China space station. Front. Sci. Technol. Eng. Manag. 2022, 41, 1–6. (In Chinese)
3. Lv, Z.; Fan, R.; Feng, S. On-orbit maintainability verification technology of space station. Int. J. Perform. Eng. 2019, 15, 66.
4. Li, W.; Hou, Y.; An, F.; Xia, Q.; Li, Z.; Zhou, Q. Safety Design for the China Space Station. Space Sci. Technol. 2023, 3, 0089.
5. Xu, D.; Xiao, X.; Zhang, J. Multivariable Correlation Feature Network Construction and Health Condition Assessment for Unlabeled Single-Sample Data. Eng. Appl. Artif. Intell. 2024, 133, 108220.
6. Li, W.J.; Cheng, D.Y.; Liu, X.G.; Wang, Y.B.; Shi, W.H.; Tang, Z.X.; Gao, F.; Zeng, F.M.; Chai, H.Y.; Luo, W.B.; et al. On-orbit service (OOS) of spacecraft: A review of engineering developments. Prog. Aerosp. Sci. 2019, 108, 32–120.
7. Ge, X.; Zhou, Q.; Liu, Z. Assessment of space station on-orbit maintenance task complexity. Reliab. Eng. Syst. Saf. 2020, 193, 106661.
8. Patterson, L.P. On-orbit maintenance operations strategy for the International Space Station—Concept and implementation. AIP Conf. Proc. 2001, 552, 139–146.
9. Thurman, R.L. Automation of space station thermal control systems—The important role of software. SAE Tech. Pap. 1996, 961604.
10. Bruckner, R.J.; Manco, R.A. ISS ammonia pump failure, recovery, and lesson learned: A hydrodynamic bearing perspective. In Proceedings of the 42nd Aerospace Mechanisms Symposium, Cleveland, OH, USA, 14–16 May 2014.
11. Shen, F.; Drolen, B.; Prabhu, J.; Harper, L.; Eichinger, E.; Nguyen, C. Long Life Mechanical Fluid Pump for Space Applications. In Proceedings of the 43rd AIAA Aerospace Sciences Meeting and Exhibit, Reston, VA, USA, 10–13 January 2005.
12. Cao, J.G.; Gu, Y.P.; Chen, G.; Sun, W.; Wang, J.; Liu, G. The flight experiments of active thermal control system with micro-mechanical pumped fluid loop. Spacecr. Environ. Eng. 2017, 34, 343–349. (In Chinese)
13. Zhang, P.; Wei, X.; Yan, L.; Xu, H.; Yang, T. Review of recent developments on pump-assisted two-phase flow cooling technology. Appl. Therm. Eng. 2019, 150, 811–823.
14. Kallesoe, C.S.; Izadi-Zamanabadi, R.; Rasmussen, H.; Cocquempot, V. Model based fault diagnosis in a centrifugal pump application using structural analysis. In Proceedings of the 2004 IEEE International Conference on Control Applications, New York, NY, USA, 2–4 September 2004.
15. Beckerle, P.; Butzek, N.; Nordmann, R.; Rinderknecht, S. Application of a balancing filter for model-based fault diagnosis on a centrifugal pump in active magnetic bearings. In Proceedings of the ASME Design Engineering Technical Conference, New York, NY, USA, 30 August–2 September 2009; pp. 215–222.
16. Muralidharan, V.; Sugumaran, V. Feature extraction using wavelets and classification through decision tree algorithm for fault diagnosis of mono-block centrifugal pump. Measurement 2013, 46, 353–359.
17. Yan, Z.; Liu, H.; Tao, L.; Ma, J.; Cheng, Y. A Universal Feature Extractor Based on Self-Supervised Pre-Training for Fault Diagnosis of Rotating Machinery under Limited Data. Aerospace 2023, 10, 681.
18. Zaman, W.; Ahmad, Z.; Siddique, M.F.; Ullah, N.; Kim, J.M. Centrifugal pump fault diagnosis based on a novel Sobel-Edge scalogram and CNN. Sensors 2023, 23, 5255.
19. AlTobi, M.A.S.; Bevan, G.; Wallace, P.; Harrison, D.; Ramachandran, K.P. Fault diagnosis of a centrifugal pump using MLP-GABP and SVM with CWT. Eng. Sci. Technol. Int. J. 2019, 22, 854–861.
20. Ranawat, N.S.; Kankar, P.K.; Miglani, A. Fault diagnosis in centrifugal pump using support vector machine and artificial neural network. J. Eng. Res. 2021, 9, 99–111.
21. Huang, S.; Yu, Y.; Liu, Q.; Wang, J. Intelligent Fault Diagnosis Method for Spacecraft Fluid Loop Pumps. In Proceedings of the 34th European Safety and Reliability Conference, Cracow, Poland, 23–27 June 2024; p. 205.
22. Colombano, S.P.; Spirkovska, L.; Baskaran, V.; Aaseng, G.; McCann, R.S.; Ossenfort, J.; Smith, I.; Iverson, D.L.; Schwabacher, M. A system for fault management and fault consequences analysis for NASA's Deep Space Habitat. In Proceedings of the AIAA SPACE 2013 Conference and Exposition, San Diego, CA, USA, 10–12 September 2013; p. 5319.
23. McCann, R.S.; Spirkovska, L.; Smith, I. Putting integrated system health management to work: Development of an advanced caution and warning system for next-generation crewed spacecraft missions. In Proceedings of the AIAA Infotech@Aerospace (I@A) Conference, Reston, VA, USA, 19 August 2013; p. 4660.
24. Deb, S.; Pattipati, K.R.; Shrestha, R. QSI's integrated diagnostics toolset. In Proceedings of the 1997 IEEE Autotestcon, Anaheim, CA, USA, 22–25 September 1997; pp. 22–25.
25. Hayden, S.; Sweet, A.; Shulan, S. Lessons learned in the Livingstone 2 on Earth Observing One flight experiment. In Proceedings of the Infotech@Aerospace, Arlington, VA, USA, 26–29 September 2005; p. 7000.
26. Soots, S.; Burchett, B. Dynamic fuzzy models of the Fastrac startup sequence for fault detection. In Proceedings of the 46th AIAA Aerospace Sciences Meeting and Exhibit, Reston, VA, USA, 7–10 January 2008.
27. Zhang, X.; Liu, M. A framework for fault diagnosis of electromechanical actuator based on ensemble learning method. Spacecr. Environ. Eng. 2023, 40, 559–566. (In Chinese)
28. Dong, X.; Yu, Z.; Cao, W.; Shi, Y.; Ma, Q. A survey on ensemble learning. Front. Comput. Sci. 2020, 14, 241–258.
29. Dehuri, S.; Cho, S.B. A comprehensive survey on functional link neural networks and an adaptive PSO–BP learning for CFLNN. Neural Comput. Appl. 2010, 19, 187–205.
30. Huang, S.Q.; Qin, T.C.; Yang, X.N.; Li, F.Y.; Zhou, Y.; Yu, Y.F.; Wang, H. Study on combined stress failure envelope of CMG based on PSO-BP neural network. AIP Adv. 2023, 13, 085003.
31. Zheng, D.; Qian, Z.D.; Liu, Y.; Liu, C.B. Prediction and sensitivity analysis of long-term skid resistance of epoxy asphalt mixture based on GA-BP neural network. Constr. Build. Mater. 2018, 158, 614–623.
32. Zhang, T.; Zhang, D.G.; Yan, H.R.; Qiu, J.N.; Gao, J.X. A new method of data missing estimation with FNN-based tensor heterogeneous ensemble learning for internet of vehicle. Neurocomputing 2021, 420, 98–110.
33. Yu, Y.; Huang, S.; Liu, Q.; Wang, Z.; Zhou, H.; Wang, J. Intelligent Fault Diagnosis Method for Spacecraft Fluid Loop Pumps. Spacecr. Eng. 2024, 33, 61–67.
34. Khan, K.; Sahai, A. A comparison of BA, GA, PSO, BP and LM for training feed forward neural networks in e-learning context. Int. J. Intell. Syst. Appl. 2012, 4, 23.
Figure 1. Flowchart of the fluid loop system operation.
Figure 2. Fluid loop pump and connecting components.
Figure 3. BPNN model.
Figure 4. PSO-BPNN model.
Figure 5. GA-BPNN model.
Figure 6. FNN model.
Figure 7. Flowchart of fault diagnosis through multi-neural network fusion model.
Figure 8. Main interface of the spacecraft fluid loop pump intelligent fault diagnosis software.
Figure 9. Flowchart of the spacecraft fluid loop pump intelligent fault diagnosis software.
Figure 10. Tab interface of data-importing.
Figure 11. Tab interface of neural network training: (a) Training BP tab; (b) Training PSO-BP tab; (c) Training GA-BP tab; (d) Training FNN tab.
Figure 12. Tab interface of fault diagnosis.
Figure 13. Variation curves of the R of the training set and test set with changes in a.
Figure 14. Variation curves of the MSE of the training set and test set with changes in a.
Figure 15. Comparison of performance on neural network test sets.
Figure 16. MSE fluctuation in six-fold cross-validation.
Figure 17. R fluctuation in six-fold cross-validation.
Figure 18. Import the on-orbit telemetry data for fault diagnosis: (a) actual normal on-orbit status of the fluid loop pump; (b) actual impeller jammed on-orbit status of the fluid loop pump.
Figure 19. “Fatigue spalling of bearing raceway” faulty bearing.
Figure 20. Ground test rig for the spacecraft fluid loop pump.
Figure 21. Import the ground test data for fault diagnosis.
Table 1. Neural network dataset for training and test comprising status parameters and status labels.

Number | Temperature (°C) | Flow Rate (L/h) | Outlet Pressure (kPa) | Rotation Speed (r/min) | Status Label
1 | 5.30 | 337.75 | 339.18 | 9399.99 | 1
2 | 5.30 | 338.83 | 338.83 | 9396.07 | 1
3 | 5.30 | 339.14 | 338.72 | 9372.57 | 1
4 | 7.30 | 304.38 | 328.60 | 8270.74 | 2
5 | 6.90 | 310.68 | 330.83 | 8286.40 | 2
6 | 8.30 | 297.07 | 322.49 | 8396.07 | 2
7 | 15.30 | 239.61 | 239.30 | 5325.57 | 3
8 | 18.30 | 208.83 | 209.07 | 4298.16 | 3
9 | 20.30 | 170.84 | 180.49 | 3364.74 | 3
10 | 21.30 | 337.29 | 330.72 | 8388.24 | 4
11 | 23.28 | 342.67 | 329.18 | 8388.24 | 4
12 | 25.29 | 339.45 | 318.83 | 7702.07 | 4
13 | 35.29 | 237.44 | 238.14 | 6313.82 | 5
14 | 33.29 | 237.60 | 219.07 | 5396.07 | 5
15 | 36.30 | 199.45 | 178.60 | 4298.16 | 5
16 | 5.17 | 337.60 | 153.49 | 7.83 | 6
17 | 5.21 | 338.99 | 154.64 | 3.92 | 6
18 | 5.22 | 338.68 | 153.49 | 3.92 | 6
19 | 4.80 | 350.75 | 342.18 | 9422.99 | 1
20 | 5.00 | 344.83 | 344.83 | 9396.07 | 1
21 | 5.10 | 334.14 | 335.72 | 9352.57 | 1
22 | 7.80 | 304.38 | 328.60 | 8170.74 | 2
23 | 7.40 | 300.68 | 310.83 | 8186.41 | 2
24 | 8.80 | 287.07 | 312.49 | 8296.07 | 2
25 | 15.30 | 229.61 | 220.30 | 5225.57 | 3
26 | 18.80 | 198.83 | 209.07 | 4298.16 | 3
27 | 20.80 | 160.84 | 180.49 | 3364.74 | 3
28 | 21.80 | 327.29 | 330.72 | 8288.24 | 4
29 | 23.08 | 332.67 | 322.18 | 8298.24 | 4
30 | 25.79 | 330.45 | 308.83 | 7602.07 | 4
31 | 35.89 | 227.44 | 228.14 | 6213.82 | 5
32 | 32.09 | 230.60 | 210.07 | 5196.07 | 5
33 | 36.80 | 195.45 | 173.60 | 4198.16 | 5
34 | 4.57 | 336.60 | 155.49 | 9.83 | 6
35 | 5.01 | 334.99 | 154.64 | 4.92 | 6
36 | 4.92 | 342.68 | 159.49 | 1.92 | 6
Table 2. BPNN parameters.

Parameter | Value
Number of hidden layer neurons | 8
Maximum number of iterations | 1000
Error threshold | 1 × 10⁻⁶
Learning rate | 0.001
Table 3. PSO-BPNN parameters.

Parameter | Value
Number of hidden layer neurons | 8
Maximum number of iterations | 1000
Error threshold | 1 × 10⁻⁶
Learning rate | 0.001
Learning rate factor 1 | 1.494
Learning rate factor 2 | 1.494
Number of generations | 20
Population size | 50
Maximum velocity | 1
Minimum velocity | −1
Maximum position | 5
Minimum position | −5
Table 4. GA-BPNN parameters.

Parameter | Value
Number of hidden layer neurons | 8
Maximum number of iterations | 1000
Error threshold | 1 × 10⁻⁶
Learning rate | 0.001
Generations | 50
Population size | 5
Table 5. FNN parameters.

Parameter | Value
Number of hidden layer neurons | 4
Maximum number of iterations | 1 × 10⁴
Error threshold | 1 × 10⁻¹²
Learning rate | 0.075
Network parameter coefficient | 0.3
Membership parameter coefficient | 0.6
Table 6. R, MSE, and their variances of the multi-neural network fusion model with changes in a.

a | Rtrain | Rtest | Mtrain | Mtest | Variance of Rtest | Variance of Mtest
0 | 0.9995 | 0.9986 | 0.0036 | 0.0103 | 7.0607 × 10⁻⁶ | 1.58 × 10⁻⁴
0.05 | 0.9995 | 0.9985 | 0.0036 | 0.0104 | 7.5136 × 10⁻⁶ | 1.69 × 10⁻⁴
0.1 | 0.9995 | 0.9985 | 0.0037 | 0.0105 | 7.9589 × 10⁻⁶ | 1.80 × 10⁻⁴
0.2 | 0.9994 | 0.9984 | 0.0037 | 0.0108 | 8.8259 × 10⁻⁶ | 2.03 × 10⁻⁴
0.3 | 0.9994 | 0.9984 | 0.0038 | 0.0111 | 9.6614 × 10⁻⁶ | 2.26 × 10⁻⁴
0.4 | 0.9994 | 0.9983 | 0.0039 | 0.0114 | 1.0467 × 10⁻⁵ | 2.47 × 10⁻⁴
0.5 | 0.9994 | 0.9982 | 0.0040 | 0.0118 | 1.1245 × 10⁻⁵ | 2.68 × 10⁻⁴
0.6 | 0.9994 | 0.9982 | 0.0041 | 0.0121 | 1.1997 × 10⁻⁵ | 2.88 × 10⁻⁴
Table 7. Performance comparison of multi-neural network fusion model with four neural networks.

Model | Mtest | Reduction in Mtest | Rtest | Increase in Rtest
MNN | 0.0103 | / | 0.9986 | /
BPNN | 0.2474 | 95.84% | 0.9606 | 3.96%
PSO-BPNN | 0.0886 | 88.37% | 0.9908 | 0.79%
GA-BPNN | 0.0634 | 83.75% | 0.9864 | 1.24%
FNN | 0.0225 | 54.22% | 0.9974 | 0.01%
Table 8. Stability comparison of multi-neural network fusion model with four neural networks.

Model | Variance of MSE | Reduction in Variance of MSE | Variance of R | Reduction in Variance of R
MNN | 0.0002 | / | 0.00001 | /
BPNN | 0.9133 | 99.97% | 0.01740 | 99.94%
PSO-BPNN | 0.0326 | 99.39% | 0.00051 | 98.03%
GA-BPNN | 0.0505 | 99.60% | 0.00300 | 99.67%
FNN | 0.0019 | 89.47% | 0.00002 | 50.00%
Table 9. On-orbit telemetry data of fluid loop pump under “normal” status.

Temperature (°C) | Flow Rate (L/h) | Outlet Pressure (kPa) | Rotation Speed (r/min)
5.30 | 337.75 | 339.18 | 9399.99
5.30 | 338.83 | 338.83 | 9396.07
5.30 | 339.14 | 338.72 | 9372.57
5.30 | 340.38 | 338.60 | 9270.74
5.30 | 338.68 | 338.83 | 9286.41
5.30 | 340.07 | 338.49 | 9396.07
5.30 | 339.61 | 339.30 | 9325.57
5.30 | 338.83 | 339.07 | 9298.16
Table 10. On-orbit telemetry data of fluid loop pump under “impeller jammed” status.

Temperature (°C) | Flow Rate (L/h) | Outlet Pressure (kPa) | Rotation Speed (r/min)
5.21 | 338.53 | 153.95 | 3.92
5.09 | 342.06 | 154.18 | 7.83
5.21 | 340.07 | 154.30 | 3.92
5.20 | 338.06 | 154.30 | 7.83
5.21 | 341.91 | 154.18 | 3.92
5.21 | 340.07 | 154.30 | 3.92
5.21 | 340.22 | 154.30 | 3.92
5.21 | 337.75 | 154.53 | 3.92
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

