Article

A Multi-Dimensional Feature Fusion Recognition Method for Space Infrared Dim Targets Based on Fuzzy Comprehensive with Spatio-Temporal Correlation

Shenghao Zhang, Peng Rao, Tingliang Hu, Xin Chen and Hui Xia
1 Key Laboratory of Intelligent Infrared Perception, Chinese Academy of Sciences, Shanghai 200083, China
2 Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(2), 343; https://doi.org/10.3390/rs16020343
Submission received: 7 December 2023 / Revised: 8 January 2024 / Accepted: 13 January 2024 / Published: 15 January 2024

Abstract: Space infrared (IR) target recognition has always been a key issue in the field of space technology. The imaging distance is long, the target signal is weak, and feature discrimination is low, making it difficult to distinguish high-threat targets from decoys. Moreover, most existing methods ignore the fuzziness of multi-dimensional features; their performance depends mainly on the accuracy of feature extraction, and they have limitations in handling uncertainty and noise. This article proposes a space IR dim target fusion recognition method based on spatio-temporal correlation fuzzy comprehensive. First, we obtain the multi-dimensional IR features of the target through multi-time and multi-spectral detectors; we then establish and calculate the adaptive fuzzy-membership functions of the features. Next, we apply the entropy weight method to determine the objective fusion weight of each feature and compute the spatially fuzzified fusion judgments of the targets. Finally, the fuzzy comprehensive function performs temporal recursive judgment, and the ultimate fusion recognition result is obtained by integrating the results of each temporal recursion. Simulation and comparison experiments indicate that the proposed method improves the accuracy and robustness of IR dim target recognition in complex environments; under ideal conditions, it achieves an accuracy of 88.0% and a recall of 97.5% for the real target. This article also analyzes the impact of the fused feature combination, fusion frame count, feature extraction errors, and feature database size on recognition performance. This research can enable space-based IR detection systems to make more accurate and stable decisions, strengthening defense capabilities and helping ensure space security.

1. Introduction

Space-based infrared (IR) target recognition is of great significance in modern military and space technology. Due to the long imaging distances, the energy from space targets received by IR detectors is weak, and the extraction accuracy of radiation and other features is easily affected by many factors. In addition, to achieve effective penetration, an attacker will release a large number of decoys during the midcourse of flight to confuse the defender, making it even more difficult to identify the real target.
Although there are many kinds of decoys, each simulates the target's characteristics in one or more aspects but cannot completely reproduce all characteristics of the real target [1]. In the past, IR target recognition [2,3] usually relied on a single feature or a single sensor's data, but such methods have limitations in dealing with uncertainty and noise. For example, [2] used only the target's IR radiation information and performed classification with the support vector data description (SVDD) method; this approach is simple to implement and has high matching accuracy, but it is prone to accumulated errors when the target characteristics change greatly.
With continuous breakthroughs in offensive and defensive technologies and improvements in space-based IR surveillance systems, the accuracy of extracting multi-dimensional features from space targets has been continuously enhanced. The research in [4,5] analyzed the factors that interfere with extracting the radiation characteristics of space targets, from the perspectives of the space-based detection system and space target imaging, and provided the equivalent relationship between the radiation value at the target's entrance pupil and the extracted value. Reference [6] analyzed different motion states of space targets and extracted the IR features of targets in different postures, while [7,8] used variational mode decomposition (VMD) along with a temperature measurement method to extract and denoise the temperature and equivalent cross-sectional area. Micro-motion is the key feature distinguishing active targets from passive decoys; its intrinsic patterns and extraction methods are discussed in detail in [9,10]. The study in [11] used cross-positioning and robust locally weighted regression (RLWR) to achieve high-precision estimation of velocity characteristics under multi-satellite observation.
Multi-dimensional features reflect different attributes of the target's state. When one feature is disturbed, the others can still support correct recognition of the target. Multiple features are therefore usually fused to achieve a complementary description of the target and improve recognition accuracy. Traditional target fusion recognition methods include the weighted average, canonical correlation analysis (CCA) [12], multi-kernel learning (MKL) [13], Dempster-Shafer (DS) evidence theory [14], Bayesian inference [15], etc. These methods are theoretically mature and widely used in engineering, but they rely heavily on high-precision feature extraction; when a feature carries significant uncertainty or noise, recognition accuracy degrades severely, which can cause inestimable losses. Fusion recognition methods based on deep neural networks [16,17,18,19,20] are also a hot topic in current research. For example, [16] used a two-dimensional convolutional neural network (2D-CNN) to fuse the VV- and VH-polarized bands of Sentinel-1 SAR data with Sentinel-2 optical data and demonstrated excellent performance in terms of fusion metrics and classification accuracy. Nevertheless, the theoretical foundation of deep learning methods is not fully established, which poses challenges in interpreting certain outcomes. Moreover, such methods require large amounts of training data and computation, as well as high-performance hardware, making it difficult to meet real-time and stability requirements in engineering.
Fuzzy set theory has great advantages in dealing with uncertainty in areas such as information processing [21], task decision making [22], tracking and identification [23], and robustness analysis [24], and it is currently one of the most effective theories in the field of probabilistic reasoning. Fuzzy comprehensive is an important branch of fuzzy set theory and is widely used in current research [25,26]. It can fuse information from different features to reduce the impact of uncertainty and improve the reliability of recognition. However, traditional static fuzzy comprehensive [27,28,29] tends to ignore the correlation and complementarity of information between earlier and later moments and does not fully consider the spatio-temporal relationship among variables of different features at different moments, leading to errors in inference results.
Aiming at the key problem of identifying high-threat space IR targets, this article proposes a fusion recognition algorithm based on spatio-temporal correlation fuzzy comprehensive. This method is simple to implement, fully considers the target recognition results of different features at different times, and has higher accuracy and stability than other methods. This work mainly makes the following three contributions:
(1) A fuzzy comprehensive fusion recognition method based on spatio-temporal correlation is proposed, which can stably, quickly, and accurately identify the types of space IR targets.
(2) The effects of feature combination, fused frame counts, feature extraction error, and feature database size on recognition accuracy were analyzed.
(3) The performances of the proposed method and the comparison method are discussed under various interference environments.
The subsequent sections of this article are organized as follows. Section 2 introduces prerequisite knowledge and background. Section 3 details the multi-dimensional feature fusion recognition method based on spatio-temporal correlation fuzzy comprehensive. Section 4 presents the simulation and comparison experiments and analyzes the impact of different influencing factors on accuracy. Finally, the discussion and conclusions are given in Section 5 and Section 6, respectively.

2. Preliminaries

2.1. Multi-Dimensional Features of Space Targets under Space-Based IR Observation

When space targets fly, they are usually far away from the IR detector. The amount of radiation from the target reaching the entrance pupil of the detector is weak, and the image only has one or a few pixels, resulting in the loss of target shape and size information. Recognition only from point target images is obviously undesirable [30]. Accurately extracting multi-dimensional features of targets from weak signals is the primary prerequisite for recognition.
Radiation characteristics are the primary attributes in the multi-dimensional feature extraction of space targets, and the accuracy of radiation extraction directly affects the identification result [5]. The radiation intensities extracted by detectors operating in different spectral bands differ; commonly used bands are the midwave (3~5 μm) and longwave (8~12 μm) IR. Different space targets have unique radiation intensity characteristics, which provide an important reference for their identification. However, the extraction of radiation features is affected by many factors, such as optical point spread, cross-pixel effects, detector noise, and calibration error. These uncertainties seriously affect the recognition results.
The temperature level and its continuous changes reflect the internal state changes of the target during flight. The true target has significant thermal inertia and almost maintains the initial temperature; the balloon decoy has low thermal inertia and soon reaches the equilibrium temperature [31]. However, the extraction of temperature features is susceptible to interference from environmental radiation, and further confirmation is needed in terms of reliability and accuracy.
The physical meaning of the target equivalent cross-sectional area is the product of the target detection cross-section and the emission coefficient. We calculated the blackbody radiance at the current temperature based on Planck’s law and then divided the target’s IR radiation intensity by the blackbody radiance to obtain the equivalent cross-sectional area. It should be noted that this feature is different from the radiation cross-section of the target body and reflects the target cross-section features extracted by the sensor signal processing end.
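For concreteness, the sketch below (our illustration, not the authors' code; the helper names, the rectangle-rule integration, and the 8~12 μm band default are assumptions) shows how an equivalent cross-sectional area can be derived from a measured radiation intensity and an estimated temperature via Planck's law.

```python
import numpy as np

H = 6.626e-34   # Planck constant (J*s)
C = 2.998e8     # speed of light (m/s)
KB = 1.381e-23  # Boltzmann constant (J/K)

def band_radiance(temp_k, lam_lo, lam_hi, n=500):
    """Band-integrated blackbody radiance (W sr^-1 m^-2) from Planck's law."""
    lam = np.linspace(lam_lo, lam_hi, n)  # wavelength grid in meters
    spectral = 2 * H * C**2 / lam**5 / (np.exp(H * C / (lam * KB * temp_k)) - 1)
    return spectral.sum() * (lam[1] - lam[0])  # simple rectangle-rule integral

def equivalent_area(intensity_w_sr, temp_k, band=(8e-6, 12e-6)):
    """EA = IR radiation intensity / blackbody radiance at the estimated
    temperature; physically the product of cross-section and emissivity."""
    return intensity_w_sr / band_radiance(temp_k, *band)
```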
Considering factors such as weight control and design cost, the total mass and mass distribution of each decoy will not be the same as that of the real target. This difference leads to different micro-motion characteristics caused by the transverse impulse moment of the target when it is released. According to the different micro-motion of the target, a series of micro-motion features such as precession angle, nutation angle, rotation angle, and micro-motion period are also different. This article mainly considers the micro-motion period. In fact, passive decoys can be effectively identified using the micro-motion period [32].
In addition to the above features, space targets also include different features under different sensors. This article mainly identifies targets in the background of space-based IR detection. Based on the principle of robustness of identification, five features, i.e., midwave IR radiation intensity (MW), longwave IR radiation intensity (LW), temperature (T), equivalent cross-sectional area (EA), and micro-motion period (P), are considered. These features have clear physical meanings, are less difficult to extract, and are highly distinguishable. Figure 1 shows an overview of multi-dimensional feature extraction of space dim IR targets.

2.2. Space Target IR Simulation Model

Due to the high sensitivity of the fields involved, it is generally impossible to obtain actual measured data on high-threat real targets in research. IR simulation of different space targets is very necessary. This article simulates five categories of typical space targets (true target, heavy decoy, balloon decoy, equal-shaped light decoy, debris) based on the spatial IR model given in [33,34]. The specific parameters are shown in Table 1.
In Table 1, $\alpha$ and $\beta$ represent the azimuth and pitch angles of the corresponding micro-motion axes in the reference coordinate system, and $\omega$ represents the angular speed of the corresponding micro-motion mode. In this article, 1000 groups of each type of target are simulated, for a total of 5000 groups. From each category, 200 groups are randomly selected as targets to be identified, and the remaining 4000 groups are used as the original data of the multi-dimensional feature database. Combining the physical properties and motion patterns of the targets, the feature extraction methods in [7,8,32,33] are used to obtain the corresponding multi-dimensional IR feature data. In a real detection environment, many factors affect recognition, such as optical point spread, cross-pixel effects, detector noise, calibration error, feature extraction error, background, flicker, and edges. We assume that the noise and errors reflected on each feature follow a normal distribution with zero mean and standard deviations $\sigma_{MW}$, $\sigma_{LW}$, $\sigma_{T}$, $\sigma_{EA}$, and $\sigma_{P}$, respectively. By setting different standard deviations, complex space detection scenarios are simulated.
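As a minimal sketch of this noise model (our code; the dictionary layout and the placeholder sequences are assumptions), zero-mean Gaussian perturbations with per-feature standard deviations can be added to each simulated feature sequence:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_feature_noise(features, sigmas):
    """features: dict name -> (L,) clean sequence; sigmas: dict name -> std."""
    return {name: seq + rng.normal(0.0, sigmas[name], size=seq.shape)
            for name, seq in features.items()}

# Scene-1-like settings from Table 3; 50 s at 25 Hz gives L = 1250 frames.
sigmas = {"MW": 1.0, "LW": 1.0, "T": 10.0, "EA": 0.1, "P": 0.1}
clean = {name: np.ones(1250) for name in sigmas}  # placeholder sequences
noisy = add_feature_noise(clean, sigmas)
```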

2.3. Fuzzy Comprehensive Function

We assume that the possibility distribution of the fusion system judging target sample point $u$ as the $m$-th target type is $A_m(u) = \{ a_{m1}(u), a_{m2}(u), \dots, a_{mN}(u) \} \in [0,1]^N$, where $a_{mn}(u)$ represents the possibility measure on feature $n$; $m = 1, 2, \dots, M$ is the target type index, and $n = 1, 2, \dots, N$ is the feature index. The fuzzy comprehensive function [35] can then be defined as the mapping:
$$ S_N : [0,1]^N \to [0,1] \tag{1} $$
That is, the judgment results of the $N$ features are mapped into a single judgment result. The mapping should satisfy the following two conditions:
(a) Order preservation. For $A_p(u), A_q(u) \in [0,1]^N$:
$$ A_p(u) \le A_q(u) \Rightarrow S_N(A_p(u)) \le S_N(A_q(u)) \tag{2} $$
(b) Comprehensiveness. For $A_m(u) \in [0,1]^N$:
$$ \bigwedge_{n=1}^{N} a_{mn}(u) \le S_N(A_m(u)) \le \bigvee_{n=1}^{N} a_{mn}(u) \tag{3} $$
After introducing the fuzzy comprehensive function, we define the fuzzy membership that judges target sample point $u$ as type $m$ as $a_m(u) = S_N(A_m(u))$, where $a_m(u)$ is the final fusion decision result and reflects the fusion process of the multi-dimensional features.
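For intuition, the following small check (our example, not from the original text) verifies that a weighted arithmetic mean with weights summing to one satisfies both conditions and is therefore a valid fuzzy comprehensive function:

```python
import numpy as np

def s_fn(a, w):
    """A candidate S_N: weighted arithmetic mean over [0,1]^N."""
    return float(np.dot(a, w))

w = np.array([0.4, 0.35, 0.25])           # weights summing to 1
a_p = np.array([0.2, 0.5, 0.3])
a_q = np.array([0.4, 0.6, 0.3])           # a_q >= a_p element-wise

assert s_fn(a_p, w) <= s_fn(a_q, w)               # (a) order preservation
assert a_p.min() <= s_fn(a_p, w) <= a_p.max()     # (b) comprehensiveness
```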

3. Proposed Method

There is often no direct one-to-one correspondence between the characteristic information of space targets and the target type. Judging the type of a target point directly from its characteristic information therefore causes serious misjudgments. The essence of target recognition based on feature extraction is to establish a mapping relationship between feature information and target types. The proposed method uses the spatio-temporal correlation fuzzy comprehensive method to establish this mapping. It is divided into five parts: multi-dimensional IR feature extraction; establishment and calculation of the fuzzy-membership function; determination of the feature fusion weights with the entropy weight method (EWM); spatio-temporal fusion judgment; and database updating. The extraction of multi-dimensional IR features from targets serves as the data source for the entire recognition process; from these data, the fuzzy memberships are calculated, and inputting them into the spatio-temporal correlation fusion judgment identifies the target's category. The fuzzy-membership functions and initial fusion weights are computed from the multi-dimensional IR feature database, with the EWM used for the latter. The identified multi-dimensional IR features of the target are added to the database after expert appraisal to further correct the fuzzy-membership function and fusion weights. The multi-dimensional IR feature extraction methods are complex and varied and are not the central focus of this article, so they are not described further. The entire method's flowchart is depicted in Figure 2.

3.1. Establishment of Fuzzy-Membership Function and Calculation

The space-based IR surveillance system receives various target information reported by the detectors, accumulating a large amount of historical data. Combining different feature extraction methods, a multi-dimensional feature database of space IR targets can be constructed. We assume that the current database contains identified data for $M$ targets; each target type has $N$ features and $K$ groups of data, and each group has length $L$. The fuzzy set on the database can then be expressed as:
$$ \Theta_{mnkl}, \quad m = 1, 2, \dots, M;\ n = 1, 2, \dots, N;\ k = 1, 2, \dots, K;\ l = 1, 2, \dots, L \tag{4} $$
For the target to be identified, the fuzzy set is expressed as:
$$ \theta_{nl}, \quad n = 1, 2, \dots, N;\ l = 1, 2, \dots, L \tag{5} $$
Since target feature extraction is subject to interference from noise and other factors, the extracted features carry uncertainty. Determining the membership function is, in effect, fuzzifying the extracted features. Using the normal membership function, we define the membership of sample point $\theta$ of the fuzzy set under test to the $m$-th target and $n$-th feature as:
$$ \chi_{mn}^{l}(\theta) = \exp\left( -\frac{(\theta - b_{mn})^2}{2\sigma_{mn}^2} \right) \tag{6} $$
where $b_{mn}$ and $\sigma_{mn}$ are, respectively, the central value and standard deviation of the fuzzy set $\Theta_{mnkl}$ in the multi-dimensional feature database, calculated as follows:
$$ b_{mn} = \frac{1}{KL} \sum_{k=1}^{K} \sum_{l=1}^{L} \Theta_{mnkl} \tag{7} $$
$$ \sigma_{mn} = \sqrt{ \frac{1}{K} \sum_{k=1}^{K} \left( \frac{1}{L} \sum_{l=1}^{L} \Theta_{mnkl} - b_{mn} \right)^{2} } \tag{8} $$
The fuzzy membership represents the degree to which the feature values are consistent with the different targets. The normal membership function naturally represents the distribution characteristics of fuzzy sets and has a simple mathematical form; in addition, its continuity and smoothness help avoid introducing discontinuities into the system's reasoning, improving stability and performance. Inputting the features of the target to be identified into Equation (6) yields the membership matrix $X = (\chi_{mn}^{l})_{M \times N \times L}$ of the target to be identified.
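A minimal vectorized sketch of this step (our code; the array shapes and function name are assumptions) computes the database statistics of Equations (7) and (8) and the Gaussian memberships of Equation (6):

```python
import numpy as np

def membership(theta, database):
    """theta: (N, L) features of the target to identify.
    database: (M, N, K, L) identified feature database Theta_{mnkl}.
    Returns chi with shape (M, N, L)."""
    b = database.mean(axis=(2, 3))                  # (M, N) central values, Eq (7)
    group_means = database.mean(axis=3)             # (M, N, K) per-group means
    sigma = np.sqrt(((group_means - b[..., None]) ** 2).mean(axis=2))  # Eq (8)
    diff = theta[None, :, :] - b[..., None]         # broadcast to (M, N, L)
    return np.exp(-diff ** 2 / (2 * sigma[..., None] ** 2))            # Eq (6)
```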

3.2. EWM to Determine Fusion Weight

Information entropy is a measure of the uncertainty of a system state, and EWM uses the entropy value of each feature to determine its fusion weight. If a feature has a greater degree of dispersion, it contains more information and plays a greater role in fusion recognition, so its weight is correspondingly larger. EWM has a strict mathematical basis, and the weights are calculated from the original data in the database, which eliminates the influence of subjective factors [36,37]. The specific steps are as follows:
(a) Building the target feature matrix. According to Equation (7), the target feature matrix $B = (b_{mn})_{M \times N}$ is constructed from the multi-dimensional feature database, where $b_{mn}$ is the central value of the $m$-th target under the $n$-th feature.
(b) Matrix normalization. The purpose of normalization is to remove dimensional effects and facilitate objective determination of the feature weights. As the target features lack explicit directionality, they cannot be normalized directly with a simple forward or reverse range method. In this article, a normalized value is first calculated from the absolute distance between each feature value and the mean value; the forward range method is then applied to scale it into $[0, 1]$:
$$ c'_{mn} = 1 - \frac{\left| b_{mn} - \mu \right|}{\mu} \tag{9} $$
$$ c_{mn} = \frac{c'_{mn} - \min_{1 \le m \le M} c'_{mn}}{\max_{1 \le m \le M} c'_{mn} - \min_{1 \le m \le M} c'_{mn}} \tag{10} $$
where $\mu = \frac{1}{M} \sum_{m=1}^{M} b_{mn}$. Finally, the normalized matrix $C = (c_{mn})_{M \times N}$ is obtained.
(c) Calculating the feature entropy. First, the proportion of each feature under the different targets is calculated, giving the matrix $D = (d_{mn})_{M \times N}$, where $d_{mn} = c_{mn} / \sum_{m=1}^{M} c_{mn}$. According to the definition of information entropy in information theory, the entropy of each feature is:
$$ e_n = -\frac{1}{\ln M} \sum_{m=1}^{M} d_{mn} \ln d_{mn} \tag{11} $$
(d) Calculating the feature fusion weights. The fusion weight of each feature is calculated from its information entropy, as shown in Equation (12). The fusion weight vector is $w = (w_n)_N$.
$$ w_n = \frac{1 - e_n}{N - \sum_{n=1}^{N} e_n} \tag{12} $$
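The EWM steps above map directly onto a few array operations; a minimal sketch (our code, with the usual convention that $0 \cdot \ln 0 = 0$) is:

```python
import numpy as np

def entropy_weights(B):
    """B: (M, N) matrix of central values b_mn; returns weights w, sum(w) = 1."""
    M, N = B.shape
    mu = B.mean(axis=0)                                        # per-feature mean
    c = 1 - np.abs(B - mu) / mu                                # Eq (9)
    c = (c - c.min(axis=0)) / (c.max(axis=0) - c.min(axis=0))  # Eq (10)
    d = c / c.sum(axis=0)                                      # proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        e = -np.nansum(d * np.log(d), axis=0) / np.log(M)      # Eq (11), 0*ln0 := 0
    return (1 - e) / (N - e.sum())                             # Eq (12)
```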

3.3. Spatio-Temporal Fusion Judgment

The traditional fuzzy comprehensive method is based on the similarity of feature matching, and its fusion results rely on the design of typical features and do not fully utilize the constrained prior knowledge between features. At the same time, static fuzzy comprehensive cannot express the dynamic changes in feature variables in the sequence and has limitations. Therefore, this article proposes a fuzzy comprehensive recognition method of spatio-temporal correlation. The entire spatio-temporal fusion judgment process is shown in Figure 3.

3.3.1. Spatio-Domain Feature Fusion

After obtaining the fusion weights, feature fusion is completed based on the decision information of the features and certain rules. This article uses a weighted summation to fuse the target features. As shown in Equation (13), multiplying the membership matrix $X^l$ of the $l$-th frame of the target to be identified by the fusion weight vector $w$ yields the local judgment result $F^l$:
$$ F^l = X^l \cdot w = \begin{bmatrix} \chi_{11}^l & \chi_{12}^l & \cdots & \chi_{1N}^l \\ \chi_{21}^l & \chi_{22}^l & \cdots & \chi_{2N}^l \\ \vdots & \vdots & \ddots & \vdots \\ \chi_{M1}^l & \chi_{M2}^l & \cdots & \chi_{MN}^l \end{bmatrix} \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_N \end{bmatrix} = \begin{bmatrix} f_1^l \\ f_2^l \\ \vdots \\ f_M^l \end{bmatrix} \tag{13} $$
Spatio-domain feature fusion is performed on each frame to obtain the spatial fusion matrix $F = (f_m^l)_{M \times L}$.
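Equation (13) is a per-frame matrix-vector product; a one-line sketch (our code) applies it to all frames at once:

```python
import numpy as np

def spatial_fusion(chi, w):
    """chi: (M, N, L) membership matrix; w: (N,) fusion weights.
    Returns the spatial fusion matrix F with shape (M, L), Eq (13)."""
    return np.einsum("mnl,n->ml", chi, w)
```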

3.3.2. Temporal-Domain Recursive Fusion

Due to the existence of interference, the target recognition result of a single frame is often unreliable. It is necessary to recursively accumulate the local judgment results of spatial fusion. We assume the number of fused frames is $S\ (0 < S \le L)$, and the fusion result is obtained by fusing the historical data of the previous $S-1$ frames with the current frame $l\ (S \le l \le L)$. Using Equation (14), the temporal-domain recursive fusion judgment over consecutive frames is obtained:
$$ g_m^l = \left( f_m^l \cdot \prod_{i=l-S+1}^{l-1} g_m^i \right)^{\frac{1}{S}} \tag{14} $$
where $g_m^i$ and $g_m^l$ are the recursive fusion results of the historical frames and the current frame, respectively. In this article, the fusion frame count $S$ is set to 70; the impact of the number of fused frames is discussed in the experimental section. For the sample to be identified, the last frame, $g_m^L$, is taken as the final fusion result. The temporal-domain accumulation fusion result is expressed as $G = (g_m^L)_M$.
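A minimal sketch of the recursion in Equation (14) (our code; the warm-up handling of the first $S-1$ frames is an assumption, since the original does not specify it) is:

```python
import numpy as np

def temporal_fusion(F, S=70):
    """F: (M, L) spatial fusion results; returns g with shape (M, L)."""
    M, L = F.shape
    g = F.copy()                       # frames before the window fills keep F as-is
    for l in range(S - 1, L):
        window = g[:, l - S + 1:l]     # previous S-1 recursive results
        g[:, l] = (F[:, l] * window.prod(axis=1)) ** (1.0 / S)  # Eq (14)
    return g

# Final decision (Section 3.3.3): class with maximum membership in the last frame.
# res = np.argmax(temporal_fusion(F)[:, -1])
```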

3.3.3. Final Judgment

The result $G$ of temporal-domain accumulation fusion is not a crisp category; the sample belongs to each category with a certain degree of membership. At the final level of judgment, a clear category decision is needed, and the principle of maximum membership is used to process the fuzzy judgment result:
$$ res = \underset{1 \le m \le M}{\operatorname{argmax}}\ g_m^L \tag{15} $$
Combining Equations (1) and (12)–(15), the entire fusion recognition calculation process can be written as:
$$ res = S(\chi_{mn}^l) = \underset{1 \le m \le M}{\operatorname{argmax}} \left( \left[ (\chi_{mn}^{L})_{M \times N} \cdot (w_n)_N \right]_m \cdot \prod_{i=L-S+1}^{L-1} g_m^i \right)^{\frac{1}{S}} \tag{16} $$
where $S(\cdot)$ is the spatio-temporal correlation fuzzy comprehensive function designed in this article. It accounts for both temporal-domain and spatio-domain fusion, so the decision process does not rely on a single feature or a single frame, and the final result is more stable and credible.

3.4. Expert Identification and Updates

Since the proposed method has certain requirements for the integrity and reliability of the historical data in the multi-dimensional feature database, the steps of expert identification and database updating are added to correct the recognition results in a timely manner and continuously improve the historical data. This improves the adaptability of the algorithm to complex detection environments, as the real targets and decoys are constantly updated. The number of database samples, $K$, is discussed further in the experimental section.

4. Experiments

This section is divided into three parts: simulation experiments, comparison experiments, and analysis of influencing factors. All methods were run on a PC platform (Intel Core i7-11800H) produced by Dell (Round Rock, TX, USA). Except for the comparative deep learning method, which was implemented in Python 3.8 with TensorFlow 2.4, the methods were implemented in MATLAB R2021b.

4.1. Simulation Experiment

Since the measured radiation characteristic data of real high-threat space targets and decoys are not yet available, the method in this article mainly uses the simulation data generated by the space IR target model simulation in Section 2.2 for testing. Among them, 1000 groups of targets to be identified were randomly selected, including 200 of each category, and no noise was added. The number of fused frames S was 70, the number of fused features N was 5, and the size K of the multi-dimensional feature database was 4000.
Due to the high destructiveness of real targets, missed detections and false alarms can cause incalculable losses. Therefore, the main indicators considered in the experiment are recall, false alarm rate (FAR), missed alarm rate (MAR), and overall accuracy. Note that when calculating the recall, FAR, and MAR of a given category, all other categories were regarded as negative samples. The results are presented in Table 2. The overall recognition accuracy of the proposed method reached 88.0%; the recall of true targets was 97.5%, the FAR was 0.5%, and the MAR was 2.5%. In addition, the balloon decoy recall was the lowest, at 61.0%, and the FAR of the equal-shaped light decoy was the highest, at 8.5%.
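For clarity, the one-vs-rest convention used for these metrics can be written as a small helper (our sketch, not the authors' evaluation code):

```python
import numpy as np

def class_metrics(y_true, y_pred, cls):
    """Recall, FAR, and MAR for one class; all other classes are negatives."""
    pos = (y_true == cls)
    neg = ~pos
    recall = np.sum(pos & (y_pred == cls)) / pos.sum()
    far = np.sum(neg & (y_pred == cls)) / neg.sum()   # false alarm rate
    mar = 1.0 - recall                                # missed alarm rate
    return recall, far, mar
```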
The specific recognition results of each category in the test sample are shown in Figure 4. Among them, (a)–(e), respectively, represent the confidence that the sample of this category belongs to each category. It can be seen from the figure that the recognition results of heavy decoys and true targets were better, with only 4 to 5 errors, and the confidence of the heavy decoy reached 0.8; the recognition results of the balloon decoy were the worst, with a confidence of 0.3–0.4. In addition, true targets were easily confused with heavy decoys; balloon decoys, equal-shaped light decoys, and debris were also easily confused. This is the result of the combined effect of multi-dimensional features.

4.2. Comparison Experiment

To further validate the effectiveness and novelty of the proposed method, a comparative analysis was conducted with existing methods [38,39,40]: two traditional fusion recognition algorithms and one deep neural network method. Zhou's method uses feature fusion based on Bayesian decision theory to identify radar deception jamming signals and improves the fusion algorithm with kernel density estimation [38]. Gao proposed a similarity standard based on cross-entropy to modify the basic probability assignment of multiple features and then performed fusion recognition based on DS evidence theory [39]. Dual-channel long short-term memory (DC-LSTM) is a deep learning method that feeds the grayscale sequences of space IR objects into two LSTM channels to extract global and local features, respectively [40]. In the comparison experiment, five scenes with different noise and error environments were set up; the scene settings are shown in Table 3. The experimental results are shown in Table 4, where the recall is the recall of the true target.
As can be seen from Table 4, the recall and accuracy of the method proposed in this article were better than other comparison methods in all scenarios except scene 0. From the perspective of scenes, all methods gradually decreased in accuracy as noise and error increased, and the recall of true targets decreased. From the perspective of indicators, the indicators of the proposed method were better than other methods in most scenes. DC-LSTM had the highest accuracy in scene 0, but as the scene became more complex, the accuracy dropped sharply, indicating that this method is difficult to adapt to complex spatial environments. Gao’s method also has the same problem. In scene 4, the recall of Gao’s method was 0, which was invalid. However, the proposed method was also able to maintain an accuracy of 65.8% and a recall of 62.5%. Overall, the proposed method performed well in terms of both recognition accuracy and robustness.
Figure 5 plots the confusion matrices of the four methods under scene 1. The table in the right column of each picture shows the recall and MAR of each category. Zhou’s method (Figure 5a) easily identifies the equal-shaped light decoy as debris, both of which have consistent micromotion patterns and similar temperature changes. The recall of Gao’s method (Figure 5b) for the balloon decoy, equal-shaped light decoy, and heavy decoy was less than 0.5, and the recall for the true target was only 61.5%, the lowest among the four methods. The DC-LSTM method (Figure 5c) showed a balanced performance in terms of identifying various categories in this scene. The proposed method (Figure 5d) had the best performance in the recall of true targets, reaching 96%, and had no obvious shortcomings in other categories.
In offensive and defensive confrontations in space, time consumption is a very important indicator. Based on the recognition process of five scenes and one thousand targets to be identified in each scene, the average runtime for the four methods is presented in Table 5. Gao’s method had the shortest running time of 0.0539 s; DC-LSTM had the longest running time of 1.3674 s, which was related to its huge number of parameters; the proposed method had a running time of 0.1283 s, which is still acceptable in space-based IR detection systems.
We plotted the receiver operating characteristic (ROC) curve under scene 1 in Figure 6 to further compare the performance of the four methods under different criteria. The area under curve (AUC) is a performance indicator to measure the pros and cons of a classifier. In Figure 6, the proposed method achieves the best performance with an AUC of 0.948, indicating that it has the best classification ability.

4.3. Analysis of Influencing Factors

4.3.1. Analysis of Fused Feature Combinations

We further explore the impact of the fused feature combination on the accuracy of the proposed method. As mentioned in Section 2.1, the features fused in this article are the midwave radiation intensity (MW), longwave radiation intensity (LW), temperature (T), equivalent cross-sectional area (EA), and micro-motion period (P). The results are shown in Figure 7.
From the perspective of the number of features, the average accuracy from a single feature to five features gradually increased. For example, in scene 0, the average accuracies were 0.5604, 0.6927, 0.7932, 0.8442, and 0.88, respectively. This is consistent with actual experience. Adding features can enable the recognition method to better capture non-linear relationships in the data and improve the recognition ability under complex patterns. We also noticed that as the number of features increased, the accuracy improvement gradually slowed down. The added features may contain redundant information or noise, which may negatively impact performance.
In terms of the features themselves, under scene 0 the most accurate combinations of each size were, in order, EA (0.7), MW-T (0.753), MW-T-P (0.83), and MW-T-EA-P (0.865); the most accurate combinations all involved the midwave radiation intensity and temperature. In scene 4, the most accurate combinations of each size were, in order, LW (0.407), LW-P (0.496), LW-T-P (0.574), and MW-LW-T-P (0.647); here the most accurate combinations all involved the longwave radiation intensity and the micro-motion period. These situations are related to the physical properties of the features themselves and to the noise generation mode, and they can provide a reference for feature selection in practical applications.
From the perspective of scenes, as the noise increased, combinations with fewer features experienced a faster decline in accuracy. For example, the average accuracy of the combination of five features dropped by 0.222 from scene 0 to scene 4, but the average accuracy of the single feature combination dropped by 0.2888. In fact, in scene 4, the lowest accuracy reached 0.2 (equivalent to random selection) and did not drop further. These situations illustrate that multi-dimensional features are more robust against noise and can better adapt to complex space environments.

4.3.2. Analysis of Fused Frame Counts

In the proposed method, a parameter S for temporal recursive fusion frame counts is set. Figure 8 illustrates the impact of fusion frame counts on recognition results. When there is no temporal recursive fusion, the last frame is used as the recognition result. From the graph, it can be observed that with an increase in the number of fusion frames, accuracy and recall gradually improved. The highest accuracy was achieved at around 70 frames, after which it began to stabilize. As the scene became more complex, the peak of accuracy gradually shifted to the left. This is because, with increasing noise, more fusion frames lead to more interference. In specific scenes, it is necessary to choose an appropriate number of fusion frames.

4.3.3. Analysis of Feature Extraction Errors

The effects of different feature extraction errors on recognition accuracy are not uniform. In each scene, the presence or absence of each of the five feature extraction errors was varied in turn; Figure 9 illustrates the impact of each feature extraction error on the recognition process. Through linear fitting, the magnitudes of the five feature extraction errors exhibited an approximately linear relationship with the final classification accuracy. Additionally, the accuracy changes caused by the different feature errors differed. From the graph, it can be observed that P had the greatest impact on accuracy, followed by T, LW, and EA, with MW having the least impact. For the recall of true targets, P had the greatest impact, followed by LW, T, MW, and EA. Therefore, considering the overall recognition situation, MW was relatively robust, while P required more accurate extraction; considering the recognition of true targets, EA was the relatively robust feature, while P again required more accurate extraction.

4.3.4. Analysis of Feature Database Size

Figure 10 illustrates the impact of the size K of the multi-dimensional feature database on recognition results in different scenes. Clearly, as the size of the feature database decreased, both accuracy and the recall of true targets gradually declined. This trend became more pronounced as the scenes became more complex. In scene 0, the recall of true targets for K = 4000 (0.945) differed by only 0.22 from K = 1000 (0.725). However, in scene 4, the recall of true targets for K = 4000 (0.87) differed by 0.85 from K = 1000 (0.02). These cases demonstrate that in more complex environments, the recognition of space targets requires ensuring an ample sample library. A larger feature database can provide more information, enabling the model to better learn the data.

5. Discussion

The proposed method enhances the recognition accuracy of space IR targets and demonstrates robustness in complex scenes. This can be attributed to two key characteristics.
(1) Spatio-temporal correlation fusion method: The spatio-temporal correlation fusion method maximally preserves information in both the spatial and temporal domains. Figure 11 and Figure 12 provide specific examples in scene 4.
Figure 11 illustrates the confidence outputs for various classes of a specific true target sample in the last frame. Figure 11a shows the output of single-feature decision making, while Figure 11b presents decision making after the fusion of five features. It can be observed that without spatial fusion, when making decisions based on the maximum confidence, three out of the five features mistakenly identified the true target as a balloon decoy, debris, and a heavy decoy, respectively. After spatial fusion of the five features, the highest confidence was assigned to the true target, which was consistent with reality.
Figure 12 displays the real-time confidence outputs for the various classes of a specific true target sample. Figure 12a shows the output without temporal fusion, while Figure 12b shows the output with a fusion frame count of 50 (2 s). It can be observed that without temporal fusion, the confidence of the decoys exceeded that of the true target at some time points, leading to misjudgments. With temporal fusion included, the confidence of the true target remained consistently higher than that of the other types.
From these two figures, it can be seen that fusing the various decision results leads to stable recognition outcomes. Target misjudgments under single features and single frames are corrected after spatio-temporal correlation fusion. The spatio-temporal correlation fusion algorithm effectively integrates the various decision results, improving the reliability of space IR target recognition.
(2) Use of fuzzy set theory: The extraction of multi-dimensional features from space IR dim targets involves a significant amount of noise and errors. Using them directly as inputs for the recognition algorithm would introduce considerable instability. In this article, based on the principles of fuzzy set theory, the uncertainty of extracted features is characterized using fuzzy membership degrees. We propose a multi-dimensional feature fusion algorithm based on EWM. This algorithm assigns appropriate weights to different features based on the fuzziness of the effective measurement sets of each feature. The weights adaptively change with updates to the feature library, demonstrating objectivity and environmental adaptability.
This research focuses on multi-dimensional feature fusion recognition of space IR targets and has achieved certain results. However, there are still some shortcomings in the research work:
(1) The data used in this research are processed multi-dimensional feature data of space IR targets. In a real detection process, however, many factors affect recognition, such as optical point spread, cross-pixel effects, detector noise, calibration error, and feature extraction error. This article uniformly simplifies these factors: we assume that the noise and errors reflected on each feature follow a normal distribution with a mean of 0 and a standard deviation of $\sigma$. The actual effects of these factors on recognition are complex and varied. In future research, we will strive to analyze the entire detection process more comprehensively to optimize our method.
(2) Regarding the small-target issue: small targets are far from the imaging plane and usually occupy only a few pixels in the detector's IR image. In addition, owing to the instability of the upstream feature extraction methods, the extracted features may not meet the input requirements of the proposed method. Although we briefly analyzed the impact of feature combinations on recognition in Section 4.3.1, more in-depth research on the small-target issue is still needed to meet the application requirements of more scenes.
(3) This article only briefly discusses the impact of the fused feature combination on accuracy. However, the variable orbits and complex parameters of satellites in orbit will inevitably affect velocity estimation. The influence of satellite parameters therefore needs to be further explored in the future to meet the application requirements of more scenarios.

6. Conclusions

In the identification of high-threat targets, the uncertainty of each dimension's features directly affects the correctness of the threat judgment. To improve recognition accuracy and robustness, this article proposes a recognition method based on multi-dimensional feature fusion, which uses the spatio-temporal correlation fuzzy comprehensive method to handle uncertain information and feature weight allocation. Simulation and comparison experiments verified the effectiveness of the proposed method: under an ideal environment, its recognition accuracy reaches 88.0%, and its recall for high-threat real targets reaches 97.5%. We also analyzed the impact of the fused feature combination, fusion frame count, feature extraction error, and feature database size on the accuracy of the proposed method. Our research provides valuable insights into the identification of high-threat space targets and has value for engineering applications in specific fields.

Author Contributions

Conceptualization, S.Z.; methodology, S.Z.; software, S.Z.; validation, H.X., T.H. and X.C.; formal analysis, X.C.; investigation, T.H.; resources, P.R.; data curation, S.Z.; writing—original draft preparation, S.Z.; writing—review and editing, P.R.; visualization, S.Z.; supervision, T.H. and H.X.; project administration, H.X.; funding acquisition, P.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 62175251.

Data Availability Statement

The data presented in this study are available on request from the corresponding author (P.R.). The data are not publicly available due to the sensitivity of the data.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Dai, H.; Zhou, Y.; Huang, S.; Yin, X. Target recognition of ballistic middle segment based on infrared multiple features. J. Command Control 2019, 5, 302–307. [Google Scholar]
  2. Kang, H.; Huang, S.; Ling, Q.; Wu, J.; Zhong, Y. A detection method based on spectrum characteristics of missile plume using SVDD algorithm. Infrared Technol. 2015, 37, 696–700. [Google Scholar]
  3. Silberman, G.L. Parametric classification techniques for theater ballistic missile defense. Johns Hopkins APL Tech. Dig. 1998, 19, 322–339. [Google Scholar]
  4. Li, W.; Liu, Z.; Mu, Y.; Yang, R.; Zhang, X. Modeling and research of a space-based spacecraft infrared detection system. Appl. Opt. 2017, 56, 2428–2433. [Google Scholar] [CrossRef]
  5. Liu, J. Research on Features Extraction and Recognition Based on Infrared Signatures of Space Targets; National University of Defense Technology: Changsha, China, 2017. [Google Scholar]
  6. Gu, M.; Ren, Q.; Zhou, J.; Liao, S. Analysis and identification of infrared radiation characteristics of different attitude targets. Appl. Opt. 2021, 60, 109–118. [Google Scholar] [CrossRef] [PubMed]
  7. Liu, Z.; Yao, S.; Mao, H.; Dai, C.; Wei, H. A model of faint target temperature estimation based on dual-band infrared. Procedia Comput. Sci. 2019, 147, 151–157. [Google Scholar]
  8. Zhang, H.; Rao, P.; Chen, X.; Xia, H.; Zhang, S. Denoising and Feature Extraction for Space Infrared Dim Target Recognition Utilizing Optimal VMD and Dual-Band Thermometry. Machines 2022, 10, 168. [Google Scholar] [CrossRef]
  9. Wu, Y.; Lu, H.; Liu, J.; Zhao, F. Shape and micromotion parameters estimation of exoatmosphere object from the infrared signature. Opt. Eng. 2017, 56, 033103. [Google Scholar] [CrossRef]
  10. Li, K.; Dai, X.; Luo, Y.; Zhang, Q. Review of Radar Micro-Motion Feature Extraction and Recognition for Ballistic Targets. J. Air Force Eng. Univ. 2023, 24, 7–17. [Google Scholar]
  11. Zhang, S.; Rao, P.; Zhang, H.; Chen, X. Velocity Estimation for Space Infrared Dim Targets Based on Multi-Satellite Observation and Robust Locally Weighted Regression. Remote Sens. 2023, 15, 2767. [Google Scholar] [CrossRef]
  12. Kamlaskar, C.; Deshmukh, S.; Gosavi, S.; Abhyankar, A. Novel canonical correlation analysis based feature level fusion algorithm for multimodal recognition in biometric sensor systems. Sens. Lett. 2019, 17, 75–86. [Google Scholar] [CrossRef]
  13. Han, M.; Zhang, H. Multiple kernel learning for label relation and class imbalance in multi-label learning. Inf. Sci. 2022, 613, 344–356. [Google Scholar] [CrossRef]
  14. Li, J.; Yang, X.; Zhou, L. Multi-Sensor Target Recognition Based-on Multi-Period Improved DS Evidence Fusion Method. J. Nanoelectron. Optoelectron. 2018, 13, 758–767. [Google Scholar]
  15. Li, S.; Yang, K.; Ma, J.; Tian, X.; Yang, X. Anti-interference recognition method of aerial infrared targets based on the Bayesian network. J. Opt. 2021, 50, 264–277. [Google Scholar] [CrossRef]
  16. Shakya, A.; Biswas, M.; Pal, M. CNN-based fusion and classification of SAR and Optical data. Int. J. Remote Sens. 2020, 41, 8839–8861. [Google Scholar] [CrossRef]
  17. Zhang, S.; Rao, P.; Zhang, H.; Chen, X.; Hu, T. Spatial Infrared Objects Discrimination based on Multi-Channel CNN with Attention Mechanism. Infrared Phys. Technol. 2023, 132, 104670. [Google Scholar] [CrossRef]
  18. Wu, D.; Cao, L.; Zhou, P.; Li, N.; Li, Y.; Wang, D. Infrared Small-Target Detection Based on Radiation Characteristics with a Multimodal Feature Fusion Network. Remote Sens. 2022, 14, 3570. [Google Scholar] [CrossRef]
  19. Zuo, Z.; Tong, X.; Wei, J.; Su, S.; Wu, P.; Guo, R.; Sun, B. AFFPN: Attention Fusion Feature Pyramid Network for Small Infrared Target Detection. Remote Sens. 2022, 14, 3412. [Google Scholar] [CrossRef]
  20. Wang, X.; Lu, R.; Bi, H.; Li, Y. An Infrared Small Target Detection Method Based on Attention Mechanism. Sensors 2023, 23, 8608. [Google Scholar] [CrossRef]
  21. Zhang, K.; Dai, J. A novel TOPSIS method with decision-theoretic rough fuzzy sets. Inf. Sci. 2022, 608, 1221–1244. [Google Scholar] [CrossRef]
  22. Al-shami, T.M.; Mhemdi, A. Generalized Frame for Orthopair Fuzzy Sets: (m,n)-Fuzzy Sets and Their Applications to Multi-Criteria Decision-Making Methods. Information 2023, 14, 56. [Google Scholar] [CrossRef]
  23. Sharma, P.; Alshehri, M.; Sharma, R. Activities tracking by smartphone and smartwatch biometric sensors using fuzzy set theory. Multimed. Tools Appl. 2023, 82, 2277–2302. [Google Scholar] [CrossRef]
  24. Li, Z.; Zhong, Z.; Cao, X.; Hou, B.; Li, L. Robustness analysis of shield tunnels in non-uniformly settled strata based on fuzzy set theory. Comput. Geotech. 2023, 162, 105670. [Google Scholar] [CrossRef]
  25. Wan, T.; Cheng, F.; Cheng, Y.; Liao, C.; Bai, Y. Investigation into effect of non-uniform thermal environment on thermal sensation under stratum ventilation for heating by using interpolation-based multi-level fuzzy comprehensive evaluation. J. Build. Eng. 2023, 64, 105592. [Google Scholar] [CrossRef]
  26. Yao, Y.; Cheng, L.; Chen, S.; Chen, H.; Chen, M.; Li, N.; Li, Z.; Dongye, S.; Gu, Y.; Yi, J. Study on Road Network Vulnerability Considering the Risk of Landslide Geological Disasters in China’s Tibet. Remote Sens. 2023, 15, 4221. [Google Scholar] [CrossRef]
  27. Lv, J.; Ren, J.; Wang, D. A method of point target identification based on fuzzy set theory. In Proceedings of the Third International Workshop on Advanced Computational Intelligence (IWACI), Suzhou, China, 25–27 August 2010; pp. 277–281. [Google Scholar]
  28. Yao, D.; Chai, H.; Wang, Z. Target recognition based on stratified synthesis strategy. In 2015 Joint International Mechanical, Electronic and Information Technology Conference (JIMET-15); Atlantis Press: Amsterdam, The Netherlands, 2015; pp. 338–343. [Google Scholar]
  29. Azimirad, E.; Haddadnia, J. Target threat assessment using fuzzy sets theory. Int. J. Adv. Intell. Inf. 2015, 1, 57–74. [Google Scholar] [CrossRef]
  30. Ma, Y.; Hu, M.; Lu, H.; Chang, Q. Recurrent neural networks for discrimination of exo-atmospheric targets based on infrared radiation signature. Infrared Phys. Technol. 2019, 96, 123–132. [Google Scholar] [CrossRef]
  31. Lu, X.; Sheng, J. Review of surface temperature of ballistic missile in flight. Infrared 2016, 37, 1–6. [Google Scholar]
  32. Zhang, H.; Rao, P.; Chen, X.; Xia, H.; Zhang, S. Study on periodic law of micromotion feature for space infrared moving target recognition. In Proceedings of the 5th Optics Young Scientist Summit (OYSS 2022), Fuzhou, China, 16–19 September 2022; pp. 78–93. [Google Scholar]
  33. Zhang, H.; Rao, P.; Xia, H.; Weng, D.; Chen, X.; Li, Y. Modeling and analysis of infrared radiation dynamic characteristics for space micromotion target recognition. Infrared Phys. Technol. 2021, 116, 103795. [Google Scholar] [CrossRef]
  34. Zhang, S.; Chen, X.; Rao, P.; Zhang, H. Visualization of radiation intensity sequences for space infrared target recognition. In Proceedings of the Earth and Space: From Infrared to Terahertz (ESIT 2022), Nantong, China, 17–19 September 2022; pp. 546–553. [Google Scholar]
  35. Huang, H. Research on Techniques of Detection and Recognition of Target in Dual-band Infrared; National University of Defense Technology: Changsha, China, 2013. [Google Scholar]
  36. Zhu, Y.; Tian, D.; Yan, F. Effectiveness of entropy weight method in decision-making. Math. Probl. Eng. 2020, 2020, 3564835. [Google Scholar] [CrossRef]
  37. Tan, J.; Zhao, H.; Yang, R.; Liu, H.; Li, S.; Liu, J. An Entropy-Weighting Method for Efficient Power-Line Feature Evaluation and Extraction from LiDAR Point Clouds. Remote Sens. 2021, 13, 3446. [Google Scholar] [CrossRef]
  38. Zhou, H.; Dong, C.; Wu, R.; Xu, X.; Guo, Z. Feature Fusion Based on Bayesian Decision Theory for Radar Deception Jamming Recognition. IEEE Access 2021, 9, 16296–16304. [Google Scholar] [CrossRef]
  39. Gao, X.; Pan, L.; Deng, Y. Cross entropy of mass function and its application in similarity measure. Appl. Intell. 2022, 52, 8337–8350. [Google Scholar] [CrossRef]
  40. Zhao, F.; Zhang, Z.; Hu, M.; Deng, Y.; Shen, X. Exo-atmospheric infrared objects classification based on dual-channel LSTM network. Infrared Phys. Technol. 2020, 111, 103535. [Google Scholar] [CrossRef]
Figure 1. Schematic of multi-dimensional features of space dim IR targets.
Figure 2. Flowchart of the proposed spatio-temporal correlation fuzzy comprehensive method.
Figure 3. Schematic of spatio-temporal fusion judgment.
Figure 4. Confidence of test samples of different categories.
Figure 5. Confusion matrices of the four methods. (a) Zhou's method [38]; (b) Gao's method [39]; (c) DC-LSTM [40]; (d) proposed method.
Figure 6. ROC curves of the four methods [38,39,40].
Figure 7. The impact of different feature combinations on recognition.
Figure 8. The impact of different fusion frame counts on recognition. (a) Accuracy; (b) recall of true targets.
Figure 9. The impact of different feature extraction errors on recognition. (a) Accuracy; (b) recall of true targets.
Figure 10. The impact of different feature database sizes on recognition. (a) Accuracy; (b) recall of true targets.
Figure 11. Confidence of each class of a true target. (a) Before spatial fusion; (b) after spatial fusion.
Figure 12. Confidence of each class of a true target. (a) Before temporal fusion; (b) after temporal fusion.
Table 1. Simulation parameters of space targets.

| Parameter | True Target | Heavy Decoy | Balloon Decoy | Equal-Shaped Light Decoy | Debris |
|---|---|---|---|---|---|
| Shape | (depicted as images in the original) | | | | |
| Micro-motion mode | Spinning and coning | Spinning and coning | None | Tumbling | Tumbling |
| Micro-motion parameters | α_c = 0.0π, β_c = 0.1π~0.5π, ω_s = 1.0π~3.0π, ω_c = 0.1π~1.0π | α_c = 0.0π, β_c = 0.1π~0.5π, ω_s = 1.0π~3.0π, ω_c = 0.0π~0.2π | None | α_t = 0.0π, β_t = 0.15π~0.3π, ω_t = 0.25π~0.4π | α_t = 0.0π, β_t = 0.1π~0.5π, ω_t = 0.35π~0.5π |
| Coating thickness (mm) | 0.01~0.15 | 0.01~0.15 | 0.01~0.1 | 0.01~0.1 | 0.01~0.05 |
| Initial temperature (K) | 290~310 | 290~310 | 200~300 | 200~300 | 200~300 |
| Density (kg/m³) | 3849 | 1950 | 1390 | 900 | 2700 |
| Emissivity | 0.94 | 0.75 | 0.5 | 0.5 | 0.45 |
| Specific heat capacity (J/(kg·K)) | 710 | 610 | 1150 | 1950 | 904 |
| IR detector parameters | Wave bands: 3~5 μm and 8~12 μm; observation time: 50 s; sample frequency: 25 Hz (common to all targets) | | | | |
Table 2. Simulation experiment results.

| Metric | True Target | Heavy Decoy | Balloon Decoy | Equal-Shaped Light Decoy | Debris |
|---|---|---|---|---|---|
| Recall | 97.5% | 98.0% | 61.0% | 87.0% | 96.5% |
| FAR | 0.5% | 2.6% | 2.9% | 8.5% | 0.5% |
| MAR | 2.5% | 2.0% | 39.0% | 13.0% | 3.5% |

Overall accuracy: 88.0%.
Table 3. Scene settings.

| Scene | $\sigma_{MW}$ (W·sr⁻¹) | $\sigma_{LW}$ (W·sr⁻¹) | $\sigma_{T}$ (K) | $\sigma_{EA}$ (m²) | $\sigma_{P}$ (s) |
|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 0 |
| 1 | 1 | 1 | 10 | 0.1 | 0.1 |
| 2 | 2 | 2 | 20 | 0.2 | 0.2 |
| 3 | 3 | 3 | 30 | 0.3 | 0.3 |
| 4 | 4 | 4 | 40 | 0.4 | 0.4 |
Table 4. Comparison experiment results (recall is the recall of the true target).

| Scene | Zhou's [38] Recall | Zhou's [38] Accuracy | Gao's [39] Recall | Gao's [39] Accuracy | DC-LSTM [40] Recall | DC-LSTM [40] Accuracy | Proposed Recall | Proposed Accuracy |
|---|---|---|---|---|---|---|---|---|
| 0 | 97.5% | 86.9% | 65.0% | 61.3% | 92.0% | 89.1% | 97.5% | 88.0% |
| 1 | 93.0% | 80.0% | 61.5% | 51.0% | 87.5% | 74.3% | 96.0% | 81.3% |
| 2 | 83.0% | 72.9% | 41.0% | 39.4% | 63.5% | 61.9% | 93.0% | 78.2% |
| 3 | 71.2% | 67.1% | 3.5% | 35.0% | 30.5% | 47.2% | 88.0% | 73.5% |
| 4 | 59.0% | 60.3% | 0 | 34.7% | 28.5% | 37.5% | 62.5% | 65.8% |
Table 5. Comparison of average runtimes.

| | Zhou's [38] | Gao's [39] | DC-LSTM [40] | Proposed Method |
|---|---|---|---|---|
| Average runtime (s) | 0.0844 | 0.0539 | 1.3674 | 0.1283 |