Article

Improved Local Ternary Patterns for Automatic Target Recognition in Infrared Imagery

Xiaosheng Wu, Junding Sun, Guoliang Fan and Zhiheng Wang
1 School of Computer Science and Technology, Henan Polytechnic University, 2001 Century Avenue, Jiaozuo 454000, China
2 School of Electrical and Computer Engineering, Oklahoma State University, 202 Engineering South, Stillwater, OK 74078, USA
* Author to whom correspondence should be addressed.
Sensors 2015, 15(3), 6399-6418; https://doi.org/10.3390/s150306399
Submission received: 12 November 2014 / Revised: 25 December 2014 / Accepted: 16 February 2015 / Published: 16 March 2015
(This article belongs to the Section Physical Sensors)

Abstract

This paper presents an improved local ternary pattern (LTP) for automatic target recognition (ATR) in infrared imagery. Firstly, a robust LTP (RLTP) scheme is proposed to overcome the limitation of the original LTP in achieving invariance with respect to illumination transformations. Then, a soft concave-convex partition (SCCP) is introduced to add some flexibility to the original concave-convex partition (CCP) scheme. Referring to the orthogonal combination of local binary patterns (OC_LBP), the orthogonal combination of LTP (OC_LTP) is adopted to reduce the dimensionality of the LTP histogram. Further, a novel operator, called the soft concave-convex orthogonal combination of robust LTP (SCC_OC_RLTP), is proposed by combining RLTP, SCCP and OC_LTP. Finally, the new operator is used for ATR along with a blocking schedule to improve its discriminability and a feature selection technique to enhance its efficiency. Experimental results on infrared imagery show that the proposed features can achieve competitive ATR results compared with the state-of-the-art methods.

1. Introduction

Automatic target recognition (ATR) is an important and challenging problem for a wide range of military and civilian applications. Since forward-looking infrared (FLIR) images are frequently used in ATR applications, many algorithms have been proposed for FLIR imagery in recent years [1], such as learning-based [2,3] and model-based [4–9] methods. Furthermore, there are also many hybrid vision-based approaches that combine learning-based and model-based ideas for object tracking and recognition in visible-band images [10–12]. Advances in target detection and tracking in FLIR imagery and performance evaluation work for ATR systems are reviewed in [13] and [14], respectively.

Different from the learning-based, model-based and hybrid vision-based algorithms, Patel et al. introduced sparse representation-based classification (SRC) [15] into infrared ATR in [16], and the experimental results show that it outperforms the traditional ones with promising results.

Among the learning approaches, the ATR task has also been cast as a texture analysis problem, owing to the rich texture characteristics of most infrared imagery, and various texture-based ATR methods have been proposed in recent years [17,18]. In this paper, we focus on the local binary pattern (LBP), a simple yet effective approach, for infrared ATR. It has achieved promising results in several ATR applications in recent years, such as maritime target detection and recognition [19], infrared building recognition [20], ISAR-based ATR [21] and infrared ATR in our previous work [22].

The LBP operator was first proposed by Ojala et al. in [23], and it has been proven to be a robust and computationally simple approach for describing local structures. In recent years, the LBP operator has been extensively exploited in many applications, such as texture analysis and classification, face recognition, motion analysis, ATR and medical image analysis [24]. Since Ojala's original work [23], the LBP methodology has been developed with a large number of extensions in different fields, including extensions that improve the neighborhood topology [25–30], reduce the impact of noise [31–34], reduce the feature dimensionality [25,35,36], improve the encoding methods [22,37–42] and obtain rotation invariance [25,43–46].

More specifically, we are interested in the applicability of the local ternary pattern (LTP) [31] and the concave-convex partition (CCP) [22] to infrared ATR: the LTP is robust to image noise and has been proven effective for infrared ATR, and the CCP can greatly improve the performance of the LTP in ATR [22]. In this work, we make several improvements to further enhance the performance of LTP and CCP. First, we propose a robust LTP (RLTP) to reduce the sensitivity of LTP to illumination transformations. Second, we develop a soft CCP (SCCP) to overcome the rigidity of CCP. Third, the scheme of the orthogonal combination of local binary patterns (OC_LBP) [36] and a feature selection method [47] are introduced to reduce the feature dimensionality. Based on RLTP, SCCP and OC_LBP, a novel operator is introduced, named the soft concave-convex orthogonal combination of robust local ternary patterns (SCC_OC_RLTP). In addition, we introduce a simple, yet effective, blocking technique to further improve feature discriminability for infrared ATR. Finally, we compare the newly-proposed operator with sCCLTP (spatial concave-convex partition-based LTP) [22] and the latest sparsity-based ATR algorithm proposed in [16]. Experimental results show that the presented method gives the best performance among the state-of-the-art methods.

The rest of the paper is organized as follows. We first briefly review the background of the basic LBP, LTP and OC_LBP. Then, we present the detailed feature extraction step, followed by the extensive experimental results on the texture databases and the ATR database. Finally, we provide some concluding remarks.

2. Brief Review of LBP-Based Methods

In this section, we only give a brief introduction of the basic LBP and its extensions, LTP and OC_LBP.

2.1. Local Binary Pattern

The basic LBP operator was first introduced in [23] for texture analysis. It works by thresholding a neighborhood with the gray level of the central pixel. The LBP code is produced by multiplying the thresholded values by weights given by powers of two and summing the results in a clockwise order. It was extended to achieve rotation invariance, optional neighborhoods and stronger discriminative capability in [25]. For a neighborhood (P, R), the basic LBP is commonly referred to as LBP_{P,R}, and it is written as:

$$\mathrm{LBP}_{P,R} = \sum_{i=0}^{P-1} s(p_i - p_c) \times 2^i, \qquad s(x) = \begin{cases} 1 & \text{if } x \ge 0 \\ 0 & \text{otherwise} \end{cases}$$
where P is the number of sampling pixels on the circle, R is the radius of the circle, p_c is the gray value of the central pixel and p_i is the gray value of each sampling pixel on the circle. In order to extract the most fundamental structures and rotation-invariant patterns from LBP, the uniform rotation-invariant operator LBP_{P,R}^{riu2} [25] is given as:
$$\mathrm{LBP}_{P,R}^{riu2} = \begin{cases} \sum_{i=0}^{P-1} s(p_i - p_c) & \text{if } U(\mathrm{LBP}_{P,R}) \le 2 \\ P + 1 & \text{otherwise} \end{cases}$$
where the superscript riu2 refers to the rotation-invariant uniform patterns, i.e., those with a uniformity value U ≤ 2. The uniformity measure U counts the transitions from zero to one or one to zero between successive bits in the circular representation of the binary code LBP_{P,R}, and it is defined as:
$$U(\mathrm{LBP}_{P,R}) = \left| s(p_{P-1} - p_c) - s(p_0 - p_c) \right| + \sum_{i=1}^{P-1} \left| s(p_i - p_c) - s(p_{i-1} - p_c) \right|$$

All nonuniform patterns are grouped into a single pattern for LBP_{P,R}^{riu2}. The mapping from LBP_{P,R} to LBP_{P,R}^{riu2}, which has P + 2 distinct output values, can be implemented with a lookup table.
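For concreteness, a minimal sketch of the two operators above is given below, assuming the P circular neighbors have already been sampled into a numpy array (with interpolation where sampling points fall between pixels); the function names are ours, not from [23] or [25].

```python
import numpy as np

def lbp_code(neighbors, center):
    """Basic LBP_{P,R}: threshold the P circular neighbors at the central
    gray value and weight the resulting bits by powers of two."""
    bits = (neighbors >= center).astype(int)          # s(p_i - p_c)
    return int(bits @ (2 ** np.arange(len(bits))))    # sum of s(.) * 2^i

def lbp_riu2(neighbors, center):
    """Rotation-invariant uniform code LBP_{P,R}^{riu2}: if the circular bit
    string has at most two 0/1 transitions (U <= 2), the code is the number
    of set bits; all nonuniform patterns share the single label P + 1."""
    P = len(neighbors)
    bits = (neighbors >= center).astype(int)
    U = int(np.sum(bits != np.roll(bits, 1)))         # transitions, wrap-around included
    return int(bits.sum()) if U <= 2 else P + 1
```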

2.2. Local Ternary Pattern

The LBP is sensitive to noise, because a small gray change of the central pixel may produce a different code for a neighborhood, especially in smooth regions. To overcome this flaw, Tan and Triggs [31] extended the basic LBP to a version with three-value codes, called the local ternary pattern (LTP). In LTP, the indicator s(x) is further defined as:

$$\mathrm{LTP}_{P,R,\tau} = \sum_{i=0}^{P-1} s(p_i - p_c) \times 3^i, \qquad s(x) = \begin{cases} 1 & x \ge \tau \\ 0 & |x| < \tau \\ -1 & x \le -\tau \end{cases}$$
where τ is a threshold specified by the user. In order to reduce the feature dimension, Tan and Triggs [31] also presented a coding scheme that splits each ternary pattern into two parts, the positive part and the negative part, as illustrated in Figure 1. Though the LTP codes are more resistant to noise, LTP is no longer strictly invariant to gray-level transformations, because τ is constant in feature extraction for all neighborhoods and all images in the database.
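As an illustration of the splitting scheme above (a sketch under our own naming, not the authors' reference code), the ternary pattern can be computed and split as follows:

```python
import numpy as np

def ltp_split(neighbors, center, tau):
    """LTP_{P,R,tau}: quantize each difference p_i - p_c into {-1, 0, +1}
    with threshold tau, then split the ternary pattern into the positive
    and negative binary halves that are histogrammed separately."""
    diff = neighbors.astype(float) - center
    ternary = np.where(diff >= tau, 1, np.where(diff <= -tau, -1, 0))
    weights = 2 ** np.arange(len(ternary))
    pos = int((ternary == 1).astype(int) @ weights)    # positive LBP-like code
    neg = int((ternary == -1).astype(int) @ weights)   # negative LBP-like code
    return pos, neg
```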

2.3. Orthogonal Combination of Local Binary Patterns

In [36], Zhu et al. proposed the orthogonal combination of local binary patterns (OC_LBP), which drastically reduces the dimensionality of the original LBP histogram to 4 × P by combining the histograms of P/4 different four-orthogonal-neighbor operators. Experimental results in [36] show that OC_LBP outperforms the uniform patterns LBP_{P,R}^{u2} of [25]. Figure 2 compares the calculation of LBP and OC_LBP with eight neighboring pixels.
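The grouping can be sketched as follows; the interleaved index sets ({0, 2, 4, 6} and {1, 3, 5, 7} for P = 8) follow Figure 2, while the array layout and names are our assumptions.

```python
import numpy as np

def oc_lbp_histogram(bit_rows):
    """OC_LBP sketch: bit_rows is an (N, P) 0/1 array of thresholded neighbor
    bits for N pixels. The P positions are split into P/4 groups of four
    orthogonal neighbors; each group yields a 4-bit code and a 16-bin
    histogram, so the concatenated histogram has 4 * P bins instead of 2^P."""
    n_pixels, P = bit_rows.shape
    step = P // 4
    histograms = []
    for g in range(step):
        codes = bit_rows[:, g::step] @ (2 ** np.arange(4))  # 4-bit code per pixel
        histograms.append(np.bincount(codes, minlength=16))
    return np.concatenate(histograms)   # length (P/4) * 16 = 4 * P
```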

3. Feature Extraction

LTP and CCP have proven robust for ATR in our previous work [22], and we also adopt them for feature description in this paper. Furthermore, robust local ternary patterns (RLTP) and the soft concave-convex partition (SCCP) are presented to address the flaws of LTP and CCP, respectively.

3.1. Robust Local Ternary Patterns

LTP is not invariant to gray-level transformations, because the threshold τ is a constant for all neighborhoods. Instead of employing a fixed threshold, we propose a robust method that assigns its value based on the average gray value of the neighborhood. Let ω(i, j) be a neighborhood centered at pixel (i, j) in an image, p_{i,j} be the gray value of the pixel (i, j) and μ_{i,j} be the average gray value of ω(i, j). Specifically, the new threshold τ_{i,j} for the neighborhood ω(i, j) is defined as follows:

$$\tau_{i,j} = \alpha \times \mu_{i,j}$$
where α is a scaling factor and μ_{i,j} is defined as:
$$\mu_{i,j} = \frac{1}{P+1} \left( p_{i,j} + \sum_{k=0}^{P-1} p_k \right)$$

It is evident that the threshold τ_{i,j} changes with the gray levels of the neighborhood ω(i, j), which helps LTP achieve invariance with respect to illumination transformations. The robust LTP (RLTP) is then given as:

$$\mathrm{RLTP}_{P,R,\tau_{i,j}} = \sum_{k=0}^{P-1} s(p_k - p_{i,j}) \times 3^k, \qquad s(x) = \begin{cases} 1 & x \ge \tau_{i,j} \\ 0 & |x| < \tau_{i,j} \\ -1 & x \le -\tau_{i,j} \end{cases}$$
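A minimal sketch of RLTP for one neighborhood is given below (neighbors are assumed pre-sampled; the names are ours). The resulting ternary pattern is then split and histogrammed exactly as for LTP above.

```python
import numpy as np

def rltp_ternary(neighbors, center, alpha):
    """RLTP sketch: the threshold tau_{i,j} = alpha * mu_{i,j} is recomputed
    per neighborhood from the mean of the center and its P neighbors, so the
    ternary quantization scales with the local gray level."""
    mu = (center + neighbors.sum()) / (len(neighbors) + 1)   # mu_{i,j}
    tau = alpha * mu                                          # tau_{i,j}
    diff = neighbors.astype(float) - center
    return np.where(diff >= tau, 1, np.where(diff <= -tau, -1, 0))
```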

3.2. Soft Concave-Convex Partition

It has been shown that neighborhoods with different visual perceptions may produce the same binary code under LBP-based operators, and the concave-convex partition (CCP) was proposed in [22] to solve this flaw. For simplicity, the average gray value (μ) of the whole image is chosen as a threshold to partition all of the neighborhoods into two categories, concave and convex. If μ_{i,j} < μ, the neighborhood falls into the concave category; otherwise, it is classified as convex. The classification result thus depends entirely on the threshold μ, making CCP a rigid partition. In this paper, we introduce the following soft concave-convex partition (SCCP) to overcome this shortcoming.

Given β as a scaling factor, if μ_{i,j} < (1 − β) × μ, the central pixel (i, j) is regarded as a concave pixel and the neighborhood ω(i, j) as a concave neighborhood. If μ_{i,j} ≥ (1 + β) × μ, the central pixel (i, j) is regarded as a convex pixel and ω(i, j) as a convex neighborhood. When β = 0, the SCCP reduces to the CCP.
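The rule can be stated compactly as below. This is a sketch: the handling of neighborhoods falling inside the soft margin, i.e., (1 − β)μ ≤ μ_{i,j} < (1 + β)μ, is not spelled out here, so the "boundary" label is our placeholder.

```python
def sccp_label(mu_local, mu_global, beta):
    """SCCP sketch: compare the neighborhood mean mu_{i,j} with the image
    mean mu, widened by the scaling factor beta; beta = 0 recovers CCP."""
    if mu_local < (1.0 - beta) * mu_global:
        return "concave"
    if mu_local >= (1.0 + beta) * mu_global:
        return "convex"
    return "boundary"  # inside the soft margin; placeholder label (our assumption)
```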

3.3. Orthogonal Combination of Robust Local Ternary Patterns Based on SCCP

Based on OC_LBP and LTP, the orthogonal combination of local ternary patterns (OC_LTP) is first proposed in this paper. Figure 3 gives a calculation example for an eight-pixel neighborhood. Furthermore, OC_LTP is enhanced by RLTP and SCCP. The new approach is named the soft concave-convex orthogonal combination of robust local ternary patterns (SCC_OC_RLTP). Table 1 gives the dimensionality comparison of OC_LBP, OC_RLTP and SCC_OC_RLTP.

3.4. Blocking Methods

According to the reports in [22,48], it is better to divide the infrared image into patches and to combine the features of the patches for higher performance. Six different blocking methods were tested in our previous work [22], and the results show that the method illustrated in Figure 4a, which divides a chip into four slightly overlapped quadrants, gives the most promising results. Because the objects are basically located in the center of the infrared image, we choose the center region as an additional block in this paper, as illustrated in Figure 4b. The features of the five blocks and those of the whole image are then concatenated for the image description.
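A sketch of this blocking scheme follows. The exact overlap ratio is not stated in this section, so the value below is an assumption, and `extract` stands for any of the histogram operators above.

```python
import numpy as np

def six_region_features(chip, extract, overlap=0.1):
    """Blocking sketch (Figure 4b): four slightly overlapped quadrants, a
    center block and the whole chip; the per-region histograms produced by
    `extract` are concatenated into a single descriptor."""
    H, W = chip.shape
    h, w = int(H * (0.5 + overlap)), int(W * (0.5 + overlap))
    regions = [
        chip[:h, :w],             # top-left quadrant
        chip[:h, W - w:],         # top-right quadrant
        chip[H - h:, :w],         # bottom-left quadrant
        chip[H - h:, W - w:],     # bottom-right quadrant
        chip[(H - h) // 2:(H + h) // 2, (W - w) // 2:(W + w) // 2],  # center
        chip,                     # whole image
    ]
    return np.concatenate([extract(r) for r in regions])
```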

3.5. Feature Selection

In our previous work [22], three resolutions, (P, R) = (8, 1), (16, 2) and (24, 3), are combined for feature description. Obviously, this leads to a sharp increase in feature dimensionality. For sCCLTP [22], the dimensionality is 1080 bins ((104 + 72 + 40) × 5 = 1080), while for the novel operator SCC_OC_RLTP, the dimensionality reaches 4608 bins ((384 + 256 + 128) × 6 = 4608).

Many previous studies have demonstrated that a highly redundant feature set has an intrinsic dimensionality much smaller than the actual dimensionality of the original feature space [49]. Namely, many features make no essential contribution to characterizing the dataset, and the features that do not affect the intrinsic dimensionality can be dropped. There are two general approaches to feature reduction: feature selection and feature recombination. The former chooses a subset of the original feature set, acting like a filter, e.g., LBP_{P,R}^{u2} in [25], the method based on differential evolution [47] (called FSDE in this paper) and discriminative features [35]. The latter obtains a new, smaller feature set by a weighted recombination of the original features, e.g., independent component analysis (ICA), principal component analysis (PCA) and their improvements. In this paper, we perform a feature selection step to obtain a discriminative feature subset from the original high-dimensional features. To this end, we focus on FSDE [47] for its promising feature selection results.

3.6. Dissimilarity Measure

Various metrics have been presented to evaluate the dissimilarity between two histograms. As in most LBP-based algorithms, we chose the chi-square distance as the dissimilarity measure, which is defined as:

$$d(H, B) = \sum_{i=1}^{K} \frac{(h_i - b_i)^2}{h_i + b_i}$$
where H = {h_i} and B = {b_i} (i = 1, 2, …, K) denote the two feature histograms and K is the number of bins in the histogram.
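For illustration, a minimal nearest-neighbor classifier under this distance, the usual protocol for LBP-style histogram features, could look as follows (the names and the eps guard are ours):

```python
import numpy as np

def chi_square(h, b, eps=1e-10):
    """Chi-square dissimilarity between histograms H and B; eps guards
    against bins that are empty in both histograms (our addition)."""
    h, b = np.asarray(h, dtype=float), np.asarray(b, dtype=float)
    return float(np.sum((h - b) ** 2 / (h + b + eps)))

def classify(query, train_features, train_labels):
    """Assign the label of the training histogram closest to the query."""
    distances = [chi_square(query, f) for f in train_features]
    return train_labels[int(np.argmin(distances))]
```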

4. Experiments and Discussions

In this section, we first compare LTP [31] and CCLTP [22] with their improved versions, RLTP and SCCLTP (soft concave-convex LTP), for texture classification. Then, we focus on OC_LBP, OC_LTP, CC_OC_LTP, OC_RLTP and SCC_OC_RLTP to examine their effectiveness for infrared ATR.

4.1. Experiments for Texture Classification

For texture classification, we chose the Outex database [50], which has been widely used for comparing LBP-based methods, as the test bed. From it, we chose Outex_TC_0010 (TC10) and Outex_TC_0012 (TC12), which contain the same 24 classes of textures collected under three different illuminants ("horizon", "inca" and "tl84") and nine different rotation angles (0°, 5°, 10°, 15°, 30°, 45°, 60°, 75° and 90°). There are 20 non-overlapping 128 × 128 texture samples for each class under each condition. For TC10, samples of illuminant "inca" at an angle of 0° in each class were used for classifier training, and the other eight rotation angles under the same illumination were used for testing. Hence, there are 480 (24 × 20) models and 3840 (24 × 8 × 20) validation samples. For TC12, all 24 × 20 × 9 samples captured under illumination "tl84" or "horizon" were used as the test data.

In these experiments, we first test the influence of α and β on RLTP and SCCLTP. For TC12, the samples captured under illumination "horizon" (TC12_001) were used as the test data. The curves of precision vs. α and β on TC10 and TC12_001 are shown in Figures 5 and 6, where RLTP(8,1), RLTP(16,2) and RLTP(24,3) denote RLTP_{8,1}^{riu2}, RLTP_{16,2}^{riu2} and RLTP_{24,3}^{riu2}, and SCCLTP(8,1), SCCLTP(16,2) and SCCLTP(24,3) denote SCCLTP_{8,1}^{riu2}, SCCLTP_{16,2}^{riu2} and SCCLTP_{24,3}^{riu2}, respectively. The colored boxes in Figures 5 and 6 mark the points where each method obtains its best performance. It can be seen that the optimal values of α and β differ across (P, R) = (8, 1), (16, 2) and (24, 3). The results in Figure 5b show that SCCLTP_{8,1}^{riu2} and SCCLTP_{16,2}^{riu2} perform best when β < 0, while SCCLTP_{24,3}^{riu2} performs best when β > 0. The results in Figure 6b show that all three features, SCCLTP_{8,1}^{riu2}, SCCLTP_{16,2}^{riu2} and SCCLTP_{24,3}^{riu2}, achieve their best performance when β < 0. The results in Figures 5b and 6b also show that the optimal scaling factor β may differ across features and image databases.

The comparison between the proposed methods (RLTP and SCCLTP with optimal thresholds α and β) and the methods in [22] (τ = 5 and β = 0) is given in Table 2. The improved methods, RLTP_{P,R}^{riu2} and SCCLTP_{P,R}^{riu2}, achieve average accuracy improvements of 1% and 0.5% over their original versions, respectively.

Further, we compare the feature extraction complexity of the proposed operators, SCC_OC_RLTP and OC_RLTP, with that of CCLTP in [22]. The experimental results on TC10 are given in Table 3, where the three resolutions, (P, R) = (8, 1), (16, 2) and (24, 3), are concatenated for feature description, as in [22]. The time for determining the two parameters α and β was not considered in this experiment, because they can be obtained off-line. It is clear that the proposed methods have lower computational complexity than CCLTP.

4.2. Experiments for ATR

The same FLIR dataset as in [22] is used in this paper for ATR. There are 10 different military targets, denoted T1, T2, …, T10. For each target, there are 72 orientations, corresponding to aspect angles of 0°, 5°, …, 355°. The dataset contains 437 to 759 images (40 × 75 pixels) per target type, 6930 infrared chips in total. Figure 7 shows some infrared chips for the 10 targets under 10 different views. In the following experiments, the three resolutions, (P, R) = (8, 1), (16, 2) and (24, 3), are again concatenated for feature description, as in [22].

4.2.1. Comparison of CC_OC_LTP, OC_LTP, OC_LBP and CCLTP

We evaluate the performance of the operators CC_OC_LTP, OC_LTP, OC_LBP [36] and CCLTP [22] in this section. We randomly chose about 10% (718 chips), 20% (1436 chips), 30% (2154 chips), 40% (2872 chips) and 50% (3590 chips) of the target chips in each target class as training data. The remaining 90%, 80%, 70%, 60% and 50% of the images in the dataset serve as testing data, respectively. The mean and variance of the recognition accuracy averaged over 10 trials are given in Figure 8, where CC_OC_LTP, OC_LTP, OC_LBP and CCLTP denote CC_OC_LTP_{8,1+16,2+24,3}^{riu2}, OC_LTP_{8,1+16,2+24,3}^{riu2}, OC_LBP_{8,1+16,2+24,3}^{riu2} and CCLTP_{8,1+16,2+24,3}^{riu2}, respectively. It can be seen from the experimental results that:

  • The operators CC_OC_LTP and OC_LTP achieve better results than CCLTP in [22], and CC_OC_LTP is the best of the four operators.

  • With CCP enhancement, CC_OC_LTP achieves an average accuracy improvement of 4.94% over OC_LTP. This further proves that the CCP method introduced in [22] is effective at improving the performance of LBP-based methods.

  • The OC_LTP gets better recognition performance than OC_LBP [36] and CCLTP [22].

  • The CCLTP [22] is better than OC_LBP [36].

  • The experimental results also show that CC_OC_LTP, OC_LTP and OC_LBP are robust for infrared ATR: like CCLTP, they are fairly stable over the 10 random trials.

4.2.2. Comparison of RLTP, SCCLTP with LTP and CCLTP, Respectively

In this experiment, we mainly tested the impact of α and β on RLTP and SCCLTP for infrared ATR; the training and test data are set as in the experiment above. The curves of precision vs. α and β for RLTP_{8,1+16,2+24,3}^{riu2} and SCCLTP_{8,1+16,2+24,3}^{riu2} are given in Figure 9, where the colored boxes mark the points where each method obtains its best performance.

The comparisons between RLTP_{8,1+16,2+24,3}^{riu2} (with optimal threshold α) and LTP_{8,1+16,2+24,3}^{riu2} (τ = 8), and between SCCLTP_{8,1+16,2+24,3}^{riu2} (with optimal threshold β) and CCLTP_{8,1+16,2+24,3}^{riu2} in [22] (τ = 8 and β = 0), are given in Tables 4 and 5, respectively. RLTP_{8,1+16,2+24,3}^{riu2} achieves on average nearly 3.4% higher performance than LTP_{8,1+16,2+24,3}^{riu2}, and SCCLTP_{8,1+16,2+24,3}^{riu2} achieves on average nearly 0.5% higher performance than CCLTP_{8,1+16,2+24,3}^{riu2}. These results show that the introduced schemes effectively improve LTP and CCP for infrared ATR.

4.2.3. Comparison of Blocking Methods

In this section, the sCCLTP proposed in [22] was chosen as the testing operator to compare the performance of the two blocking methods given in Figure 4a,b. The training and test data are set as in the experiment above. The recognition accuracy averaged over 10 trials is given in Table 6. The results show that the blocking method introduced in this paper (Figure 4b) achieves an average accuracy improvement of 1.3% over that of Figure 4a used in [22].

4.2.4. Comparison of Feature Selection

In this experiment, we randomly selected 10% (718 chips), 20% (1436 chips), 30% (2154 chips), 40% (2872 chips), 50% (3590 chips), 60% (4308 chips), 70% (4958 chips) and 80% (5607 chips) of the target chips in each target class as training data. The remaining 90%, 80%, 70%, 60%, 50%, 40%, 30% and 20% of the images serve as testing data, respectively. The operators SCC_OC_RLTP_{8,1+16,2+24,3}^{riu2} and OC_RLTP_{8,1+16,2+24,3}^{riu2} are selected for feature description, and the blocking method of Figure 4b is used. The features of each block and those of the whole image are concatenated for the image description; the resulting descriptors are denoted sSCC_OC_RLTP and sOC_RLTP, respectively. FSDE [47] is then used for feature selection, with selected dimensionalities of 288, 576, 864, 1152 and 1440 bins for both sSCC_OC_RLTP and sOC_RLTP.

The recognition accuracy averaged over 10 trials is given in Tables 7 and 8. It can be seen from the experimental results that:

  • The dimensionalities of the selected features are only 6.25%, 12.5%, 18.75%, 25% and 31.25% of sSCC_OC_RLTP (4608) and 12.5%, 25%, 37.5%, 50% and 62.5% of sOC_RLTP (2304).

  • It can be seen from Tables 7 and 8 that, with SCCP enhancement, sSCC_OC_RLTP gets higher accuracy than sOC_RLTP.

  • The experimental results in Table 7 show that sSCC_OC_RLTP-1440 (sSCC_OC_RLTP with 1440 dimensions retained by feature selection) performs best when 10%, 20%, 30%, 40% or 50% of the target chips in each class are used for training, while sSCC_OC_RLTP-1152 (1152 dimensions retained) performs best when 60%, 70% or 80% are used. For the leave-one-out experiment, sSCC_OC_RLTP-1152 also gives the best results.

  • The experimental results in Table 8 show that sOC_RLTP-576 (sOC_RLTP with 576 dimensions retained by feature selection) performs best among the five dimensionality settings.

  • The results in Tables 7 and 8 also show that not all of the features in sSCC_OC_RLTP and sOC_RLTP make essential contributions to the operators. The feature selection method FSDE [47] is effective and can drop the redundant features reliably.

4.2.5. Comparison of sSCC_OC_RLTP, sOC_RLTP, sCCLTP and SRC-Based Methods

In this section, we compare the performance of the proposed methods, sOC_RLTP and sSCC_OC_RLTP, with sCCLTP introduced in [22] and two SRC-based methods (Sparselab-lasso and SPG-lasso) [16], which were also tested in [22]. The training and test data are set as in the experiment above, and a dimensionality of 576 is chosen for sSCC_OC_RLTP and sOC_RLTP. The recognition accuracies of sSCC_OC_RLTP-576, sOC_RLTP-576, sCCLTP and the sparsity-based methods, averaged over 10 trials, are given in Table 9, which also includes the leave-one-out result for each method. It can be seen from the experimental results that:

  • The operator sCCLTP performs better than the SRC-based methods (SPG-lasso and Sparselab-lasso), as has been verified in [22].

  • The performance of sSCC_OC_RLTP-576 is better than that of sCCLTP and sOC_RLTP-576, while its dimensionality is far smaller than that of sCCLTP.

  • Because of the lower dimensionality, the time consumed for training and recognition with sSCC_OC_RLTP-576 and sOC_RLTP-576 is also lower than that of sCCLTP.

Furthermore, we give the confusion matrices of sSCC_OC_RLTP-1152 and sCCLTP for the leave-one-out experiment in Figure 10. The sSCC_OC_RLTP-1152 result has only one non-diagonal entry greater than 1% (Figure 10a), while sCCLTP has three (Figure 10b). Moreover, all of the diagonal entries of sSCC_OC_RLTP are greater than those of sCCLTP, which shows the better robustness of sSCC_OC_RLTP.

Finally, we give a brief comparison of sSCC_OC_RLTP, sOC_RLTP and sCCLTP [22] in terms of computational complexity, which comprises two aspects: feature extraction complexity and training/recognition complexity. The experimental results in Table 3 show that the feature extraction complexity of the proposed methods is lower than that of sCCLTP. The training and recognition complexity of the three methods scales with their dimensionalities under the chi-square dissimilarity measure. With feature selection, the dimensionalities of the proposed methods can be far lower than that of sCCLTP (1080); the comparisons in Tables 7, 8 and 9 show that the proposed methods achieve better performance with far fewer dimensions. The feature selection step and the determination of the two parameters α and β can be implemented off-line, so they do not increase the computational cost of real-time recognition of the infrared target.

4.2.6. The Impact of the Gray Variance on the Recognition Performance

In general, the gray values of the target are larger than those of the background for the infrared chips chosen in the experiments, so the gray variance of each chip reflects the contrast between the target and the background. A larger variance denotes greater contrast, which makes the target in the chip easier to recognize; such contrast therefore reflects the signal-to-noise ratio of the chips to some extent. Consequently, the recognition rates in different variance ranges can characterize the performance of the different operators, and we further evaluate the methods by the gray variance of the chips.

Firstly, the variance range and the number of chips of each target class are given in Table 10, where min_variance and max_variance denote the minimum and maximum variance of each class. The variance range is widest for the first target class and narrowest for the seventh. The minimum and maximum gray variances over the whole database are 9.5 and 143.6, respectively.

By gray variance, the chips of each class are divided into five ranges in this experiment: (9.5, 35.0), (35.0, 46.8), (46.8, 58.5), (58.5, 70.3) and (70.3, 143.6). The numbers of chips in each range are given in Table 11, and Figure 11 shows an example chip for each target class in each variance range. For each range, we randomly selected about 50% of the chips in each class as training data and used the remainder for testing. The three operators, sSCC_OC_RLTP-576, sOC_RLTP-576 and sCCLTP, are selected for feature description. The recognition rate in each range, averaged over 10 random trials, is given in Table 12.

It can be seen from Table 12 that the recognition rate improves gradually as the gray variance increases. The same conclusion can be drawn from the confusion matrices of sSCC_OC_RLTP-1152 and sCCLTP in Figure 10: for both operators, the recognition rate of the seventh class is the lowest and that of the first class is the highest. We believe the variance range is the main reason.

5. Conclusions

This paper presents improved local ternary patterns (LTP) for ATR in infrared imagery. Firstly, the RLTP and SCCP approaches are proposed to overcome the shortcomings of LTP and CCP, respectively. Combining them with the advantages of OC_LBP, the SCC_OC_RLTP operator is further introduced. Then, a simple, yet effective, blocking scheme and a feature selection method are introduced to enhance its discriminability and efficiency for ATR in infrared imagery. Experiments show that the proposed operators achieve competitive results compared with state-of-the-art methods.

Acknowledgements

The authors would like to thank the anonymous reviewers for their valuable comments and suggestions, which improved this paper. This work was supported by the Backbone Teacher Grant of Henan Province (2010GGJS-059), the International Cooperation Project of Henan Province (134300510057) and the research team of HPU (T2014-3). The authors would also like to thank MVG, Sparselab and SPGL1 for sharing the source code of the LBP and sparsity-based methods.

Author Contributions

Xiaosheng Wu and Junding Sun developed the methodology, performed the experimental analysis and wrote the manuscript; they contributed equally to the paper. Guoliang Fan and Zhiheng Wang gave valuable advice and revised the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Li, B.; Chellappa, R.; Zheng, Q.; Der, S.; Nasrabadi, N.; Chan, L.; Wang, L. Experimental evaluation of forward-looking IR data set automatic target recognition approaches—A comparative study. Comput. Vis. Image Underst. 2001, 84, 5–24. [Google Scholar]
  2. Chan, L.A.; Nasrabadi, N.M.; Mirelli, V. Multi-stage target recognition using modular vector quantizers and multilayer perceptrons. Proceedings of the 1996 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 18–20 June 1996; pp. 114–119.
  3. Wang, L.C.; Der, S.Z.; Nasrabadi, N.M. A committee of networks classifier with multi-resolution feature extraction for automatic target recognition. Proceedings of the International Conference on Neural Networks, Houston, TX, USA, 9–12 June 1997.
  4. Lamdan, Y.; Wolfson, H. Geometric hashing: A general and efficient model-based recognition scheme. Proceedings of the Second International Conference on Computer Vision, Tampa, FL, USA, 5–8 December 1988; pp. 238–249.
  5. Olson, C.; Huttenlocher, D. Automatic target recognition by matching oriented edge pixels. IEEE Trans. Image Process. 1997, 6, 103–113. [Google Scholar]
  6. Grenander, U.; Miller, M.; Srivastava, A. Hilbert-Schmidt lower bounds for estimators on matrix Lie groups for ATR. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 790–802. [Google Scholar]
  7. Venkataraman, V.; Fan, G.; Yu, L.; Zhang, X.; Liu, W.; Havlicek, J.P. Automated Target Tracking and Recognition using Coupled View and Identity Manifolds for Shape Representation. EURASIP J. Adv. Signal Process. 2011, 124, 1–17. [Google Scholar]
  8. Gong, J.; Fan, G.; Yu, L.; Havlicek, J.P.; Chen, D.; Fan, N. Joint View-Identity Manifold for Infrared Target Tracking and Recognition. Comput. Vis. Image Underst. 2014, 118, 211–224. [Google Scholar]
  9. Gong, J.; Fan, G.; Yu, L.; Havlicek, J.P.; Chen, D.; Fan, N. Joint Target Tracking, Recognition and Segmentation for Infrared Imagery Using a Shape Manifold-Based Level Set. Sensors 2014, 14, 10124–10145. [Google Scholar]
  10. Liebelt, J.; Schmid, C.; Schertler, K. Viewpoint-independent object class detection using 3D Feature Maps. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008.
  11. Khan, S.; Cheng, H.; Matthies, D.; Sawhney, H. 3D model based vehicle classification in aerial imagery. Proceedings of the 2010 IEEE Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 1681–1687.
  12. Toshev, A.; Makadia, A.; Daniilidis, K. Shape-based object recognition in videos using 3D synthetic object models. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 288–295.
  13. Sanna, A.; Lamberti, F. Advances in Target Detection and Tracking in Forward-Looking InfraRed (FLIR) Imagery. Sensors 2014, 14, 20297–20303. [Google Scholar]
  14. Li, Y.; Li, X.; Wang, H.; Chen, Y.; Zhuang, Z.; Cheng, Y.; Deng, B.; Wang, L.; Zeng, Y.; Gao, L. A Compact Methodology to Understand, Evaluate, and Predict the Performance of Automatic Target Recognition. Sensors 2014, 14, 11308–11350. [Google Scholar]
  15. Wright, J.; Yang, A.Y.; Ganesh, A.; Sastry, S.S.; Ma, Y. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 210–227. [Google Scholar]
  16. Patel, V.M.; Nasrabadi, N.M.; Chellappa, R. Sparsity-motivated automatic target recognition. Appl. Opt. 2011, 50, 1425–1433. [Google Scholar]
  17. Bhanu, B. Automatic Target Recognition: State of the Art Survey. IEEE Trans. Aerosp. Electron. Syst. 1986, AES-22, 364–379. [Google Scholar]
  18. Jeong, C.; Cha, M.; Kim, H.M. Texture feature coding method for SAR automatic target recognition with adaptive boosting. Proceedings of the 2nd Asian-Pacific Conference on Synthetic Aperture Radar (APSAR), Xi'an, China, 26–30 October 2009; pp. 473–476.
  19. Rahmani, N.; Behrad, A. Automatic marine targets detection using features based on Local Gabor Binary Pattern Histogram Sequence. Proceedings of the 1st International eConference on Computer and Knowledge Engineering (ICCKE), Mashhad, Iran, 13–14 October 2011; pp. 195–201.
  20. Qin, Y.; Cao, Z.; Fang, Z. A study on the difficulty prediction for infrared target recognition. Proc. SPIE 2013, 8918. [Google Scholar] [CrossRef]
  21. Wang, F.; Sheng, W.; Ma, X.; Wang, H. Target automatic recognition based on ISAR image with wavelet transform and MBLBP. Proceedings of the 2010 International Symposium on Signals Systems and Electronics (ISSSE), Nanjing, China, 17–20 September 2010; Volume 2, pp. 1–4.
  22. Sun, J.; Fan, G.; Yu, L.; Wu, X. Concave-convex local binary features for automatic target recognition in infrared imagery. EURASIP J. Image Video Process. 2014, 2014, 1–13. [Google Scholar]
  23. Ojala, T.; Pietikäinen, M.; Harwood, D. A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 1996, 29, 51–59. [Google Scholar]
  24. Brahnam, S.; Jain, L.C.; Nanni, L.; Lumini, A. Local Binary Patterns: New Variants and Applications; Springer: Berlin Heidelberg, Germany, 2014; Volume 506. [Google Scholar]
  25. Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar]
  26. Liao, S.; Zhu, X.; Lei, Z.; Zhang, L.; Li, S. Learning multi-scale block local binary patterns for face recognition. Lect. Notes Comput. Sci. 2007, 4642, 828–837. [Google Scholar]
  27. Wolf, L.; Hassner, T.; Taigman, Y. Descriptor based methods in the wild. Proceedings of the Workshop on Faces in “Real-Life” Images: Detection, Alignment, and Recognition, Marseille, France, 17 October 2008.
  28. Nanni, L.; Lumini, A.; Brahnam, S. Local binary patterns variants as texture descriptors for medical image analysis. Artif. Intell. Med. 2010, 49, 117–125. [Google Scholar]
  29. Lei, Z.; Pietikainen, M.; Li, S. Learning discriminant face descriptor. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 289–302. [Google Scholar]
  30. Ren, J.; Jiang, X.; Yuan, J.; Wang, G. Optimizing LBP Structure For Visual Recognition Using Binary Quadratic Programming. IEEE Signal Process. Lett. 2014, 21, 1346–1350. [Google Scholar]
  31. Tan, X.; Triggs, B. Enhanced local texture feature sets for face recognition under difficult lighting conditions. IEEE Trans. Image Process. 2010, 19, 1635–1650. [Google Scholar]
  32. Ren, J.; Jiang, X.; Yuan, J. Noise-resistant local binary pattern with an embedded error-correction mechanism. IEEE Trans. Image Process. 2013, 22, 4049–4060. [Google Scholar]
  33. Song, T.; Li, H.; Meng, F.; Wu, Q.; Luo, B.; Zeng, B.; Gabbouj, M. Noise-Robust Texture Description Using Local Contrast Patterns via Global Measures. IEEE Signal Process. Lett. 2014, 21, 93–96. [Google Scholar]
  34. Kylberg, G.; Ida-Maria, S. Evaluation of noise robustness for local binary pattern descriptors in texture classification. EURASIP J. Image Video Process. 2013, 2013, 1–20. [Google Scholar]
  35. Guo, Y.; Zhao, G.; PietikäInen, M. Discriminative features for texture description. Pattern Recognit. 2012, 45, 3834–3843. [Google Scholar]
  36. Zhu, C.; Bichot, C.E.; Chen, L. Image region description using orthogonal combination of local binary patterns enhanced with color information. Pattern Recognit. 2013, 46, 1949–1963. [Google Scholar]
  37. Ahonen, T.; Pietikäinen, M. Soft histograms for local binary patterns. Proceedings of the Finnish signal processing symposium (FINSIG 2007), Oulu, Finland, 30 August 2007; pp. 1–4.
  38. Guo, Z.; Zhang, L.; Zhang, D. A completed modeling of local binary pattern operator for texture classification. IEEE Trans. Image Process. 2010, 19, 1657–1663. [Google Scholar]
  39. Zhao, Y.; Huang, D.S.; Jia, W. Completed local binary count for rotation invariant texture classification. IEEE Trans. Image Process. 2012, 21, 4492–4497. [Google Scholar]
  40. Sapkota, A.; Boult, T.E. GRAB: Generalized Region Assigned to Binary. EURASIP J. Image Video Process. 2013, 35. [Google Scholar] [CrossRef]
  41. Yuan, F. Rotation and scale invariant local binary pattern based on high order directional derivatives for texture classification. Digit. Signal Process. 2014, 26, 142–152. [Google Scholar]
  42. Hong, X.; Zhao, G.; Pietikäinen, M.; Chen, X. Combining LBP Difference and Feature Correlation for Texture Description. IEEE Trans. Image Process. 2014, 23, 2557–2568. [Google Scholar]
  43. Guo, Z.; Zhang, L.; Zhang, D. Rotation invariant texture classification using LBP variance (LBPV) with global matching. Pattern Recognit. 2010, 43, 706–719. [Google Scholar]
  44. Qi, X.; Xiao, R.; Li, C.G.; Qiao, Y.; Guo, J.; Tang, X. Pairwise Rotation Invariant Co-Occurrence Local Binary Pattern. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 2199–2213. [Google Scholar]
  45. He, J.; Ji, H.; Yang, X. Rotation invariant texture descriptor using local shearlet-based energy histograms. IEEE Signal Process. Lett. 2013, 20, 905–908. [Google Scholar]
  46. Li, C.; Li, J.; Gao, D.; Fu, B. Rapid-transform based rotation invariant descriptor for texture classification under non-ideal conditions. Pattern Recognit. 2014, 47, 313–325. [Google Scholar]
  47. Khushaba, R.N.; Al-Ani, A.; Al-Jumaily, A. Feature subset selection using differential evolution and a statistical repair mechanism. Expert Syst. Appl. 2011, 38, 11515–11526. [Google Scholar]
  48. Yang, B.; Chen, S. A Comparative Study on Local Binary Pattern (LBP) based Face Recognition: LBP Histogram versus LBP Image. Neurocomputing 2013, 120, 365–379. [Google Scholar]
  49. Zhu, L.; Yang, J.; Song, J.N.; Chou, K.C.; Shen, H.B. Improving the accuracy of predicting disulfide connectivity by feature selection. J. Comput. Chem. 2010, 31, 1478–1485. [Google Scholar]
  50. Ojala, T.; Maenpaa, T.; Pietikainen, M.; Viertola, J.; Kyllonen, J.; Huovinen, S. Outex-new framework for empirical evaluation of texture analysis algorithms. Proceedings of the 16th International Conference on Pattern Recognition, Quebec, Canada, 11–15 August 2002; Volume 1, pp. 701–706.
Figure 1. Calculation of the LTP with eight neighboring pixels.
Figure 2. Calculation of local binary patterns (LBP) and the orthogonal combination of local binary patterns (OC_LBP) with eight neighboring pixels.
Figure 3. Calculation of the OC_LTP operators with eight neighboring pixels.
Figure 4. Two blocking methods to divide an infrared chip into multiple segments. (a) The chip is divided into four overlapped quadrants; (b) The chip is divided into five overlapped regions.
Figure 5. The curve of precision vs. α and β on TC10. (a) The results of RLTP(8,1), RLTP(16,2) and RLTP(24,3). (b) The results of SCCLTP(8,1), SCCLTP(16,2) and SCCLTP(24,3).
Figure 6. The curve of precision vs. α and β on TC12_001. (a) The results of RLTP(8,1), RLTP(16,2) and RLTP(24,3). (b) The results of SCCLTP(8,1), SCCLTP(16,2) and SCCLTP(24,3).
Figure 7. Some infrared chips of the 10 targets (row-wise) in 10 views (column-wise) in the forward-looking infrared (FLIR) dataset.
Figure 8. Recognition accuracy comparison for CC_OC_LTP, OC_LTP, OC_LBP and CCLTP.
Figure 9. The curve of precision vs. α and β for ATR.
Figure 10. Confusion matrices of the sSCC_OC_RLTP and sCCLTP.
Figure 11. Examples of targets in each variance range.
Table 1. Dimensionality comparison.

| Operator | (P, R) = (8, 1) | (P, R) = (16, 2) | (P, R) = (24, 3) |
|---|---|---|---|
| OC_LBP_{P,R} | 32 | 64 | 96 |
| OC_RLTP_{P,R} | 64 | 128 | 192 |
| SCC_OC_RLTP_{P,R} | 128 | 256 | 384 |
Table 2. Classification accuracy (%) on the TC10 and TC12 texture sets. For each (P, R) setting, the columns give the accuracy on TC10, on TC12 under the "tl84" (t) and "horizon" (h) illuminants, and their average.

| Method | (8,1) TC10 | (8,1) t | (8,1) h | (8,1) Avg. | (16,2) TC10 | (16,2) t | (16,2) h | (16,2) Avg. | (24,3) TC10 | (24,3) t | (24,3) h | (24,3) Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LTP_{P,R}^{riu2} | 94.14 | 75.87 | 73.95 | 81.32 | 96.95 | 90.16 | 86.94 | 91.35 | 98.20 | 93.58 | 89.42 | 93.73 |
| RLTP_{P,R}^{riu2} | 93.78 | 77.71 | 75.58 | 82.36 | 97.42 | 91.02 | 88.70 | 92.38 | 98.91 | 94.91 | 92.59 | 95.47 |
| CCLTP_{P,R}^{riu2} | 96.87 | 86.96 | 88.10 | 90.64 | 98.20 | 94.53 | 94.46 | 95.73 | 98.75 | 95.67 | 92.91 | 95.77 |
| SCCLTP_{P,R}^{riu2} | 97.37 | 87.43 | 88.52 | 91.11 | 98.52 | 95.14 | 94.75 | 96.14 | 98.83 | 96.11 | 93.77 | 96.24 |
Table 3. The average feature extraction time per image on TC10.

| | SCC_OC_RLTP | OC_RLTP | CCLTP |
|---|---|---|---|
| Average feature extraction time (s) | 0.012 | 0.009 | 0.013 |
Table 4. Accuracy of infrared ATR (%) for RLTP and LTP under different training datasets.

| Method | 10% | 20% | 30% | 40% | 50% |
|---|---|---|---|---|---|
| LTP_{8,1+16,2+24,3}^{riu2} | 51.58 | 62.67 | 69.12 | 73.53 | 76.71 |
| RLTP_{8,1+16,2+24,3}^{riu2} | 54.22 | 65.90 | 72.81 | 77.14 | 80.48 |
Table 5. Accuracy of infrared ATR (%) for SCCLTP and CCLTP under different training datasets.

| Method | 10% | 20% | 30% | 40% | 50% |
|---|---|---|---|---|---|
| CCLTP_{8,1+16,2+24,3}^{riu2} | 60.23 | 71.97 | 78.61 | 82.78 | 85.74 |
| SCCLTP_{8,1+16,2+24,3}^{riu2} | 60.84 | 72.48 | 79.16 | 83.28 | 86.13 |
Table 6. Accuracy of infrared ATR (%) for the two blocking methods under different training datasets.

| Blocking Method | 10% | 20% | 30% | 40% | 50% |
|---|---|---|---|---|---|
| Figure 4a | 66.50 | 79.05 | 85.88 | 89.80 | 92.25 |
| Figure 4b | 68.61 | 80.32 | 86.79 | 91.33 | 92.81 |
Table 7. Accuracy of infrared ATR (%) for sSCC_OC_RLTP by FSDE [47] under different training datasets.

| Dimensionality | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | Leave-One-Out |
|---|---|---|---|---|---|---|---|---|---|
| 288 | 70.71 | 82.28 | 87.79 | 91.15 | 93.42 | 94.48 | 95.48 | 96.14 | 97.94 |
| 576 | 71.64 | 83.06 | 88.74 | 91.79 | 94.03 | 95.16 | 96.08 | 96.72 | 98.34 |
| 864 | 72.12 | 83.50 | 89.20 | 92.30 | 94.43 | 95.32 | 96.24 | 96.85 | 98.43 |
| 1152 | 72.79 | 84.50 | 89.61 | 92.54 | 94.56 | 95.57 | 96.46 | 96.91 | 98.61 |
| 1440 | 72.91 | 84.77 | 89.70 | 92.59 | 94.58 | 95.47 | 96.33 | 96.88 | 98.37 |
Table 8. Accuracy of infrared ATR (%) for sOC_RLTP by FSDE [47] under different training datasets.

| Dimensionality | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | Leave-One-Out |
|---|---|---|---|---|---|---|---|---|---|
| 288 | 67.83 | 79.98 | 86.19 | 89.58 | 92.27 | 93.48 | 94.71 | 95.23 | 97.50 |
| 576 | 69.24 | 81.61 | 87.57 | 90.97 | 93.29 | 94.32 | 95.60 | 96.36 | 98.11 |
| 864 | 68.22 | 80.61 | 86.99 | 90.41 | 92.91 | 94.05 | 95.17 | 95.96 | 97.81 |
| 1152 | 68.18 | 80.56 | 86.79 | 90.27 | 92.76 | 93.97 | 95.05 | 95.74 | 97.66 |
| 1440 | 68.40 | 80.75 | 87.04 | 90.37 | 92.87 | 94.10 | 95.22 | 95.90 | 97.78 |
Table 9. Accuracy of infrared ATR (%) for the five methods under different training datasets.

| Method | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | Leave-One-Out |
|---|---|---|---|---|---|---|---|---|---|
| sSCC_OC_RLTP-576 | 71.64 | 83.06 | 88.74 | 91.79 | 94.03 | 95.16 | 96.08 | 96.72 | 98.34 |
| sOC_RLTP-576 | 69.24 | 81.61 | 87.57 | 90.97 | 93.29 | 94.32 | 95.60 | 96.36 | 98.11 |
| sCCLTP | 66.50 | 79.05 | 85.88 | 89.80 | 92.25 | 93.55 | 94.64 | 95.29 | 97.63 |
| SPG-lasso | 75.45 | 84.43 | 88.51 | 91.10 | 92.76 | 93.87 | 94.54 | 95.23 | 96.87 |
| Sparselab-lasso | 75.65 | 83.95 | 87.95 | 90.24 | 91.86 | 93.04 | 93.82 | 94.43 | 95.84 |
Table 10. The variance range of each target class.

| Target Class | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Number of chips | 729 | 660 | 691 | 753 | 702 | 733 | 735 | 759 | 437 | 731 |
| min_variance | 17.7 | 18.9 | 19.9 | 12.1 | 13.0 | 9.5 | 12.7 | 13.1 | 23.2 | 13.1 |
| max_variance | 143.6 | 102.3 | 107.1 | 115.1 | 121.7 | 92.9 | 82.0 | 118.8 | 119.9 | 104.4 |
Table 11. Number of chips in each range.

| Variance Range | (9.5, 35.0) | (35.0, 46.8) | (46.8, 58.5) | (58.5, 70.3) | (70.3, 143.6) |
|---|---|---|---|---|---|
| Number of chips | 1409 | 1747 | 1478 | 1009 | 1287 |
Table 12. The recognition rate (%) in each range averaged by 10 random trials.

| Method | (9.5, 35.0) | (35.0, 46.8) | (46.8, 58.5) | (58.5, 70.3) | (70.3, 143.6) |
|---|---|---|---|---|---|
| sSCC_OC_RLTP-576 | 92.54 | 93.23 | 93.85 | 94.61 | 95.88 |
| sOC_RLTP-576 | 91.88 | 92.39 | 93.02 | 93.45 | 94.17 |
| sCCLTP | 90.03 | 91.39 | 92.08 | 92.94 | 93.62 |
