Article

Ordered Subspace Clustering for Complex Non-Rigid Motion by 3D Reconstruction

Beijing Key Laboratory of Multimedia and Intelligent Software Technology, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(8), 1559; https://doi.org/10.3390/app9081559
Submission received: 18 February 2019 / Revised: 31 March 2019 / Accepted: 9 April 2019 / Published: 15 April 2019
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

As a fundamental and challenging problem, non-rigid structure-from-motion (NRSfM) has attracted considerable research interest, and it has been applied to dynamic scene understanding and motion segmentation. In particular, a motion segmentation approach combining NRSfM with subspace representation has been proposed. However, current subspace representations for clustering non-rigid motions do not take into account the inherent sequential property of the data, which has been shown to be vital for clustering sequential data. This paper therefore proposes a novel framework that segments complex non-rigid motion via an ordered subspace representation of the reconstructed 3D data: the sequential property is explicitly formulated in the procedure of learning the affinity matrix for clustering, while the 3D non-rigid motion is simultaneously recovered from 2D point tracks captured by a monocular camera. Experimental results on three public sequential action datasets, BU-4DFE, MSR and UMPM, verify the benefits of the presented method for complex non-rigid motion analysis; it outperforms state-of-the-art subspace clustering and motion segmentation methods with the lowest subspace clustering error (SCE) rates and the highest normalized mutual information (NMI).

1. Introduction

Modeling and analysis of non-rigid motions from image sequences are challenging problems in computer vision due to complex deformation patterns and shape structures, for example in dynamic scenes, human body activities, and expressive or talking faces. NRSfM aims to recover 3D non-rigid structure and camera motion from 2D point tracks collected by a monocular camera, and has received increasing attention in the community. Current NRSfM methods fall roughly into two classes: shape-basis factorization and correspondence-based methods. The shape-basis methods [1,2,3] assume that the non-rigid motion is spanned by a small number of principal components and represent each shape as a linear combination of shape bases. The correspondence-based approaches [4,5] aim to reconstruct 3D motion from dense points or every pixel in the image sequence, and generally require spatial constraints as regularizers. Although current NRSfM methods work well in reconstructing simple non-rigid deformations, they still struggle in practical scenarios with complex non-rigid shape variations and diverse motions, such as human activities of sitting, walking, bending, and dancing.
Recently, Dai et al. [6,7] adopted a single low-dimensional subspace to model non-rigid 3D shapes and proposed a "prior-free" method for the NRSfM problem, which makes no prior assumption about the non-rigid shapes or the camera motion. The method can be regarded as an extension of Robust Principal Component Analysis (RPCA) [8], which recovers low-rank subspaces from noisy data. Dai's method makes the same low-rank assumption as RPCA but applies it to 3D data reconstructed from known 2D information. Unfortunately, it suffers from low accuracy when reconstructing complex and varied non-rigid motion. Although the method has been improved by iterative shape clustering [9], which alternately reconstructs 3D shapes and clusters them, the improved method still struggles with complex non-rigid motion.
Considering complex non-rigid motion in NRSfM, Zhu et al. [10] followed Dai et al. [6,7] and proposed complex non-rigid motion 3D reconstruction by union of subspaces (CNRMS), which reconstructs 3D complex non-rigid motion from a 2D image sequence together with the relative camera motion. In this method, the clustering of the 3D non-rigid motion is implemented simultaneously via a union of subspaces, on the premise that a union of individual subspaces is better suited to model complex and varied non-rigid motion. The experimental results show that the method achieves higher clustering accuracy and better reconstruction than the method in [6].
Although introducing subspace clustering into the NRSfM model brings considerable improvement for both 3D reconstruction and motion segmentation, current NRSfM methods usually apply only fundamental subspace clustering theory. Clustering research has had many successful applications in computer vision, pattern recognition, and image processing [11,12,13]; in particular, spectral subspace clustering methods based on an affinity matrix perform well. For example, the segmentation accuracy in [14] is three percentage points higher than RPCA, since constraining the affinity matrix helps data representation [13,14,15]. The representative clustering methods [16,17,18,19,20] suggest that to obtain good clustering results, the intrinsic properties of the data should be exploited and a feasible structure should be imposed on the affinity matrix. Current NRSfM methods take neither factor into account. On the one hand, they ignore the sequential property of non-rigid motion, which is ubiquitous in motion segmentation and other applications involving sequential data; the recently proposed ordered subspace clustering (OSC) methods [16,17] show that exploiting sequential or ordered properties improves clustering accuracy significantly. On the other hand, current NRSfM methods place no constraint on the structure of the affinity matrix; the success of subspace clustering methods [17,18,19] that adopt structural priors such as block-diagonality suggests that such constraints are also necessary in NRSfM clustering.
Kumar et al. [21] proposed a joint framework in which segmentation and reconstruction benefit each other, modeling multiple subspaces in both trajectory space and shape space to obtain better reconstruction. Kumar et al. [22] later offered a different view of the dense NRSfM problem by formulating it on a Grassmann manifold.
In this paper, a novel method is proposed based on ordered subspace clustering of complex and varied non-rigid motion via 3D reconstruction, where the sequential property is explicitly formulated in the procedure of learning the affinity matrix for clustering while the 3D non-rigid motion is simultaneously recovered from 2D point tracks. Experimental results on several sequential datasets show the benefits of the proposed model for complex non-rigid motion analysis, and its results outperform state-of-the-art motion segmentation methods. The main contributions are as follows:
  • A novel framework is proposed for segmentation of complex and various non-rigid motion from 3D reconstruction using ordered subspace clustering.
  • Instead of the nuclear norm, a quadratic constraint is used in the self-representation model to improve clustering performance.
  • An efficient algorithm is implemented to solve the complicated optimization problem involved in the proposed framework.
This paper is organized as follows. First, related works are presented in Section 2. Section 3 describes the proposed model in detail. The solution to the optimization model is given in Section 4. In Section 5, the proposed method is evaluated together with state-of-the-art methods on several public datasets. Finally, we conclude the paper in Section 6.

2. Related Works

We briefly summarize the representative methods of NRSfM, particularly the NRSfM methods in [6,10].
In recent work, Dai et al. [6] proposed an NRSfM method that adopts a low-dimensional subspace to model the non-rigid 3D shapes, similar in principle to RPCA, proposed by Candès et al. [8]. RPCA models noisy 2D data as being drawn from a low-dimensional subspace and recovers the clean data by the following objective function,
$$\min_{D,E} \|D\|_* + \lambda \|E\|_l, \quad \text{s.t.}\; X = D + E, \tag{1}$$
where $D$ is the clean low-dimensional data, whose low rank is modeled by the nuclear norm $\|\cdot\|_*$, $E$ denotes the residual noise, and $X$ denotes the corrupted data. $\|E\|_l$ denotes the error norm: $l = 1$ gives a sparse error and $l = 2{,}1$ a group-sparse error. The error penalty parameter is $\lambda > 0$.
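To make the RPCA objective above concrete, the following minimal sketch (our illustration, not the authors' code) alternates the two proximal operators that RPCA solvers rely on, singular value thresholding for the nuclear norm and elementwise soft-thresholding for the sparse error, inside a basic ADMM loop:

```python
import numpy as np

def soft_threshold(M, tau):
    # Elementwise shrinkage: proximal operator of tau * ||.||_1.
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svt(M, tau):
    # Singular value thresholding: proximal operator of tau * ||.||_*.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca(X, lam=None, mu=1.0, n_iter=200):
    # Minimal ADMM loop for min ||D||_* + lam*||E||_1 s.t. X = D + E.
    # lam, mu and n_iter are placeholder settings, to be tuned in practice.
    if lam is None:
        lam = 1.0 / np.sqrt(max(X.shape))
    D = np.zeros_like(X); E = np.zeros_like(X); Y = np.zeros_like(X)
    for _ in range(n_iter):
        D = svt(X - E + Y / mu, 1.0 / mu)
        E = soft_threshold(X - D + Y / mu, lam / mu)
        Y = Y + mu * (X - D - E)  # dual ascent on the constraint residual
    return D, E
```

The same two operators reappear in the sub-problem solutions of Section 4.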
The NRSfM model proposed by Dai et al. [6] can be regarded as an extension of RPCA, and the overall model is described below,
$$\min_{X,E} \|X\|_* + \lambda \|E\|_2, \quad \text{s.t.}\; W = R X^{\#} + E, \tag{2}$$
where $W \in \mathbb{R}^{2F \times N}$ contains the known 2D tracks of $N$ points over $F$ frames, projected from the 3D point coordinates $X^{\#} \in \mathbb{R}^{3F \times N}$, and $X \in \mathbb{R}^{F \times 3N}$ is a reshuffled version of $X^{\#}$. The projection matrix $R \in \mathbb{R}^{2F \times 3F}$ can be pre-computed from the 2D sequences by the methods in [6,23]. The $\ell_2$ norm is selected for the error $E$ under a Gaussian noise assumption; it can be replaced by other norms as in RPCA. The method is called "prior-free" because there is no prior assumption on the non-rigid structure or camera motion.
Zhu et al. [10] argued that Dai et al.'s method [6] assumes the data are sampled from a single subspace and thus suffers the same low accuracy as RPCA when applied to complex and varied motion reconstruction. Inspired by the subspace clustering method low-rank representation (LRR) [14,15], Zhu et al. proposed an NRSfM method that models complex and varied non-rigid motion as a union of subspaces:
$$\min_{X,Z,E} \|Z\|_* + \lambda \|E\|_l + \gamma \|X\|_*, \quad \text{s.t.}\; W = R X^{\#} + E,\; X = XZ, \tag{3}$$
where $X = XZ$ is the self-representation constraint that automatically enforces the union-of-subspaces structure of $X$, with $Z$ a low-rank coefficient matrix, and $W = R X^{\#} + E$ constrains the 3D reconstruction to agree with the 2D projections $W$. The penalty parameter for $\|X\|_*$ is $\gamma$ and that for $\|E\|_l$ is $\lambda > 0$. The method simultaneously reconstructs and clusters the 3D complex non-rigid motions $X$ over a union of subspaces encoded by the low-rank affinity matrix $Z$.
Recently, ordered subspace clustering (OSC) [16] and ordered subspace clustering with block-diagonal priors (QOSC) [17] have been proposed. Motivated by these, we model the sequential property of sequence data from the viewpoint of subspace clustering.

3. The Proposed Model

In the real world, most motions are continuous and sequential, especially in videos captured by a monocular camera. However, current NRSfM methods do not utilize the sequential or ordered information embedded in non-rigid motion data. The OSC [16] and QOSC [17] methods provide a proper way to model this sequential property. Motivated by them, we introduce a penalty term that encourages similarity between consecutive columns of the low-rank representation $Z$ of the reconstructed 3D motion data $X$, which yields the following NRSfM model:
$$\min_{X,Z,E} \frac{1}{2}\|X - XZ\|_F^2 + \lambda \|Z\|_* + \lambda_1 \|ZS\|_{2,1} + \lambda_2 \|X\|_* + \lambda_3 \|E\|_1 \quad \text{s.t.}\; W = R X^{\#} + E, \tag{4}$$
where $S$ is a lower bidiagonal matrix consisting only of $-1$, $1$, and $0$, with $-1$ on the main diagonal and $1$ on the subdiagonal,
$$S = \begin{bmatrix} -1 & 0 & \cdots & 0 \\ 1 & -1 & \cdots & 0 \\ 0 & 1 & \ddots & \vdots \\ \vdots & \vdots & \ddots & -1 \\ 0 & 0 & \cdots & 1 \end{bmatrix} \in \mathbb{R}^{n \times (n-1)},$$
so that the $j$-th column of $ZS$ equals $z_{j+1} - z_j$, which encourages consecutive columns of $Z$ to be alike. Thus $\|ZS\|_{2,1}$ is a penalty that preserves the sequential property of $Z$, determined by the inherent sequential property of the motion data $X$; the $\|\cdot\|_{2,1}$ norm additionally maintains column-wise sparsity. Note also that the rigid constraint $X = XZ$ of the low-rank self-representation is relaxed into a reconstruction error term in the objective function.
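As a sanity check, the matrix $S$ and the sequential penalty can be sketched in a few lines (an illustrative sketch; `build_S` and `norm_21` are our own helper names):

```python
import numpy as np

def build_S(n):
    # n x (n-1) matrix with -1 on the diagonal and 1 on the subdiagonal,
    # so that (Z @ S)[:, j] = Z[:, j+1] - Z[:, j].
    S = np.zeros((n, n - 1))
    idx = np.arange(n - 1)
    S[idx, idx] = -1.0
    S[idx + 1, idx] = 1.0
    return S

def norm_21(M):
    # l2,1 norm: sum of the l2 norms of the columns.
    return np.linalg.norm(M, axis=0).sum()
```

A representation `Z` whose consecutive columns coincide pays zero penalty, so minimizing `norm_21(Z @ build_S(n))` favors the sequential structure of the motion.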
To further improve clustering performance, we model the structure of the affinity matrix derived from the LRR representation $Z$. In subspace clustering, the ideal representation has zero coefficients between items from different subspaces and non-zero coefficients only within the same subspace; under a suitable permutation, the affinity matrix is then block-diagonal, and exploiting this block-diagonal prior improves clustering performance. One representative method is subspace segmentation via quadratic programming (SSQP) [18], which introduces a quadratic term to promote the block-diagonal structure and has been proved to satisfy it under the assumption of orthogonal linear subspaces [18]. Following this approach, we revise the model in (4) by replacing the low-rank constraint on $Z$ with a quadratic term to obtain an affinity matrix with block-diagonal structure,
$$\min_{X,Z,E} \frac{1}{2}\|X - XZ\|_F^2 + \lambda \|Z^T Z\|_1 + \lambda_1 \|ZS\|_{2,1} + \lambda_2 \|X\|_* + \lambda_3 \|E\|_1 \quad \text{s.t.}\; W = R X^{\#} + E,\; Z \geq 0,\; \mathrm{diag}(Z) = 0, \tag{5}$$
where $Z \geq 0$ and $\mathrm{diag}(Z) = 0$ ensure a feasible solution for $Z$ as in [18].

4. Solutions

Problem (5) is solved with the alternating direction method of multipliers (ADMM) [24]. Introducing the auxiliary variable $U = ZS$, the problem becomes the following augmented Lagrangian with two constraints:
$$\min_{X,Z,U,E} \frac{1}{2}\|X - XZ\|_F^2 + \lambda \|Z^T Z\|_1 + \lambda_1 \|U\|_{2,1} + \lambda_2 \|X\|_* + \lambda_3 \|E\|_1 + \langle F, U - ZS \rangle + \langle G, W - R X^{\#} - E \rangle + \frac{\gamma}{2}\|U - ZS\|_F^2 + \frac{\gamma_1}{2}\|W - R X^{\#} - E\|_F^2 \quad \text{s.t.}\; \mathrm{diag}(Z) = 0,\; Z \geq 0, \tag{6}$$
where $F$ and $G$ are Lagrange multipliers and $\gamma, \gamma_1$ are penalty weights for the constraints $U = ZS$ and $W = R X^{\#} + E$. We now solve Equation (6) by the following four sub-problems for $X$, $Z$, $U$, and $E$, fixing the other variables alternately.
1. Fix Z, U and E, solve for X by
$$\min_X f(X) = \frac{1}{2}\|X - XZ\|_F^2 + \lambda_2 \|X\|_* + \langle G, W - R X^{\#} - E \rangle + \frac{\gamma_1}{2}\|W - R X^{\#} - E\|_F^2. \tag{7}$$
Equivalently, we have
$$\min_X \lambda_2 \|X\|_* + \frac{\gamma_1}{2}\left\|X - \Big(X^k - \frac{\nabla f(X^k)}{\gamma_1}\Big)\right\|_F^2, \tag{8}$$
where $\nabla f(X^k)$ is the gradient of the smooth part of $f(X)$ at $X = X^k$. This proximal problem is solved in closed form by singular value thresholding of $C = X^k - \nabla f(X^k)/\gamma_1$.
2. Fix X, U and E, solve for Z by
$$\min_Z f(Z) = \frac{1}{2}\|X - XZ\|_F^2 + \lambda \|Z^T Z\|_1 + \langle F, U - ZS \rangle + \frac{\gamma}{2}\|U - ZS\|_F^2, \quad \text{s.t.}\; \mathrm{diag}(Z) = 0,\; Z \geq 0. \tag{9}$$
Note that $Z$ is element-wise nonnegative, so (9) can be rewritten as
$$f(Z) = \frac{1}{2}\|X - XZ\|_F^2 + \lambda\, e^T Z^T Z e + \langle F, U - ZS \rangle + \frac{\gamma}{2}\|U - ZS\|_F^2, \tag{10}$$
where $e \in \mathbb{R}^n$ is the all-ones vector. Model (9) is therefore a classical convex quadratic problem with many practical solvers; here a simple and efficient projected-gradient solution is adopted.
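A generic projected-gradient scheme for such a quadratic problem with nonnegativity and zero-diagonal constraints can be sketched as follows (our illustration; the step size and iteration count are placeholders to be tuned, and `grad` is the gradient derived next in the text):

```python
import numpy as np

def project_feasible(Z):
    # Euclidean projection onto the feasible set {Z >= 0, diag(Z) = 0}.
    Z = np.maximum(Z, 0.0)
    np.fill_diagonal(Z, 0.0)
    return Z

def projected_gradient(grad, Z0, step=0.4, n_iter=100):
    # Gradient descent step followed by projection at every iteration.
    Z = project_feasible(Z0.copy())
    for _ in range(n_iter):
        Z = project_feasible(Z - step * grad(Z))
    return Z
```

For a well-conditioned quadratic objective this converges linearly to the constrained minimizer.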
First, we compute the derivative of $f(Z)$ with respect to $Z$,
$$\nabla f(Z) = -X^T (X - XZ) + 2\lambda\, Z e e^T - F S^T - \gamma (U - ZS) S^T, \tag{11}$$
where $e e^T \in \mathbb{R}^{n \times n}$ is the all-ones matrix. Setting the derivative to zero to obtain the optimum of $f(Z)$ yields
$$X^T X Z + Z \left(2\lambda\, e e^T + \gamma S S^T\right) = X^T X + F S^T + \gamma U S^T. \tag{12}$$
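The linear system above has the Sylvester form $AZ + ZB = C$ with symmetric $A = X^T X$ and $B = 2\lambda ee^T + \gamma SS^T$, so it can be solved in closed form by diagonalizing both coefficient matrices. A hedged numpy sketch (our own helper names; the final projection onto the constraints is a simple feasibility heuristic, not necessarily the authors' exact procedure):

```python
import numpy as np

def solve_sylvester_sym(A, B, C):
    # Solve A Z + Z B = C for symmetric A, B via eigendecomposition:
    # with A = P Da P^T and B = Q Db Q^T, the transformed unknown
    # P^T Z Q satisfies (da_i + db_j) * (P^T Z Q)_ij = (P^T C Q)_ij.
    da, P = np.linalg.eigh(A)
    db, Q = np.linalg.eigh(B)
    Ct = P.T @ C @ Q
    Zt = Ct / (da[:, None] + db[None, :])
    return P @ Zt @ Q.T

def update_Z(X, U, F, S, lam, gamma):
    # One Z-step (symbols as in the text), then project onto
    # {Z >= 0, diag(Z) = 0} as a simple feasibility heuristic.
    n = X.shape[1]
    A = X.T @ X
    B = 2.0 * lam * np.ones((n, n)) + gamma * (S @ S.T)
    C = A + F @ S.T + gamma * U @ S.T
    Z = solve_sylvester_sym(A, B, C)
    Z = np.maximum(Z, 0.0)
    np.fill_diagonal(Z, 0.0)
    return Z
```

$B$ is positive definite for $\lambda, \gamma > 0$ (its two terms only vanish together at $v = 0$), so the denominators in the eigen-domain never vanish for generic data.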
3. Fix X, E and Z, and solve for U by
$$\min_U f_1(U) = \lambda_1 \|U\|_{2,1} + \langle F, U - ZS \rangle + \frac{\gamma}{2}\|U - ZS\|_F^2. \tag{13}$$
Equivalently, we have
$$\min_U f_1(U) = \lambda_1 \|U\|_{2,1} + \frac{\gamma}{2}\left\|U - \Big(ZS - \frac{1}{\gamma}F\Big)\right\|_F^2. \tag{14}$$
Denoting $Q = ZS - \frac{1}{\gamma}F$, problem (14) has the closed-form solution given in [14],
$$U_i = \begin{cases} \dfrac{\|Q_i\| - \lambda_1/\gamma}{\|Q_i\|}\, Q_i, & \text{if } \|Q_i\| > \lambda_1/\gamma, \\[4pt] 0, & \text{otherwise}, \end{cases} \tag{15}$$
where $U_i$ and $Q_i$ denote the $i$-th columns of $U$ and $Q$, respectively.
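The column-wise shrinkage above, the proximal operator of the $\|\cdot\|_{2,1}$ norm, takes only a few lines (a sketch with our own helper name):

```python
import numpy as np

def columnwise_shrink(Q, tau):
    # Proximal operator of tau * ||.||_{2,1}: shrink each column of Q
    # toward zero by tau in the l2 norm; columns with norm <= tau vanish.
    norms = np.linalg.norm(Q, axis=0)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return Q * scale
```

Calling it with `tau = lambda_1 / gamma` on `Q = Z @ S - F / gamma` reproduces the update for `U`.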
4. Fix Z, X and U, solve for E by
$$\min_E \lambda_3 \|E\|_1 + \langle G, W - R X^{\#} - E \rangle + \frac{\gamma_1}{2}\|W - R X^{\#} - E\|_F^2, \tag{16}$$
which is equivalent to
$$\min_E f_1(E) = \lambda_3 \|E\|_1 + \frac{\gamma_1}{2}\left\|E - \Big(W - R X^{\#} + \frac{1}{\gamma_1}G\Big)\right\|_F^2. \tag{17}$$
This problem is solved in closed form by elementwise soft-thresholding, as commonly used in sparse subspace clustering methods.
Combining all the above updates, we summarize the proposed OSC-NRSfM in Algorithm 1; the parameter settings follow [14]. The four sub-problems are solved in turn within each ADMM iteration, and the process is repeated until the maximal iteration number is reached.
Algorithm 1 Solving data representation of the proposed ordered subspace clustering (OSC)-non-rigid structure-from-motion (NRSfM).
Input: The 2D measurement data $W$, maximal iteration number $N$, parameters $\lambda, \lambda_1, \lambda_2, \lambda_3, \gamma, \gamma_1$, and constant $\rho$.
Output: The data representation Z.
 1: Initialization: $Z^0 = 0$, $U^0 = 0$, $E^0 = 0$, $X^0 = 0$, $t = 0$, $G = 0$, $F = 0$, $\rho = 1.3$.
 2: while $t < N$ do
 3:  Calculate $X^t$ by (7);
 4:  Find $Z^t$ by solving (9);
 5:  Find $U^t$ by solving (13);
 6:  Find $E^t$ by solving (16);
 7:  Update $F \leftarrow F + \gamma (U - ZS)$;
 8:  Update $G \leftarrow G + \gamma_1 (W - R X^{\#} - E)$;
 9:  Update $\gamma \leftarrow \rho\gamma$, $\gamma_1 \leftarrow \rho\gamma_1$;
10:  Update $t \leftarrow t + 1$;
11: end while
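Steps 7-9 of Algorithm 1, the multiplier ascent and penalty inflation, can be sketched as a small helper (our illustration; variable shapes follow the text, with `X_sharp` standing for the reshuffled 3D coordinates $X^{\#}$):

```python
import numpy as np

def dual_updates(F, G, U, Z, S, W, R, X_sharp, E, gamma, gamma1, rho=1.3):
    # Steps 7-9 of Algorithm 1: move the Lagrange multipliers along the
    # constraint residuals U - Z S and W - R X# - E, then inflate the
    # penalty weights gamma and gamma1 by the constant rho.
    F = F + gamma * (U - Z @ S)
    G = G + gamma1 * (W - R @ X_sharp - E)
    return F, G, rho * gamma, rho * gamma1
```

Growing the penalties by `rho` at every iteration progressively tightens both constraints, a standard ADMM heuristic.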

5. Experiments and Analysis

To evaluate the performance of state-of-the-art clustering methods together with our proposed method, denoted OSC-NRSfM, we conducted experiments on representative datasets. First, we performed face clustering and expression clustering experiments on the BU-4DFE dataset [25]. Second, we extensively evaluated the proposed method on complex non-rigid motion using the MSR Action3D dataset. Finally, we tested our method on the Utrecht Multi-Person Motion (UMPM) dataset [26], which contains 3D joint positions and 2D point tracks from real-world video projections. We compared the proposed OSC-NRSfM method with LRR [14], OSC [16], QOSC [17], CNRMS [10] and sparse subspace clustering (SSC) [13]. We used the subspace clustering error (SCE) [16] and the normalized mutual information (NMI) to measure the clustering results, described as follows.
$$SCE = \frac{\text{num. of misclassified points}}{\text{total num. of points}},$$
where the numerator is the number of misclassified samples and the denominator is the total number of samples.
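The SCE metric can be computed with a brute-force search over cluster relabelings, which is adequate for the small numbers of clusters used here (`clustering_error` is our own helper, assuming the predicted and ground-truth label sets coincide):

```python
import numpy as np
from itertools import permutations

def clustering_error(pred, truth):
    # SCE: fraction of misclassified points under the best one-to-one
    # relabeling of the predicted clusters (brute force over permutations;
    # assumes predicted and ground-truth label sets are the same).
    pred, truth = np.asarray(pred), np.asarray(truth)
    labels = sorted(set(truth))
    best = len(truth)
    for perm in permutations(labels):
        mapping = dict(zip(labels, perm))
        wrong = sum(mapping[p] != t for p, t in zip(pred, truth))
        best = min(best, wrong)
    return best / len(truth)
```

For larger numbers of clusters the Hungarian algorithm would replace the permutation search, but the brute-force version makes the definition explicit.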
The parameters $\lambda, \lambda_1, \lambda_2, \lambda_3$ of our method, and those of the compared approaches, were tuned experimentally following the results and the parameter analysis described in [10,14,16,17].

5.1. Face Clustering on Dynamic Face Sequence

The experiments in this section perform face clustering under complex conditions on the BU-4DFE dataset [25]. The dataset has 101 subjects (female/male ratio 3:2) of different races. Each subject was asked to perform six expressions (happy, surprise, sad, angry, fear and disgust). All 2D face image sequences and dynamic 3D face shapes were collected simultaneously; sample images with 2D feature tracks are shown in Figure 1. We used only the 2D face image sequences to test the proposed method, applying an active shape model (ASM) to extract 76 2D feature tracks for each face image. We randomly selected $c = [2, 3, 5, 8, 10, 20, 30]$ subjects out of the 101 persons for face clustering. For each subject, we took 11 consecutive frames per expression, i.e., 66 frames over the 6 expressions. Therefore, the data matrix for $c$ subjects is $X \in \mathbb{R}^{132c \times 76}$. We repeated each test 30 times, selecting the $c$ subjects randomly each time. The optimal parameter settings were $\lambda = 0.01$, $\lambda_1 = 0.2$, $\lambda_2 = 0.001$, $\lambda_3 = 0.05$.
It is easy to observe from Table 1 that the proposed OSC-NRSfM outperformed the other methods, especially as the number of clusters increased; the improvement in clustering accuracy was most impressive when there were relatively many cluster centers.

5.2. Expression Clustering on Dynamic 3D Face Expression Sequence

Facial expressions are very important in communication. The BU-4DFE dataset includes 3D face expression sequences for 101 persons, six sequences per person. Expression clustering therefore aims to cluster the face image sequences into 6 categories regardless of identity. In this experiment, we selected the 6 expression sequences of $c$ subjects from the 101 persons. For the $i$-th expression sequence of a subject, we picked 11 consecutive frames, so the 2D features of $c$ subjects for the $i$-th expression can be represented as $X_i \in \mathbb{R}^{22c \times 76}$, $i = 1, 2, \ldots, 6$, and the full test set is $X = [X_1, \ldots, X_6]$. For each $c$ in $[2, 3, 5, 8, 10, 20, 30]$, we randomly selected $c$ subjects, repeated the experiment 30 times, and reported the average clustering performance as the final result. The optimal parameter settings were $\lambda = 0.01$, $\lambda_1 = 0.02$, $\lambda_2 = 0.001$, $\lambda_3 = 0.1$.
The expression clustering error rates on BU-4DFE are shown in Table 2. The proposed OSC-NRSfM method outperforms the compared methods, demonstrating that it can deal with complex sequential data such as facial expression sequences. Figure 2 visualizes the affinity matrices of $Z$ for five subjects: the affinity matrices produced by the proposed method exhibit a clear block-diagonal structure with larger within-block weights, confirming that the block-diagonal constraint on $Z$ is effective. The clustering results of $Z$ are shown in Figure 3, where points of the same color belong to the same class.

5.3. Clustering on MSR Action3D Dataset

MSR-Action3D is a classical action dataset consisting of depth data. It contains 20 kinds of actions performed by 10 subjects: high arm wave, hand catch, high throw, horizontal arm wave, two hand wave, hammer, hand clap, draw x, forward kick, draw tick, draw circle, forward punch, side boxing, jogging, tennis serve, golf swing, tennis swing, bend, side kick, and pick up and throw. Each action was performed three times by each subject. The sampling rate was 15 frames/s and the spatial resolution of each image was $640 \times 480$; the dataset comprises 23,797 depth maps in total. Some samples are shown in Figure 4. Many actions are very similar despite the clean backgrounds, which makes the dataset challenging. To obtain the 2D motion, we used a real-time skeleton tracking algorithm [27] to extract 20 joint positions from each depth image. In our experiments, we randomly selected $c = [2, 3, 5, 8, 10]$ kinds of actions from the 20 kinds. For each selected action, we took 8 consecutive frames, so the data matrix is $X \in \mathbb{R}^{16c \times 20}$. For each $c$, we randomly selected $c$ actions and tested 30 times to compute the mean clustering error. The parameter settings were $\lambda = 0.01$, $\lambda_1 = 0.1$, $\lambda_2 = 0.001$, $\lambda_3 = 0.1$. The 3D action clustering error rates and NMI results are listed in Table 3 and Table 4, which show that the proposed method improved clustering accuracy impressively, especially when the number of actions was large. The proposed method was not the fastest, but it was relatively fast with better performance.

5.4. Clustering on UMPM Motion Dataset

The UMPM benchmark consists of human motion sequences collected in real environments, which can be regarded as complex and varied non-rigid motion covering representative daily human actions and interactions with large changes of pose and shape. Each motion sequence is performed by 1, 2, 3 or 4 persons and acquired under four fixed viewpoints. The dataset includes synchronized video recordings and 3D joint information captured by a motion capture device. We selected eight coherent motion sequences {p1-table-2, p1-grab-3, p1-chair-2, p2-staticsyn-1, p4-free-11, p1-orthosyn-1, p3-ball-1, p3-meet-2} to evaluate our method. Here, the name "p1-table-2" denotes a video of one person performing two actions around a table. Sample images and markers of the UMPM dataset [26] are shown in Figure 5.
As each video is relatively long, about 5600 frames, we picked one frame every eight frames, giving 700 frames in total for testing. We used 15 virtual joint positions per subject as input; given the camera parameters, the corresponding 3D joint positions can then be reconstructed, and our method reports the ordered subspace clustering result via this 3D reconstruction. Since the UMPM dataset provides ground-truth 3D joint positions obtained directly from the motion capture markers, we ran LRR on the 3D ground truth, while the other methods used the 2D joint coordinates computed from the 3D joint positions. Table 5 reports the clustering accuracy and Table 6 the NMI results. In Table 5, the clustering error rate of the proposed method is lower than that of the other methods; in particular, it is lower than LRR, which uses the captured 3D joints. This demonstrates that the introduced 3D reconstruction information significantly enhances clustering accuracy. Figure 6 shows the visual clustering results of p3-meet-2, where points of the same color belong to the same class and small blocks in different colors indicate clustering errors. Table 7 lists the running times of the different methods; the proposed method offers a good balance of efficiency and accuracy.

6. Conclusions

This paper has proposed an ordered subspace clustering method for complex and varied non-rigid motion via 3D reconstruction. In the proposed model, we exploit the sequential property and intrinsic structure of the complex non-rigid motion based on the reconstructed 3D information. Specifically, inspired by QOSC and CNRMS, we impose the block-diagonal structure and a sequential constraint on the 3D representation generated by the CNRMS model so as to obtain a good representation for clustering. We verified the proposed OSC-NRSfM method on three public datasets; the experimental results demonstrated that it outperforms state-of-the-art methods.

Author Contributions

Conceptualization, W.D. and Y.S.; Methodology, J.L.; Software, F.W. and W.D.; Validation, W.D., Y.S. and Y.H.; Formal analysis, J.L.; Investigation, F.W. and W.D.; Resources, Y.H.; Data curation, W.D.; Writing—original draft preparation, W.D., J.L. and F.W.; Writing—review and editing, Y.S., W.D. and Y.H.; Visualization, W.D.; Supervision, Y.S. and Y.H.; Project administration, J.L. and Y.H.; Funding acquisition, Y.S. and Y.H.

Funding

This research was funded in part by the National Natural Science Foundation of China under Grant 61876012, 61672066, 61772049, 61602486, in part by the Beijing Educational Committee (KM201710005022).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xiao, J.; Chai, J.; Kanade, T. A Closed Form Solution to Non-Rigid Shape and Motion Recovery. Int. J. Comput. Vis. 2006, 67, 233–246. [Google Scholar] [CrossRef]
  2. Torresani, L.; Hertzmann, A.; Bregler, C. Nonrigid Structure-from-Motion: Estimating Shape and Motion with Hierarchical Priors. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 878–892. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Paladini, M.; del Bue, A.; Stosic, M.; Dodig, M.; Xavier, J.; Agapito, L. Factorization for Non-rigid and Articulated Structure Using Metric Projections. In Proceedings of the IEEE Conference Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA, 20–25 June 2009; pp. 2898–2905. [Google Scholar]
  4. Russell, C.; Fayad, J.; Agapito, L. Dense Non-rigid Structure from Motion. In Proceedings of the International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT), Zurich, Switzerland, 13–15 October 2012; pp. 509–516. [Google Scholar]
  5. Russell, C.; Yu, R.; Agapito, L. Video Pop-up: Monocular 3d Reconstruction of Dynamic Scenes. In Proceedings of the European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014; pp. 583–598. [Google Scholar]
  6. Dai, Y.; Li, H.; He, M. A Simple Prior-free Method for Non-Rigid Structure-from-Motion Factorization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012; pp. 2018–2025. [Google Scholar]
  7. Dai, Y.; Li, H.; He, M. A Simple Prior-Free Method for Non-rigid Structure-from-Motion Factorization. Int. J. Comput. Vis. 2014, 107, 101–122. [Google Scholar] [CrossRef]
  8. Candès, E.; Li, X.; Ma, Y.; Wright, J. Robust Principal Component Analysis. J. ACM 2011, 58, 1–37. [Google Scholar] [CrossRef]
  9. Deng, H.; Dai, Y. Pushing the Limit of Non-rigid Structure-from-Motion by Shape Clustering. In Proceedings of the IEEE International Conference on Accoustic, Speech and Signal Processing(ICASSP), Shanghai, China, 20–25 March 2016; pp. 1999–2003. [Google Scholar]
  10. Zhu, Y.; Huang, D.; Torre, F.; Lucey, S. Complex Non-Rigid Motion 3D Reconstruction by Union of Subspaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 24–27 June 2014; pp. 1542–1549. [Google Scholar]
  11. Xu, R.; Wunsch, D. Survey of Clustering Algorithms. IEEE Trans. Neural Netw. 2005, 16, 645–678. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Vidal, R. Subspace Clustering. Signal Process Mag. 2011, 28, 52–68. [Google Scholar] [CrossRef]
  13. Elhamifar, E.; Vidal, R. Sparse Subspace Clustering: Algorithm, Theory, and Applications. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2765–2781.
  14. Liu, G.; Lin, Z.; Sun, J.; Yu, Y.; Ma, Y. Robust Recovery of Subspace Structures by Low-rank Representation. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 171–184.
  15. Liu, G.; Yan, S. Latent Low-rank Representation for Subspace Segmentation and Feature Extraction. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011; pp. 1615–1622.
  16. Tierney, S.; Gao, J.; Guo, Y. Subspace Clustering for Sequential Data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 24–27 June 2014; pp. 1019–1026.
  17. Wu, F.; Hu, Y.; Gao, J.; Sun, Y.; Yin, B. Ordered Subspace Clustering with Block-Diagonal Priors. IEEE Trans. Cybern. 2015, 46, 3209–3219.
  18. Wang, S.; Yuan, X.; Yao, T.; Yan, S.; Shen, J. Efficient Subspace Segmentation via Quadratic Programming. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), San Francisco, CA, USA, 7–11 August 2011; pp. 519–524.
  19. Feng, J.; Lin, Z.; Xu, H.; Yan, S. Robust Subspace Segmentation with Block-diagonal Prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 24–27 June 2014; pp. 3818–3825.
  20. Lu, C.; Lin, Z.; Yan, S. Correlation Adaptive Subspace Segmentation by Trace Lasso. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Sydney, Australia, 3–6 December 2013; pp. 1345–1352.
  21. Kumar, S.; Dai, Y.; Li, H. Spatio-temporal Union of Subspaces for Multi-body Non-rigid Structure-from-motion. Pattern Recognit. 2017, 71, 428–443.
  22. Kumar, S.; Cherian, A.; Dai, Y.; Li, H. Scalable Dense Non-rigid Structure-from-Motion: A Grassmannian Perspective. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 254–263.
  23. Tao, L.; Matuszewski, B.J. Learning Spatially-Smooth Mappings in Non-Rigid Structure from Motion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Portland, OR, USA, 25–27 June 2013; pp. 1530–1537.
  24. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122.
  25. Yin, L.; Chen, X.; Sun, Y.; Worm, T.; Reale, M. A High-resolution 3D Dynamic Facial Expression Database. In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition (FG), Amsterdam, The Netherlands, 17–19 September 2008; pp. 1–6.
  26. van der Aa, N.P.; Luo, X.; Giezeman, G.J.; Tan, R.T.; Veltkamp, R.C. UMPM Benchmark: A Multi-person Dataset with Synchronized Video and Motion Capture Data for Evaluation of Articulated Human Motion and Interaction. In Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCV Workshops), Barcelona, Spain, 6–13 November 2011; pp. 1264–1269.
  27. Zhou, Y.; Li, B.; Hong, R.; Wang, M.; Tian, Q. Discriminative Orderlet Mining for Real-time Recognition of Human-Object Interaction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3323–3331.
Figure 1. Sample images with 2D feature tracks from the BU-4DFE dataset.
Figure 2. The visualization results of affinity matrices Z for expression clustering on the BU-4DFE dataset.
Figure 3. Clustering results of six expressions of five subjects on the BU-4DFE dataset.
Figure 4. Sample images from the MSR Action3D dataset.
Figure 5. Sample images and marker information from the UMPM dataset [26].
Figure 6. Clustering results of “p3-meet-2”.
Table 1. Face clustering error rate (%) on the BU-4DFE dataset for 2, 3, 5, 8, 10, 20, and 30 subject classes.

| Subjects | CNRMS | Proposed | LRR   | OSC   | QOSC | SSC   |
|----------|-------|----------|-------|-------|------|-------|
| 2        | 14.22 | 6.16     | 34.60 | 0.68  | 0.33 | 47.50 |
| 3        | 19.21 | 1.53     | 35.46 | 1.87  | 0.27 | 63.33 |
| 5        | 24.94 | 0.96     | 27.37 | 3.92  | 2.13 | 66.67 |
| 8        | 28.13 | 0.41     | 28.38 | 5.64  | 4.82 | 78.89 |
| 10       | 26.24 | 0.28     | 26.48 | 8.45  | 7.16 | 78.78 |
| 20       | 30.14 | 2.93     | 34.20 | 9.18  | 6.65 | 80.11 |
| 30       | 34.73 | 5.63     | 36.14 | 10.10 | 7.62 | 82.28 |
Table 2. Expression clustering error (%) on the BU-4DFE dataset.

| Subjects | CNRMS | Proposed | LRR   | OSC   | QOSC  | SSC   |
|----------|-------|----------|-------|-------|-------|-------|
| 2        | 31.14 | 15.28    | 49.47 | 28.66 | 19.12 | 52.32 |
| 3        | 32.00 | 16.35    | 60.51 | 21.30 | 20.27 | 67.44 |
| 5        | 45.34 | 20.16    | 66.87 | 26.48 | 21.47 | 77.33 |
| 8        | 50.96 | 12.13    | 70.27 | 32.23 | 24.85 | 79.38 |
| 10       | 55.86 | 8.76     | 72.30 | 32.41 | 23.08 | 79.44 |
Table 3. 3D action clustering error (%) on the MSR Action3D dataset.

| Actions | CNRMS | Proposed | LRR   | OSC   | QOSC  | SSC   |
|---------|-------|----------|-------|-------|-------|-------|
| 2       | 31.88 | 14.06    | 38.17 | 31.54 | 4.29  | 45.08 |
| 3       | 27.33 | 19.05    | 51.66 | 40.42 | 12.97 | 50.21 |
| 5       | 40.93 | 26.74    | 63.26 | 51.67 | 18.53 | 60.30 |
| 8       | 44.16 | 16.92    | 66.50 | 54.74 | 27.32 | 65.91 |
| 10      | 54.12 | 14.85    | 66.86 | 57.89 | 27.73 | 67.43 |
Table 4. 3D action normalized mutual information (NMI) on the MSR Action3D dataset.

| Actions | CNRMS  | Proposed | LRR    | OSC    | QOSC   | SSC    |
|---------|--------|----------|--------|--------|--------|--------|
| 2       | 0.0293 | 0.6674   | 0.1023 | 0.2002 | 0.8157 | 0.0571 |
| 3       | 0.0567 | 0.6425   | 0.1812 | 0.2954 | 0.7910 | 0.2236 |
| 5       | 0.0919 | 0.6606   | 0.2706 | 0.4197 | 0.7044 | 0.3158 |
| 8       | 0.1199 | 0.8348   | 0.3622 | 0.5007 | 0.7335 | 0.3908 |
| 10      | 0.1304 | 0.8428   | 0.4118 | 0.4929 | 0.7504 | 0.4203 |
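The tables above report the two evaluation metrics used throughout: the subspace clustering error (SCE), i.e., the fraction of misassigned points after the best one-to-one relabeling of the predicted clusters, and the normalized mutual information (NMI) between the predicted and ground-truth partitions. The following is a minimal, self-contained sketch of how these standard metrics can be computed (illustrative code, not the authors' implementation; the function names are ours):

```python
import math
from collections import Counter
from itertools import permutations

def clustering_error(true, pred):
    """SCE: misassignment rate under the best one-to-one relabeling of
    predicted clusters.  Brute-force over label permutations, which is
    fine for a handful of clusters; for larger k, use the Hungarian
    algorithm (e.g., scipy.optimize.linear_sum_assignment) instead."""
    classes = sorted(set(true))
    clusters = sorted(set(pred))
    best = 0
    for perm in permutations(classes):
        mapping = dict(zip(clusters, perm))
        hits = sum(1 for t, p in zip(true, pred) if mapping[p] == t)
        best = max(best, hits)
    return 1.0 - best / len(true)

def nmi(true, pred):
    """NMI: mutual information of the two partitions, normalized by the
    geometric mean of their entropies (natural log)."""
    n = len(true)
    joint = Counter(zip(true, pred))
    ca, cb = Counter(true), Counter(pred)
    mi = sum(c / n * math.log(c * n / (ca[a] * cb[b]))
             for (a, b), c in joint.items())
    ha = -sum(c / n * math.log(c / n) for c in ca.values())
    hb = -sum(c / n * math.log(c / n) for c in cb.values())
    return mi / math.sqrt(ha * hb) if ha > 0 and hb > 0 else 1.0

# A perfect clustering with permuted labels: zero error, NMI of 1.
true = [0, 0, 0, 1, 1, 1, 2, 2, 2]
pred = [2, 2, 2, 0, 0, 0, 1, 1, 1]
print(clustering_error(true, pred))   # 0.0
print(round(nmi(true, pred), 4))      # 1.0
```

Note that both metrics are invariant to how the predicted clusters happen to be numbered, which is why the relabeling (for SCE) and the information-theoretic formulation (for NMI) are needed.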
Table 5. The clustering error (%) for eight sequences {p1-table-2, p1-grab-3, p1-chair-2, p2-staticsyn-1, p4-free-11, p1-orthosyn-1, p3-ball-1, p3-meet-2} on the UMPM dataset.

| Human Motion Sequences | CNRMS | Proposed | LRR   | OSC   | QOSC  | SSC   |
|------------------------|-------|----------|-------|-------|-------|-------|
| p1-table-2             | 31.63 | 25.80    | 36.96 | 33.89 | 30.82 | 33.61 |
| p1-grab-3              | 18.74 | 15.98    | 25.18 | 20.65 | 19.09 | 54.45 |
| p1-chair-2             | 22.50 | 13.69    | 27.68 | 22.24 | 21.31 | 14.62 |
| p2-staticsyn-1         | 3.62  | 2.22     | 6.97  | 5.86  | 3.01  | 16.17 |
| p4-free-11             | 0.50  | 0.16     | 2.36  | 2.20  | 1.10  | 7.08  |
| p1-orthosyn-1          | 30.31 | 25.32    | 46.29 | 38.39 | 30.45 | 47.58 |
| p3-ball-1              | 13.34 | 12.50    | 25.56 | 13.78 | 14.10 | 46.09 |
| p3-meet-2              | 15.23 | 14.66    | 28.74 | 17.10 | 14.98 | 26.55 |
Table 6. The NMI for eight sequences {p1-table-2, p1-grab-3, p1-chair-2, p2-staticsyn-1, p4-free-11, p1-orthosyn-1, p3-ball-1, p3-meet-2} on the UMPM dataset.

| Human Motion Sequences | CNRMS  | Proposed | LRR    | OSC    | QOSC   | SSC    |
|------------------------|--------|----------|--------|--------|--------|--------|
| p1-table-2             | 0.4727 | 0.4665   | 0.2926 | 0.4540 | 0.2804 | 0.3638 |
| p1-grab-3              | 0.3804 | 0.5286   | 0.4639 | 0.4944 | 0.4539 | 0.2403 |
| p1-chair-2             | 0.4774 | 0.5066   | 0.3514 | 0.3566 | 0.3602 | 0.5904 |
| p2-staticsyn-1         | 0.7078 | 0.8029   | 0.4546 | 0.4974 | 0.3173 | 0.2518 |
| p4-free-11             | 0.6411 | 0.9675   | 0.7032 | 0.7684 | 0.7192 | 0.5164 |
| p1-orthosyn-1          | 0.3139 | 0.3244   | 0.3198 | 0.3202 | 0.2553 | 0.2175 |
| p3-ball-1              | 0.1842 | 0.3095   | 0.4405 | 0.2871 | 0.3793 | 0.1151 |
| p3-meet-2              | 0.4619 | 0.6030   | 0.4788 | 0.6049 | 0.5126 | 0.6010 |
Table 7. The running time (s) for eight sequences {p1-table-2, p1-grab-3, p1-chair-2, p2-staticsyn-1, p4-free-11, p1-orthosyn-1, p3-ball-1, p3-meet-2} on the UMPM dataset.

| Human Motion Sequences | CNRMS     | Proposed | LRR     | OSC     | QOSC   | SSC   |
|------------------------|-----------|----------|---------|---------|--------|-------|
| p1-table-2             | 32,206.98 | 78.20    | 15.03   | 1041.21 | 556.36 | 18.02 |
| p1-grab-3              | 14,602.75 | 72.20    | 16.41   | 1022.66 | 546.11 | 19.05 |
| p1-chair-2             | 23,847.55 | 64.41    | 13.75   | 829.50  | 453.08 | 13.58 |
| p2-staticsyn-1         | 22,581.06 | 620.34   | 198.73  | 784.54  | 424.89 | 7.53  |
| p4-free-11             | 38,383.29 | 303.11   | 110.44  | 777.50  | 429.48 | 22.50 |
| p1-orthosyn-1          | 14,616.11 | 75.75    | 88.78   | 739.13  | 396.98 | 7.97  |
| p3-ball-1              | 50,912.36 | 287.61   | 73.62   | 780.59  | 420.13 | 18.02 |
| p3-meet-2              | 46,776.58 | 397.76   | 1204.52 | 943.67  | 509.44 | 22.36 |

Du, W.; Li, J.; Wu, F.; Sun, Y.; Hu, Y. Ordered Subspace Clustering for Complex Non-Rigid Motion by 3D Reconstruction. Appl. Sci. 2019, 9, 1559. https://doi.org/10.3390/app9081559
