Article

Multi-Matrices Factorization with Application to Missing Sensor Data Imputation

1 Software Institute, Sun Yat-Sen University, Guangzhou 510275, China
2 School of Economics and Commerce, South China University of Technology, Guangzhou 510006, China
3 Department of Computing Science, Umeå University, SE-901 87 Umeå, Sweden
4 Academy of Guangdong Telecom Co., Ltd., Guangzhou 510630, China
5 Department of Interventional Radiology, The First Affiliated Hospital of Sun Yat-Sen University, Guangzhou 510080, China
* Author to whom correspondence should be addressed.
Sensors 2013, 13(11), 15172-15186; https://doi.org/10.3390/s131115172
Submission received: 7 October 2013 / Revised: 28 October 2013 / Accepted: 28 October 2013 / Published: 6 November 2013
(This article belongs to the Section Physical Sensors)

Abstract

We formulate a multi-matrices factorization model (MMF) for the missing sensor data estimation problem. The estimation problem is transformed into a matrix completion one. With MMF, an n-by-t real matrix, R, is adopted to represent the data collected by mobile sensors from n areas at the time points T1, T2, …, Tt, where the entry, Ri,j, is the aggregate value of the data collected in the ith area at Tj. We propose to approximate R by seeking a family of d-by-n probabilistic spatial feature matrices, U(1), U(2), …, U(t), and a probabilistic temporal feature matrix, V ∈ ℝd×t, such that $R_j \approx U_{(j)}^T V_j$. We also present a solution algorithm for the proposed model. We evaluate MMF extensively with synthetic data and a real-world sensor dataset. Experimental results demonstrate that our approach outperforms the state-of-the-art comparison algorithms.

1. Introduction

In this work, we study the following missing sensor data imputation problem: Let the matrix, R ∈ ℝn×t, consist of the data collected by a set of mobile sensors in spatial areas S1, S2, …, Sn at time points T1 < T2 < … < Tt, where the entry, Ri,j, is the aggregate value collected by the sensors in Si at Tj. In particular, if there is no sensor in Si at time Tj, we denote the value of Ri,j as “ ⊥ ”, which indicates that it is missing. Our focus is to find suitable estimations for the missing values in a given incomplete matrix, R. Results of this research could be helpful in recovering missing values in statistical analyses. For example, to predict floods, people usually place geographically distributed sensors in the water to continuously monitor rising water levels. However, some data in a critical period of time might be corrupted, due to, e.g., sensor hardware failures. Such data need to be recovered to guarantee the prediction accuracy.

Many efforts have been devoted to the missing sensor data imputation problem. Typical examples include k nearest neighbor-based imputation [1], multiple imputation [2], hot/cold imputation [3], maximum likelihood and Bayesian estimation [4] and expectation maximization [5]. However, despite the various implementations of these methods, their main essence is based on the local consistency of the sensor data, i.e., the data collected at adjacent time points within the same spatial area should be close to each other, as should the data collected at the same time from neighboring areas. We refer to them as local models. As is well known, these local models suffer from the cumulative error problem in scenarios where the missing ratio is high.

Matrix factorization (MF), as a global model, has attracted substantial attention in recent years. Typically, in the Netflix rating matrix completion competition [6], some variations of the MF model, e.g., [7,8], achieved state-of-the-art performance, showing their potential to recover the missing data from highly incomplete matrices. On the other hand, many well-studied MF models, such as non-negative matrix factorization [9], max margin matrix factorization [10,11] and probabilistic matrix factorization [7], are based on the i.i.d. assumption [12], which, in terms of our problem, implies that the neighborhood information among the data is disregarded and, hence, leaves vast room for improvement.

In this paper, we propose a multi-matrices factorization model (MMF), which can be outlined as follows. For a matrix, X, denote by Xj the jth column of X. Given a sensor data matrix, R, we seek a set of matrices, U(1), U(2), …, U(t) ∈ ℝd×n, and a matrix, V ∈ ℝd×t, such that for i = 1, 2, …, t, $U_{(i)}^T V_i \approx R_i$. Here, U(i) is referred to as the spatial feature matrix, in which the jth column, U(i),j, is the feature vector of area Sj at Ti. Similarly, V is referred to as the temporal feature matrix, in which the jth column, Vj, is the temporal feature vector of Tj. To predict the missing values in R, we first fit the matrices, U(1), U(2), …, U(t), and V with the non-missing values in R; then, for each Ri,j = “ ⊥ ”, we take its estimation as $\hat{R}_{i,j} = U_{(j),i}^T V_j$.

The remainder of the paper is organized as follows: Section 2 summarizes the notations used in the paper. Section 3 reviews related work on matrix factorization. In Section 4, we present our multi-matrices factorization model. The algorithm to solve the proposed model is outlined in Section 5. Section 6 is devoted to the experimental evaluations. Finally, our conclusions are presented in Section 7, followed by a discussion of future work, acknowledgments and a list of references.

2. Notations

For a vector V = [v1, v2, …, vn] ∈ ℝn, we use ‖V‖0, ‖V‖1 and ‖V‖2 to denote its 0-norm, 1-norm and 2-norm, respectively, as follows:

- $\|V\|_0 = \sum_{i=1}^{n} \mathbb{I}(v_i \neq 0)$, i.e., the number of nonzero entries of V;
- $\|V\|_1 = \sum_{i=1}^{n} |v_i|$;
- $\|V\|_2 = \sqrt{\sum_{i=1}^{n} v_i^2}$.

For a matrix, X ∈ ℝn×m, we denote its Frobenius norm as $\|X\|_F = \sqrt{\sum_{i=1}^{n}\sum_{j=1}^{m} X_{i,j}^2}$.
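For concreteness, the following is a minimal NumPy sketch of these norms; the variable names are illustrative only.

```python
import numpy as np

V = np.array([0.0, -1.5, 2.0, 0.0])
X = np.arange(6.0).reshape(2, 3)

norm0 = np.count_nonzero(V)       # ||V||_0: number of nonzero entries
norm1 = np.sum(np.abs(V))         # ||V||_1: sum of absolute values
norm2 = np.sqrt(np.sum(V ** 2))   # ||V||_2: Euclidean norm
frob = np.sqrt(np.sum(X ** 2))    # ||X||_F: Frobenius norm of a matrix
```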

3. Matrix Factorization

The essence of the matrix factorization problem is to find two factor matrices, U and V, such that their product can approximate the given matrix, R, i.e., $R \approx U^T V$. As a fundamental model in machine learning and data mining, the MF method has achieved state-of-the-art performance in various applications, such as collaborative filtering [13], text analysis [14], image analysis [9,15] and biological analysis [16]. In principle, for a given matrix, R, the MF problem can be formulated as the optimization model below:

$$\{U^*, V^*\} = \arg\min_{U, V} \mathrm{Loss}(U^T V, R) \qquad (1)$$
where the loss function, Loss, is used to measure the closeness of the approximation, $U^T V$, to the target, R. Usually, Loss($U^T V$, R) can be decomposed into the sum of the pairwise losses between the entries of $U^T V$ and R, that is, $\mathrm{Loss}(U^T V, R) = \sum_{i=1}^{n}\sum_{j=1}^{m} \mathrm{loss}\left((U^T V)_{i,j}, R_{i,j}\right)$. Some of the most used forms include the square loss ($\mathrm{loss}(x, y) = (x - y)^2$) [7,8,17], the 0-1 loss ($\mathrm{loss}(x, y) = \mathbb{I}(x \neq y)$) [11] and the divergence loss ($\mathrm{loss}(x, y) = x\log\frac{x}{y} - x + y$) [9].

It is notable that for Equation (1), if $\{U^*, V^*\}$ is a solution to it, then for any scalar, κ > 0, $\{\kappa U^*, \frac{1}{\kappa} V^*\}$ is also a solution; hence, problem (1) is ill-posed. To overcome this obstacle, various constraints on U and V are introduced, such as constraints on the entries [15], constraints on the sparseness [18,19], constraints on the norms [7,20] and constraints on the ranks [21,22]. All these constraints, from the perspective of statistical learning theory, can be regarded as restricting the length of the model to be fitted. According to the minimum description length principle [23,24], a smaller length means a better model; hence, most of them can be incorporated into Model (1) as additional regularization terms, that is:

$$\{U^*, V^*\} = \arg\min_{U, V} \mathrm{Loss}(U^T V, R) + P(U, V) \qquad (2)$$
where the regularization factor, P(U, V), corresponds to the constraints on U and V.
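As an illustration of Model (2) with the square loss and ℓ2 regularization, below is a minimal NumPy sketch of gradient-descent matrix factorization over the observed entries only; the rank, step size and regularization weight are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def factorize(R, mask, d=10, lam=0.1, eta=0.01, iters=500):
    """Approximate R (n x m) by U^T V, fitting only the entries where mask is True."""
    n, m = R.shape
    rng = np.random.default_rng(0)
    U = 0.1 * rng.standard_normal((d, n))    # U[:, i] is the feature vector of row i
    V = 0.1 * rng.standard_normal((d, m))    # V[:, j] is the feature vector of column j
    for _ in range(iters):
        E = np.where(mask, U.T @ V - R, 0.0)  # residuals on the observed entries only
        U -= eta * (V @ E.T + lam * U)        # (scaled) gradient step on U
        V -= eta * (U @ E + lam * V)          # (scaled) gradient step on V
    return U, V
```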

As a transductive model, Model (2) has many nice mathematical properties, such as the generalization error bound [10] and the exactness [17,25]. However, as is well known, when compared with the generative model, one of the main restrictions of the transductive model is that it can hardly be used to describe the relations existing in the data. In particular, in terms of our problem, even though Model (2) may work well, it is laborious to express the dynamics of the data over time.

4. The Proposed Model

In this section, we elaborate on our multi-matrices factorization (MMF) approach. Given the sensor data matrix, R, in which the entry, Ri,j (1 ≤ in and 1 ≤ jt), is collected from Si at Tj, our goal is to find the factor matrices, U(1), U(2), …, U(t) ∈ ℝd×n and V ∈ ℝd×t, such that: for j = 1, 2, …, t,

$$R_j \approx U_{(j)}^T V_j \qquad (3)$$
where U(j) is regarded as being composed of the spatial features of all areas at Tj and V is treated as consisting of the temporal features of all time points. We denote the ith column of U(j) as U(j),i, which corresponds to the spatial feature vector of Si at Tj, and denote the jth column of V as Vj, which corresponds to the temporal feature vector of Tj.

Drawing on probabilistic graphical models, we assume that the dependency structure of the data in U(1), U(2), …, U(t), V and R is as illustrated in Figure 1. More specifically, we have the following assumptions:

  I. Columns of U(j) (1 ≤ j ≤ t) are mutually independent, i.e.,

    $\Pr(U_{(j)}) = \prod_{i=1}^{n} \Pr(U_{(j),i})$

  II. U(1),i (1 ≤ i ≤ n) follows the same Gaussian distribution with a mean of zero and a covariance matrix $\sigma_U^2 I$, i.e.,

    $\Pr(U_{(1),i} \mid \sigma_U) = (2\pi\sigma_U^2)^{-\frac{d}{2}} \exp\left\{-\frac{\|U_{(1),i}\|_2^2}{2\sigma_U^2}\right\}$

  III. U(j),i (1 ≤ i ≤ n) are dependent in time order with the pre-specified priors, ζU and σU, i.e.,

    $\Pr(U_{(j),i} \mid \zeta_U, \sigma_U) = \Pr(U_{(1),i} \mid \zeta_U, \sigma_U) \times \prod_{j=2}^{t} \Pr(U_{(j),i} \mid U_{(j-1),i}, \zeta_U, \sigma_U)$

    Moreover, for j > 1, we assume U(j),i is a Laplace random vector with location parameter U(j−1),i and scale parameter ζU, namely:

    $\Pr(U_{(j),i} \mid U_{(j-1),i}, \zeta_U, \sigma_U) = \frac{1}{2\zeta_U} \exp\left\{-\frac{\|U_{(j),i} - U_{(j-1),i}\|_1}{\zeta_U}\right\}$

  IV. The columns of V are dependent in time order with the pre-specified priors, ζV and σV, i.e.,

    $\Pr(V \mid \zeta_V, \sigma_V) = \Pr(V_1 \mid \zeta_V, \sigma_V) \times \prod_{j=2}^{t} \Pr(V_j \mid V_{j-1}, \zeta_V, \sigma_V)$

    We also assume that, for j > 1:

    $\Pr(V_j \mid V_{j-1}, \zeta_V, \sigma_V) = \frac{1}{2\zeta_V} \exp\left\{-\frac{\|V_j - V_{j-1}\|_1}{\zeta_V}\right\}$

  V. The (i, j)th entry of R (1 ≤ i ≤ n, 1 ≤ j ≤ t) follows a Gaussian distribution with mean $U_{(j),i}^T V_j$ and variance $\sigma_R^2$, i.e.,

    $\Pr(R_{i,j} \mid U_{(j),i}^T V_j, \sigma_R^2) = (2\pi\sigma_R^2)^{-\frac{1}{2}} \exp\left\{-\frac{(R_{i,j} - U_{(j),i}^T V_j)^2}{2\sigma_R^2}\right\}$

Now, given R and the priors, σU, σV, σR, ζU, and ζV, let U = {U(1), U(2), …, U(t)}; below, we find a solution to the following equation:

$$\{U^*, V^*\} = \arg\max_{U, V} \Pr(U, V \mid R, \sigma_U, \sigma_V, \sigma_R, \zeta_U, \zeta_V) \qquad (4)$$
First, applying Bayes' theorem, we have:
$$\Pr(U, V \mid R, \sigma_U, \sigma_V, \sigma_R, \zeta_U, \zeta_V) = \frac{\Pr(U, V, R \mid \sigma_U, \sigma_V, \sigma_R, \zeta_U, \zeta_V)}{\Pr(R \mid \sigma_U, \sigma_V, \sigma_R, \zeta_U, \zeta_V)}$$
Since R is observed and σU, σV, σR, ζU and ζV are pre-specified, the denominator, Pr(R ∣ σU, σV, σR, ζU, ζV), can be treated as a constant. Therefore, Equation (4) is equivalent to:
$$\{U^*, V^*\} = \arg\max_{U, V} \Pr(U, V, R \mid \sigma_U, \sigma_V, \sigma_R, \zeta_U, \zeta_V) \qquad (5)$$
Combining Assumptions (I.)–(V.) and the dependency structure illustrated in Figure 1, we have:
$$\begin{aligned}
\Pr(R, U, V \mid \sigma_U, \sigma_V, \sigma_R, \zeta_U, \zeta_V) &= \Pr(R \mid U, V, \sigma_R) \times \Pr(U \mid \sigma_U, \zeta_U) \times \Pr(V \mid \sigma_V, \zeta_V) \\
&= \prod_{i=1}^{n}\prod_{j=1}^{t} \Pr(R_{i,j} \mid U_{(j),i}, V_j, \sigma_R) \times \prod_{i=1}^{n} \Pr(U_{(1),i} \mid \sigma_U) \prod_{i=1}^{n}\prod_{j=2}^{t} \Pr(U_{(j),i} \mid U_{(j-1),i}, \zeta_U) \\
&\quad \times \Pr(V_1 \mid \sigma_V) \times \prod_{j=2}^{t} \Pr(V_j \mid V_{j-1}, \zeta_V) \\
&\propto \exp\left(-\frac{1}{2\sigma_R^2}\sum_{i=1}^{n}\sum_{j=1}^{t}\left(U_{(j),i}^T V_j - R_{i,j}\right)^2\right) \times \exp\left(-\frac{1}{2\sigma_U^2}\sum_{i=1}^{n}\|U_{(1),i}\|_2^2\right) \\
&\quad \times \prod_{i=1}^{n}\exp\left(-\sum_{j=2}^{t}\frac{\|U_{(j),i} - U_{(j-1),i}\|_1}{\zeta_U}\right) \times \exp\left(-\frac{1}{2\sigma_V^2}\|V_1\|_2^2\right) \times \exp\left(-\sum_{j=2}^{t}\frac{\|V_j - V_{j-1}\|_1}{\zeta_V}\right)
\end{aligned}$$
Taking the negative logarithm of both sides and taking the missing values into account, Equation (5) reduces to:
$$\{U^*, V^*\} = \arg\min_{U, V}\left\{\sum_{i=1}^{n}\sum_{j=1}^{t}\left(R_{i,j} - U_{(j),i}^T V_j\right)^2 \mathbb{I}(R_{i,j}) + \alpha\sum_{i=1}^{n}\|U_{(1),i}\|_2^2 + \gamma\sum_{i=1}^{n}\sum_{j=2}^{t}\|U_{(j),i} - U_{(j-1),i}\|_1 + \beta\|V_1\|_2^2 + \lambda\sum_{j=2}^{t}\|V_j - V_{j-1}\|_1\right\} \qquad (6)$$
where $\mathbb{I}(R_{i,j})$ equals one if $R_{i,j}$ is observed and zero if it is missing, and $\alpha = \frac{\sigma_R^2}{\sigma_U^2}$, $\gamma = \frac{\sigma_R^2}{\zeta_U}$, $\beta = \frac{\sigma_R^2}{\sigma_V^2}$ and $\lambda = \frac{\sigma_R^2}{\zeta_V}$ are the regularization parameters.
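To make the objective concrete, here is a minimal NumPy sketch of Equation (6); the function and variable names are our own illustrative choices, not part of the paper.

```python
import numpy as np

def mmf_objective(R, mask, U, V, alpha, beta, gamma, lam):
    """Value of the objective in Equation (6).
    R, mask: n x t data matrix and boolean observation mask (missing entries of R
             may hold any finite placeholder; the mask marks the observed ones).
    U: list of t arrays, each d x n (spatial features at each time point).
    V: d x t array of temporal features (column j is V_j)."""
    n, t = R.shape
    fit = 0.0
    for j in range(t):
        diff = np.where(mask[:, j], R[:, j] - U[j].T @ V[:, j], 0.0)
        fit += np.sum(diff ** 2)
    reg_u1 = alpha * np.sum(U[0] ** 2)                 # Gaussian prior on the columns of U_(1)
    reg_v1 = beta * np.sum(V[:, 0] ** 2)               # Gaussian prior on V_1
    reg_du = gamma * sum(np.sum(np.abs(U[j] - U[j - 1])) for j in range(1, t))
    reg_dv = lam * np.sum(np.abs(np.diff(V, axis=1)))  # sum_j ||V_j - V_{j-1}||_1
    return fit + reg_u1 + reg_v1 + reg_du + reg_dv
```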

As a supplement, we have the following comments on Model (6):

  • On the selection of the Gaussian prior: In our model, no prior information is available for the columns of the matrices, U(1) and V; hence, according to the max entropy principle [26], a reasonable choice for them is the Gaussian prior distribution.

  • On the ability to formalize the dynamics of the sensor data: The ability to characterize the dynamics of the sensor data lies in the terms $\gamma\sum_{i=1}^{n}\sum_{j=2}^{t}\|U_{(j),i} - U_{(j-1),i}\|_1$ and $\lambda\sum_{j=2}^{t}\|V_j - V_{j-1}\|_1$. Obviously, for any two adjacent time points, Tj−1 and Tj, if the interval is small enough (namely, |Tj − Tj−1| → 0), then for any area, Si, the values, Ri,j−1 and Ri,j, should be close to each other (namely, |Ri,j − Ri,j−1| → 0). This can be enforced by tuning the parameters, γ and λ (see the following elaboration).

    First of all, since for any x ∈ ℝn, ‖x‖2 ≤ ‖x‖1, we have:

    $$\begin{aligned} |R_{i,j} - R_{i,j-1}| &= \left|U_{(j),i}^T V_j - U_{(j-1),i}^T V_{j-1}\right| = \left|\left(U_{(j),i} - U_{(j-1),i}\right)^T V_j + U_{(j-1),i}^T \left(V_j - V_{j-1}\right)\right| \\ &\leq \|U_{(j),i} - U_{(j-1),i}\|_2 \|V_j\|_2 + \|U_{(j-1),i}\|_2 \|V_j - V_{j-1}\|_2 \\ &\leq \|U_{(j),i} - U_{(j-1),i}\|_1 \|V_j\|_2 + \|U_{(j-1),i}\|_2 \|V_j - V_{j-1}\|_1 \end{aligned} \qquad (7)$$

    Secondly, it is obvious that greater regularization parameters (i.e., α, β, γ and λ) result in smaller corresponding regularized terms (i.e., $\sum_{i=1}^{n}\|U_{(1),i}\|_2^2$, $\|V_1\|_2^2$, $\sum_{i=1}^{n}\sum_{j=2}^{t}\|U_{(j),i} - U_{(j-1),i}\|_1$ and $\sum_{j=2}^{t}\|V_j - V_{j-1}\|_1$). In particular:

    $$\gamma \to \infty \;\Longrightarrow\; \|U_{(j),i} - U_{(j-1),i}\|_1 \to 0 \qquad (8)$$

    and:

    $$\lambda \to \infty \;\Longrightarrow\; \|V_j - V_{j-1}\|_1 \to 0 \qquad (9)$$

    Hence, combining Equations (7)–(9), when |Tj − Tj−1| → 0, we can simply take γ → ∞ and λ → ∞ and achieve ‖Rj − Rj−1‖2 → 0.

    On the other hand, when |Tj − Tj−1| → ∞, the values in Rj and Rj−1 can be regarded as independent. In this case, we can take γ → 0 and λ → 0, allowing Rj to be irrelevant to Rj−1.

  • On the ℓ1 norm: It is straightforward to verify that, if we replace the ℓ1 terms in Equation (6) with ℓ2 terms (equivalently, use the Gaussian distribution instead of the Laplace distribution in Assumptions (III.) and (IV.)), e.g., replacing ‖Vj − Vj−1‖1 with $\|V_j - V_{j-1}\|_2^2$, ‖Rj − Rj−1‖2 can still be bounded via tuning the regularization parameters, γ and λ. The reason for adopting the ℓ1 norm here is two-fold: Firstly, as shown above, the ℓ1 terms lead to a bounded difference norm, ‖Rj − Rj−1‖2, and hence, the proposed model accommodates the ability to characterize the dynamics of the sensor data; secondly, according to the recent emerging works on compressed sensing [27,28], under some settings, the behavior of the ℓ1 norm is similar to that of the ℓ0 norm. In terms of our model, this result indicates that the ℓ1 terms can restrict not only the magnitudes of the dynamics happening to the features, but also the number of features that change between adjacent time points. In other words, with ℓ1 norms, our model gains more expressive power.

5. The Algorithm

Below, we present the algorithm to solve Model (6). We denote the objective by:

$$W = \sum_{i=1}^{n}\sum_{j=1}^{t}\left(R_{i,j} - U_{(j),i}^T V_j\right)^2 \mathbb{I}(R_{i,j}) + \alpha\sum_{i=1}^{n}\|U_{(1),i}\|_2^2 + \beta\|V_1\|_2^2 + \gamma\sum_{i=1}^{n}\sum_{j=2}^{t}\|U_{(j),i} - U_{(j-1),i}\|_1 + \lambda\sum_{j=2}^{t}\|V_j - V_{j-1}\|_1$$
W is convex with respect to each individual U(j),i and Vj (1 ≤ i ≤ n and 1 ≤ j ≤ t). Therefore, we can obtain a local minimum via coordinate descent [29].

First, we introduce the signum function, sgn, for a real variable, x:

$$\mathrm{sgn}(x) = \begin{cases} 1 & \text{if } x > 0 \\ -1 & \text{if } x < 0 \\ 0 & \text{if } x = 0 \end{cases}$$
For X = [x1, x2, …, xn]′ ∈ ℝn, we denote sgn(X) = [sgn(x1), sgn(x2), …, sgn(xn)]′.

Then, we calculate the partial subgradient of W with regard to U(j),i (1 ≤ in and 1 ≤ jt) as follows:

  • For j = 2, 3, …, t, define Fj,i,1 = γ · sgn(U(j),i − U(j−1),i).

  • For j = 1, 2, …, t − 1, define Fj,i,2 = −Fj+1,i,1.

  • Let F1,i,1 = Ft,i,2 = 0, and we have:

  • For i = 1, 2, …, n:

    $$\frac{\partial W}{\partial U_{(1),i}} = 2\alpha U_{(1),i} - 2\left(R_{i,1} - U_{(1),i}^T V_1\right) V_1 \mathbb{I}(R_{i,1}) + F_{1,i,1} + F_{1,i,2}$$

  • For i = 1, 2, …, n and j = 2, 3, …, t:

    $$\frac{\partial W}{\partial U_{(j),i}} = -2\left(R_{i,j} - U_{(j),i}^T V_j\right) V_j \mathbb{I}(R_{i,j}) + F_{j,i,1} + F_{j,i,2}$$

Similarly, we calculate the partial subgradient of W with regard to Vj (1 ≤ jt):

  • For 2 ≤ j ≤ t, define Gj,1 = λ · sgn(Vj − Vj−1).

  • For 1 ≤ j ≤ t − 1, denote Gj,2 = −Gj+1,1.

  • Let G1,1 = Gt,2 = 0, and we have:

    $$\frac{\partial W}{\partial V_1} = 2\beta V_1 - 2\sum_{i=1}^{n}\left(R_{i,1} - U_{(1),i}^T V_1\right) U_{(1),i} \mathbb{I}(R_{i,1}) + G_{1,1} + G_{1,2}$$

    and for j > 1:

    $$\frac{\partial W}{\partial V_j} = -2\sum_{i=1}^{n}\left(R_{i,j} - U_{(j),i}^T V_j\right) U_{(j),i} \mathbb{I}(R_{i,j}) + G_{j,1} + G_{j,2}$$

Finally, with the results above, we present the solution algorithm in Algorithm 1.

6. Applications to Missing Sensor Data Imputation

In this section, we evaluate our approach on two large-sized datasets and compare the results with three state-of-the-art algorithms in terms of parametric sensitivity, convergence and missing data recovery performance. The following paragraphs describe the set-up, the evaluation methodology and the results obtained. To simplify the parameter tuning, we set α = β and λ = γ in the algorithm implementation.

6.1. Evaluation Methodology

Three state-of-the-art algorithms are selected for comparison with the proposed MMF model. The first one is the k-nearest neighbor-based imputation model (knn) [1]. As a local model, for every missing entry, Ri,j, the knn method takes the estimation, $\hat{R}_{i,j}$, as the mean of the k nearest neighbors of Ri,j. Let N(Ri,j) be the set consisting of the k nearest non-missing entries of Ri,j; then:

$$\hat{R}_{i,j} = \frac{1}{k}\sum_{R_{i,l} \in N(R_{i,j})} R_{i,l}$$
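As a concrete illustration, a minimal sketch of this estimator for a single row follows; it assumes “nearest” means closest in time within the same area, and the helper name is ours, not the authors'.

```python
import numpy as np

def knn_impute_entry(row, j, k=5):
    """Estimate the missing entry row[j] as the mean of the k observed entries
    of the same row that are closest in time (missing values encoded as NaN)."""
    observed = [l for l in range(len(row)) if l != j and not np.isnan(row[l])]
    neighbors = sorted(observed, key=lambda l: abs(l - j))[:k]
    return float(np.mean([row[l] for l in neighbors]))
```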

The second algorithm is the probabilistic principal components analysis model (PPCA) [30,31], which has achieved state-of-the-art performance in the missing traffic flow data imputation problem [31]. Denote the observations of the incomplete matrix, R, as Ro. Let x ∼ N(0, I); to estimate the missing values, PPCA first fits the parameters μ and C with:

$$\{\mu^*, C^*\} = \arg\max_{\mu, C} \Pr(x \mid R^o) \sim N\!\left(x \,\middle|\, \sigma^{-2}\left(I + \sigma^{-2} C C^T\right)^{-1} C R^o,\ \left(I + \sigma^{-2} C C^T\right)^{-1}\right)$$
where σ is a tunable parameter. Then, with Ro, μ* and C*, it takes the estimation of the missing values (denoted as Rm) as:
$$\hat{R}^m = \arg\max_{R^m} N\!\left(R^m \mid C^T x, \sigma^2 I\right)$$

The third algorithm is the probabilistic matrix factorization model (PMF) [7], one of the most popular algorithms targeting the Netflix matrix completion problem. PMF first seeks the low-rank matrices, U and V, such that:

$$\{U^*, V^*\} = \arg\min_{U, V} \sum_{i=1}^{n}\sum_{j=1}^{t}\left(R_{i,j} - U_i^T V_j\right)^2 \mathbb{I}(R_{i,j}) + \alpha\sum_{i=1}^{n}\|U_i\|_2^2 + \beta\sum_{j=1}^{t}\|V_j\|_2^2$$

Then, for each missing entry, Ri,j, it takes the estimation as:

$$\hat{R}_{i,j} = U_i^T V_j$$


Algorithm 1: The Multi-Matrices Factorization Algorithm.

Input: matrix R; number of latent features, d; learning rates, η1, η2, η3 and η4; regularization parameters, α and λ; threshold ϵ.
Output: the estimated matrix, R̂.
// Initialize U and V.
Draw random vectors U(1),1, U(1),2, …, U(1),n, V1 ∼ N(0, I);
for j = 2; j ≤ t; j++ do
    Let Vj = Vj−1 + Z, where Z ∼ Laplace(0, 1);
end
for j = 2; j ≤ t; j++ do
    for i = 1; i ≤ n; i++ do
        Let U(j),i = U(j−1),i + Z, where Z ∼ Laplace(0, 1);
    end
end
// Coordinate descent.
$W_1 = \sum_{i=1}^{n}\sum_{j=1}^{t}\left(R_{i,j} - U_{(j),i}^T V_j\right)^2 \mathbb{I}(R_{i,j}) + \alpha\sum_{i=1}^{n}\|U_{(1),i}\|_2^2 + \beta\|V_1\|_2^2 + \gamma\sum_{i=1}^{n}\sum_{j=2}^{t}\|U_{(j),i} - U_{(j-1),i}\|_1 + \lambda\sum_{j=2}^{t}\|V_j - V_{j-1}\|_1$;
W2 = ∞;
while |W2 − W1| > ϵ do
    W2 = W1;
    for i = 1, 2, …, n do
        Let $U_{(1),i}^{\mathrm{new}} = U_{(1),i} - \eta_1 \frac{\partial W_1}{\partial U_{(1),i}}$;
    end
    for j > 1 and i = 1, 2, …, n do
        Let $U_{(j),i}^{\mathrm{new}} = U_{(j),i} - \eta_2 \frac{\partial W_1}{\partial U_{(j),i}}$;
    end
    $V_1^{\mathrm{new}} = V_1 - \eta_3 \frac{\partial W_1}{\partial V_1}$;
    for j = 2, …, t do
        Let $V_j^{\mathrm{new}} = V_j - \eta_4 \frac{\partial W_1}{\partial V_j}$;
    end
    Replace all U(j),i with $U_{(j),i}^{\mathrm{new}}$ and all Vj with $V_j^{\mathrm{new}}$; recompute W1;
end
return R̂, where $\hat{R}_j = U_{(j)}^T V_j$;
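For readers who prefer code, here is a compact NumPy sketch of the initialization and update loop of Algorithm 1, with the subgradients assembled as in Section 5; the stopping rule is simplified to a fixed iteration count, and all names and default parameter values are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def mmf_fit(R, mask, d=10, alpha=0.1, beta=0.1, gamma=0.1, lam=0.1, eta=0.01, iters=200):
    """Fit the MMF factors U_(1),...,U_(t) and V by subgradient coordinate descent."""
    n, t = R.shape
    rng = np.random.default_rng(0)
    V = np.empty((d, t))
    V[:, 0] = rng.standard_normal(d)               # V_1 ~ N(0, I)
    for j in range(1, t):
        V[:, j] = V[:, j - 1] + rng.laplace(size=d)
    U = [rng.standard_normal((d, n))]              # columns of U_(1) ~ N(0, I)
    for j in range(1, t):
        U.append(U[j - 1] + rng.laplace(size=(d, n)))
    for _ in range(iters):
        for j in range(t):
            E = np.where(mask[:, j], R[:, j] - U[j].T @ V[:, j], 0.0)  # masked residuals
            gU = -2.0 * np.outer(V[:, j], E)       # fitting subgradient w.r.t. U_(j)
            gV = -2.0 * (U[j] @ E)                 # fitting subgradient w.r.t. V_j
            if j == 0:
                gU += 2.0 * alpha * U[0]
                gV += 2.0 * beta * V[:, 0]
            if j > 0:
                gU += gamma * np.sign(U[j] - U[j - 1])
                gV += lam * np.sign(V[:, j] - V[:, j - 1])
            if j < t - 1:
                gU -= gamma * np.sign(U[j + 1] - U[j])
                gV -= lam * np.sign(V[:, j + 1] - V[:, j])
            U[j] -= eta * gU
            V[:, j] -= eta * gV
    R_hat = np.column_stack([U[j].T @ V[:, j] for j in range(t)])
    return U, V, R_hat
```

In practice, one would monitor the objective (e.g., with a function like mmf_objective above) and stop when its change falls below the threshold ϵ, as Algorithm 1 does.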

These three algorithms, as well as the proposed MMF, are employed to perform missing imputations for the incomplete matrix, R, on the same datasets.

The testing protocol adopted here is the Given X (0 < X < 1) protocol [32], i.e., given a matrix, R, only X percent of its observed entries are revealed, while the remaining observations are concealed to evaluate the trained model. For example, a setting with X = 10% means that the algorithm is trained with 10% of the non-missing entries, and the remaining 90% of the non-missing entries are held out and are to be recovered. In both the experiments on the synthetic and real datasets, the data partition is repeated five times, and the average results, as well as the standard deviations over the five repetitions, are recorded.
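A minimal sketch of how one such split could be produced is given below; the boolean-mask convention and the function name are our own assumptions.

```python
import numpy as np

def given_x_split(observed_mask, x, seed=0):
    """Split the observed entries of a matrix: a fraction x is revealed for
    training and the rest is concealed and used as the test set."""
    rng = np.random.default_rng(seed)
    rows, cols = np.nonzero(observed_mask)            # indices of non-missing entries
    reveal = rng.random(rows.size) < x                # reveal roughly x percent of them
    train_mask = np.zeros_like(observed_mask, dtype=bool)
    test_mask = np.zeros_like(observed_mask, dtype=bool)
    train_mask[rows[reveal], cols[reveal]] = True
    test_mask[rows[~reveal], cols[~reveal]] = True
    return train_mask, test_mask
```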

Similar to many other missing imputation problems [1,3-5,7,13,33-35], we employ the root mean square error (RMSE) to measure the distance between the real values and the estimations: Let S = {s1, s2, …, sn} be the test dataset and Ŝ = {ŝ1, ŝ2, …, ŝn} be the estimated set, where ŝk is the estimation of sk. Then, the RMSE of the estimation is given by $\sqrt{\frac{1}{n}\sum_{k=1}^{n}(s_k - \hat{s}_k)^2}$.
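In code, this is simply (a sketch):

```python
import numpy as np

def rmse(s, s_hat):
    """Root mean square error between true values s and estimations s_hat."""
    s, s_hat = np.asarray(s, dtype=float), np.asarray(s_hat, dtype=float)
    return float(np.sqrt(np.mean((s - s_hat) ** 2)))
```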

6.2. Synthetic Validation

To conduct a synthetic validation of the studied approaches, we randomly draw a 100 × 10,000 matrix, R, using the procedure detailed in Algorithm 2. The rows in R correspond to the areas, S1, S2, …, Sn, and the columns correspond to the time points. Thus, Ri,j represents the data collected in Si at time Tj. Notably, the parameter, ri,j, in Algorithm 2 is used to control the magnitude of the variation happening to Si from Tj−1 to Tj. Combining lines 4 and 5, we have, for i ∈ [1, n] and j ∈ [2, t]: $\left|\frac{R_{i,j} - R_{i,j-1}}{R_{i,j-1}}\right| = |r_{i,j}| \leq 0.1$. This constraint ensures that the data collected in Si does not change too much from Tj−1 to Tj.


Algorithm 2: Synthetic Data Generating Procedure.

1  for i = 1, 2, …, n do
2      Draw Ri,1 ∼ N(0, 1);
3      for j = 2, 3, …, t do
4          Let ri,j ∼ Uniform(−0.1, 0.1);
5          Ri,j ← Ri,j−1 + ri,j · Ri,j−1;
6      end
7  end
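A direct NumPy transcription of Algorithm 2 might look as follows (a sketch; the default size matches the 100 × 10,000 setting used below).

```python
import numpy as np

def generate_synthetic(n=100, t=10_000, seed=0):
    """Algorithm 2: every row starts at N(0, 1) and then drifts by at most 10% per step."""
    rng = np.random.default_rng(seed)
    R = np.empty((n, t))
    R[:, 0] = rng.standard_normal(n)                  # line 2: R_{i,1} ~ N(0, 1)
    for j in range(1, t):
        r = rng.uniform(-0.1, 0.1, size=n)            # line 4: relative change, |r| <= 0.1
        R[:, j] = R[:, j - 1] + r * R[:, j - 1]       # line 5
    return R
```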

We first evaluate the sensitivity of the proposed algorithm to the regularization parameters, α and λ. Half of the entries in R are randomly selected as testing data and recovered using the remaining 50% as the training data; namely, we take X = 50% in the Given X protocol. In the experiments, we first fix α = 0.01 and tune λ via λ = 0.01 × 2^n (n = 0, 1, …, 7) and, then, do the reverse by changing α via α = 0.01 × 2^n (n = 0, 1, …, 7) while setting λ = 0.01. The average RMSEs with the same parameter settings on different data partitions are summarized in Figure 2.

In Figure 2, the RMSE-1 curve represents the recovery errors obtained by fixing α and changing the value of λ. The RMSE-2 curve corresponds to the errors with different α values and fixed λ. We can see that even when λ is expanded by more than 100 times (2^7 = 128), the RMSE still remains stable. A similar result also appears in the experiments on the parameter, α, where a significant change of the RMSE only occurs when n is greater than six, i.e., when α is expanded by more than 60 times.

The second experiment we conduct is to study the prediction ability of the proposed algorithm, as well as that of the comparison algorithms. In the Given X protocol, we set X = 10%, 20%, …, 90% in sequence. Then, for each X value, we perform missing imputations via our algorithm and the comparison algorithms. In all implementations, we set k = 5 for the knn model. As for the MF-based algorithms, we examine their performance with the latent feature dimensions d = 10 and d = 30, respectively. Furthermore, in the implementation of MMF, we fix α = λ = 0.1 and η1 = η2 = η3 = η4 = 0.01. All results are summarized in Table 1.

As shown in Table 1, when X is large, e.g., X ≥ 50%, knn is competitive with the matrix factorization methods, while in the other situations, the MF methods outperform it significantly. In terms of the MF-based methods, we find that our algorithm outperforms PPCA significantly in all settings. The RMSEs of our algorithm are at most roughly 20% of those of PPCA. Specifically, for X ∈ [30%, 80%], the RMSEs of the proposed algorithm are only about 10% of those of PPCA. We can also observe that the parameter, d, has a different impact on the performances of our algorithm and PPCA: when d changes from 10 to 30, most of the RMSEs of PPCA increase noticeably, while for our algorithm, the RMSEs are reduced by roughly 5%.

When compared with PMF, our algorithm also performs better in most of the settings: PMF achieves lower RMSEs than MMF in only two cases, namely d = 10, X = 60% and d = 10, X = 80%. Another interesting finding is that increasing the feature number, d, from 10 to 30 has little impact on the performance of PMF.

We also examine the convergence speed of the proposed algorithm. In the missing recovery experiments conducted above, for each X setting, we record the average RMSE of the recovered results after every 10 iterations over all data partitions. We can see from Figure 3 that, for all X values, the errors drop dramatically in the first 20 iterations and remain stable after the first 100 iterations. We can conclude that the proposed algorithm converges to a local optimum after around 100 iterations.

6.3. Application to Impute the Missing Traffic Speed Values

To evaluate the feasibility of the proposed approach in real-world applications, in this section, we conduct another experiment on a traffic speed dataset, which was collected in the urban road network of Zhuhai City [36], China, from April 1, 2011 to April 30, 2011. The data matrix, R, consists of 1,853 rows and 8,729 columns. Each row corresponds to a road, and each column corresponds to a five-minute time interval. All columns are arranged in ascending order of time. An entry, Ri,j (1 ≤ i ≤ 1,853, 1 ≤ j ≤ 8,729), in R is the aggregate mean traffic speed of the ith road in the jth interval. Since all the data in R are collected by floating cars [37], the value of Ri,j could be missing if there is no car on the ith road in the jth time interval. Our statistics show that nearly half of the entries in R, i.e., about eight million entries, are missing.

We perform missing imputation on the matrix, R, using the studied algorithms with parameter settings k = 5 and d = 10. In the implementation of MMF, we fix α = 0.25, λ = 0.5 and η1 = η2 = η3 = η4 = 0.5. We summarize all results in Table 2, from which both the feasibility and the effectiveness of MMF are well verified. In detail, when X is large enough, e.g., X ≥ 80%, knn is competitive, while in the other cases, knn cannot work as well as the MF-based algorithms. As for the MF algorithms, we see that the proposed MMF outperforms PPCA and PMF in all X settings. Particularly, when the observations are few (X = 10% and X = 20%), the errors of our algorithm are reduced by about 33% compared to those of PPCA and by about 10% compared to those of PMF, respectively. When X > 20%, the RMSE differences between PPCA and our algorithm tend to be slight, but the overall errors of PPCA are roughly 3%–5% higher than those of MMF. For PMF, the RMSEs remain about 10% higher than those of MMF in all settings.

7. Conclusion

Missing data estimation is one of the main concerns in current studies on sensor data-based applications. In this work, we formulate the estimation problem as a matrix completion one and present a multi-matrices factorization model to address it. In our model, each column, Rj, of the target matrix, R, is approximated by the product of a spatial feature matrix, U(j), and a temporal feature vector, Vj. Both U(j) and Vj are time-dependent, and hence, their product accommodates the ability to describe the time-variant sensor data. We also present a solution algorithm for the factorization model. Empirical studies on a synthetic dataset and real sensor data show that our approach outperforms the comparison algorithms.

Reviewing the present work, it is notable that the proposed model only incorporates the temporal structure information, while the information on the spatial structure is disregarded, e.g., the data collected in two adjacent areas, Sk and Sl, should be close to each other. Hence, our next step is to extend our model with more complex structured data.

Acknowledgments

This work has been partially supported by the National High-tech R&D Program (863 Program) of China under Grant 2012AA12A203.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. García-Laencina, P.J.; Sancho-Gómez, J.L.; Figueiras-Vidal, A.R.; Verleysen, M. K nearest neighbours with mutual information for simultaneous classification and missing data imputation. Neurocomputing 2009, 72, 1483–1493.
  2. Ni, D.; Leonard, J.D.; Guin, A.; Feng, C. Multiple imputation scheme for overcoming the missing values and variability issues in ITS data. J. Transport. Eng. 2005, 131, 931–938.
  3. Smith, B.L.; Scherer, W.T.; Conklin, J.H. Exploring imputation techniques for missing data in transportation management systems. Transport. Res. Record: J. Transport. Res. Board 2003, 1836, 132–142.
  4. Qu, L.; Zhang, Y.; Hu, J.; Jia, L.; Li, L. A BPCA Based Missing Value Imputing Method for Traffic Flow Volume Data. Proceedings of the 2008 IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, 4–6 June 2008; pp. 985–990.
  5. Jiang, N.; Gruenwald, L. Estimating Missing Data in Data Streams. In Advances in Databases: Concepts, Systems and Applications; Springer: Berlin/Heidelberg, Germany, 2007; pp. 981–987.
  6. Netflix Prize. Available online: http://www.netflixprize.com (accessed on 1 July 2013).
  7. Salakhutdinov, R.; Mnih, A. Probabilistic matrix factorization. Adv. Neural Inf. Process. Syst. 2008, 20, 1257–1264.
  8. Koren, Y. Factorization Meets the Neighborhood: A Multifaceted Collaborative Filtering Model. Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Las Vegas, NV, USA, 2008; pp. 426–434.
  9. Seung, D.; Lee, L. Algorithms for non-negative matrix factorization. Adv. Neural Inf. Process. Syst. 2001, 13, 556–562.
  10. Srebro, N. Learning with Matrix Factorizations. Ph.D. Thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA, 2004.
  11. Srebro, N.; Rennie, J.D.; Jaakkola, T. Maximum-margin matrix factorization. Adv. Neural Inf. Process. Syst. 2005, 17, 1329–1336.
  12. Independent and identically distributed random variables. Available online: http://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables (accessed on 1 July 2013).
  13. Koren, Y.; Bell, R.; Volinsky, C. Matrix factorization techniques for recommender systems. Computer 2009, 42, 30–37.
  14. Xu, W.; Liu, X.; Gong, Y. Document Clustering Based on Non-Negative Matrix Factorization. Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Pisa, Italy, 28 July–1 August 2003; pp. 267–273.
  15. Lee, D.D.; Seung, H.S. Learning the parts of objects by non-negative matrix factorization. Nature 1999, 401, 788–791.
  16. Brunet, J.P.; Tamayo, P.; Golub, T.R.; Mesirov, J.P. Metagenes and molecular pattern discovery using matrix factorization. Proc. Natl. Acad. Sci. USA 2004, 101, 4164–4169.
  17. Candès, E.J.; Recht, B. Exact matrix completion via convex optimization. Found. Comput. Math. 2009, 9, 717–772.
  18. Hoyer, P.O. Non-negative matrix factorization with sparseness constraints. J. Mach. Learn. Res. 2004, 5, 1457–1469.
  19. Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G. Online learning for matrix factorization and sparse coding. J. Mach. Learn. Res. 2010, 11, 19–60.
  20. Ke, Q.; Kanade, T. Robust L1 Norm Factorization in the Presence of Outliers and Missing Data by Alternative Convex Programming. Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), San Diego, CA, USA, 20–26 June 2005; pp. 739–746.
  21. Nati, N.S.; Jaakkola, T. Weighted Low-Rank Approximations. Proceedings of the 20th International Conference on Machine Learning (ICML 2003), Washington, DC, USA, 21–24 August 2003; pp. 720–727.
  22. Abernethy, J.; Bach, F.; Evgeniou, T.; Vert, J.P. Low-Rank Matrix Factorization with Attributes; Technical Report N-24/06/MM; Paris, France, September 2006.
  23. Vapnik, V. Statistical Learning Theory; Wiley: New York, NY, USA, 1998.
  24. Rissanen, J. Minimum Description Length Principle; Springer: Berlin, Germany, 2010.
  25. Candès, E.J.; Tao, T. The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. Inf. Theory 2010, 56, 2053–2080.
  26. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 2012.
  27. Candès, E.J.; Romberg, J.; Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52, 489–509.
  28. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
  29. Bertsekas, D.P. Nonlinear Programming; Athena Scientific: Belmont, MA, USA, 1999.
  30. Tipping, M.E.; Bishop, C.M. Probabilistic principal component analysis. J. Royal Stat. Soc. Ser. B Stat. Methodol. 1999, 61, 611–622.
  31. Qu, L.; Hu, J.; Li, L.; Zhang, Y. PPCA-based missing data imputation for traffic flow volume: A systematical approach. IEEE Trans. Intell. Transport. Syst. 2009, 10, 512–522.
  32. Marlin, B. Collaborative Filtering: A Machine Learning Perspective. Ph.D. Thesis, University of Toronto, Toronto, ON, Canada, 2004.
  33. Nguyen, L.N.; Scherer, W.T. Imputation Techniques to Account for Missing Data in Support of Intelligent Transportation Systems Applications; UVACTS-13-0-78; University of Virginia: Charlottesville, VA, USA, 2003.
  34. Gold, D.L.; Turner, S.M.; Gajewski, B.J.; Spiegelman, C. Imputing Missing Values in ITS Data Archives for Intervals under 5 Minutes. Proceedings of the 80th Annual Meeting of the Transportation Research Board, Washington, DC, USA, 7–11 January 2001.
  35. Shuai, M.; Xie, K.; Pu, W.; Song, G.; Ma, X. An Online Approach Based on Locally Weighted Learning for Real-Time Traffic Flow Prediction. Proceedings of the 16th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM GIS 2008), Irvine, CA, USA, 5–7 November 2008.
  36. Zhuhai. Available online: http://en.wikipedia.org/wiki/Zhuhai (accessed on 1 July 2013).
  37. Floating Car Data. Available online: http://en.wikipedia.org/wiki/Floating_car_data (accessed on 1 July 2013).
Figure 1. The structure assumptions.
Figure 2. Empirical studies on parameter sensitivity.
Figure 3. Empirical studies on convergence speed.
Table 1. Recovery errors on the synthetic dataset (mean ± std).

|        |      | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% |
|--------|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|
|        | knn  | 13.41 ± 0.62 | 6.80 ± 0.19 | 4.44 ± 0.06 | 3.12 ± 0.05 | 2.27 ± 0.02 | 1.81 ± 0.09 | 1.72 ± 0.01 | 1.69 ± 0.05 | 1.62 ± 0.06 |
| d = 10 | PPCA | 17.09 ± 2.08 | 20.24 ± 1.30 | 22.75 ± 1.32 | 23.96 ± 2.19 | 20.79 ± 1.00 | 16.67 ± 1.34 | 18.68 ± 0.81 | 11.98 ± 0.70 | 5.11 ± 3.10 |
|        | PMF  | 3.23 ± 0.23 | 3.33 ± 0.19 | 3.29 ± 0.12 | 3.34 ± 0.07 | 3.29 ± 0.09 | 1.69 ± 0.04 | 1.83 ± 0.02 | 1.81 ± 0.03 | 1.85 ± 0.04 |
|        | MMF  | 3.07 ± 0.07 | 2.21 ± 0.10 | 2.14 ± 0.09 | 1.98 ± 0.06 | 1.93 ± 0.06 | 1.92 ± 0.04 | 1.75 ± 0.03 | 1.84 ± 0.02 | 1.80 ± 0.03 |
| d = 30 | PPCA | 19.65 ± 3.01 | 22.48 ± 0.86 | 24.86 ± 1.54 | 23.99 ± 0.61 | 22.67 ± 0.49 | 20.22 ± 1.46 | 17.53 ± 0.70 | 14.23 ± 1.70 | 11.14 ± 1.72 |
|        | PMF  | 3.20 ± 0.21 | 3.32 ± 0.13 | 3.35 ± 0.09 | 3.36 ± 0.11 | 3.31 ± 0.07 | 1.78 ± 0.04 | 1.81 ± 0.02 | 1.79 ± 0.02 | 1.86 ± 0.11 |
|        | MMF  | 3.06 ± 0.05 | 2.17 ± 0.08 | 2.07 ± 0.05 | 1.94 ± 0.03 | 1.74 ± 0.03 | 1.62 ± 0.02 | 1.64 ± 0.03 | 1.65 ± 0.01 | 1.69 ± 0.03 |
Table 2. Recovery errors on the transportation dataset (mean ± std).

|      | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| knn  | 40.47 ± 0.02 | 31.79 ± 0.01 | 25.41 ± 0.02 | 21.35 ± 0.00 | 18.33 ± 0.00 | 15.89 ± 0.01 | 13.73 ± 0.01 | 11.67 ± 0.00 | 9.45 ± 0.02 |
| PPCA | 17.90 ± 0.01 | 17.36 ± 0.01 | 13.00 ± 0.02 | 12.25 ± 0.01 | 11.47 ± 0.01 | 11.31 ± 0.03 | 11.19 ± 0.02 | 11.14 ± 0.04 | 11.16 ± 0.10 |
| PMF  | 14.41 ± 0.01 | 12.83 ± 0.03 | 12.43 ± 0.01 | 12.33 ± 0.02 | 12.36 ± 0.01 | 12.35 ± 0.01 | 12.13 ± 0.00 | 11.99 ± 0.03 | 11.96 ± 0.02 |
| MMF  | 11.79 ± 0.02 | 11.51 ± 0.01 | 11.43 ± 0.01 | 11.05 ± 0.02 | 11.05 ± 0.01 | 11.01 ± 0.00 | 10.83 ± 0.01 | 10.69 ± 0.01 | 10.70 ± 0.02 |

Citation: Huang, X.-Y.; Li, W.; Chen, K.; Xiang, X.-H.; Pan, R.; Li, L.; Cai, W.-X. Multi-Matrices Factorization with Application to Missing Sensor Data Imputation. Sensors 2013, 13, 15172-15186. https://doi.org/10.3390/s131115172
