Article

Data Fault Detection in Medical Sensor Networks

State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, No.10 Xitucheng Road, Haidian District, Beijing 100876, China
*
Author to whom correspondence should be addressed.
Sensors 2015, 15(3), 6066-6090; https://doi.org/10.3390/s150306066
Submission received: 20 November 2014 / Revised: 23 February 2015 / Accepted: 27 February 2015 / Published: 12 March 2015
(This article belongs to the Section Sensor Networks)

Abstract

Medical body sensors can be implanted in or attached to the human body to monitor the physiological parameters of patients continuously. Inaccurate data caused by sensor faults or incorrect placement on the body can seriously mislead clinicians' diagnoses, so detecting sensor data faults has been widely researched in recent years. Most typical approaches to sensor fault detection in the medical domain ignore the fact that the physiological indexes of a patient do not change synchronously, and fault values mixed with abnormal physiological data due to illness make it difficult to identify true faults. Based on these facts, we propose a Data Fault Detection mechanism for Medical sensor networks (DFD-M). The mechanism includes: (1) a dynamic-local outlier factor (D-LOF) algorithm to identify outlying sensed data vectors; (2) a linear regression model based on trapezoidal fuzzy numbers to predict which readings in an outlying data vector are suspected to be faulty; (3) a novel judgment criterion of fault state based on the prediction values. The simulation results demonstrate the efficiency and superiority of DFD-M.

1. Introduction

Wireless sensor networks (WSNs) have been widely used in medical applications. Body sensors are designed to be implanted in or attached to the human body to monitor physiological parameters over the long term. For example, clinicians can monitor the heart rate, blood glucose, or body temperature of patients at any time [1,2], and then diagnose the state of illness from the sensed readings, which therefore need to be accurate.
How to transmit and verify measurement data in a timely way has therefore received increasing attention from researchers. The changing status of an illness requires physiological readings to be reported frequently. Moreover, data inaccuracy due to limited resources (e.g., fluorescent lighting may cause data errors in pulse oximeters) or incorrect placement on the patient's body will seriously influence clinicians' diagnoses. For example, piezoelectric snoring sensors monitor the snore waveform of patients who are asleep, anaesthetized, or awake; because patients frequently turn over, the belt or medical adhesive tape holding the sensor may fall off and cause detection failure.
Recently, several studies have addressed fault detection for medical sensors [3,4,5]. They classify the states of patients as normal or abnormal, and then analyze reading abnormalities against the normal intervals of physiological parameters to detect and isolate erroneous nodes. These approaches have two drawbacks: (1) they detect data abnormalities caused by sensor faults only when the patient's state is stable, and it is difficult to reach a judgment when faulty readings are mixed with abnormal physiological data; (2) in practice, the physiological indicators of a patient do not change synchronously. For example, the heart rate can increase rapidly, while body temperature may rise slowly over several minutes. Such out-of-sync changes of medical attributes reduce detection accuracy. Faulty measurements from sensors distort the measured results and lead to diagnosis errors; furthermore, they may threaten the life of a patient or needlessly alert emergency personnel. Therefore, an important task is to detect abnormal and faulty measurements that deviate from real observations, and to distinguish sensor faults from real emergency situations in order to reduce false alarms.
In this paper, we focus on detecting faulty medical sensors whose faulty measurements are mixed with abnormal physiological data, and on increasing the detection accuracy rate accordingly. We propose a Data Fault Detection mechanism for Medical body monitoring networks (DFD-M). Our contributions are:
(1)
We first use a dynamic-LOF algorithm to identify outlying data vectors. A newly added object influences the local outlier factors (LOF) of only part of the objects in the original dataset, so the algorithm only looks for the related objects whose LOF values have changed. The dynamic-LOF algorithm also narrows the scope of each detected object's nearest neighborhood so as to be more sensitive to outliers, and by finding three levels of influenced objects it reduces the time complexity as the dataset grows dynamically.
(2)
After locating an outlying sensed data vector, we use a fuzzy linear regression model based on trapezoidal fuzzy numbers to predict the suspected readings in that vector. To fuzzify the prediction values more faithfully, we construct an objective function from the sum of errors relative to the fuzzy number expectation. These choices capture a more precise relationship between the prediction values and the readings suspected to be faulty.
(3)
We propose a novel judgment criterion for the fault state based on the fuzzy prediction value. We summarize 15 relative position relationships between the fuzzy prediction result and the normal interval of the corresponding physiological parameter, and from these we conclude whether a sensor is faulty.
The rest of the paper is organized as follows: Section 2 reviews related work on fault detection in WSNs. Section 3 introduces the proposed DFD-M mechanism and its algorithms, including the dynamic-LOF algorithm, the fuzzy linear regression process, and the determination criteria. The simulation results in Section 4 demonstrate the efficiency and superiority of our algorithm. Section 5 concludes the paper.

2. Related Works

Typical faults in WSNs are link faults and data faults. In faulty link detection, transmission and reception are disrupted by destructive nodes, and link faults can cause network bottlenecks and partitioning. Most approaches use probes to fetch feedback information about faulty paths: researchers analyze the probe feedback according to network symptoms and infer the network topology and link states. A link scanner (LS) [6] collects the hop counts of probe messages and provides a blacklist containing all possible faulty paths. In its probabilistic reasoning, each link generates two probe records, and a link is deemed to have failed to send probes when the collected record does not match the expectation.
In distributed data fault detection, a sensor can determine its own fault state from its neighbors' monitored values [7]. Sensors know only the states of their one-hop neighbors rather than global information, and they periodically send their readings and their estimated tendency states. When more than half of its non-faulty one-hop neighbors deem a sensor to be possibly good, it is regarded as fault-free. The detection accuracy depends on the fault probability of the sensors and on the network topology. Ding et al. [8] proposed a fault identification method with lower computational overhead than [7]: if the difference between a sensor's reading and those of its neighbors is large in magnitude, whether positive or negative, the sensor is deemed faulty.
Krishnamachari and co-workers proposed in [9] a distributed solution for the canonical task of binary detection of interesting environmental events. They explicitly take into account the possibility of measurement faults and develop a distributed Bayesian scheme for detecting and correcting them. In [10], each sensor node identifies its own status based on local comparisons of sensed data with thresholds and on dissemination of the test results, and time redundancy is used to tolerate transient sensing and communication faults.
In the centralized mode, the MANNA architecture [11] adopts a manager to control and maintain a global view of the wireless sensor network. Every node checks its energy level and sends a message to the manager/agent whenever its state changes, so the manager obtains the coverage map and energy level of all sensors from the collected information. To detect node failures, the manager sends GET operations to retrieve the node state; if it does not hear from a node, it consults the energy map to check that node's residual energy. However, this approach requires an external manager to perform the centralized diagnosis, and the communication between the nodes and the manager is too expensive for WSNs.
In medical sensor networks, readings are typically accumulated and transmitted to a central device [4] that stores and processes the data and judges the illness. Focusing on diagnosis errors caused by faulty measurements, [12] presents sensor fault and patient anomaly detection and classification. The approach classifies sensed readings as normal or abnormal, and then uses regression prediction to distinguish faulty readings from physiologically abnormal readings. It can reduce the frequency of false alarms triggered by inconsistent monitoring.
Reliability and light weight are also objectives in detecting faults caused by sensors. The authors of [13] propose an online detection algorithm for isolating incorrect readings: to reduce false alarms, it first uses a discrete Haar wavelet decomposition and a Hampel filter to detect spatial deviations, and then adopts a boxplot for temporal analysis. Miao et al. [14] presented an online lightweight failure detection scheme named Agnostic Diagnosis (AD), motivated by the fact that the system metrics of sensors usually exhibit certain correlation patterns. In [3], the authors present an online solution for fault detection and isolation of erroneous nodes in body sensor networks to provide sustained delivery of services despite perturbations; it assumes that sensors are either permanently faulty or fault-free. Breunig et al. proposed a density-based outlier detection method that assigns each object a local outlier factor (LOF), the degree to which the object is outlying [15]. In [4], the authors propose a fault detection and isolation algorithm for pervasive motion monitoring with two motion sensors; in real life, however, sensors are not always perfectly synchronized, and poorly constructed sensors or monitoring programs can generate missing or unsynchronized data.
The dynamically changing illness of patients and the out-of-sync variation of medical attributes both degrade data fault detection for medical sensors, so we aim to further improve detection accuracy, reduce time complexity, and address the practical problem of out-of-sync changes of the sensed attributes.

3. Data Fault Detection of Body Sensors

We first assume that the physiological parameters are correlated with one another. For example, the heart rate (HR) is derived from the beat intervals of an electrocardiogram (ECG) signal, and the respiration rate (RR) is proportional to the heart rate, although the ratio varies between individuals. An additional one degree centigrade of body temperature adds around 10–15 beats per minute to the heart rate [2]. In addition, episodes of low blood sugar (hypoglycemia) can trigger an arrhythmia, which often behaves like a racing heart [5]. In short, respiration rate and body temperature are both positively correlated with heart rate.
Physiological parameters are correlated in time and space, and this correlation must be exploited to identify and isolate faulty measurements so as to ensure reliable operation and accurate diagnosis results. Faulty measurements, by contrast, usually show no spatial or temporal correlation with the monitored attributes. Based on this observation, we introduce our data fault detection mechanism for body sensors, which focuses on detecting faulty medical sensors and judging their faulty readings.
We first give some definitions. Physiological readings are described as a matrix X = (Xji), where j indexes the measuring time and i indexes the sensor. The sequence Xi = (X1i, X2i, X3i, …, Xti) contains the measured values of sensor i from time T1 to Tt, and the vector Xj = (Xj1, Xj2, …, Xjn) contains all physiological parameters at time Tj.
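As a minimal illustration of this notation (the numbers below are hypothetical, and NumPy is assumed to be available), the readings form a matrix whose rows are the data vectors Xj and whose columns are the per-sensor sequences Xi:

import numpy as np

# Rows are times T1..Tt; columns are sensors 1..n (here: HR, RR, BT, SpO2).
X = np.array([
    [72.0, 18.0, 36.8, 97.0],   # X1: all parameters at time T1
    [75.0, 19.0, 36.9, 97.0],   # X2: all parameters at time T2
    [74.0, 18.5, 36.8, 98.0],   # X3: all parameters at time T3
])

x_time2 = X[1, :]    # data vector X2 = (X21, X22, ..., X2n): all parameters at time T2
x_sensor1 = X[:, 0]  # sequence X1 = (X11, X21, X31): the heart-rate sensor over T1..T3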
The DFD-M mechanism is described as follows:
Step 1: To reduce computational complexity and enhance the average detection accuracy, a dynamic-LOF algorithm is applied to the dynamically growing dataset. It narrows the scope of each detected object's nearest neighborhood so as to be more sensitive to outliers, and it finds three levels of influenced objects to restrict the range of LOF updates, which improves the outlier detection efficiency (see Section 3.1).
Step 2: For an outlying vector, we need to determine which of its values are abnormal. First, we define physiologically normal and abnormal intervals for the patient; for example, the normal HR interval is (50, 130). Next, if all readings lie in their normal intervals, or all lie in their abnormal intervals, and the corresponding vector is still determined to be an outlier, then no sensor is faulty and the condition must be caused by a change in the patient's illness state. Otherwise, we classify the readings of the outlying vector into a physiologically normal set and an abnormal set. The set with more elements (call it set A) is regarded as the reliable input of a fuzzy linear regression model, and the outputs are the prediction values corresponding to the elements of the other set (call it set B). Because the physiological indicators of a patient may not change synchronously, the prediction results only indicate that some sensors in set B are suspected to be faulty (see Section 3.2).
Step 3: Any suspected reading y is a real number, whereas the output of the fuzzy regression process is a fuzzy prediction value Ỹ_p = (y_a, y_b, y_l, y_r) with lower bound (y_a − y_l) and upper bound (y_b + y_r). We need to analyze the difference between Ỹ_p and y with respect to the corresponding normal interval. If y is close enough to Ỹ_p according to the determination criterion (see Section 3.3), the sensor corresponding to the reading y is good; otherwise, it is faulty.

3.1. Dynamic-LOF Algorithm

The exact definition of an outlier depends on the data structure and the detection method; a data object either is an outlier or is not. Breunig et al. proposed a density-based outlier detection method that assigns each object a local outlier factor (LOF), the degree to which the object is outlying [15]. It is local in that the degree depends on how isolated the object is with respect to its surrounding neighborhood, so the key idea of LOF is to compare the local density of an object's neighborhood with the local densities of its neighbors.
Our detection targets are time-series readings that are dynamic and constantly updated. Since the density-based LOF algorithm cannot detect contextual anomalies, the LOF values of all objects must be recalculated whenever the dataset changes. When the dataset grows, frequently updating the LOF values of every object takes a great deal of time, so the original LOF algorithm is not well suited to a dynamically growing dataset. We observe that a newly added object influences the LOF values of only part of the objects in the original dataset: we only need to find the related objects whose LOF values have changed and recalculate those, which avoids recalculating the LOF values of all objects. In this paper, we therefore propose a dynamic-LOF algorithm to identify outlying data vectors. It finds three levels of influenced objects and recalculates their LOF values. In addition, a small modification narrows the scope of each detected object's nearest neighborhood, which increases the detection accuracy so that more outliers are detected.
The core idea of the dynamic-LOF algorithm is twofold: on the one hand, a small modification of the k-distance yields a higher average detection accuracy; on the other hand, finding three levels of influenced objects narrows the range of LOF updates and improves the outlier detection efficiency. The vector Xj = (Xj1, Xj2, …, Xjn) of all sensed physiological readings at time Tj is regarded as an object in a multidimensional space, and the size of the dataset D grows continually over time. We first compute the LOF values of all objects in D and establish an initial knowledge base, in which all objects are considered normal. The dynamic-LOF algorithm then only needs to update the LOF values of each newly added object and of the objects influenced by it.
Let o, o', p, q, s, x, y, z denote objects in a dataset D. Each object in the dataset is assigned a local outlier factor; the larger the LOF, the greater the chance that the object is an outlier. We use d(s, o) to denote the distance between objects s and o, where s is an object in D. We use the mean distance of object s, denoted mk-distance, in place of the k-distance of the original LOF algorithm; it is the mean distance from s to its k nearest objects.
Definition 1. 
k-distance and k-distance neighborhood of an object s.
For any positive integer k, the k-distance of object s, denoted k-distance(s), is defined as the distance d(s, o) between s and an object o ∈ D such that:
(1)
for at least k objects o' ∈ D\{s} it holds that d(s, o') ≤ d(s, o), and
(2)
for at most k − 1 objects o' ∈ D\{s} it holds that d(s, o') < d(s, o).
Then the k-distance neighborhood of s is Nk(s) = {q ∈ D\{s} | d(s, q) ≤ k-distance(s)}. The objects q in Nk(s) are called the k-nearest neighbors of s.
Definition 2. 
mk-distance of an object s.
For any positive integer k, the mk-distance of object s is:
mk\text{-}distance(s) = \frac{\sum_{o \in N_k(s)} d(s, o)}{\left| N_k(s) \right|}
Definition 3. 
mk-distance neighborhood of an object s.
Given the mk-distance of s, the mk-distance neighborhood of s contains every object whose distance from s is not greater than the mk-distance, i.e., Nmk(s) = {q ∈ D\{s} | d(s, q) ≤ mk-distance(s)}. Each object q in Nmk(s) is also in Nk(s).
Definition 4. 
Reachability distance of an object s with respect to object o is:
rdist_{mk}(s, o) = \max\{ mk\text{-}distance(o),\; d(s, o) \}
Definition 5. 
Local reachability density of an object s is:
lrd_{mk}(s) = 1 \Big/ \left( \frac{\sum_{o \in N_{mk}(s)} rdist_{mk}(s, o)}{\left| N_{mk}(s) \right|} \right)
Definition 6. 
Local outlier factor of an object s is:
LOF_{mk}(s) = \frac{\sum_{o \in N_{mk}(s)} \frac{lrd_{mk}(o)}{lrd_{mk}(s)}}{\left| N_{mk}(s) \right|}
The outlier factor of object s captures the degree to which we call s an outlier. It is the average ratio of the local reachability densities of the mk-distance neighbors of s to the local reachability density of s itself. It is easy to see that the lower the local reachability density of s, and the higher the local reachability densities of its mk-distance neighbors, the higher the LOF value of s.
Using the above formulas to compute the LOFmk values of all objects in the dataset narrows the scope of each object's nearest neighborhood, so our improved algorithm is more sensitive to outliers and achieves a higher average detection accuracy.
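To make Definitions 1–6 concrete, the following sketch (our own brute-force illustration in Python, with no spatial index and assuming no duplicate points) computes LOFmk for every object in a small dataset, using the mk-distance in place of the original k-distance:

import numpy as np

def lof_mk(D, k):
    # D is an (n, d) array of objects; returns the LOF_mk value of every object.
    n = len(D)
    dist = np.linalg.norm(D[:, None, :] - D[None, :, :], axis=2)   # pairwise distances

    # N_k(s): the k nearest objects of s (ties beyond the k-th neighbor are ignored here)
    Nk = [np.argsort(dist[s])[1:k + 1] for s in range(n)]
    # Definition 2: mk-distance(s) is the mean distance from s to its k nearest objects
    mk = np.array([dist[s, Nk[s]].mean() for s in range(n)])
    # Definition 3: N_mk(s) keeps only the neighbors within mk-distance(s)
    Nmk = [[o for o in Nk[s] if dist[s, o] <= mk[s]] for s in range(n)]

    # Definitions 4 and 5: reachability distance and local reachability density
    lrd = np.empty(n)
    for s in range(n):
        reach = [max(mk[o], dist[s, o]) for o in Nmk[s]]
        lrd[s] = 1.0 / (sum(reach) / len(reach))

    # Definition 6: LOF_mk(s) is the mean ratio of the neighbors' lrd to lrd(s)
    return np.array([np.mean([lrd[o] / lrd[s] for o in Nmk[s]]) for s in range(n)])

An object whose LOFmk value exceeds the empirical threshold of 2.0 used in this section would then be flagged as outlying.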
From the steps above, it is clear that the LOF value of each object depends on its local reachability density and on its k-nearest neighbors. When objects are added, deleted, or updated, the LOF values of part of the related objects are affected. In a dynamically growing dataset, frequently updating the LOF values of all objects would cost a great deal of time and space; however, only part of the related objects are influenced by a change of the dataset, so we only need to find the related objects whose LOF values have changed and recalculate their LOF values. Assume that the newly added object is p. Following Definitions 1–6, we define the three levels of influenced objects as follows:
Definition 7. 
The first-level influenced objects. Given a newly added object p, the first-level influenced objects of p comprise every object x whose distance from p is not greater than k-distance(x):
F_1(p) = \{\, x \mid (x \in D \setminus \{p\}) \wedge d(x, p) \le k\text{-}distance(x) \,\}
The new object p changes Nk(x), which in turn changes Nmk(x), lrdmk(x) and LOFmk(x).
Definition 8. 
The second-level influenced objects. The second-level influenced objects of p comprise every object y, not already in F1(p), whose distance from some object x in F1(p) is not greater than mk-distance(y):
F_2(p) = \{\, y \mid (y \in D \setminus F_1(p)) \wedge (\exists x \in F_1(p))\; d(y, x) \le mk\text{-}distance(y) \,\}
For these second-level influenced objects, Nmk(y) remains unchanged, but lrdmk(y) and LOFmk(y) must be recalculated because mk-distance(x) has changed.
Definition 9. 
The third-level influenced objects. The third-level influenced objects of p comprise every object z, not in F1(p) or F2(p), whose distance from some object y in F2(p) is not greater than mk-distance(z):
F_3(p) = \{\, z \mid (z \in D \setminus \{ F_1(p) \cup F_2(p) \}) \wedge (\exists y \in F_2(p))\; d(z, y) \le mk\text{-}distance(z) \,\}
For the third-level influenced objects, only LOFmk(z) changes. Based on the above analysis, when object p is added, only these three levels of influenced objects need to recalculate their LOFmk values; the LOFmk values of all other objects remain unchanged.
Definition 10. 
The set of influenced objects whose LOFmk values must be recalculated is:
F = F_1(p) \cup F_2(p) \cup F_3(p)
In the dynamic-LOF algorithm, we first obtain the k-distance neighborhood Nk, the mk-distance neighborhood Nmk, the local reachability density lrdmk and the local outlier factor LOFmk of all objects in dataset D. We put each new object into the knowledge base and find, in turn, the three levels of influenced objects based on the updated knowledge base. We then update Nk, Nmk, lrdmk and LOFmk of these objects, while the LOFmk values of all other objects remain unchanged. Finally, the LOFmk value of each incoming new object is calculated against the continuously updated knowledge base: if it is smaller than a given threshold (an empirical value of 2.0), the object is normal; otherwise it is outlying. Algorithm 1 describes the process of finding the first-level influenced objects after adding a new object p to dataset D.
Algorithm 1. Find First Level Objects (D, F1(p), p).
0: Initialize F1(p)
1: for each object x in D with d(x, p) ≤ k-distance(x) do // the new object p lies in the k-distance neighborhood of x
2:   add x to F1(p); // construct the set of first-level influenced objects
3: end for
4: add p to D
5: add p to F1(p) // p itself is also contained in F1(p)
F1(p) denotes the set of first-level influenced objects (including p). If p falls in the k-distance neighborhood of object x, then x is a first-level influenced object and is put into F1(p). In the end, the values of Nk, Nmk, lrdmk and LOFmk must be recalculated for every object in F1(p), and for p itself, so p is also put into F1(p). Algorithms 2 and 3 describe the construction of F2(p) and F3(p).
Algorithm 2. Find Second Level Objects (D, F2(p), F1(p)).
0: Initialize F2(p)
1: for all objects x in F1(p)\{p} do
2:   for all objects y in D\F1(p) such that x ∈ Nmk(y) do
3:     add y to F2(p); // construct the set of second-level influenced objects
4:   end for
5: end for
Algorithm 3. Find Third Level Objects (D, F3(p), F2(p)).
0: Initialize F3(p)
1: for all objects y in F2(p) do
2:   for all objects z in D\{F1(p) ∪ F2(p)} such that y ∈ Nmk(z) do
3:     add z to F3(p); // construct the set of third-level influenced objects
4:   end for
5: end for
Algorithm 4 describes the update process of Nk(x) and Nmk(x), where x is a first-level influenced object.
Algorithm 4. Update K-Distance (F1(p)).
0: for all objects x in F1(p)\{p} do
1:   if d(x, p) = k-distance(x) then
2:     add p to Nk(x);
3:   else if d(x, p) < k-distance(x) and there are fewer than (k − 1) objects in Nk(x) then
4:     add p to Nk(x);
5:   else if d(x, p) < k-distance(x) and there are exactly (k − 1) objects in Nk(x) then
6:     remove the farthest neighbor in Nk(x);
7:     add p to Nk(x);
8:     recalculate k-distance(x);
9:     recalculate mk-distance(x);
10:    recalculate Nmk(x);
11:  else break;
12:  end if
13: end for
The influence on Nk(x) differs according to the position at which p is inserted. There are three cases:
Situation 1. In Figure 1a, d(x, p) = k-distance(x); that is, p falls on the boundary of the k-distance neighborhood of object x. Put p into Nk(x) directly.
Situation 2. In Figure 1b, d(x, p) < k-distance(x); that is, p falls within the circle of the k-distance neighborhood of x. If there are fewer than (k − 1) objects within the circle, put p into Nk(x).
Situation 3. In Figure 1c, d(x, p) < k-distance(x), so p falls within the circle of the k-distance neighborhood of x. If there are exactly (k − 1) objects within the circle, first remove the farthest neighbor in Nk(x), then put p into Nk(x) and recalculate k-distance(x).
After Nk(x) and k-distance(x) have been updated, Nmk(x) and mk-distance(x) can also be recalculated.
Figure 1. (a) Situation 1; (b) Situation 2; (c) Situation 3.
Algorithm 5 describes the updating process of lrdmk(x) and lrdmk(y), where x is a first-level influenced object and y is a second-level influenced object. Algorithm 6 describes the updating process of LOFmk(x), LOFmk(y) and LOFmk(z), where x, y and z are first-, second- and third-level influenced objects, respectively.
Algorithm 5. Update LRD (F1(p), F2(p)).
0: for all objects x in F1(p) do
1:   recalculate lrdmk(x);
2: end for
3: for all objects y in F2(p) do
4:   recalculate lrdmk(y);
5: end for
Algorithm 6. Update LOF (F1(p), F2(p), F3(p)).
0: for all objects x in F1(p) do
1:   recalculate LOFmk(x);
2: end for
3: for all objects y in F2(p) do
4:   recalculate LOFmk(y);
5: end for
6: for all objects z in F3(p) do
7:   recalculate LOFmk(z);
8: end for

3.2. The Fuzzy Linear Regression Process

Given the possible anomalies in sensed readings and the uncertain relationships among them, it is hard for simple linear regression to capture these fuzzy relationships, which can lead to prediction errors between the regression values and the actual sensed values. It is better to characterize the output variable and the regression coefficients with fuzzy numbers.
For medical sensor readings, normal readings from several days ago tend to show similar patterns, whereas abnormal readings at adjacent times may deviate from each other. The physiological parameters of a patient at a given time are therefore closely associated with the historical data of adjacent times (within several hours) rather than with readings from several days ago. In other words, historical data from closer monitoring times have a greater impact on the estimated outputs.
Based on the considerations above, we propose a linear regression model based on trapezoidal fuzzy numbers to predict a more appropriate fuzzy value for the suspected reading. In this regression model, we take the minimum sum of regression errors as a new objective function and propose a method to fuzzify the sensor data using the expected value of a trapezoidal fuzzy number. In addition, the model gives adequate consideration to the different impacts of different historical sensor data. By minimizing the sum of regression errors and fuzzifying the readings, we achieve more precise estimated outputs.
Construct the following fuzzy linear function:
\tilde{Y}_j = \tilde{A}_0 + \tilde{A}_1 x_{j1} + \tilde{A}_2 x_{j2} + \cdots + \tilde{A}_n x_{jn}
where n is the number of independent variables and j indexes the data vector sensed at time Tj that is involved in the regression modeling. In Equation (9), the prediction value Ỹ_j and the regression coefficients Ã_i (i = 0, 1, …, n) are fuzzy values, and x_ji is the i-th measured real number of the j-th vector. Ỹ_j and Ã_i are defined as trapezoidal fuzzy numbers, so Ỹ_j = (y_aj, y_bj, y_lj, y_rj) and Ã_i = (a_i, b_i, l_i, r_i). The membership function of Ỹ_j is defined as follows:
\mu_y(y_j) = \begin{cases} \dfrac{y_j - y_{aj} + y_{lj}}{y_{lj}}, & y_{aj} - y_{lj} < y_j < y_{aj} \\ 1, & y_{aj} \le y_j \le y_{bj} \\ \dfrac{y_{bj} + y_{rj} - y_j}{y_{rj}}, & y_{bj} < y_j \le y_{bj} + y_{rj} \\ 0, & \text{otherwise} \end{cases}
where:
y_{aj} = \sum_{i=1}^{n} a_i x_{ji} + a_0, \qquad y_{bj} = \sum_{i=1}^{n} b_i x_{ji} + b_0, \qquad y_{lj} = \sum_{i=1}^{n} l_i \left| x_{ji} \right| + l_0, \qquad y_{rj} = \sum_{i=1}^{n} r_i \left| x_{ji} \right| + r_0
The historical sensed readings are real numbers and must be fuzzified in order to calculate the corresponding fuzzy regression coefficients Ã_i (i = 0, 1, …, n). We construct an optimized prediction model over trapezoidal fuzzy numbers so that the prediction value is closer to the true value. To fuzzify the historical values, we introduce the fuzzy number expectation, a real number that reflects the value of a fuzzy number in the average sense. The expectation of a trapezoidal fuzzy number [16] is:
E_{\tilde{A}} = \frac{2a + 2b - l + r}{4}
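For example, a trapezoidal fuzzy number Ã = (a, b, l, r) with core [a, b], left spread l and right spread r is, under this definition, summarized by the mean of its four characteristic points (a − l), a, b and (b + r). A one-line helper (our own, purely for illustration) makes this explicit:

def trapezoid_expectation(a, b, l, r):
    # E(Ã) = (2a + 2b - l + r) / 4, e.g. trapezoid_expectation(70, 80, 4, 8) == 76.0
    return (2 * a + 2 * b - l + r) / 4.0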
We construct u sets of measured data to calculate the fuzzy coefficients. As physiological readings are cyclical, u is the total number of readings from which faulty data are picked out within one period. Assume that there are m groups of measured samples in total. The following formula represents the m-th sample group in one period:
\begin{cases} Y_{1m},\; x_{11}, x_{12}, \ldots, x_{1n} \\ Y_{2m},\; x_{21}, x_{22}, \ldots, x_{2n} \\ \qquad \vdots \\ Y_{um},\; x_{u1}, x_{u2}, \ldots, x_{un} \end{cases}
where u is the number of samples in one group and n is the number of independent variables. For the independent variables x_ji (i = 1, 2, …, n; j = 1, 2, …, u) in the m-th group, we keep the precise real values unchanged and use them to calculate the fuzzy coefficients. For the output variables Y_jh (j = 1, 2, …, u; h = 1, 2, …, m), we regard the historical measured data at the same instant but in different groups as repeated measurements. Let y_bj and y_aj be the maximum and minimum of Y_jh (h = 1, 2, …, m), respectively:
y_{bj} = \max\{ Y_{j1}, Y_{j2}, Y_{j3}, \ldots, Y_{jm} \}, \qquad y_{aj} = \min\{ Y_{j1}, Y_{j2}, Y_{j3}, \ldots, Y_{jm} \}
The highest and lowest dependent variables in the samples are taken as the parameters b and a, so only the parameters y_lj and y_rj of Ỹ_jm remain to be determined; the output variables Ỹ_jm (j = 1, 2, …, u) have thus been fuzzified:
\begin{cases} \tilde{Y}_{1m},\; x_{11}, x_{12}, \ldots, x_{1n} \\ \tilde{Y}_{2m},\; x_{21}, x_{22}, \ldots, x_{2n} \\ \qquad \vdots \\ \tilde{Y}_{um},\; x_{u1}, x_{u2}, \ldots, x_{un} \end{cases}
The least squares method finds the best-fitting curve to a given set of points by minimizing the sum of the squared residuals of the points from the curve. We borrow this idea to minimize the sum of differences of evaluation precision R, which is defined as follows:
R = \sum_{j=1}^{u} \left( \frac{E_{\tilde{Y}_{jm}} - Y_{jm}}{Y_{jm}} \right)^2
where Y_jm is the precise (crisp) value of the j-th sample in the m-th group. The expectation E_Ỹjm is calculated as follows:
E_{\tilde{Y}_{jm}} = \frac{2 y_{bj} + 2 y_{aj} - y_{lj} + y_{rj}}{4}
In addition, for each tuple of sensor data, the fuzzy linear regression requires the estimated fuzzy number to contain the observed data with at least the degree of fitting λ_j (0 ≤ λ_j ≤ 1), a constant chosen by the decision-maker:
\mu_y(y_j) \ge \lambda_j
Namely:
\begin{cases} \dfrac{y_j - y_{aj} + y_{lj}}{y_{lj}} \ge \lambda_j, & y_{aj} - y_{lj} < y_j < y_{aj} \\[6pt] \dfrac{y_{bj} + y_{rj} - y_j}{y_{rj}} \ge \lambda_j, & y_{bj} < y_j \le y_{bj} + y_{rj} \end{cases}
Different readings are assigned different membership thresholds λ_j. In general, the closer a reading is to the prediction instant, the more important it is and the higher the corresponding membership. The parameters are obtained by solving:
\min\; R = \sum_{j=1}^{u} \left( \frac{E_{\tilde{Y}_{jm}} - Y_{jm}}{Y_{jm}} \right)^2
\text{s.t.}\quad \tilde{Y}_{jm} = ( y_{aj}, y_{bj}, y_{lj}, y_{rj} )
y_{aj} = \min\{ Y_{j1}, Y_{j2}, Y_{j3}, \ldots, Y_{jm} \}, \qquad y_{bj} = \max\{ Y_{j1}, Y_{j2}, Y_{j3}, \ldots, Y_{jm} \}
y_{aj} = \sum_{i=1}^{n} a_i x_{ji} + a_0, \qquad y_{bj} = \sum_{i=1}^{n} b_i x_{ji} + b_0
y_{lj} = \sum_{i=1}^{n} l_i \left| x_{ji} \right| + l_0, \qquad y_{rj} = \sum_{i=1}^{n} r_i \left| x_{ji} \right| + r_0
\left( \sum_{i=1}^{n} a_i x_{ji} + a_0 \right) - (1 - \lambda_j) \left( \sum_{i=1}^{n} l_i \left| x_{ji} \right| + l_0 \right) \le y_j
\left( \sum_{i=1}^{n} b_i x_{ji} + b_0 \right) + (1 - \lambda_j) \left( \sum_{i=1}^{n} r_i \left| x_{ji} \right| + r_0 \right) \ge y_j
y_{lj} > 0, \quad y_{rj} > 0, \quad j = 1, 2, \ldots, u, \quad 0 \le \lambda_j \le 1
The expectation E_Ỹjm can be expressed linearly in y_lj and y_rj. The objective function R is quadratic and all of the constraints are linear, so this is a nonlinear programming problem with a single objective function and several linear constraints. To find the optimal solution and reduce the computation cost, we first transform this constrained nonlinear programming problem into an unconstrained one.
Considering the restrictions in this nonlinear programming model, Equations (28)–(33) can be rewritten as follows:
h_1 = y_{aj} - \left( \sum_{i=1}^{n} a_i x_{ji} + a_0 \right), \qquad h_2 = y_{bj} - \left( \sum_{i=1}^{n} b_i x_{ji} + b_0 \right)
h_3 = y_{lj} - \left( \sum_{i=1}^{n} l_i \left| x_{ji} \right| + l_0 \right), \qquad h_4 = y_{rj} - \left( \sum_{i=1}^{n} r_i \left| x_{ji} \right| + r_0 \right)
h_i = 0, \quad i = 1, 2, 3, 4
g_1 = y_j - \left( \left( \sum_{i=1}^{n} a_i x_{ji} + a_0 \right) - (1 - \lambda_j) \left( \sum_{i=1}^{n} l_i \left| x_{ji} \right| + l_0 \right) \right)
g_2 = \left( \sum_{i=1}^{n} b_i x_{ji} + b_0 \right) + (1 - \lambda_j) \left( \sum_{i=1}^{n} r_i \left| x_{ji} \right| + r_0 \right) - y_j
g_i \ge 0, \quad i = 1, 2
Then we construct the following unconstrained nonlinear programming model, which is equivalent to the original constrained model:
\sum_{j=1}^{u} \left( \frac{E_{\tilde{Y}_{jm}} - Y_{jm}}{Y_{jm}} \right)^2 + \sum_{i=1}^{4} \left| h_i \right| + \sum_{i=1}^{2} \left| \min(0, g_i) \right|
Many traditional methods can find the optimal solution of this model, such as the Lagrange multiplier method or the conjugate gradient method. Heuristic methods such as genetic algorithms (GA) [17,18,19] also play an important role here, because they deal efficiently with nonlinear programming problems that have high complexity and many parameters.
We use a genetic algorithm to obtain the optimal solution of Equation (42), which ensures the minimal sum of differences R while keeping each membership of the prediction variable no lower than λ_j. We thereby obtain the fuzzy representation of the historical sensed readings and the least-squares estimates of the fuzzy regression coefficients Ã_i (i = 0, 1, …, n). After the fuzzy coefficients are obtained, the normal data are fed into the regression, and we finally obtain the fuzzy prediction value for the given sensor.
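As a minimal sketch of this step (our own illustration rather than the authors' code, with the crisp observed outputs standing in for Y_jm and with y_aj, y_bj, y_lj, y_rj tied directly to their linear forms so that h3 = h4 vanish), the penalized objective of Equation (42) can be written as a plain function of the stacked coefficient vector and handed to any global optimizer, for example a GA library or scipy.optimize.differential_evolution:

import numpy as np

def penalized_objective(theta, X, y_min, y_max, y_obs, lam):
    # theta stacks the trapezoidal coefficients (a_i, b_i, l_i, r_i), i = 0..n.
    # X: (u, n) crisp inputs; y_min/y_max: per-sample min/max over the m groups;
    # y_obs: crisp observed outputs; lam: per-sample degrees of fitting lambda_j.
    a, b, l, r = np.split(np.asarray(theta), 4)
    Xa = np.hstack([np.ones((X.shape[0], 1)), X])      # prepend the constant term
    y_a, y_b = Xa @ a, Xa @ b
    y_l, y_r = np.abs(Xa) @ l, np.abs(Xa) @ r

    E = (2 * y_b + 2 * y_a - y_l + y_r) / 4.0          # expectation of each fuzzy output
    R = np.sum(((E - y_obs) / y_obs) ** 2)             # sum of relative squared errors

    h = np.concatenate([y_a - y_min, y_b - y_max])     # equality constraints (h_i = 0)
    g = np.concatenate([y_obs - (y_a - (1 - lam) * y_l),
                        (y_b + (1 - lam) * y_r) - y_obs])   # membership constraints (g_i >= 0)
    return R + np.sum(np.abs(h)) + np.sum(np.abs(np.minimum(0.0, g)))

# e.g. differential_evolution(penalized_objective, bounds,
#                             args=(X, y_min, y_max, y_obs, lam))
# with the bounds of the l- and r-blocks kept strictly positive so that y_lj, y_rj > 0.

A genetic algorithm minimizing the same function, as described above, is a drop-in replacement for the optimizer.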

3.3. Fault Judgment Criterion

We define the following fault judgment criterion. As shown in Table 1, the black trapezoid indicates the fuzzy prediction result Ỹ_p. λ is the similarity membership degree, with 0.5 < λ < 1; (a − l) and (b + r) are the lower and upper bounds of the fuzzy number, and a and b are the values corresponding to the membership degree λ. The red line indicates the normal interval of the corresponding physiological parameter, with lower bound low and upper bound high. y denotes the actual sensed reading.
As shown in Table 1, whether a sensor is good or faulty depends on the relative position of the fuzzy prediction result with respect to the normal interval of the corresponding physiological parameter, and on the interval in which the actual sensed reading lies. The undecided state indicates that the state of the corresponding sensor cannot be determined in this detection round, because the physiological indicators of a patient may not change synchronously; it may be caused by a detection delay or a very slow change of the physiological parameter. We continue to examine undecided nodes in successive detection rounds; if the undecided state of the same sensor persists over the following several rounds, the sensor is regarded as faulty. The undecided state in the judgment rule accommodates the out-of-sync status of different readings.
Table 1. The relationships between Ỹ_p and y. For each relative position of the fuzzy prediction Ỹ_p (black trapezoid, shown as a diagram in the original table) with respect to the normal interval (red line), the verdict depends on whether the actual sensed reading y lies in the normal interval [low, high] or in the anomalous interval outside it.
Case 1 (diagram i001):
Normal interval: low ≤ y ≤ high, fault.
Anomalous interval: y < low, fault; high < y < a, undecided; a ≤ y ≤ b, good; b < y ≤ b + r, undecided; b + r < y, fault.
Case 2 (diagram i002):
Normal interval: low ≤ y < a − l, fault; a − l ≤ y ≤ high, undecided.
Anomalous interval: y < low, fault; high < y < a, undecided; a ≤ y ≤ b, good; b < y ≤ b + r, undecided; b + r < y, fault.
Case 3 (diagram i003):
Normal interval: low ≤ y ≤ high, undecided.
Anomalous interval: y < a − l, fault; a − l ≤ y < low, undecided; high < y < a, undecided; a ≤ y ≤ b, good; b < y ≤ b + r, undecided; b + r < y, fault.
Case 4 (diagram i004):
Normal interval: low ≤ y < a − l, fault; a − l ≤ y < a, undecided; a ≤ y ≤ high, good.
Anomalous interval: y < low, fault; high < y ≤ b, good; b < y ≤ b + r, undecided; b + r < y, fault.
Case 5 (diagram i005):
Normal interval: low ≤ y < a, undecided; a ≤ y ≤ high, good.
Anomalous interval: y < a − l, fault; a − l ≤ y < low, undecided; high < y ≤ b, good; b < y ≤ b + r, undecided; b + r < y, fault.
Case 6 (diagram i006):
Normal interval: low ≤ y ≤ high, good.
Anomalous interval: y < a − l, fault; a − l ≤ y < a, undecided; a ≤ y < low, good; high < y ≤ b, good; b < y ≤ b + r, undecided; b + r < y, fault.
Case 7 (diagram i007):
Normal interval: low ≤ y < a − l, fault; a − l ≤ y < a, undecided; a ≤ y ≤ b, good; b < y ≤ high, undecided.
Anomalous interval: y < low, fault; high < y ≤ b + r, undecided; b + r < y, fault.
Case 8 (diagram i008):
Normal interval: low ≤ y < a, undecided; a ≤ y ≤ b, good; b < y ≤ high, undecided.
Anomalous interval: y < a − l, fault; a − l ≤ y < low, undecided; high < y ≤ b + r, undecided; b + r < y, fault.
Case 9 (diagram i009):
Normal interval: low ≤ y ≤ b, good; b < y ≤ high, undecided.
Anomalous interval: y < a − l, fault; a − l ≤ y < a, undecided; a ≤ y < low, good; high < y ≤ b + r, undecided; b + r < y, fault.
Case 10 (diagram i010):
Normal interval: low ≤ y ≤ high, undecided.
Anomalous interval: y < a − l, fault; a − l ≤ y < a, undecided; a ≤ y ≤ b, good; b < y < low, undecided; high < y ≤ b + r, undecided; b + r < y, fault.
Case 11 (diagram i011):
Normal interval: low ≤ y < a − l, fault; a − l ≤ y < a, undecided; a ≤ y ≤ b, good; b < y ≤ b + r, undecided; b + r < y ≤ high, fault.
Anomalous interval: y < low, fault; high < y, fault.
Case 12 (diagram i012):
Normal interval: low ≤ y < a, undecided; a ≤ y ≤ b, good; b < y ≤ b + r, undecided; b + r < y ≤ high, fault.
Anomalous interval: y < a − l, fault; a − l ≤ y < low, undecided; high < y, fault.
Case 13 (diagram i013):
Normal interval: low ≤ y ≤ b, good; b < y ≤ b + r, undecided; b + r < y ≤ high, fault.
Anomalous interval: y < a − l, fault; a − l ≤ y < a, undecided; a ≤ y < low, good; high < y, fault.
Case 14 (diagram i014):
Normal interval: low ≤ y ≤ b + r, undecided; b + r < y ≤ high, fault.
Anomalous interval: y < a − l, fault; a − l ≤ y < a, undecided; a ≤ y ≤ b, good; b < y < low, undecided; high < y, fault.
Case 15 (diagram i015):
Normal interval: low ≤ y ≤ high, fault.
Anomalous interval: y < a − l, fault; a − l ≤ y < a, undecided; a ≤ y ≤ b, good; b < y < low, undecided; high < y, fault.
We also find that if the actual sensed reading lies in the interval corresponding to the similarity membership degree λ, i.e., in [a, b], the sensor is regarded as good. If the actual reading is greater than the larger of (b + r) and high, or less than the smaller of (a − l) and low, the sensor is faulty. Otherwise, the sensor is considered undecided.
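This summary rule can be sketched as follows (our own illustration; the 15 cases of Table 1 refine it further, and the number of rounds after which a persistent undecided state becomes a fault is a design choice that the paper does not fix):

def judge(y, a, b, l, r, low, high):
    # Verdict from the fuzzy prediction (a, b, l, r) and the normal interval [low, high].
    if a <= y <= b:                                  # inside the lambda-level core: good
        return "good"
    if y > max(b + r, high) or y < min(a - l, low):  # beyond both the fuzzy support and the normal interval: fault
        return "fault"
    return "undecided"                               # re-examined in later detection rounds

def update_state(recent, verdict, patience=3):
    # Keep the last `patience` verdicts; a sensor that stays undecided that long is declared faulty.
    recent = (recent + [verdict])[-patience:]
    if len(recent) == patience and all(v == "undecided" for v in recent):
        return recent, "fault"
    return recent, verdict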

4. Simulation Results

In this section, we analyze the performance of DFD-M against typical algorithms. We obtain physiological readings, including heart rate (HR), respiration rate (RR), body temperature (BT) and oxygen saturation (SpO2), from medical sensors. HR fluctuates from 40 to 140 bpm, which may be caused by movement or physiological abnormalities. RR fluctuates around 20 breaths per minute and is approximately proportional to HR. SpO2 fluctuates from 90 to 100 and only rarely drops below 90. With few exceptions, BT remains stable around 37 °C. To simulate sensor failure, we artificially insert faulty data into the monitoring data.
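The paper does not specify the exact fault model used for injection, so the following sketch only shows one plausible way of corrupting a monitoring trace with stuck-at and spike faults; the fault rate and spike magnitude are our own assumptions:

import numpy as np

def inject_faults(readings, fault_rate=0.05, spike=30.0, seed=0):
    # Return a corrupted copy of a 1-D reading series and the injected fault positions.
    rng = np.random.default_rng(seed)
    faulty = np.asarray(readings, dtype=float).copy()
    idx = rng.choice(len(faulty), size=int(fault_rate * len(faulty)), replace=False)
    for i in idx:
        if rng.random() < 0.5:
            faulty[i] = faulty[i - 1] if i > 0 else faulty[i]   # stuck-at: repeat the last value
        else:
            faulty[i] += rng.choice([-1.0, 1.0]) * spike        # spike: large offset (e.g. on an HR trace)
    return faulty, idx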
All the algorithms are implemented in Matlab R2010a. Our simulations are divided into three parts: (1) the D-LOF algorithm is compared with kernel-LOF (Ker-LOF) [20] and the original LOF [15] to verify the detection accuracy rate and time complexity; (2) fuzzy linear regression based on trapezoidal fuzzy numbers (FLR-Tra) is compared with simple linear regression (SLR) and fuzzy linear regression based on triangular fuzzy numbers (FLR-Tri) to evaluate the estimation error; (3) the overall detection performance of DFD-M is compared with that of the fault detection algorithm (DA-J48) proposed in [11] and with variants of our algorithm in which FLR-Tri or SLR replaces the regression model.

4.1. Simulations for LOF Values

In this part, we report the detection accuracy rate and the time complexity. To simulate the dynamic growth of the dataset, 83% of the instances form the initial knowledge base and 17% are treated as newly added objects. In Figure 2, the simulation settings are k = 10 (in the notation k-distance) and a maximal dataset size of 5 × 10^4 with 15 outliers. With the LOFmk threshold set to 2.0, the LOF values of most instances fluctuate around 1.0, and the 15 instances whose LOF values exceed 2.0 are all detected.
Figure 2. LOF values of objects with 15 outliers (k = 10).
Since the parameter k (in the notation k-distance) influences the performance of the algorithms, Figure 3 shows the detection rate for outliers with different k when the dataset size is 2 × 10^4 and the number of outliers is 40. With k = 20, D-LOF achieves an almost perfect detection rate of 99.1%, while Ker-LOF achieves 98.7% and the original LOF 95.5%. For other values of k, the detection rates decline accordingly, but our algorithm still outperforms the others.
Figure 3. Detection rate for outliers with different k values.
To further analyze the influence of the value of k and of the dataset size, we use two-dimensional datasets whose sizes are 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5 and 5 (× 10^4), containing 25, 35, 45, 55, 65, 75, 85, 95, 105 and 115 randomly distributed outliers, respectively. Figure 4 shows the detection rates for outliers with different k as the dataset size increases. The combinations with the highest detection rates are 1 × 10^4 with k = 10, 2 × 10^4 with k = 20, 3 × 10^4 with k = 20, 4 × 10^4 with k = 30, and 5 × 10^4 with k = 40, with detection rates of 99.8%, 99.1%, 97.7%, 96.3% and 94.2%, respectively. For example, the best rates are 2.3% and 1.8% higher than the worst rates for 4 × 10^4 with k = 30 and 5 × 10^4 with k = 40, respectively.
Figure 4. Detection rate for outliers with different k.
Figure 5 shows the running times of D-LOF, Ker-LOF and the original LOF as we vary the parameter k, which determines the number of neighbors in the domain of an object. The time taken by all three algorithms increases with the dataset size, but D-LOF has a much lower running time than the others. When k = 10 and the dataset size is 5 × 10^4 with 105 outliers, the running time of D-LOF is 56 s, which is 53.3% lower than that of the original LOF and 49.2% lower than that of Ker-LOF. For the same dataset size and k = 40, the running time of D-LOF is 81 s, which is 61.9% lower than that of the original LOF and 56.7% lower than that of Ker-LOF. The time complexity is sharply reduced because only the three levels of influenced objects update their LOF values. We also find that, for a fixed dataset size, the smaller the value of k, the faster the outliers are detected.
Figure 6 shows the detection rate of D-LOF compared with Ker-LOF and the original LOF when k is set to the optimized values identified in Figures 3 and 4. For small datasets, all three algorithms achieve almost 100% detection; as the dataset grows, all the detection rates slowly decline. Because D-LOF is more sensitive to outliers, it clearly performs better than the other two and achieves a higher detection rate for every dataset size. Even when the dataset size is 5 × 10^4 with 115 outliers, D-LOF detects 109 outliers, a detection rate of 94.8%, compared with 87.8% for the original LOF and 92.4% for Ker-LOF.
Figure 5. Run time with different k.
Figure 6. Detection rate with different optimized k.

4.2. Simulations for Evaluation Error R

In this part, we compare the evaluation precision of FLR-Tra with that of SLR and FLR-Tri. Figure 7 shows the sum of differences of evaluation precision R (defined in Section 3.2) for the three regression models. The curves first rise and then decline as the number of historical sample groups used in the regression increases, because in the initial stage an increase in the number of historical samples leads to a significant increase in R. With 20 historical sample groups, the three regression models achieve similar values of 0.0175, 0.0168 and 0.0162 for SLR, FLR-Tri and FLR-Tra, respectively. FLR-Tra reaches its peak when the number of samples is 80. With 100 historical sample groups, R for FLR-Tra falls to 0.0143, which is 39.1% lower than that of FLR-Tri and 70.8% lower than that of SLR. That is, as the number of historical samples grows, the fuzzified historical values approach their actual values, the regression becomes more accurate, and R declines.
Figure 7. The sum of differences of evaluation precision R.
In Figure 8, the mean of the sum of differences of evaluation precision decreases markedly for FLR-Tra and FLR-Tri, while that of SLR rises in the initial stage and then declines sharply. FLR-Tra achieves the best performance on this mean value.
Figure 8. The mean of the sum of differences of evaluation precision R.

4.3. Simulations for Whole Detection Performances of DFD-M

Alarms are reported when readings exceed their normal ranges, and the main task of DFD-M is to distinguish data faults from genuine alarms. To simplify the visualization, only the RR and HR readings are used in Figure 9, which shows the alarm reporting scene without fault detection: 19 alarms in total are raised (vertical red lines), reporting exceptions for HR and RR. Figure 10 shows the alarms reported by DFD-M: only nine alarms remain, the other ten having been excluded as being caused by sensor data faults.
Figure 9. Undetected alarms.
Figure 10. Alarms by DFD-M.
To further evaluate the performance of DFD-M, we add two new types of readings: blood glucose and blood pressure. As the data size increases, false alarms inevitably occur. Here we mainly analyze the detection accuracy rate and the false alarm rate, the latter being the ratio of the number of real readings misjudged as faulty to the total number of readings. Figure 11 shows the detection accuracy rate for different data sizes and data dimensions: the smaller the dataset size and dimension, the higher the detection rate, with the lowest accuracy being 84.6% for 6 dimensions and a data size of 5 × 10^4. Figure 12 shows the average detection accuracy rate as the number of dimensions increases. DFD-M achieves almost 100% with two data attributes, and the accuracy rates of all the algorithms decline slightly as dimensions are added. With six dimensions, the average accuracy rate of DFD-M remains high at 97.2%. When FLR-Tri or SLR replaces the regression model in our algorithm, the detection accuracy with six dimensions is 95.1% and 91.3%, respectively, while DA-J48 has the lowest detection rate at 90.5%.
Figure 11. Detection accuracy rate for different data sizes of DFD-M.
Figure 12. Detection accuracy rate.
Figure 13 shows the false alarm rate for different data sizes and data dimensions. The average false alarm rates of DFD-M are 2.33%, 3.06% and 5.58% for the different data sizes, while those of DA-J48 are 4.3%, 6.26% and 9.8%, respectively.
Figure 13. False alarm rate for different data sizes.
Figure 14. False alarm rate.
Even when the number of dimensions increases to six and the dataset size is 5 × 10^4, DFD-M still has a lower false alarm rate of 11.2%, which is 45.3% lower than that of DA-J48. Figure 14 shows the average false alarm rates of the four algorithms, which increase slowly as dimensions are added. With five dimensions, the false alarm rate of DFD-M is only 3.2%, compared with 4.4% for FLR-Tri, 7.1% for SLR and 5.5% for DA-J48. With six dimensions, the false alarm rate of DFD-M is 4.9%, which is 18.3% lower than that of FLR-Tri, 55.5% lower than that of SLR and 50.4% lower than that of DA-J48. These experimental results show that DFD-M outperforms the other algorithms.

5. Conclusions

This paper proposes a data fault detection mechanism for medical sensors called DFD-M. It first identifies outlying data vectors, then uses an improved fuzzy linear regression model to predict a reasonable range for the outlying data, and finally analyzes the relationships between the fuzzy prediction results and the normal intervals using a novel fault state judgment criterion. The simulations demonstrate that DFD-M achieves a higher detection accuracy rate and a lower false alarm rate than other similar algorithms.

Acknowledgments

This work was partly supported by NSFC (61401033), Ph.D. Programs Foundation of Ministry of Education of China (No. 20110005110011), Fundamental Research Funds for the Central Universities (No. 2014RC1102), and Beijing Higher Education Young Elite Teacher Project (YETP0474).

Author Contributions

Yang Yang and Zhipeng Gao were responsible for researching and designing the mechanisms presented in the paper; Qian Liu was involved in simulating and testing the algorithms; Xuesong Qiu and Luoming Meng edited the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Berset, T.; Romero, I.; Young, A.; Penders, J. Robust heart rhythm calculation and respiration rate estimation in ambulatory ECG monitoring. In Proceedings of the IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI), Hong Kong, China, 5–7 January 2012; pp. 400–403.
  2. Rotariu, C.; Manta, V. Wireless system for remote monitoring of oxygen saturation and heart rate. In Proceedings of the Federated Conference on Computer Science and Information Systems (FedCSIS), Wroclaw, Poland, 9–12 September 2012; pp. 193–196.
  3. Mahapatro, A.; Khilar, P.M. Online Fault Detection and Recovery in Body Sensor Networks. In Proceedings of the World Congress on Information and Communication Technologies (WICT), Mumbai, India, 11 December 2011; pp. 407–412.
  4. Kim, D.J.; Suk, M.H.; Prabhakaran, B. Fault Detection and Isolation in Motion Monitoring System. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), San Diego, CA, USA, 28 August 2012; pp. 5234–5237.
  5. Nguyen, L.; Su, S.; Nguyen, H.T. Effects of hyperglycemia on variability of RR, QT and corrected QT intervals in Type 1 diabetic patients. In Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 1819–1822.
  6. Ma, Q.; Liu, K.; Xiao, X.; Cao, Z.; Liu, Y. Link Scanner: Faulty Link Detection for Wireless Sensor Networks. In Proceedings of the INFOCOM, Turin, Italy, 14–19 April 2013; pp. 2688–2696.
  7. Jiang, P. A new method for node fault detection in wireless sensor networks. Sensors 2009, 9, 1282–1294. [Google Scholar] [PubMed]
  8. Ding, M.; Chen, D.; Xing, K.; Cheng, X. Localized fault-tolerant event boundary detection in sensor networks. In Proceedings of the INFOCOM, Miami, FL, USA, 13–17 March 2005; pp. 902–913.
  9. Krishnamachari, B.; Iyengar, S. Distributed Bayesian algorithms for fault-tolerant event region detection in wireless sensor networks. IEEE Trans. Comput. 2004, 53, 241–250. [Google Scholar] [CrossRef]
  10. Lee, M.H.; Choi, Y.H. Fault detection of wireless sensor networks. Comput. Commun. 2008, 31, 3469–3475. [Google Scholar] [CrossRef]
  11. Ruiz, L.B.; SiqueiraI, G.; Oliveira, L.B.; Wong, H.C.; Nogueira, J.M.S.; Loureiro, A.A.F. Fault management in event-driven wireless sensor networks. In Proceedings of the ACM International Symposium on Modeling, Analysis and Simulation of Wireless and Mobile Systems, Venice, Italy, 4–6 October 2004; pp. 149–156.
  12. Salem, O.; Guerassimov, A.; Mehaoua, A.; Marcus, A.; Furht, B. Sensors Fault and Patient Anomaly Detection and Classification in Medical Wireless Sensor Networks. In Proceedings of the International Conference on Communications (ICC), Budapest, Hungary, 9–13 June 2013; pp. 4373–4378.
  13. Salem, O.; Yaning, L.; Mehaoua, A. A Lightweight Anomaly Detection Framework for Medical Wireless Sensor Networks. In Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC), Shanghai, China, 7–10 April 2013; pp. 4358–4363.
  14. Miao, X.; Liu, K.; He, Y.; Liu, Y.; Papadias, D. Agnostic diagnosis: Discovering silent failures in wireless sensor networks. In Proceedings of the INFOCOM, Shanghai, China, 10–15 April 2011; pp. 1548–1556.
  15. Breunig, M.M.; Kriegel, H.-P.; Ng, R.T.; Sander, J. LOF: Identifying density-based local outliers. In Proceedings of the ACM SIGMOD International Conference on Management of Data, Dallas, TX, USA, 16 May 2000; pp. 93–104.
  16. Sener, Z.; Karsak, E.E. A fuzzy regression and optimization approach for setting target levels in software quality function deployment. Softw. Qual. Control 2010, 18, 323–339. [Google Scholar] [CrossRef]
  17. Krishnakumar, K. Micro-genetic algorithms for stationary and non-stationary function optimization. In Proceedings of the International Society for Optics and Photonics, Los Angeles, CA, USA, 14 January 1990; pp. 289–296.
  18. Goldberg, D.E.; Richardson, J. Genetic algorithms with sharing for multimodal function optimization. In Proceedings of the International Conference on Genetic Algorithms, Hillsdale, NJ, USA, 28–31 July 1987; pp. 41–49.
  19. Abd-El-Wahed, W.F.; Mousa, A.A.; El-Shorbagy, M.A. Integrating particle swarm optimization with genetic algorithms for solving nonlinear optimization problems. J. Comput. Appl. Math. 2011, 235, 1446–1453. [Google Scholar] [CrossRef]
  20. Liu, B.; Xiao, Y.; Yu, P.S.; Hao, Z.; Cao, L. An efficient approach for outlier detection with imperfect data labels. IEEE Trans. Knowl. Data Eng. 2014, 26, 1602–1616. [Google Scholar] [CrossRef]
