1. Introduction
Recognizing limb position, movement trajectories, and general human activity using inertial measurement units (IMUs) is a highly active field of research in pervasive computing [1,2,3,4]. Individual accelerometers, gyroscopes, and magnetometers, as well as body sensor networks, are increasingly being used in applications related to fitness, wellness [5,6,7,8], and health. From monitoring professional athletic performance [9,10,11] and tracking fitness goals [12], to applying diagnostic, symptom-monitoring, and rehabilitation protocols [13,14,15,16,17], wearable sensing devices are starting to cross the barrier from purely scientific research applications to consumers and clinical practice.
All IMU-based wearable sensing applications, to be successful outside a research setting, need to address two issues: robust performance and wearability. Although using high-quality materials can help achieve robust performance, handling wearability issues can be more elusive. Wearable devices must be easy to attach and keep on, and mounting them properly on the body part they were intended for should be highly intuitive, without deteriorating or affecting their performance characteristics. This is particularly important when the context of use of a wearable sensing device is medical. Health applications have a significantly lower tolerance for error than casual fitness and wellness applications, and justifiably so.
One method manufacturers use to address the issue of wearability is designing specific mounting accessories [18,19] for their products, ensuring that proper use of the sensing device depends as little as possible on the wearer. Another way to help with proper placement of the wearable sensing device on the body is providing detailed instructions for use, with graphical representations. All of these reduce the risk of improper placement and help comply with regulatory requirements; however, a significant probability remains that a user will wear the device misaligned or poorly attached, which is a threat both to the integrity of the device itself and, most importantly, to the data it records and, hence, its performance as a sensing apparatus. Specially designed accessories and passive controls, e.g., detailed installation guidelines, are useful but mitigate the problem only up to a certain extent. If the sensing device itself has particularities regarding its attachment, such as placement labels in a network of identical sensors, the risk of misplacement remains high.
The issue can be further mitigated using active controls, which would entail designing wearables that have minimal placement restrictions and, particularly regarding body sensor networks, avoiding hard-to-read and easy-to-misinterpret labels and/or complex calibration phases. To accomplish this, the individual sensing devices in a body sensor network should be placement-agnostic, which essentially means that any device of the network can be placed on any of the designated positions without affecting the performance of the network. In commercial or medical applications, where the detection of activity and motion characteristics must be conducted outside the bounds of a predefined set of allowed movements and without supervision, and, even worse, in settings where movement disorders could be present and further contaminate the user's motor patterns, the need for placement-agnostic sensing devices is even greater.
Methods exist that use machine learning algorithms to automatically detect the actual placement of the sensing devices during different activities [20]. However, the accuracy of those methods usually reaches only up to 90%. Although for a research paper this level of accuracy could be considered high, it is not acceptable for medical use. As already mentioned, the challenge is even greater when the devices are intended to be used by patients with movement disorders, such as Parkinson's disease (PD). These patients have abnormal movement patterns, both as a result of their symptoms, e.g., parkinsonian gait [21], and as a result of medication side effects, i.e., the so-called "chorea": dyskinetic, unintentional movements possibly affecting all limbs and the torso. Automatic identification of sensor placement should be optimized to compensate for such abnormal and random movements when the intended use of the body sensor network is to monitor patient activity and disease manifestations.
There exist online methods for the automatic detection of the placement of unlabeled sensing devices, i.e., methods that rely on algorithms executed on the sensing devices themselves, while they are being worn. These methods are usually used to adapt the sensors' mode of operation depending on the placement [20] and to optimize power consumption or sampling frequency. However, most of the time, these methods end up cropping the raw signal, making it unrecoverable in case of an error and misclassification. Of course, when the purpose of the sensing device is to detect events in real time, there is a risk and benefit trade-off. However, in the case of symptom quantification and monitoring of disease progression, where identification of events in real time is not required, collecting the raw signal at as high a resolution as possible, regardless of the sensor placement, and determining the placement during post-processing can be particularly beneficial, both in achieving high-accuracy placement detection and in extracting as much information as possible from the signal concerning the activity or movement patterns under investigation.
Algorithms for the identification of sensor body placement have been published before. In [20], the authors present a method to derive the location of an acceleration sensor based on the identification of walking regions within the signal, using a selection of predefined potential device positions. In [22], the authors also identify walking regions as a first step, but they use broader definitions of potential device positions, namely forearm, upper arm, head, thigh, shin, and waist. In [23], the authors used a body sensor network comprising 17 inertial sensors and designed a decision tree classifier to identify their positioning, using 6-s walking trials. In [24], the authors use a broad set of movements to train the classifier and define the potential classes as waist, right and left wrist, right and left arm, and right and left ankle. The movements are stand-to-sit, sit-to-lie, bend-to-grasp, kneeling right, looking back, turning clockwise, stepping forward and backward, and, finally, jumping. In [25], the authors present a method based on random forest classifiers to predict a mobile device's position on the body by analyzing acceleration data. The potential body positions are head, chest, upper arm, waist, forearm, thigh, and shin. The first step in their method is to identify the activity by considering two groups: static activities, i.e., standing, sitting, and lying, and dynamic activities, i.e., climbing, jumping, running, and walking. Finally, in [26], the authors take a different approach by allowing their on-body sensor position detection method to discover new positions, apart from those already supported by the algorithm.
In [27], we presented a simple and robust rule-based method for identifying on-body sensor position, embedded in a medical device, the PDMonitor®, which consists of five identical IMU-based sensing devices designed and developed to monitor patients with PD. The method consisted of a simple, rule-based algorithm to identify, during post-processing, the exact placement of the five sensing devices, given that they should be placed on both wrists, both shanks, and the waist. The system has specially designed mounting accessories and detailed instructions for using the wearables. However, the sensing devices themselves lack any markings or labels, allowing the patients to attach them in any of the predefined positions. Even if the sensing devices had labels, proper placement would not be guaranteed, because the device is meant to be used for unsupervised monitoring at home by PD patients with possible cognitive impairment. Therefore, proper attachment is assisted by the specially designed accessories, and placement identification happens in post-processing, after the recording session is over, when the user docks the devices into their proprietary base for charging and data uploading.
In this work, we further discuss the on-body sensor placement identification algorithm previously presented in [27] and describe how it works in cases where the user wears different combinations of the sensing devices and not all five, although this goes against the current intended use of the PDMonitor® system (London, UK). We also present an evaluation of the algorithm, using the system with 61 PD patients and 27 age-matched healthy control subjects. The results for the 88 recording sessions performed by the participants showed that the sensing device placed on the waist was identified correctly 87 times (99%), those on the legs 88 times (100%), the one on the left hand 86 times (98%), and, finally, the sensing device placed on the right hand 85 times (97%).
3. Data Collection
The device was worn by 61 patients and 27 age-matched healthy control subjects during a multi-site clinical study. The study sites were the Technische Universität Dresden in Germany, the University Hospital of Ioannina in Greece, and the Fondazione Ospedale San Camillo IRCCS in Venice, Italy. The study was approved by the Ethics Review Boards of all sites, with decision numbers EK 384092017, 30 November 2017 for Germany, 23535, 20 August 2017 and 22434, 2 September 2017 for Greece, and 81A/CESC, 23 January 2018 for Italy.
All subjects wore the device while inside the clinic, both performing standardized tests (i.e., UPDRS screening), and moving freely as they would at home, where the device is supposed to be primarily used. The subjects wore the device for at least two hours, as required by its specifications, producing long enough signals for the algorithm to perform as expected.
The sensing devices worn were labeled with the tags BD, LL, RL, LH, and RH for Body, Left Leg (shank), Right Leg (shank), Left Hand (wrist), and Right Hand (wrist), respectively. All the appropriate accessories were used, and approximately five (5) hours of recordings were collected for each subject, totaling about 440 h.
4. Results
The identification of the on-body sensor placement through the execution of the algorithm described in Section 2.2 was very accurate. Without investigating the cases further, the automated results for the 88 recording sessions performed by the participants showed that the BD sensor was identified correctly 87 times (99%), the LL and RL sensors 88 times (100%), the LH sensor 86 times (98%), and, finally, the RH sensor 85 times (97%). That essentially means that the algorithm misclassified the BD sensor as RH once and the LH sensor as RH twice. For participant 101022, the algorithm mistook the BD sensor for the RH sensor and vice versa. For participants 20203 and 20226, the algorithm mistook the LH sensor for the RH sensor and vice versa. It makes sense to further examine the data collected from those subjects to identify potential weaknesses of the algorithm or edge cases.
Digging deeper into the misclassification cases: regarding participant 101022, the values of orientation changes, as defined in Section 2.2.1, for the wrist-worn sensors are 2 and 0, which is very low. This is alarming by itself, because such a low number of changes is bound to lead to misclassification. As discussed in Section 2.3.4, the first step of the algorithm is to identify the wrist-worn sensors as those with the most orientation changes. When the value for at least one sensor is zero, it is impossible to identify it properly, except by pure luck. For this particular patient, the value of orientation changes for four sensors was zero, and only the sensor worn on the left wrist had 2 changes. That sensor was identified properly in the following steps of the algorithm, but for the right wrist the algorithm fell back to the first sensor in the processing pipeline among those with no identified orientation changes, i.e., the BD sensor. From that point on, no results of the algorithm could be considered valid, although the other features seemed to have performed well, properly discriminating the shank sensors. The case of participant 101022 was particularly difficult for the algorithm, because the participant suffered from severe rigidity and significant bradykinesia in the upper limbs. In other, similar cases of subjects with severe rigidity, the algorithm performed properly, but this particular subject also had an injury on the left wrist, with a bandage wrapped around it, which forced one wrist-worn sensor to be placed significantly higher on the arm, away from the wrist, compared to the other. These factors contributed to a zero number of orientation changes, which never occurred in the cases of proper sensor placement identification, even for participants with more severe rigidity and arm bradykinesia. This finding shows that correct placement of the sensors is very important for the algorithm to perform well.
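To make this first step concrete, the following sketch counts orientation changes and selects wrist candidates. It is a hypothetical reimplementation under stated assumptions: an orientation change is approximated as a transition of the dominant signed gravity axis between accelerometer windows (Section 2.2.1 holds the actual definition), and all function names are illustrative.

```python
import numpy as np

def orientation_changes(acc, fs=100, win_s=2.0):
    """Rough proxy for the orientation-change count (hypothetical; the paper's
    exact definition is in Section 2.2.1): count transitions of the dominant
    signed gravity axis between consecutive windows of a 3-axis accelerometer
    signal, given in m/s^2 at sampling rate fs."""
    win = int(fs * win_s)
    n = len(acc) // win
    means = acc[: n * win].reshape(n, win, 3).mean(axis=1)
    axis = np.argmax(np.abs(means), axis=1)              # dominant axis per window
    signed = axis * 2 + (means[np.arange(n), axis] < 0)  # encode +-x, +-y, +-z as 0..5
    return int(np.count_nonzero(np.diff(signed)))

def pick_wrist_candidates(signals):
    """Step 1: take the two sensors with the most orientation changes as the
    wrist-worn ones. With zero counts (e.g., severe upper-limb rigidity), ties
    are broken by input order, mirroring the failure mode of participant 101022."""
    counts = {name: orientation_changes(acc) for name, acc in signals.items()}
    ranked = sorted(counts, key=lambda k: counts[k], reverse=True)
    return ranked[:2], counts

# Synthetic demo: one sensor alternates between two orientations, four stay static
flipping = np.concatenate(
    [np.tile([9.8, 0.0, 0.0] if i % 2 else [0.0, 0.0, 9.8], (200, 1)) for i in range(10)]
)
static = np.tile([0.0, 0.0, 9.8], (2000, 1))
signals = {"A": flipping, "B": static, "C": static, "D": static, "E": static}
wrists, counts = pick_wrist_candidates(signals)
```

In this synthetic example the flipping sensor accumulates nine orientation changes while the static ones accumulate none, so it is selected as a wrist candidate; a recording with all-zero counts would be decided purely by input order, which is why the zero-change case described above is unrecoverable.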
Regarding participant 20203, the first step of the algorithm worked properly, isolating the wrist sensors as those with the highest number of orientation changes (i.e., 80 and 89, compared to zero for the sensors placed on the shanks and body). The following step also worked properly, correctly identifying the body sensor as the one with the lowest average gyroscope energy, when that is higher than 70 deg/s. The difference of gyroscope energy along the relevant axis while standing properly distinguished between the right and left shank. The problem for this patient occurred in the final step of the algorithm, where the correlation between the two gyroscope axes was not estimated as positive for the left arm and negative for the right arm, as it should be. Reviewing the files of this particular participant, who was a healthy subject, revealed that the wrist-worn sensors were not attached, as instructed, on the dorsal side facing outwards; instead, the sensors were attached to the inner part of the wrist (ventral side), facing towards the body when the participant stood with arms hanging down (Figure 9). It can, therefore, be safely considered that in this case the algorithm worked properly, but the result was reversed because of the incorrect placement of the sensors by the participant.
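The rule cascade that follows wrist isolation can be sketched as below. Since the exact gyroscope axis symbols are not reproduced here, the axis indices and sign conventions are hypothetical stand-ins; only the overall structure (lowest-energy non-wrist sensor as BD, correlation sign for LH/RH, standing-phase energy difference for LL/RL) follows the text.

```python
import numpy as np

def identify_remaining(gyros, wrist_names, standing_axis=0, wrist_axes=(1, 2)):
    """Hypothetical sketch of the rule steps after the wrists are isolated:
      - BD: the non-wrist sensor with the lowest average gyroscope energy;
      - LH vs RH: sign of the correlation between two gyroscope axes,
        positive -> left, negative -> right (as stated in the text);
      - LL vs RL: ordering of standing-phase gyroscope energy along one axis
        (the axis index and sign convention here are assumed, not the paper's)."""
    energy = {n: float(np.mean(np.linalg.norm(g, axis=1))) for n, g in gyros.items()}
    others = [n for n in gyros if n not in wrist_names]
    body = min(others, key=energy.get)
    shank_a, shank_b = [n for n in others if n != body]

    labels = {body: "BD"}
    for n in wrist_names:
        r = np.corrcoef(gyros[n][:, wrist_axes[0]], gyros[n][:, wrist_axes[1]])[0, 1]
        labels[n] = "LH" if r > 0 else "RH"

    ea = float(np.mean(np.abs(gyros[shank_a][:, standing_axis])))
    eb = float(np.mean(np.abs(gyros[shank_b][:, standing_axis])))
    labels[shank_a], labels[shank_b] = ("LL", "RL") if ea < eb else ("RL", "LL")
    return labels

# Synthetic demo: correlated axes for the left wrist, anticorrelated for the right,
# low overall energy for the body sensor, asymmetric axis-0 energy for the shanks
rng = np.random.default_rng(1)
t = 1000
base = rng.normal(0, 1, t)
lh = np.column_stack([rng.normal(0, 50, t), base * 40, base * 40 + rng.normal(0, 1, t)])
rh = np.column_stack([rng.normal(0, 50, t), base * 40, -base * 40 + rng.normal(0, 1, t)])
bd = rng.normal(0, 2, (t, 3))
ll = np.column_stack([rng.normal(0, 5, t), rng.normal(0, 20, t), rng.normal(0, 20, t)])
rl = np.column_stack([rng.normal(0, 15, t), rng.normal(0, 20, t), rng.normal(0, 20, t)])
labels = identify_remaining({"s1": lh, "s2": rh, "s3": bd, "s4": ll, "s5": rl},
                            wrist_names=["s1", "s2"])
```

The demo also illustrates the 20203 failure mode: attaching a wrist sensor on the ventral side flips the sign of the inter-axis correlation, so the LH/RH rule yields the mirrored result even though the rule itself is applied correctly.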
Finally, regarding participant 20226, a similar error could have occurred. The algorithm performed well in identifying the wrist-worn, body, and left and right shank sensors, but the correlation between the two gyroscope axes, as defined in Section 2.2.3, was negative for both wrist-worn sensors, leading to confusion between the two. There are no photos to help confirm whether the devices were properly attached. It is possible that the participant failed to attach the wrist sensors properly; however, it is also possible that in this case the algorithm failed to identify the right and left wrist-worn sensors properly.
Following the per-case analysis, the results reported above are revised as follows, increasing the accuracy of the algorithm:
BD: 87 correct classifications out of 88 sessions (misclassified case 101022), resulting in 98.86% accuracy;
LL: 88 correct classifications out of 88 sessions, resulting in 100% accuracy;
RL: 88 correct classifications out of 88 sessions, resulting in 100% accuracy;
LH: 87 correct classifications out of 88 sessions (probably misclassified case 20226), resulting in 98.86% accuracy;
RH: 86 correct classifications out of 88 sessions (misclassified case 101022 and probably misclassified case 20226), resulting in 97.72% accuracy.
The average accuracy considering all five sensors is 99.1%.
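As a quick arithmetic check, the per-sensor figures and the 99.1% average follow directly from the revised counts (note that rounding 86/88 gives 97.73%, whereas the text truncates to 97.72%):

```python
# Revised correct classifications per sensor over the 88 recording sessions
correct = {"BD": 87, "LL": 88, "RL": 88, "LH": 87, "RH": 86}
SESSIONS = 88
accuracy = {k: 100.0 * v / SESSIONS for k, v in correct.items()}
average = sum(accuracy.values()) / len(accuracy)
print(round(average, 1))  # -> 99.1
```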
5. Discussion
The algorithm presented in this paper is a simple, lightweight, and robust implementation for identifying on-body IMU sensor placement, based on easy-to-interpret, transparent rules and calculations that could very easily be executed on an embedded platform with limited resources. The method is intended to run after the collection of data, provided all sensors record the same data. The only prerequisite is that the positions on the body for sensor placement are predefined, because the method uses specific characteristics of the motion patterns of the targeted limbs.
The algorithm was developed as a means to identify sensor placement during post-processing of signals recorded by a medical device, intended to be used by Parkinson's disease patients at home, with no supervision by a healthcare professional. This, by itself, poses two important performance requirements: the expected accuracy of the outcome is high, as the method is implemented in a medical device, and the signal characteristics used to calculate the result must not be affected by the expected impaired motion patterns of the users.
The evaluation of the performance of the algorithm on 88 subjects showed an average accuracy of 99.1%, with 100% of the sensors placed on the legs, 98.86% of the sensors placed on the body, and 98.29% of the sensors placed on the wrists being correctly identified. Although these numbers can still be improved, they are considered acceptable.
Compared to other implementations, the method proposed and evaluated in this work is very robust, because it does not need to identify specific activities or types of activities within the signal in order to have regions of interest for its next steps. In [20,22,23,24,25], the first step is to identify particular activities before moving forward with placement identification. In the case of our implementation, the user could be mostly lying in bed or sitting in a chair while wearing the device. Although such cases present difficulties for our method as well, it is more likely to identify sensor placement correctly than the other proposed methods. The approach of [26] is not relevant to our method, because the device on which our method is implemented, the PDMonitor®, being a medical device, has strict instructions for use defining proper use, which do not allow for positions other than the ones defined.
The accuracy of the method proposed herein is the highest compared to the other implementations. This is particularly important because the method is implemented in a medical device, where the error tolerance is very low. The average accuracy of 99.1%, achieved during the evaluation using 88 subjects, is very high compared to the best of the other approaches, which report 90% [20], 89% [22], 97.5% [23], 98.8% [24], and 89% [25].
An advantage of the proposed algorithm is its simplicity and that it requires limited resources to run. The calculations on which it relies can be easily performed on computational units of low capacity, even embedded systems, if required. The platform for which it was developed could not support other similar implementations utilizing compute-intensive methods, such as deep learning networks.
Another advantage of the algorithm is that, although it was developed for a system with a fixed context, meaning that the body positions on which the sensing devices are to be mounted are predefined, it can be executed in a modular way, accommodating different configurations of simultaneously used sensing devices.
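A minimal sketch of what such modular execution could look like, assuming each rule step declares the sensor positions it needs (all names and the step granularity here are hypothetical, not the product's actual architecture):

```python
# Hypothetical modular rule execution: each step runs only if the sensor
# subset worn in this session makes the step applicable.
def run_pipeline(worn, steps):
    """worn: set of expected positions, e.g. {"LH", "RH", "BD"};
    steps: list of (required_positions, rule_fn) pairs."""
    labels = {}
    for required, rule in steps:
        if required <= worn:  # step applies to this configuration
            labels.update(rule())
    return labels

# Illustrative configuration with stand-in rule functions
steps = [
    ({"LH", "RH"}, lambda: {"s1": "LH", "s2": "RH"}),  # wrist disambiguation
    ({"LL", "RL"}, lambda: {"s3": "LL", "s4": "RL"}),  # shank disambiguation
]
print(run_pipeline({"LH", "RH", "BD"}, steps))  # only the wrist step fires
```

The design choice here is that each step is gated by the positions it can discriminate, so removing a sensing device from a session simply disables the steps that depend on it rather than invalidating the whole pipeline.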
A very important limitation of the presented algorithm is that it heavily depends on the correct placement of the sensors on the predefined body parts, according to the instructions for use of the device. This can be considered acceptable, given that it is used in a medical device, which, by nature, has a strict context and instructions of use. The simplicity of the algorithm does not allow it to compensate for improper sensor placement; it remains a problem of classifying input data into one of a set of predefined classes, so if one of the sensors is placed in an unknown position, it will be misclassified. Another disadvantage of the algorithm, though acceptable as part of its design, is that it identifies sensor positions during post-processing and not online. As already discussed, this is in complete accordance with the intended use of the device for which the algorithm was developed.
In the future, with more data becoming available from the clinical trials in which the PDMonitor® is being used, we plan to further improve the performance of the proposed algorithm by optimizing the handcrafted parameters shown in Table 1, which, although extracted based on data analysis and extensive trial and error and serving their purpose, were not specifically investigated at this stage.
This algorithm is part of a product, which is being improved through clinical trials and real-world data. As new evidence is generated, future versions are expected to be optimized and validated to perform even better and ultimately achieve 100% accuracy.