2.1. Gait Analysis through sEMG
Electromyography (EMG) is the measurement of the electrical activity of a muscle. EMG gives only limited information about the contraction of individual muscles, but it provides a clear indication of muscle activity. This information becomes particularly valuable when considering the time instant at which a muscle activates [8].
Many methods are available for EMG. The least invasive one, but also one of the most subject to interference, is surface EMG (sEMG). First of all, it is not always possible to distinguish one muscle's activity from that of the adjacent ones (the so-called "cross-talk" phenomenon [8]). Moreover, since a thick layer of skin covers the muscle, the voltage signal reaches the electrodes with significant attenuation, which must be compensated for with specific preamplification techniques.
However, sEMG is the preferable way of monitoring gait activity over a long time period, as it only requires the patient to wear some surface electrodes on specific parts of his or her body.
In the literature, it is possible to find several examples of gait characterization through sEMG-based systems. For example, in [15], the authors propose a gait cadence analysis performed through sEMG. The system acquires the electrode signals and processes them according to a first-order statistic, in order to characterize the periodicity of gait in healthy subjects.
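The first-order statistic used in [15] is not detailed here; as a rough, hypothetical illustration of the idea, the mean absolute value (a first-order statistic) of the rectified sEMG can be thresholded to locate muscle activations and estimate the gait period. All signal parameters below are invented for the example.

```python
import numpy as np

def stride_period(semg, fs, win_ms=100, thresh_ratio=0.5):
    """Estimate the gait period from a raw sEMG trace.

    semg: 1-D array of raw sEMG samples; fs: sampling rate in Hz.
    Returns the mean interval (s) between muscle-activation onsets.
    """
    win = int(fs * win_ms / 1000)
    # Mean absolute value (a first-order statistic) over sliding windows.
    mav = np.convolve(np.abs(semg), np.ones(win) / win, mode="same")
    active = mav > thresh_ratio * mav.max()
    # Onsets: samples where the activity flag rises from 0 to 1.
    onsets = np.flatnonzero(np.diff(active.astype(int)) == 1)
    return np.diff(onsets).mean() / fs

# Synthetic trace: a 200 ms burst of activity once per second at 1 kHz.
fs = 1000
t = np.arange(10 * fs)
burst = (t % fs) < 200
rng = np.random.default_rng(0)
semg = rng.standard_normal(t.size) * (0.05 + burst)
print(round(stride_period(semg, fs), 2))  # ≈ 1.0 s, the burst period
```

The threshold and window length here are arbitrary; a real system would calibrate both to the electrode placement and the subject.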
In a second work, a more elaborate analysis involving deep learning is presented. Still employing sEMG, the authors provide a method to identify and classify two sub-phases of gait (the stance and swing phases) based on an artificial neural network approach.
In a further work, another gait sub-phase recognition system is proposed, with an inferential algorithm for studying and characterizing the sub-phases. The great advantages of this latter approach are its reduced cost, since signal processing is performed on an Arduino Mega 2560, and its wearability, which allows non-invasive remote analysis of gait.
2.2. Multi-Board Acquisition Systems
This work presents a wearable multi-board acquisition system for offline activity monitoring. In the literature, other systems exist with the same purpose and similar characteristics, but with some issues that our design aims to mitigate.
In the work of Salarian et al., a wearable system for long-term monitoring of gait is presented. The system has the advantage of being an "add-on" for the patient, unlike others described in the literature that require the patient to wear specific shoes or other specially designed garments. Salarian et al.'s approach, instead, allows one to equip the patient with the system and remove it when the monitoring is over. The board employed for testing is equipped only with gyroscopes. Nevertheless, the authors claim to have reached interesting results. Note that the proposed system works for offline monitoring, since the motes store their readings on 8 MB memory cards.
Laerhoven et al. [18] focused their attention on reducing the power consumption of wearable devices of this type. Toward that purpose, they first check for the presence of motion using a set of tilt switches, and only then activate an accelerometer to track the motion itself: the latter sensor drains far more current than a tilt switch. Moreover, since tilt switches provide a binary output, the computational power required to detect motion is reduced. However, due to the nature of tilt switches, this system cannot guarantee a low number of false positive or false negative motion events, thereby preventing the achievement of high accuracy levels.
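Laerhoven et al.'s exact duty-cycling scheme is not reproduced in [18] as quoted here; a minimal sketch of the gating idea, with hypothetical sensor drivers, could look like the following.

```python
# Sketch of tilt-switch-gated accelerometer sampling. The cheap binary
# tilt switch is polled continuously; the power-hungry accelerometer is
# enabled only while motion is reported (plus a short grace period).
def monitor(read_tilt_switch, accel, steps, idle_timeout=5):
    """read_tilt_switch() -> bool; accel has .on(), .off(), .sample()."""
    samples, idle, accel_on = [], 0, False
    for _ in range(steps):
        if read_tilt_switch():              # binary output: motion present
            idle = 0
            if not accel_on:
                accel.on()                  # wake the expensive sensor
                accel_on = True
        else:
            idle += 1
            if accel_on and idle >= idle_timeout:
                accel.off()                 # back to the low-power state
                accel_on = False
        if accel_on:
            samples.append(accel.sample())
    return samples

class FakeAccel:                            # stand-in for a real driver
    def on(self): pass
    def off(self): pass
    def sample(self): return 1.0

pattern = [True] * 3 + [False] * 10         # 3 motion ticks, then stillness
it = iter(pattern)
out = monitor(lambda: next(it, False), FakeAccel(), len(pattern))
print(len(out))  # 7: 3 motion samples plus 4 grace samples before timeout
```

The timeout value and polling cadence are invented; the point is only that the accelerometer is powered for a small fraction of the observation window.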
In 2011, Cancela et al. [19] presented a distributed wearable system for online gait analysis in Parkinson's disease (PD), composed of five 3-axial accelerometers placed on the limbs and a gyroscope with an accelerometer placed on the belt. The Zigbee protocol is used as the telecommunication standard among the motes and to communicate with a personal computer (PC) that receives and stores the data. The main issues of this approach arise from the fact that online monitoring can be power hungry, as already discussed; moreover, in long-term monitoring there is no need to collect and process the data immediately. Finally, the proposed system requires a PC specifically equipped with the algorithms and with transceivers for the telecommunication standard in order to work and collect data, and this may be an obstacle for doctors and patients.
Finally, Oniga et al. [20] developed a solution for studying a subject's long-term motion activity/inactivity. Wearable motes equipped only with gyroscopes and 3-axis accelerometers are coordinated by a smartphone, which receives and processes the data before storing them in a persistent online database. The proposed solution is an online monitoring system, yet it is still capable of reducing some sources of power consumption thanks to the intelligent (triggered) activation of the motes by the smartphone. Nevertheless, the patient must have a smartphone equipped with a specific application and a persistent internet connection to store the data in the remote database accessible to the doctor.
The system presented in this paper allows a more complete analysis than the systems described above, both in terms of the completeness of the measurements that can be made, thanks to the variety of sensors used, and in terms of simplicity and versatility of use. The approach ensures a longer battery life, as it minimizes the computational complexity of the synchronization process. Moreover, it allows easy reconfiguration and expansion of the group of units without any change in the settings, since the synchronization protocol is quite simple and does not involve any bidirectional communication.
2.3. Time Synchronization Protocols
The main issue when designing multi-unit (i.e., distributed) systems is the synchronization of the physical clocks. The sub-elements of a multi-board system are not inherently synchronized, yet it is mandatory that each and every subsystem share the same vision of the time domain, as the units are meant to capture events that must later be cross-correlated. Even assuming the theoretical possibility that every device turns on at the same time, it is well known that the natural clock drift due to environmental changes may lead to time mismatches at run time. As a consequence, the need for a time synchronization algorithm arises.
Time synchronization protocols belong either to the class of distributed protocols (DPs) (consensus-based) or to that of centralized protocols (CPs). In DPs, the decision on the global system time is made by specific algorithms aiming to solve the problem of consensus. The latter, in distributed systems, requires the single nodes to "agree" on a given property, decision, or quantity [21] (in our case, time). A consensus algorithm has the following properties:
Termination: every node eventually makes a decision;
Agreement: every couple of nodes agrees on the same decision;
Validity: every decided value is a proposed one;
Integrity: every node makes a decision at most once.
Obviously, a time synchronization algorithm based on consensus guarantees a high level of accuracy, at the cost of the long computation time needed to reach consensus. As a consequence, a DP may be unacceptably heavy in time-critical systems and in systems observing fast-changing events.
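To make the cost of a DP concrete, a toy averaging-consensus round over hypothetical clock values on a fully connected four-node network can be simulated; even in this tiny, idealized case, dozens of message-exchange rounds are needed before the clocks agree.

```python
# Toy consensus-based clock agreement: in each round, every node nudges
# its clock toward the average of all clocks it hears (all-to-all links).
clocks = [100.0, 103.0, 97.0, 102.0]        # hypothetical local times

def consensus_round(x, alpha=0.2):
    n = len(x)
    # x_i <- x_i + alpha * mean_j(x_j - x_i): a standard averaging update.
    return [xi + alpha * sum(xj - xi for xj in x) / n for xi in x]

rounds = 0
while max(clocks) - min(clocks) > 1e-3:      # agreement tolerance: 1 ms
    clocks = consensus_round(clocks)
    rounds += 1
print(rounds)  # 39 rounds of exchange, even for just 4 nodes
```

The step size, tolerance, and topology are invented; the spread shrinks by a constant factor per round, which is precisely the slow convergence the text refers to.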
In CPs, the decision is made by a single node, called the leader, which in our context we identify as the body area network (BAN) coordinator. Note that backup leaders can be introduced to implement fault tolerance. Some algorithmic solutions are directly inherited from classical distributed systems theory, while other techniques are specifically designed for BANs and other kinds of ad hoc networks.
The simplest example is the algorithm proposed by Cristian in [22]. This algorithm operates in two stages: first, the peer node N asks the coordinator C to be time-synchronized; then, the coordinator sends back a message containing its local time T_C (which will become the network global time). Once the peer has received this message, it sets its local time to T_N = T_C + d_est, where d_est is an estimation of the actual communication delay, detailed in Section 3.2.2. The algorithm furnishes a good approximation when the random components of the delay are negligible compared to the deterministic ones.
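A minimal sketch of a Cristian-style exchange follows, with the delay estimated as half the measured round-trip time (the delay estimation actually used in Section 3.2.2 is not reproduced here, and the transport function is hypothetical).

```python
import time

def cristian_sync(request_time_from_coordinator, local_clock):
    """Return the corrected local time for peer node N.

    request_time_from_coordinator() sends the request and returns the
    coordinator's local time T_C (hypothetical transport function).
    """
    t0 = local_clock()
    t_c = request_time_from_coordinator()   # coordinator's time T_C
    t1 = local_clock()
    d_est = (t1 - t0) / 2                   # estimated one-way delay
    return t_c + d_est                      # T_N = T_C + d_est

# Simulated coordinator: clock 5 s ahead, 10 ms latency each way.
OFFSET, DELAY = 5.0, 0.010

def fake_request():
    time.sleep(2 * DELAY)                   # request + reply latency
    return time.monotonic() + OFFSET - DELAY  # T_C read mid-exchange

synced = cristian_sync(fake_request, time.monotonic)
print(abs(synced - (time.monotonic() + OFFSET)) < 0.05)  # True
```

The residual error is half the asymmetry between the two legs of the exchange, which is why the approximation degrades when the random delay components dominate.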
Another solution, the so-called Berkeley algorithm, adopted in Unix 4.3 BSD, is presented in [23]. Here, the coordinator collects messages containing the local clock values T_i of the nodes and estimates the communication delay d_i for each node i, as in the previous case. It then averages the clock values. Instead of sending the decided time, the coordinator sends to each node the amount by which it should increase or decrease its clock to match the globally decided time.
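The Berkeley averaging step can be sketched as follows (delay compensation omitted for brevity; the clock values are hypothetical).

```python
# Berkeley-style step: the coordinator averages the reported clock values
# and returns, for each node, the correction to apply, not the time itself.
def berkeley_corrections(clock_values):
    """clock_values: {node_id: reported local clock value T_i}."""
    global_time = sum(clock_values.values()) / len(clock_values)
    return {node: global_time - t for node, t in clock_values.items()}

reported = {"coord": 100.0, "n1": 103.0, "n2": 97.0}   # hypothetical
corr = berkeley_corrections(reported)
print(corr)  # {'coord': 0.0, 'n1': -3.0, 'n2': 3.0}
```

Sending a correction rather than an absolute time keeps the adjustment meaningful even if the message spends a variable amount of time in transit.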
The presented algorithms are more suitable than consensus-based synchronization algorithms, but they still exhibit a latency window in which an event cannot be located at a precise global time instant, that is, until the whole network agrees on a specific timestamp. This issue suggests the need for a new class of algorithms, specifically designed for BANs, that is more time-bound. In fact, since BANs for event sensing must cope with physical quantities that can vary very rapidly, or must deal with strict energy constraints [24], new strategies are necessary.
Let us consider the class of broadcast-based time synchronization algorithms. Here, the BAN coordinator periodically sends the same message to all the BAN nodes (i.e., a broadcast message). Once a node has received the message, it adjusts its internal time according to the specific algorithm's procedure.
The reference broadcast synchronization (RBS) algorithm proposed in [24] offers a solution in which the leader sends a broadcast message to all the nodes. The nodes register the time at which they received the message as a function of their local clock and inform the other nodes of their local time computation. A global time consensus is then reached in a way similar to the Berkeley algorithm, that is, once the relative time differences computed by the nodes are known. Unfortunately, some applications require a simpler approach with faster convergence.
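The core of RBS is that the receivers all timestamp the same broadcast on their own clocks and then exchange those receipt times; the relative offset between any two receivers follows by simple subtraction. A minimal sketch, with hypothetical receipt times, is given below.

```python
# RBS idea: nodes timestamp the *same* broadcast on their local clocks,
# then share those timestamps; pairwise offsets fall out by subtraction,
# so the sender's own clock never enters the computation.
receipt_times = {"n1": 42.010, "n2": 17.012, "n3": 99.008}  # local clocks

def relative_offset(a, b, times):
    # How far node a's clock is ahead of node b's clock.
    return times[a] - times[b]

print(round(relative_offset("n1", "n2", receipt_times), 3))  # 24.998
```

Because only receiver-side timestamps are compared, the sender-side delay cancels out, which is the main accuracy argument for RBS.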
A lightweight solution is represented by the flooding time synchronization protocol (FTSP) [25]. FTSP sends the MAC-layer timestamp to all the nodes in flooding mode. Each node that receives the message computes its local drift with respect to the global time and aligns its local time to it. Linear regression is used to compensate for clock skew and drift. The main drawback of this approach is the large amount of exchanged information [26].
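FTSP's skew and drift compensation amounts to fitting a line through (local, global) timestamp pairs. A minimal sketch with synthetic timestamps follows, assuming a local clock that runs 1% fast with a 2 s offset (both values invented for the example).

```python
import numpy as np

# Synthetic (local, global) timestamp pairs: local clock 1% fast, +2 s.
global_t = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
local_t = 1.01 * global_t + 2.0

# A linear regression from local to global time compensates both the
# constant offset and the skew (the slope of the fitted line).
slope, intercept = np.polyfit(local_t, global_t, 1)

# Translate a later local reading (taken at global t = 25 s) back.
estimate = slope * (1.01 * 25.0 + 2.0) + intercept
print(round(estimate, 6))  # 25.0
```

Maintaining this regression is what forces FTSP nodes to keep exchanging and storing timestamp pairs, which is the information cost mentioned above.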
In this scenario, our algorithm is a simplified fusion of the RBS and FTSP protocols. In fact, we eliminated the clock skew and drift compensation required by FTSP. Furthermore, the broadcast message contains the value of the real-time clock (RTC) as its payload, instead of taking the time value from the MAC layer as in FTSP.
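Under these simplifications, the resulting broadcast step is straightforward: the coordinator periodically broadcasts its RTC value, and each node overwrites its local clock with the received payload. The sketch below illustrates this; the node structure and transport are hypothetical placeholders, not the actual firmware.

```python
# Sketch of the simplified broadcast synchronization described above:
# the coordinator's RTC value travels as the message payload, and each
# receiving node simply adopts it (no skew or drift compensation).
class Node:
    def __init__(self, local_rtc):
        self.rtc = local_rtc

    def on_broadcast(self, payload_rtc):
        self.rtc = payload_rtc              # adopt the coordinator's RTC

coordinator_rtc = 123456                    # hypothetical RTC counter value
nodes = [Node(100), Node(200), Node(300)]
for n in nodes:
    n.on_broadcast(coordinator_rtc)         # one unidirectional broadcast
print(all(n.rtc == coordinator_rtc for n in nodes))  # True
```

Since the exchange is unidirectional and stateless, adding or removing a node requires no change to the protocol, which underpins the reconfigurability claim made earlier.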