
Estimation of Human Arm Joints Using Two Wireless Sensors in Robotic Rehabilitation Tasks

Neuro-Bioengineering Research Group, Miguel Hernandez University, Avda. de la Universidad W/N, 03202 Elche, Spain
* Author to whom correspondence should be addressed.
Sensors 2015, 15(12), 30571-30583; https://doi.org/10.3390/s151229818
Submission received: 27 October 2015 / Revised: 1 December 2015 / Accepted: 2 December 2015 / Published: 4 December 2015
(This article belongs to the Special Issue State-of-the-Art Sensors Technology in Spain 2015)

Abstract
This paper presents a novel kinematic reconstruction of the human arm chain with five degrees of freedom, together with the estimation of the shoulder location, during rehabilitation therapy assisted by end-effector robotic devices. The algorithm is based on the pseudoinverse of the Jacobian, using the acceleration of the upper arm, measured with an accelerometer, and the orientation of the shoulder, estimated with a magnetic angular rate and gravity (MARG) device. The results show high accuracy in the arm joints and the shoulder movement with respect to the real arm, measured through an optoelectronic system. Furthermore, the range of motion (ROM) of 50 healthy subjects is studied in two different trials, one trying to avoid shoulder movements and the other forcing them; the shoulder movement in the second trial is also estimated accurately. Besides allowing the posture of the patient to be corrected during the exercise, the presented algorithm could serve the therapist as an objective assessment tool. In conclusion, the joint estimation enables a better adjustment of the therapy to the needs of the patient and, consequently, faster improvement of arm motion.

1. Introduction

Robot-aided neuro-rehabilitation therapies have become an interesting field in robotics. Several devices, such as exoskeletons, prostheses or end-effector configuration robots, have been developed for this purpose [1,2]; they help compensate for the motor deficits of their users. Post-stroke patients usually lose limb mobility due to impaired motor activity, and rehabilitation plays an important role in recovering motor and proprioceptive function [3,4]. In terms of the activities of daily living (ADL), the total or partial recovery of the upper limbs is the most important goal of early rehabilitation. End-effector configuration robots are the most common devices used in these therapies, since they are easily adapted to, and easily used by, patients with different conditions.
These robots provide objective information about the trajectory followed by the end effector and about the improvement in motor recovery. However, they are not able to measure or control the arm movements. The progress of the arm joints, i.e., the range of motion (ROM), is an important parameter in these kinds of therapies. Its estimation requires non-invasive wearable sensors that are easy to place on the patient's arm and suitable for a clinical environment. Visual feedback of the arm configuration has been studied in some rehabilitation therapies, although the arm joints themselves cannot be measured this way [5,6]. The estimation can be performed accurately with optoelectronic motion-tracking systems, even though these cannot be adapted to a rehabilitation environment [7,8]. In 2006, Mihelj developed a method to estimate the arm joints through two accelerometers placed on the upper arm [9]. Then, Papaleo et al. improved this method using a numerical integration through the augmented Jacobian, estimating the arm configuration with only one accelerometer [10,11]. This algorithm performs a kinematic reconstruction of a simplified human arm model with seven degrees of freedom (DoFs), assuming that the shoulder is fixed during the therapy. Due to the loss of motor function, shoulder movements cannot be avoided by the patient, and therefore this assumption cannot always be satisfied. Thus, it is necessary to measure shoulder movements in order to correct the position of the patient during the activity. Such compensation can be detected and categorized through the fusion of a depth camera with skeleton tracking algorithms [12]. However, to compute the kinematic reconstruction, the position and orientation of the shoulder with respect to the robot are necessary.
This paper presents a kinematic reconstruction algorithm for the human arm joints assuming a simplified model with five DoFs. Furthermore, this method is able to estimate the shoulder movement, i.e., its position and orientation. It is based on the inverse kinematics through the pseudo-inverse of the Jacobian [13]. The end-effector planar robot "PUPArm", with three DoFs (see Figure 1), designed and built by the Neuro-Bioengineering Research Group (nBio), Miguel Hernández University of Elche, Spain, is used [14]. The accuracy of the estimated joints with respect to the real arm joints, measured through a tracking camera, is studied. In addition, the ROMs of 50 healthy subjects performing a therapy activity are evaluated in two different cases: trying not to move the shoulder during the exercise, and following the movement with the trunk to reach the goal.
Figure 1. PUPArm robot.

2. Algorithm Description

2.1. Human Arm Kinematic Chain

The human arm is a complex kinematic chain that can be modeled as a combination of robotic joints. The arm was defined as a chain of nine rotational joints by Lenarčič and Umek [15]. Only seven DoFs take part in this experiment: a spherical joint in the shoulder, an elbow joint, and a spherical joint in the wrist, as shown in Figure 2a. On the other hand, the PUPArm robot constrains two of these movements, the ulnar-radial deviation and the flexion-extension of the hand. Thus, the abduction-adduction (q1), flexion-extension (q2) and internal-external rotation (q3) of the shoulder, the flexion-extension (q4) of the elbow and the pronation-supination (q5) of the forearm comprise the kinematic chain, linked through two segments: the upper arm (l_u) and the forearm (l_f). The Denavit–Hartenberg (DH) parameters of the arm are shown in Table 1, and their reference systems are shown in Figure 2b.
Table 1. DH parameters of the kinematic arm chain.
i | θi         | di  | ai | αi
1 | π/2 + q1   | 0   | 0  | π/2
2 | 3π/2 + q2  | 0   | 0  | π/2
3 | q3         | l_u | 0  | π/2
4 | π/2 + q4   | 0   | 0  | π/2
5 | q5         | l_f | 0  | 0
Figure 2. Human arm joints. (a) Simplification of human arm joints with seven DoFs; (b) Denavit–Hartenberg (DH) coordinate systems of the arm with five DoFs.
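To make the chain concrete, the following sketch composes the five standard DH transforms of Table 1 into a forward kinematics function (an illustrative sketch, not the authors' implementation; the function names are ours):

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform Rz(theta) Tz(d) Tx(a) Rx(alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def arm_forward_kinematics(q, l_u, l_f):
    """Compose the five DH transforms of Table 1 (q = [q1..q5], radians):
    shoulder frame -> hand frame of the simplified 5-DoF arm."""
    params = [
        (np.pi / 2 + q[0],     0.0, 0.0, np.pi / 2),
        (3 * np.pi / 2 + q[1], 0.0, 0.0, np.pi / 2),
        (q[2],                 l_u, 0.0, np.pi / 2),
        (np.pi / 2 + q[3],     0.0, 0.0, np.pi / 2),
        (q[4],                 l_f, 0.0, 0.0),
    ]
    T = np.eye(4)
    for theta, d, a, alpha in params:
        T = T @ dh_matrix(theta, d, a, alpha)
    return T
```

The translation entries l_u and l_f appear on the d column of rows 3 and 5, so the hand position always stays within a sphere of radius l_u + l_f around the shoulder.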

2.2. Integration Method

The inverse kinematics of the human arm during the exercise is based on a numerical integration through the pseudo-inverse of the Jacobian (J) [10]. The devices needed to estimate the arm joints are the end-effector robot, an accelerometer placed on the upper arm, and a magnetic angular rate and gravity (MARG) device placed on the shoulder. The instantaneous joint velocities may be assessed as:
$\dot{q} = J^{\dagger}(q)\left(\dot{v}_d + K \cdot err\right)$  (1)
where $\dot{v}_d$ is the Cartesian vector of the hand velocity and $err$ is the error committed due to the numerical integration. It should be noted that $\dot{v}_d$ is the hand velocity with respect to the shoulder, estimated through the MARG and the accelerometer. To minimize this error, a 7 × 7 gain matrix K is added to this equation [13]. Then, the current arm joints are computed as:
$q(t_{k+1}) = q(t_k) + \dot{q}(t_k)\,\Delta t$  (2)
where $q(t_k)$ are the previously estimated joints, $\dot{q}(t_k)$ is the joint velocity vector obtained through Equation (1) and $\Delta t$ is the sampling time. The initial arm joints are necessary to begin the integration method; their computation is explained in Section 2.6.
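One update of Equations (1) and (2) can be sketched as follows (a simplified illustration under our own naming; the actual Jacobian, hand velocity and error terms come from the arm model and the sensors):

```python
import numpy as np

def integration_step(q, jacobian, v_d, err, K, dt):
    """One step of the numerical integration: joint velocities from the
    Jacobian pseudo-inverse (Eq. (1)), then a forward Euler update (Eq. (2))."""
    J = jacobian(q)                              # task-space Jacobian at the current joints
    q_dot = np.linalg.pinv(J) @ (v_d + K @ err)  # Eq. (1)
    return q + q_dot * dt                        # Eq. (2)
```

With a square, well-conditioned Jacobian, the pseudo-inverse reduces to the ordinary inverse; for the 5-DoF chain with a larger task vector it yields the least-squares joint velocities.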

2.3. Accelerometer Orientation

If slow movements are assumed, the orientation of the accelerometer can be estimated at any position of the arm within the reachable workspace of the robot. When joints q1 to q5 are equal to zero, the reference position of the arm is set; a visual representation of this position is shown in Figure 2b. The acceleration acquired in the reference orientation of the accelerometer with respect to gravity, shown in Figure 3a, is:
$^{acc_0}V_g = \begin{bmatrix} 0 & 1 & 0 \end{bmatrix}^T$  (3)
Figure 3. (a) Reference orientation of the accelerometer and the MARG. (b) Plane Π shaped by the X and Y axes of $^{acc_0}\tilde{R}_{acc}$.
Moreover, at any arbitrary position of the arm, $^{acc_0}V_g$ can be computed through the rotation applied to the accelerometer ($^{acc_0}R_{acc}$) as:
$^{acc_0}V_g = {}^{acc_0}R_{acc}\; {}^{acc}V_g$  (4)
where $^{acc}V_g$ is the acceleration measured at this arbitrary position with respect to gravity.
Equation (4) admits infinitely many rotation matrices about the gravity vector, though one possible solution may be computed as:
$^{acc_0}\tilde{R}_{acc} = I + M + M^2\,\dfrac{1 - \cos\theta}{\sin^2\theta}$  (5)
with:
$M = \begin{bmatrix} 0 & -V_3 & V_2 \\ V_3 & 0 & -V_1 \\ -V_2 & V_1 & 0 \end{bmatrix}, \quad V = {}^{acc_0}V_g \times {}^{acc}V_g, \quad \sin\theta = \lVert V \rVert, \quad \cos\theta = {}^{acc_0}V_g \cdot {}^{acc}V_g$  (6)
Thereby, a plane (Π) can be shaped by the X and Y axes of $^{acc_0}\tilde{R}_{acc}$. This plane contains the elbow point (E), but the correct orientation of the accelerometer must also contain the shoulder (S) and wrist (W) points. Thus, the rotation angle (θ) is defined as the angle between the known wrist point and the new wrist point ($\tilde{\hat{W}}$), contained in the plane Π, when it is rotated around the gravity vector (g) placed at E (see Figure 3b). Therefore, $\tilde{\hat{W}}$, expressed in terms of θ, can be defined as:
$\tilde{\hat{W}} = (g \cdot \hat{W})\,g + \cos\theta\,\bigl(\hat{W} - (g \cdot \hat{W})\,g\bigr) + \sin\theta\,(g \times \hat{W})$  (7)
where $\hat{W} = (W - E)/\lVert W - E \rVert$ and $g = [0\;\,0\;\,1]^T$. Then, θ can be obtained by solving the following equation:
$d(\tilde{\hat{W}}, \Pi) = \dfrac{A_\Pi \tilde{\hat{W}}_x + B_\Pi \tilde{\hat{W}}_y + C_\Pi \tilde{\hat{W}}_z + D_\Pi}{\sqrt{A_\Pi^2 + B_\Pi^2 + C_\Pi^2}} = 0$  (8)
with the plane Π computed as follows:
$\tilde{P}_{acc}^{x} = {}^{acc_0}\tilde{R}_{acc}\,[1\;0\;0]^T, \quad \tilde{P}_{acc}^{y} = {}^{acc_0}\tilde{R}_{acc}\,[0\;1\;0]^T$
$\overline{S\tilde{P}_{acc}^{y}} = \tilde{P}_{acc}^{y} - S, \quad \overline{\tilde{P}_{acc}^{x}\tilde{P}_{acc}^{y}} = \tilde{P}_{acc}^{y} - \tilde{P}_{acc}^{x}$
$[A_\Pi\;\,B_\Pi\;\,C_\Pi]^T = \overline{S\tilde{P}_{acc}^{y}} \times \overline{\tilde{P}_{acc}^{x}\tilde{P}_{acc}^{y}}, \quad D_\Pi = -[A_\Pi\;\,B_\Pi\;\,C_\Pi]^T \cdot S$  (9)
Two possible solutions are obtained from Equation (8) and, therefore, two values of $^{acc_0}R_{acc}$. The correct solution is the one whose Z axis points in the same direction as the cross product between the elbow-wrist and elbow-shoulder segments, given the reference position of the accelerometer. Finally, the rotation of the accelerometer with respect to the robot is computed as:
$^{r}R_{acc} = {}^{r}R_{acc_0}\; {}^{acc_0}R_{acc}$  (10)
where $^{r}R_{acc_0}$ is the reference orientation of the accelerometer with respect to the robot (see Figure 3a). This orientation is required to estimate the elbow orientation and the shoulder position during the exercise.
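The gravity-alignment step of Equations (5) and (6) can be sketched as follows (illustrative only; the sign convention is chosen here so that the returned rotation maps the measured gravity vector onto the reference one, and the subsequent rotation about gravity that fixes θ through Equations (7) and (8) is omitted):

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix M such that M @ x = v x x (cross product), Eq. (6)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def gravity_alignment(v_ref, v_meas):
    """One of the infinitely many rotations R with R @ v_meas = v_ref,
    built as in Eq. (5): R = I + M + M^2 (1 - cos t) / sin^2 t.
    Both inputs are assumed to be unit vectors."""
    v = np.cross(v_meas, v_ref)        # rotation axis scaled by sin(theta)
    s = np.linalg.norm(v)              # sin(theta)
    c = float(np.dot(v_meas, v_ref))   # cos(theta)
    if s < 1e-12:
        if c > 0:                      # already aligned
            return np.eye(3)
        # anti-parallel: 180-degree turn about any axis perpendicular to v_meas
        axis = np.array([1.0, 0.0, 0.0]) if abs(v_meas[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        a = axis - np.dot(axis, v_meas) * v_meas
        a /= np.linalg.norm(a)
        return 2.0 * np.outer(a, a) - np.eye(3)
    M = skew(v)
    return np.eye(3) + M + M @ M * (1.0 - c) / s**2
```

The degenerate branch matters in practice: when the arm happens to be at the reference orientation, the cross product vanishes and Equation (5) is undefined.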

2.4. MARG Orientation

The orientation of magneto-inertial devices is usually estimated by Kalman filtering [16]; however, such filters can be quite complicated, and an extended Kalman filter is needed to linearize the problem. In this algorithm, the orientation filter of Madgwick et al. is used to measure the rotation of the MARG [17]. This filter compensates for the magnetic distortion that may be introduced by external sources, such as metal furniture and metal structures within a building [18]. Furthermore, the orientation algorithm has an adjustable gain (β) that can be tuned to the requirements of the exercise. The value of this parameter (β = 5) was established through a "trial and error" approach tested before the experiment, taking into account the features of the exercises.
This filter measures the quaternion of the device with respect to the Earth reference system, defined by the gravity vector and the Earth's magnetic field lines. However, the rotation of the Earth frame with respect to the robot is unknown. If the MARG is placed in a known orientation with respect to the robot (${}_{M_0}^{R}\hat{q}$), the acquired reading defines the Earth frame relative to the sensor frame (${}_{E}^{M_0}\hat{q}$), and therefore the reference transformation between the robot and the Earth is known as:
${}_{E}^{R}\hat{q} = {}_{M_0}^{R}\hat{q} \otimes {}_{E}^{M_0}\hat{q}$  (11)
Therefore, every rotation of the MARG is defined in the workspace as:
${}_{M}^{R}\hat{q} = {}_{E}^{R}\hat{q} \otimes \left({}_{E}^{M}\hat{q}\right)^{*}$  (12)
where ${}_{E}^{M}\hat{q}$ is the current reading of the sensor. In this way, the shoulder orientation is estimated during the exercise.
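The quaternion chaining of Equations (11) and (12) can be sketched with a Hamilton product (an illustration with our own helper names; quaternions are stored as [w, x, y, z]):

```python
import numpy as np

def q_mult(a, b):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def q_conj(q):
    """Quaternion conjugate (inverse for unit quaternions)."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def marg_to_robot(q_robot_marg0, q_marg0_earth, q_marg_earth):
    """Chain the calibration reading (MARG held in a known robot pose) with the
    live Earth-frame reading, as in Eqs. (11)-(12), to express the current
    MARG orientation in the robot frame."""
    q_robot_earth = q_mult(q_robot_marg0, q_marg0_earth)   # Eq. (11)
    return q_mult(q_robot_earth, q_conj(q_marg_earth))     # Eq. (12)
```

The conjugate in the last step inverts the live Earth-to-sensor reading so that the Earth frame cancels out of the chain.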

2.5. Elbow and Shoulder Location

The hand, as mentioned above, is tightly attached to the end effector of the robot, and the ulnar-radial deviation and flexion-extension of the hand remain constant. Hence, the transformation matrix between the hand and the end effector ($^{r}T_w$) is known, and the elbow position may be obtained as:
$^{r}P_e = {}^{r}T_w \begin{bmatrix} 0 & 0 & l_f & 1 \end{bmatrix}^T$  (13)
Since the rotation matrix between the elbow and the accelerometer reference orientation ($^{acc_0}R_e$) is known (see Figure 3a), the orientation of the elbow may be calculated as:
$^{r}R_e = {}^{r}R_{acc}\; {}^{acc_0}R_e$  (14)
with $^{r}R_{acc}$ the rotation matrix computed through Equation (10). Thus, the transformation of the elbow relative to the robot is:
$^{r}T_e = \begin{bmatrix} ^{r}R_e & ^{r}P_e \\ 0\;\,0\;\,0 & 1 \end{bmatrix}$  (15)
On the other hand, one of the most important features of this algorithm is the ability to estimate the shoulder position and orientation during the exercise. The shoulder position can be computed easily through Equation (15) as:
$^{r}P_s = {}^{r}T_e \begin{bmatrix} 0 & l_u & 0 & 1 \end{bmatrix}^T$  (16)
Since the orientation of the MARG relative to the robot is known from Equation (12), its rotation matrix $^{r}R_M$ is directly obtained [19]. Thus, the shoulder orientation is estimated as:
$^{r}R_s = {}^{r}R_M\; {}^{M_0}R_s$  (17)
where $^{r}R_M$ is the current rotation of the sensor with respect to the robot and $^{M_0}R_s$ the reference orientation of the MARG relative to the shoulder (see Figure 3a). Hence, the transformation of the shoulder relative to the robot is:
$^{r}T_s = \begin{bmatrix} ^{r}R_s & ^{r}P_s \\ 0\;\,0\;\,0 & 1 \end{bmatrix}$  (18)
Finally, since the elbow and shoulder locations are instantaneously known, the initial conditions and the integration method can be computed.
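Equations (13)–(18) amount to simple operations on homogeneous transforms, sketched below (illustrative helpers under our own naming):

```python
import numpy as np

def elbow_position(T_robot_wrist, l_f):
    """Eq. (13): elbow point in robot coordinates, l_f along the wrist frame's z axis."""
    return (T_robot_wrist @ np.array([0.0, 0.0, l_f, 1.0]))[:3]

def shoulder_position(T_robot_elbow, l_u):
    """Eq. (16): shoulder point in robot coordinates from the elbow transform."""
    return (T_robot_elbow @ np.array([0.0, l_u, 0.0, 1.0]))[:3]

def make_transform(R, p):
    """Eqs. (15)/(18): homogeneous matrix from a rotation R and a position p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T
```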

2.6. Initial Conditions

Since this algorithm is based on a numerical integration, initial conditions are required. The locations of the three main points, namely the shoulder ($^{r}T_s$), the elbow ($^{r}T_e$) and the wrist ($^{r}T_w$), are known. The shoulder joints ($q_1$, $q_2$ and $q_3$) are directly related to the matrix $^{s}T_e = ({}^{r}T_s)^{-1}\,{}^{r}T_e$, defined in the previous section, and they can be obtained by the spherical joint method [13]. This matrix, in terms of the corresponding joints, can be expressed through the DH parameters of Table 1 as:
${}^{s_0}T_{s_3} = {}^{s_0}T_{s_1}\,{}^{s_1}T_{s_2}\,{}^{s_2}T_{s_3} = \begin{bmatrix} c_1 s_3 - c_3 s_1 s_2 & c_2 s_1 & c_1 c_3 + s_1 s_2 s_3 & l_u c_2 s_1 \\ s_1 s_3 + c_1 c_3 s_2 & c_1 c_2 & c_3 s_1 - c_1 s_2 s_3 & l_u c_1 c_2 \\ c_2 c_3 & s_2 & c_2 s_3 & l_u s_2 \\ 0 & 0 & 0 & 1 \end{bmatrix}$  (19)
where $s_i = \sin q_i$ and $c_i = \cos q_i$, $i = \{1, 2, 3\}$. If the transformation matrix $^{s_0}T_{s_3}$ is defined as:
${}^{s_0}T_{s_3}(q_1, q_2, q_3) = \begin{bmatrix} n_x & n_y & n_z & p_x \\ o_x & o_y & o_z & p_y \\ a_x & a_y & a_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix}$  (20)
two possible solutions for the shoulder joints are obtained. If $q_2 \in [0, \pi]$:
$q_1 = \operatorname{atan2}(n_y, o_y), \quad q_2 = \operatorname{atan2}\left(a_y, \sqrt{n_y^2 + o_y^2}\right), \quad q_3 = \operatorname{atan2}(a_z, a_x)$  (21)
and if $q_2 \in [-\pi, 0]$:
$q_1 = \operatorname{atan2}(-n_y, -o_y), \quad q_2 = \operatorname{atan2}\left(a_y, -\sqrt{n_y^2 + o_y^2}\right), \quad q_3 = \operatorname{atan2}(-a_z, -a_x)$  (22)
Thereby, the elbow joint ($q_4$) is directly determined from the law of cosines as:
$q_4 = \arcsin\left(\dfrac{l_u^2 + l_f^2 - \lVert H - S \rVert^2}{2\, l_u\, l_f}\right)$  (23)
and its homogeneous matrix is:
${}^{s_3}T_{s_4} = \begin{bmatrix} -\sin q_4 & 0 & \cos q_4 & 0 \\ \cos q_4 & 0 & \sin q_4 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$  (24)
Thus, the transformation matrix between the systems $s_0$ and $s_4$ can be computed. The known matrix $^{s}T_h = ({}^{r}T_s)^{-1}\,{}^{r}T_h$ defines the transformation between the systems $s_0$ and $s_5$. On the other hand, the last joint, $q_5$, is defined with the DH parameters as:
${}^{s_4}T_{s_5}(q_5) = \begin{bmatrix} \sin q_5 & \cos q_5 & 0 & 0 \\ \cos q_5 & -\sin q_5 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$  (25)
and therefore, q 5 is estimated as:
$q_5 = \operatorname{atan2}(n_x, o_x)$  (26)
Finally, two possible configurations of the arm joints are found, even though only one is anatomically feasible. Given the joint limits, $[-\pi/2, \pi/2]$, only one solution satisfies this restriction, and the initial position of the arm is thus assessed. This method can produce abrupt changes in the estimated arm joints, caused by possible perturbations of the accelerometer, that might lead to a non-anatomical position. Hence, since each new position depends on the latest position and the sampling time, the integration method of Equations (1) and (2) is the best way to overcome these drawbacks in real-time reconstruction.
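Two pieces of the initial-condition computation lend themselves to a short sketch: the elbow joint of Equation (23) and the selection of the anatomically feasible branch (illustrative helpers; the names are ours, and the arcsin argument is clipped to guard against numerical noise):

```python
import numpy as np

def elbow_joint(S, H, l_u, l_f):
    """Eq. (23): elbow joint from the shoulder (S) to hand (H) distance,
    via the law of cosines on the upper arm / forearm triangle."""
    d2 = float(np.dot(H - S, H - S))
    arg = (l_u**2 + l_f**2 - d2) / (2.0 * l_u * l_f)
    return np.arcsin(np.clip(arg, -1.0, 1.0))  # clip guards round-off outside [-1, 1]

def select_solution(candidates, limit=np.pi / 2):
    """Keep the candidate configuration whose joints all lie in [-limit, limit];
    by the joint limits of the arm, exactly one branch should qualify."""
    feasible = [q for q in candidates if np.all(np.abs(q) <= limit)]
    return feasible[0] if feasible else None
```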

3. Results and Discussion

3.1. Experimental Exercises

To evaluate the arm joint estimation algorithm, with K = diag{1.5, 1.5, ..., 1.5} N/ms (chosen by a "trial and error" approach tested before the experiment), two different experiments were performed. The first was used to compute the accuracy of the algorithm in terms of the arm joints and the position of the shoulder, and was performed by four healthy subjects. Then, a rehabilitation exercise with two different trials was performed by 50 healthy subjects (aged between 20 and 72) to test the behavior of the presented algorithm. In both cases, the length of the upper arm was measured from the lateral side of the acromion to the proximal radius head, at the elbow joint, and the forearm length from the proximal radius head to the radial styloid, the distal part of the radius [20]. Moreover, both experiments used the same activity, the 3D roulette, which may be seen in Figure 4. The activity consists of taking a box from the perimeter and placing it in the center of the screen; hand movements are symbolized by a wrench (see Figure 4). One movement is counted when the subject goes from the center of the roulette to the perimeter and returns to the center.
Figure 4. Subject wearing the sensors, the accelerometer and the MARG, grasping the end effector of the robot and performing the 3D roulette activity.
Magneto-inertial sensors, developed by Shimmer, are tightly attached to the upper arm and to the shoulder to compute the kinematic reconstruction algorithm. The real position of the arm is acquired with a six-DoF optical tracking camera, the OptiTrack V120:Trio developed by NaturalPoint. Specific parts with retro-reflective markers, attached to the hand, upper arm and forearm, were developed for this purpose. Information about the subjects who carried out the validation experiment is shown in Table 2; each performed three trials of the same exercise.
Table 2. Main subject data from the validation experiment.
ID | Age | Gender | Forearm Length (m) | Upper Arm Length (m)
1  | 21  | Male   | 0.23               | 0.32
2  | 51  | Female | 0.21               | 0.33
3  | 32  | Male   | 0.25               | 0.31
4  | 31  | Male   | 0.21               | 0.33
In the second experiment, two different trials of the same activity were performed. The first trial was intended to be performed without moving the shoulder, i.e., without compensation with the trunk. In the second trial, the participants were asked to follow the hand movements with the shoulder. Each trial consisted of 24 movements.

3.2. Algorithm Validation

The mean error committed, in terms of root mean square error (RMSE) and standard deviation, is shown in Figure 5a. The mean RMSE of the joints is 0.047 rad, with a standard deviation of 0.013 rad. In turn, the error committed in the shoulder position estimation, which may be found in Figure 5b, shows a mean RMSE of less than 0.87 cm and a standard deviation of around 0.83 cm. These results show that the error committed is small (hardly noticeable by the human eye), and therefore the accuracy of the presented algorithm with respect to the real arm movements is high. A kinematic reconstruction of the arm joints and the estimation of the shoulder position acquired from both methods, the presented algorithm (red dotted line) and the direct reconstruction (blue line), are pictured in Figure 6.
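The reported accuracy figures are plain RMSE values between the estimated and the optically measured trajectories; for reference, the metric is simply:

```python
import numpy as np

def rmse(estimated, measured):
    """Root mean square error between an estimated and a reference trajectory."""
    e = np.asarray(estimated, dtype=float) - np.asarray(measured, dtype=float)
    return float(np.sqrt(np.mean(e**2)))
```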
Figure 5. Error committed in the reconstruction algorithm. (a) Mean RMSE (blue bar) of the joints committed by the subjects and standard deviation (gray bar); (b) Mean RMSE (blue bar) of the shoulder position committed by the subjects and the standard deviation (gray bar): x, left/right movements; y, forward/backward movements; z, up/down movements.
Figure 6. Joints and shoulder movements estimated through the algorithm (dotted red line) and measured through the optoelectronic system (blue line) of a subject during an exercise.

3.3. Arm Joint Range

In this experiment, the ROM in the two trials, with and without trunk compensation, is studied. Furthermore, the shoulder movement is compared to its real position, acquired with the optoelectronic system mentioned above. To compare both conditions, statistical analysis is performed through the t-test for paired data for each ROM. Joints 1 to 4 show significant differences (p ≤ 0.05); Joint 5, however, does not show significant differences (p = 0.064), as the subject's wrist is attached to the end effector of the robot.
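For reference, the paired t statistic used for this comparison can be computed as below (a minimal stdlib sketch; in practice a statistics package such as SciPy reports the associated p-value directly):

```python
import math

def paired_t(x, y):
    """Paired-samples t statistic and degrees of freedom for two matched
    samples, e.g. per-subject ROMs from the two trials."""
    d = [a - b for a, b in zip(x, y)]          # per-subject differences
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance of d
    t = mean / math.sqrt(var / n)
    return t, n - 1
```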
The estimated ROM in the exercises without and with compensation is shown in Figure 7a, and the error committed may be seen in Figure 7b. It should be noted that the error committed in each joint, for both exercises, is smaller than six degrees. On the other hand, the ROM estimated for the trial without compensation is larger than that of the other trial. This result was expected, because the shoulder compensation reduces the joint range. However, the ROMs of Joint 5 are similar, because the pronation-supination of the forearm is not affected when the compensation is performed.
Figure 7. Representation of both trials: without compensation (orange bar) and with compensation (blue bar), and the standard deviation (gray bar) in terms of arm joints. (a) Estimated range of motion (ROM); (b) Error committed between the real ROM and the estimated ROM.
The accuracy of the shoulder position, taking into account the whole population (N = 51), is shown in Figure 8a. The estimated shoulder position with respect to the real shoulder location in a compensation trial performed by one subject can be seen in Figure 8b.
Figure 8. Shoulder movement in the compensation trial. (a) Mean RMSE (blue bar) committed by the population and the standard deviation (gray bar): x, left/right movements; y, forward/backward movements; z, up/down movements; (b) Estimated movement through the proposed algorithm (dotted red line) and the direct movement (blue line) performed by one subject.

4. Conclusions

In this paper, a kinematic reconstruction of the upper limb during robot-aided rehabilitation with planar robots, taking shoulder movements into account, is presented. The estimated arm joints are very accurate with respect to the real position of the arm. Thus, the arm joint improvements of the patient can be measured objectively, and the therapy can be better adapted to the patient's needs.
The shoulder movement can also be measured accurately. To the best of our knowledge, this feature is not included in previous algorithms, where the shoulder is assumed to be fixed, even though small movements cannot be avoided during the exercise. This feature helps the therapist to correct the patient's posture during the exercise for faster improvement of arm mobility.
In summary, the arm joints' improvement may be included as a new objective assessment parameter, in addition to motor and proprioceptive activity and to assessment scales, which are by definition subjective, such as the Fugl–Meyer assessment [21].

Acknowledgments

This work was supported by the European Commission under FP7-ICT Contract 231143 (ECHORD (European Clearing House for Open Robotics Development)).

Author Contributions

N.G.-A., F.J.B. and L.D.L. conceived and designed the experiments. A.B.-M. and S.E. performed the experiments. A.B.-M. drafted the paper. A.B.-M., J.M.C. and J.A.D. analyzed the data. All authors read and approved the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Nef, T.; Mihelj, M.; Colombo, G.; Riener, R. ARMin-robot for rehabilitation of the upper extremities. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation, Orlando, FL, USA, 15–19 May 2006.
2. Tang, Z.; Zhang, K.; Sun, S.; Gao, Z.; Zhang, L.; Yang, Z. An upper-limb power-assist exoskeleton using proportional myoelectric control. Sensors 2014.
3. Lum, P.S.; Burgar, C.G.; Shor, P.C.; Majmundar, M.; der Loos, M.V. Robot-assisted movement training compared with conventional therapy techniques for the rehabilitation of upper-limb motor function after stroke. Arch. Phys. Med. Rehabil. 2002, 83, 952–959.
4. Badesa, F.; Morales, R.; Garcia-Aracil, N.; Alfaro, A.; Bernabeu, A.; Fernandez, E.; Sabater, J. Robot-assisted rehabilitation treatment of a 65-year old woman with alien hand syndrome. Biomed. Robot. Biomech. 2002.
5. Cameirao, M.; Badia, S.; Oller, E.; Verschure, P. Neurorehabilitation using the virtual reality based Rehabilitation Gaming System: methodology, design, psychometrics, usability and validation. J. NeuroEng. Rehabil. 2010.
6. Wittmann, F.; Lambercy, O.; Gonzenbach, R.R.; van Raai, M.A.; Hover, R.; Held, J.; Starkey, M.L.; Curt, A.; Luft, A.; Gassert, R. Assessment-driven arm therapy at home using an IMU-based virtual reality system. Rehabil. Robot. 2015.
7. Klopčar, N.; Lenarčič, J. Kinematic model for determination of human arm reachable workspace. Meccanica 2005, 40, 203–219.
8. Rab, G.; Petuskey, K.; Bagley, A. A method for determination of upper extremity kinematics. Gait Posture 2002.
9. Mihelj, M. Human arm kinematics for robot based rehabilitation. Robotica 2006, 24, 377–383.
10. Papaleo, E.; Zollo, L.; Garcia-Aracil, N.; Badesa, F.; Morales, R.; Mazzoleni, S.; Sterzi, S.; Guglielmelli, E. Upper-limb kinematic reconstruction during stroke robot-aided therapy. Med. Biol. Eng. Comput. 2015, 53, 815–828.
11. Kreutz-Delgado, K.; Long, M.; Seraji, H. Kinematic analysis of 7 DOF anthropomorphic arms. Robot. Autom. 1990.
12. Taati, B.; Wang, R.; Huq, R.; Snoek, J.; Mihailidis, A. Vision-based posture assessment to detect and categorize compensation during robotic rehabilitation therapy. Biomed. Robot. Biomech. 2012.
13. Siciliano, B.; Sciavicco, L.; Villani, L.; Oriolo, G. Robotics: Modelling, Planning and Control; Springer-Verlag London: London, UK, 2009.
14. Badesa, F.J.; Llinares, A.; Morales, R.; Garcia-Aracil, N.; Sabater, J.M.; Perez-Vidal, C. Pneumatic planar rehabilitation robot for post-stroke patientes. Biomed. Eng. Appl. Basis Commun. 2014.
15. Lenarčič, J.; Umek, A. Simple model of human arm reachable workspace. IEEE Trans. Syst. Man. Cybern. 1994, 24, 1239–1246.
16. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45.
17. Madgwick, S.; Harrison, A.; Vaidyanathan, R. Estimation of IMU and MARG orientation using a gradient descent algorithm. Rehabil. Robot. 2011.
18. Bachmann, E.; Yun, X.; Peterson, C. An investigation of the effects of magnetic variations on inertial/magnetic orientation sensors. Robot. Autom. 2004.
19. Kuipers, J.B. Quaternions and Rotation Sequences; Princeton University Press: Princeton, NJ, USA, 1999.
20. Mazza, J.C. Mediciones antropométricas. Estandarización de las Técnicas de medición, Actualizada según Parámetros Internacionales. Available online: http://g-se.com/es/journals/publice-standard/articulos/mediciones-antropometricas.-estandariza-cion-de-las-tecnicas-de-medicion-actualizada-segun-parametros-internacionales-197 (accessed on 9 October 2015).
21. McCrea, P.H.; Eng, J.J.; Hodgson, A.J. Biomechanics of reaching: Clinical implications for individuals with acquired brain injury. Disabil. Rehabil. 2002, 24, 534–541.

Citation: Bertomeu-Motos, A.; Lledó, L.D.; Díez, J.A.; Catalan, J.M.; Ezquerro, S.; Badesa, F.J.; Garcia-Aracil, N. Estimation of Human Arm Joints Using Two Wireless Sensors in Robotic Rehabilitation Tasks. Sensors 2015, 15, 30571-30583. https://doi.org/10.3390/s151229818