
Power Assist Control Based on Human Motion Estimation Using Motion Sensors for Powered Exoskeleton without Binding Legs

1
Graduate School of Engineering, University of Fukui, Fukui 910-8507, Japan
2
Faculty of Engineering, University of Fukui, Fukui 910-8507, Japan
*
Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(1), 164; https://doi.org/10.3390/app9010164
Received: 14 November 2018 / Revised: 20 December 2018 / Accepted: 25 December 2018 / Published: 4 January 2019
(This article belongs to the Special Issue Advanced Mobile Robotics)

Abstract

In this study, we propose a novel power assist control method for a powered exoskeleton that does not bind its wearer's legs. The proposed method uses motion sensors on the wearer's torso and legs to estimate his/her motion so that the powered exoskeleton can assist the estimated motion. It can detect the start of walking quickly because it does not prevent the motion of the wearer's knees at the beginning of the walk. The nine-axis motion sensors on the wearer's body work robustly in very hot and humid spaces, where an electromyograph is unreliable due to the wearer's sweat. Moreover, the sensors avoid repeated impact during walking because they are attached to the body of the wearer. Our powered exoskeleton recognizes the motion of the wearer based on a database and accordingly predicts the motion of the powered exoskeleton that supports the wearer. Experiments were conducted to prove the validity of the proposed method.
Keywords: powered exoskeleton; motion sensor; machine learning

1. Introduction

Powered exoskeletons are nowadays used in various fields, such as agriculture and medical and welfare services [1,2,3,4]. Powered exoskeletons for rehabilitation [5,6,7,8] have low output power for safe assistance. On the other hand, the assisting power used to transport heavy baggage tends to be high [9,10]. Powered exoskeletons have also been developed for workers in nuclear power plants [11,12]. We have likewise been developing a powered exoskeleton for workers who transport heavy baggage in a nuclear power plant. In the event of a nuclear hazard, workers need to wear radiation-protective equipment that weighs approximately 40 kg. Moreover, a worker may have to carry a heavy exploration robot, such as the PackBot [13], around a nuclear reactor for efficient exploration. The target of our study is to develop a powered exoskeleton that supports workers in a nuclear plant who need to wear heavy radiation-protective equipment and carry an exploration robot.
Several approaches have been proposed to control powered exoskeletons. One is based on myoelectric signals measured by electromyography (EMG) sensors [8,14,15,16]. It estimates human intentions by measuring the action potential of the muscles, and the powered exoskeleton assists human action according to the intention. Since the action potential occurs approximately 50 ms before the relevant muscle contracts, this approach enables rapid power assistance. However, it is easily affected by human sweat in a hot environment. It is thus unsuitable for our intended application because workers often sweat inside radiation-protective equipment [17]; the equipment is composed of highly airtight materials, so the temperature and humidity inside it are high.
Another approach is based on a force sensor/switch. The Berkeley Lower Extremity Exoskeleton (BLEEX) [18,19,20,21] uses pressure sensors that measure the force between the shoe of the powered exoskeleton and the foot of the wearer to control the exoskeleton according to the configuration of the foot relative to the ground. Sano et al. [22,23] also proposed using force sensors attached to the bottoms of the wearer's feet to detect the pressure between the shoes of the exoskeleton and the ground. These exoskeletons control joint angle and angular velocity based on the given state of the leg, such as the "stance phase" or "swing phase", estimated by the force switches/sensors. We have examined this approach: a force sensor on the wearer's back measures the weight of the load, and the powered exoskeleton generates assist torque based on the floor's reaction force so that it assists the worker in carrying the load according to its measured weight. This approach has the advantage of not being affected by human sweat. However, we found that it cannot distinguish between similar motions, such as "walking forward" and "walking backward". This means that the approach restricts the motions that can be assisted, and each motion needs to be designed in advance. In practice, the potential user performs many motions, including walking forward and backward, squatting, climbing and descending stairs, running, standing on one leg, and so on. In this research, we focus on only three motions: standing upright, walking forward, and walking backward. The motion "walking backward" is necessary because the passageways in a nuclear power plant may be too narrow to turn around in. Furthermore, the outputs of a force sensor tend to be noisy, especially at the moment of impact, so reactive assist control based on force sensors tends to be jerky. Low-pass filters can be applied but slow down the assist control. Moreover, force sensors are likely to suffer hardware trouble from the repeated impacts during walking because they are typically attached to the bottom of the foot of the powered exoskeleton.
We adopted the PLL-01 [12], designed and developed by Activelink Co., Ltd., Japan, for our study. Its major feature is that it does not bind the legs of the wearer. It binds only the wearer's shoulders and feet, so he/she can move his/her legs freely at the beginning of a motion: there is room to move the knees, due to the redundancy in the link structure of the human body, even if the joints of the exoskeleton are fixed. Other popular powered exoskeletons often bind the upper and lower legs tightly to the links of the exoskeleton. Consequently, the wearer must push the exoskeleton intentionally until it estimates the motion of the wearer and begins assistance according to the estimated motion. Our powered exoskeleton enables the wearer to move his/her legs freely at the beginning of a motion, so that motion sensors on his/her legs and torso can detect the motion and assistance can start quickly. Liu et al. [24] proposed a powered exoskeleton that does not bind the legs of the wearer. However, they bound lightweight rigid bars to the wearer's legs and measured his/her joint angles using magnetic rotary encoders. Even if the rigid bars are lightweight, they restrict the motion of the wearer. It is well known that the human joint is not a hinge joint: the center of rotation changes during bending and extension. Therefore, rigid bars with hinge joints are not suitable for measuring human motion because they restrict it. Researchers have also reported motion recognition systems using nine-axis motion sensors for powered exoskeletons [25], in which seven sensors are attached to the wearer's trunk and legs to estimate his/her motion based on hidden semi-Markov models. Such systems can estimate the wearer's motion; however, the only experiments on them were conducted without the user wearing a powered exoskeleton.
We propose a novel approach for power assist control of powered exoskeletons based on human motion estimation using nine-axis motion sensors. The motion sensors can measure the wearer's motion in a high-temperature, humid environment. Our powered exoskeleton does not bind the wearer's legs, unlike other popular powered exoskeletons, so that he/she can move his/her legs freely at the beginning of a motion. Therefore, it can quickly detect the start of walking. Our method estimates the wearer's motion using the motion sensors and controls the exoskeleton based on the estimated motion. Both the motion estimation and the assist control are based on a motion database of the wearer and the powered exoskeleton. The database consists of sequential output data from the motion sensors attached to the wearer and the joint angles at the waist and knees of the powered exoskeleton during specific motions. An advantage of the proposed method is that it can recognize several motions of the wearer that are challenging for other similar methods. Another advantage is its low cost: motion sensors are cheaper than commercially available load cells. They are also robust, avoiding repeated impact during walking because they are attached to the wearer's limbs, whereas a force sensor or force switch embedded into the bottom of the foot can easily break because of the direct impact with the floor during walking. This paper shows the effectiveness of the proposed method through experiments with a powered exoskeleton.

2. Powered Exoskeleton without Binding Legs

Figure 1 and Figure 2 show the powered exoskeleton, designed and developed by Activelink Co., Ltd., Nara City, Japan [12], used in this research. It consists of four geared motors with rotary encoders at the knee and hip joints, whose joint angles are controlled by PID controllers. There is no motor at the ankle joints. The degrees of freedom of the joints are shown in Figure 1a. The powered exoskeleton binds the wearer only at his/her shoulders and feet; there is no binding at the upper and lower legs, as in conventional powered exoskeletons, so the wearer can move his/her knees freely, especially at the beginning of a motion. The powered exoskeleton was designed to have comparatively small torque, at most 50 Nm, at each joint so that back-drivability would ensure safety in case of loss of control. Therefore, it was not designed to support the entire load on the wearer, but only part of it.
Figure 2 shows a wearer with the five nine-axis motion sensors attached, along with their positions and coordinates. The x-axis points upward, the y-axis is horizontal, and the z-axis points in the forward direction. The sensors were attached to the chest and to the upper and lower legs. The powered exoskeleton can distinguish the motions "walking forward" and "walking backward" based on the outputs of the motion sensors. The algorithm proposed by Madgwick [26], which uses the acceleration, angular velocity, and geomagnetism measured by the motion sensors, was adopted in this paper to calculate the posture of each motion sensor. Three force sensors were attached to the powered exoskeleton: one on the back and the others on the feet. Figure 3 shows the shoe designed for the wearer, the foot of the powered exoskeleton, and the force sensor attached to both. The force sensor on the back measures the load on the shoulders of the wearer. The force sensors on the feet measure the interactive force between the feet of the wearer and those of the exoskeleton. The axes of the force sensors are depicted in Figure 1a. These sensors measure load and moment along the three directions. The force sensors were used only for the evaluation of our proposed method.
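The paper adopts Madgwick's filter for posture estimation. As a rough illustration of the underlying idea of fusing gyroscope integration with an accelerometer tilt reference, the following is a minimal complementary-filter sketch for a single pitch angle. This is a simplified stand-in, not the Madgwick algorithm itself, and the axis convention (gravity along z when upright) is a hypothetical choice for the example:

```python
import math

def complementary_pitch(pitch_prev, gyro_y, accel_x, accel_z, dt, alpha=0.98):
    """Estimate pitch (rad) by fusing two noisy sources.

    The gyroscope term tracks fast motion by integrating angular velocity;
    the accelerometer term gives a drift-free but noisy tilt estimate from
    the direction of gravity. alpha weights the two.
    """
    pitch_gyro = pitch_prev + gyro_y * dt       # integrate angular velocity
    pitch_accel = math.atan2(accel_x, accel_z)  # tilt from the gravity vector
    return alpha * pitch_gyro + (1.0 - alpha) * pitch_accel
```

Unlike this sketch, the Madgwick filter additionally fuses the magnetometer, so it can also estimate heading, which a pitch-only complementary filter cannot.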

3. Leg Control Based on Human Motion Prediction Using Motion Sensor

Figure 4 shows an overview of the proposed controller using motion sensors for our powered exoskeleton. The powered exoskeleton recognizes the wearer's motion in order to assist him/her. It estimates, a few hundred milliseconds in advance, the future joint angles of the powered exoskeleton according to the recognized motion so as to assist the wearer in real time. The motion estimation and the calculation of the desired joint angles of the powered exoskeleton are based on a motion database compiled in advance. This database is composed of sequential data of the wearer's motion, the label of the motion, and the corresponding leg motion of the powered exoskeleton. The joint angles of the legs of the powered exoskeleton are controlled by PID controllers to track the angles estimated from the database.
The database includes sequential data from the motion sensors attached to the wearer as feature vectors, each with a motion class label "standing", "walking forward", or "walking backward", and the joint angles of the powered exoskeleton while the wearer exhibited the relevant motion. The data of the motion sensors are the angles θ = (θ_rt, θ_rl, θ_lt, θ_ll), the angular velocities θ̇ = (θ̇_rt, θ̇_rl, θ̇_lt, θ̇_ll, θ̇_ub), and the accelerations a = (a_rt, a_rl, a_lt, a_ll, a_ub). The indices rt, rl, lt, ll, and ub indicate the upper-right leg, the lower-right leg, the upper-left leg, the lower-left leg, and the torso, respectively. The joint angles of the powered exoskeleton are defined as φ = (φ_rw, φ_rk, φ_lw, φ_lk), where the indices rw, rk, lw, and lk indicate the right hip, the right knee, the left hip, and the left knee of the powered exoskeleton, respectively.
The wearer's motion dataset is defined by piecewise sequences of x_t = (a_t, θ̇_t, θ_t), where t is the time index. The sequential motion data (x_1, x_2, …) are segmented, using a window of size m, into sequence data X_t = (x_t, x_{t+1}, …, x_{t+m−1}). Each sequence dataset is assigned one of the three motion category indices, "standing" c_s, "walking forward" c_w, or "walking backward" c_b. It is also assigned the joint angles of the powered exoskeleton at time t + Δt, φ_{t+Δt}. A dataset in the motion database is thus composed of the wearer's motion data, the motion category, and the joint angles of the powered exoskeleton, (X_t, c_i, φ_{t+Δt}), where c_i is one of c_s, c_w, and c_b. The database D is the set of such datasets, D = {(X_t^0, c_i^0, φ_{t+Δt}^0), (X_t^1, c_i^1, φ_{t+Δt}^1), …}.
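The construction of such a database can be sketched as follows. This is a minimal illustration: flattening each window into a single feature vector and labeling each window by its most recent step are our assumptions, not details stated in the paper.

```python
import numpy as np

def build_database(x_seq, labels, phi_seq, m, delta):
    """Segment sequential motion data into datasets (X_t, c_i, phi_{t+delta}).

    x_seq:   (T, d) array of per-step feature vectors x_t = (a_t, dtheta_t, theta_t)
    labels:  length-T list of motion category ids (e.g. 's', 'w', 'b')
    phi_seq: (T, 4) array of exoskeleton joint angles
    m:       window size; delta: prediction horizon in steps
    """
    database = []
    T = len(x_seq)
    for t in range(T - m - delta + 1):
        X_t = x_seq[t:t + m].ravel()             # flatten the window into one vector
        c_i = labels[t + m - 1]                  # label of the most recent step
        phi_future = phi_seq[t + m - 1 + delta]  # joint angles delta steps ahead
        database.append((X_t, c_i, phi_future))
    return database
```

With the sampling time of 80 ms and window size 10 reported in Section 5, each window spans roughly 0.8 s of motion.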
The powered exoskeleton recognizes the wearer's motion using the k-nearest neighbors method on the database D. We chose the k-nearest neighbors algorithm because it is a non-parametric method that makes no specific assumption about the motion of the human or the powered exoskeleton, and because it is simple and works in real time for our application. The motion data of the wearer at time t are x_t = (a_t, θ̇_t, θ_t), and the query sequence with window size m is qX_t = (x_t, x_{t+1}, …, x_{t+m−1}). The method calculates the normalized Euclidean distance d_i between X_t^i and qX_t, where i is the data index in database D. It chooses the k datasets from the database D with the smallest normalized distances d_i and collects their set of motion category indices c = (c^1, c^2, …, c^k), each of which is one of the motion categories c_s, c_w, and c_b. The k-nearest neighbors algorithm then outputs the majority motion category in c, i.e., the motion category with the maximum number of occurrences in the nearest-neighbor set c. For example, if the number of nearest datasets with motion category index c_s is higher than the number with c_w or c_b, c_s is the majority. The normalized distance d_i is calculated as below:
d_i^2 = (qX_t − X_t^i) Σ^{−1} (qX_t − X_t^i)^T     (1)
where T indicates the transpose and Σ is a diagonal matrix with the variance vector σ on its diagonal. The components of σ are the variances of the corresponding components of the X_t used to build database D.
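Because Σ is diagonal, the normalized distance d_i reduces to a per-dimension variance scaling (a Mahalanobis distance with diagonal covariance), so features with different units and magnitudes contribute comparably. A minimal sketch:

```python
import numpy as np

def normalized_distance(q, X, sigma):
    """Normalized Euclidean distance between query q and database entry X.

    sigma holds the per-dimension variances (the diagonal of Sigma),
    computed over the database; dividing by it whitens each dimension.
    """
    diff = q - X
    return float(np.sqrt(np.sum(diff * diff / sigma)))
```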
The controller then estimates the appropriate joint angles of the powered exoskeleton at time t + Δt, φ_{t+Δt}^d, according to the wearer's motion estimated by the k-nearest neighbors algorithm. For example, if the estimated motion is "standing", it chooses only the datasets whose motion category c_i is c_s for the estimation of φ_{t+Δt}^d. An overview of the estimation of the appropriate joint angles φ_{t+Δt}^d is provided in Figure 5, and the algorithm is shown in Algorithm 1. The input to the joint motor u_t is calculated by Equation (2):
u_t = k_p (φ_{t+Δt}^d − φ_t) + k_d [(φ_{t+Δt}^d − φ_t) − (φ_{t+Δt−1}^d − φ_{t−1})]     (2)
where φ_{t+Δt}^d is the desired joint angle and φ_t is the actual joint angle of the powered exoskeleton at time t. k_p and k_d are the proportional and differential gains, respectively.
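Equation (2) is a discrete PD control law on the tracking error, with the derivative term taken as the backward difference of the error between consecutive control steps. A minimal sketch in scalar form for one joint:

```python
def joint_input(phi_des, phi_des_prev, phi, phi_prev, kp, kd):
    """Discrete PD control input for one joint.

    phi_des / phi_des_prev: desired joint angles at the current and
    previous control step; phi / phi_prev: measured joint angles.
    """
    err = phi_des - phi            # current tracking error
    err_prev = phi_des_prev - phi_prev
    return kp * err + kd * (err - err_prev)
```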
Algorithm 1 Wearer's motion estimation and calculation of the joint angles of the powered exoskeleton
  • load database D = {(X_t^0, c_i^0, φ_{t+Δt}^0), (X_t^1, c_i^1, φ_{t+Δt}^1), …}
  • acquire sequential motion data of the wearer X_t
  • c = knn(X_t, D): each component of c is c_s, c_w, or c_b
  • if the majority of c is c_s then
  •  recognize the current motion as the "standing" motion
  •  Φ = knn(X_t, D | c_i = c_s): Φ = (φ_{t+Δt}^1, φ_{t+Δt}^2, …, φ_{t+Δt}^k)
  • end if
  • if the majority of c is c_w then
  •  recognize the current motion as the "walking forward" motion
  •  Φ = knn(X_t, D | c_i = c_w): Φ = (φ_{t+Δt}^1, φ_{t+Δt}^2, …, φ_{t+Δt}^k)
  • end if
  • if the majority of c is c_b then
  •  recognize the current motion as the "walking backward" motion
  •  Φ = knn(X_t, D | c_i = c_b): Φ = (φ_{t+Δt}^1, φ_{t+Δt}^2, …, φ_{t+Δt}^k)
  • end if
  • φ_{t+Δt}^d = (1/k) Σ_{i=1}^{k} φ_{t+Δt}^i
  • return φ_{t+Δt}^d
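Algorithm 1 can be sketched in a few lines of code. This is a minimal illustration under the database layout (X_t, c_i, φ_{t+Δt}) defined above; sorting the whole database per query and the tie-breaking behavior of the majority vote are implementation assumptions.

```python
import numpy as np
from collections import Counter

def estimate_joint_angles(qX, database, sigma, k_vote=5, k_angle=10):
    """Recognize the motion by k-NN majority vote, then average the
    future joint angles of the nearest datasets of that motion.

    database: list of (X_t, c_i, phi_future) tuples
    sigma:    per-dimension variances for the normalized distance
    """
    def dist(X):
        d = qX - X
        return np.sqrt(np.sum(d * d / sigma))

    ranked = sorted(database, key=lambda rec: dist(rec[0]))
    # majority vote over the k_vote nearest neighbors
    motion = Counter(rec[1] for rec in ranked[:k_vote]).most_common(1)[0][0]
    # average the future joint angles of the nearest same-motion datasets
    same = [rec[2] for rec in ranked if rec[1] == motion][:k_angle]
    return motion, np.mean(same, axis=0)
```

In the experiments of Section 5, k_vote corresponds to k = 5 for motion recognition and k_angle to k = 10 for joint angle estimation.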

4. Comparative Methods

We evaluated the assistive performance of the proposed method against two comparative methods. We avoided the use of an EMG instrument due to its drawbacks for our application, as mentioned in Section 1. We also did not rely on foot force sensors or foot switches because the repeated impacts during walking often break them. Therefore, a gravity compensation method was adopted as one comparative method. Unfortunately, we found in the following experiments that it struggled to reduce the load on the wearer's shoulders. Therefore, we also adopted a foot force switch method as the other comparative method.

4.1. Gravity Compensation Method

The gravity compensation control method was inspired by work by Sano et al. [23]. The powered exoskeleton proposed by Sano et al. [23] had only hip joints and no knee joints. Therefore, we modified the method as follows: The powered exoskeleton generated the torques of the joints of the hips and knees, τ h and τ k , so that it counteracted the effect of gravity on the body of the exoskeleton. τ h and τ k were controlled by Equations (3) and (4), respectively.
τ_k = k_k (1/2) l_l m_l g sin φ_l     (3)
τ_h = k_h [ (1/2) l_u m_u g sin φ_u + m_l g ( l_u sin φ_u + (1/2) l_l sin φ_l ) ]     (4)
where φ_u, φ_l, l_u, l_l, m_u, and m_l are the posture angles with respect to the direction of gravity, the lengths, and the masses of the upper and lower legs of the powered exoskeleton, respectively. The definitions are shown in Figure 1. g is the gravitational acceleration, and k_k and k_h are the control gains. The method tends to keep the body in an upright position, lifting the load on the shoulders.
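Equations (3) and (4) can be computed directly: the knee torque counteracts gravity on the lower-leg link, and the hip torque counteracts gravity on the upper-leg link plus the lower-leg mass hanging from it. A minimal sketch (the parameter values in the usage below are arbitrary placeholders, not values from the paper):

```python
import math

def gravity_compensation(phi_u, phi_l, l_u, l_l, m_u, m_l, k_h, k_k, g=9.81):
    """Gravity-compensation torques for one leg of the exoskeleton.

    phi_u, phi_l: posture angles of the upper/lower leg w.r.t. gravity (rad)
    l_u, l_l, m_u, m_l: link lengths (m) and masses (kg)
    k_h, k_k: hip and knee control gains
    """
    tau_k = k_k * 0.5 * l_l * m_l * g * math.sin(phi_l)
    tau_h = k_h * (0.5 * l_u * m_u * g * math.sin(phi_u)
                   + m_l * g * (l_u * math.sin(phi_u) + 0.5 * l_l * math.sin(phi_l)))
    return tau_h, tau_k
```

With both links vertical (φ_u = φ_l = 0) the required torques vanish, which matches the observation that this method favors an upright posture.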

4.2. Force Switch-Based Method

The other method is based on the idea of the force switch algorithm proposed by Sano et al. [22]. They developed a method to estimate the wearer's motion, walking or standing, based on force switches on the feet of the wearer. The algorithm requires two or three steps for estimation and generates torques at the joints based on a pre-defined pattern for the walking motion. Unfortunately, we found that the original method needed time to estimate the walking motion, and the wearer needed to push his/her feet actively while the powered exoskeleton tried to retain its posture before the motion estimation was accomplished. Following motion estimation, the powered exoskeleton began assisting the estimated motion of the wearer. However, we found that many parameters needed to be tuned for appropriate power assistance.
Therefore, we designed a simplified algorithm in place of the force switch-based method. Since force sensors are attached to the wearer's feet in our powered exoskeleton, we used them as force switches instead. The powered exoskeleton recognizes the "swing phase" and the "stance phase" according to the difference between the left and right force values. When the difference is small, it recognizes that both feet are in the "stance phase". Otherwise, if the force value of the left foot is smaller than that of the right, it recognizes that the left leg is in the "swing phase" and the right leg is in the "stance phase", and vice versa.
Following the recognition of the phase of each foot, the powered exoskeleton controls itself to maintain the posture of the legs as desired according to the recognized phases. The desired angles of the powered exoskeleton φ_t^d = (φ_rw^d, φ_lw^d, φ_rk^d, φ_lk^d) are set to the pre-defined desired angles (φ_rw^rp, φ_lw^lp, φ_rk^rp, φ_lk^lp), where rp and lp indicate the phases of the right and left legs, respectively. φ_i^swing and φ_i^stand are the desired angles of joint i for the "swing phase" and the "stance phase", respectively. A PD controller is applied to calculate the input to each joint motor u_t = (u_rw, u_lw, u_rk, u_lk) by Equation (5):
u_t = k_p (φ_t^d − φ_t) + k_d [(φ_t^d − φ_t) − (φ_{t−1}^d − φ_{t−1})]     (5)
where φ_t = (φ_rw, φ_lw, φ_rk, φ_lk) are the given joint angles of the powered exoskeleton. The algorithm for this process is shown in Algorithm 2. The thresholds τ_A and τ_B are determined by trial and error.
The two comparative methods introduced above struggle to distinguish forward from backward walking. Therefore, we applied and tuned the parameters of these methods only for the forward walking motion.
Algorithm 2 Foot Force Switch-based Walk Assistance System
  • obtain the floor reaction forces f_l and f_r from the force sensors on the wearer's feet
  • f_diff = f_l − f_r
  • if τ_A ≤ f_diff ≤ τ_B then
  •  recognize that both legs are in the "stance phase"
  •  φ_rw^d, φ_lw^d, φ_rk^d, and φ_lk^d are set to φ_rw^stand, φ_lw^stand, φ_rk^stand, and φ_lk^stand
  • end if
  • if f_diff < τ_A then
  •  recognize that the left leg is in the "swing phase" and the right leg is in the "stance phase"
  •  φ_rw^d, φ_lw^d, φ_rk^d, and φ_lk^d are set to φ_rw^stand, φ_lw^swing, φ_rk^stand, and φ_lk^swing
  • end if
  • if τ_B < f_diff then
  •  recognize that the right leg is in the "swing phase" and the left leg is in the "stance phase"
  •  φ_rw^d, φ_lw^d, φ_rk^d, and φ_lk^d are set to φ_rw^swing, φ_lw^stand, φ_rk^swing, and φ_lk^stand
  • end if
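The phase-recognition step of Algorithm 2 reduces to a three-way threshold test on the force difference. A minimal sketch (the threshold values in the usage below are placeholders; as stated above, τ_A and τ_B are tuned by trial and error):

```python
def foot_phase(f_left, f_right, tau_a, tau_b):
    """Classify the (left, right) leg phases from foot reaction forces.

    tau_a < 0 < tau_b bound the 'both feet on the ground' band of the
    force difference f_left - f_right.
    """
    f_diff = f_left - f_right
    if tau_a <= f_diff <= tau_b:
        return ("stance", "stance")    # both feet carry similar load
    if f_diff < tau_a:
        return ("swing", "stance")     # left foot unloaded: left leg swings
    return ("stance", "swing")         # right foot unloaded: right leg swings
```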

5. Experiments

Experiments were conducted to test the proposed method by comparing it with the two comparative methods. (The experiments were approved as No. H2016001 by the Research Ethics Committee, Department of Human and Artificial Intelligent Systems, Graduate School of Engineering, University of Fukui.) The wearer was a male student in his early 20s. In this experiment, he walked forward and backward for approximately 5 m wearing the powered exoskeleton. The data for the database were obtained while the powered exoskeleton was lifted by a gantry and the wearer walked with the gantry, so that he bore no load from the powered exoskeleton even though his motion was restricted by the kinematics of the exoskeleton. Figure 6 shows how the data for the database were obtained for (a) "walking forward" and (b) "walking backward". The datasets for "standing" were obtained while the wearer stood in an upright position. The sampling time was approximately 80 milliseconds. The window size of the dataset was 10 steps. The number of datasets for each motion category was approximately 50.
Once the database had been constructed, the proposed method was applied. The k of the k-NN was set to 5 for motion category recognition and 10 for appropriate joint angle estimation in the experiments.
Figure 7 shows the results of the estimation of the wearer’s motion based on the proposed method. The wearer first stood in the upright position, started walking forward, stopped, and stayed still there for a while; he then walked backward, and stopped. The figure shows that the proposed method successfully recognized the wearer’s motion. The sampling time of the control system was approximately 80 milliseconds. The calculation of the motion recognition takes only about 20 milliseconds on the controller. The calculation of the whole control system including sensor value acquisition and motor control takes less than 80 milliseconds so that the powered exoskeleton assists the wearer’s motion in real time.
An additional 15 kg weight was placed on the powered exoskeleton in the experiments. The proposed method and the comparative methods described in Section 4.1 and Section 4.2 were applied to the powered exoskeleton one by one, with sufficient breaks for the wearer in between. Figure 8 and Figure 9 show the image sequences of the motions "walking forward" and "walking backward" under each method. The images were captured from the side. Motion category recognition worked perfectly for the proposed method and the foot force switch-based method. Table 1 shows the walking speeds for "walking forward" and "walking backward" under each method.
Figure 8 shows that the proposed method and the foot force switch-based method enabled the wearer to walk smoothly while the gravity compensation-based method did not. The sampling time of image capture was approximately 1.8 s. The gravity compensation-based method caused the wearer to walk more slowly than the other methods. Table 1 also shows that the proposed and the force switch-based methods supported “walking forward”. The wearer supported by the gravity compensation method slowly walked forward because this method does not actively support walking.
Figure 9 shows that the proposed method allowed the wearer to walk backward faster than the other methods. The proposed method recognized the wearer's backward walking motion correctly and supported it appropriately. On the other hand, the foot force switch-based method caused the wearer to walk backward slowly because it tried to support him in walking forward even though he was walking backward. Consequently, the wearer needed to exert a strong force to push his leg backward and walked slowly. It was difficult for the foot force switch-based method to distinguish walking forward from walking backward based only on the outputs of the force sensors on the feet; this is one of the drawbacks of the method. The gravity compensation method showed a good result, but the walk tended to be slow because it assisted only the vertical motion of the leg and not the horizontal motion, so the wearer had to firmly push his leg back. Table 1 supports this analysis in terms of the walking speed for the motion "walking backward".
Figure 10 shows the average load on the wearer's shoulders while walking forward and backward. The proposed method and the foot force switch method maintained a load of approximately 100 N, whereas the gravity compensation method maintained one of 350 N. With no assist control, the wearer bore approximately 350 N on his shoulders. The proposed method thus successfully reduced the load. This behavior depends on the motion database D: when the datasets for the database were sampled, the powered exoskeleton was hung from the gantry so that the wearer bore no load due to the exoskeleton, and the proposed method therefore lifted the exoskeleton. To maintain the back-drivability of the powered exoskeleton, we kept the control gain as small as possible; the remaining load of approximately 100 N on the shoulders was imposed because of this small control gain. The gravity compensation method did not reduce the load on the wearer's shoulders. If the control gains k_k and k_h in Equations (3) and (4) were made large, the load on the wearer's shoulders in the upright position became small, but it became challenging for the wearer to swing the leg because the powered exoskeleton tried to keep the leg as vertical as possible, and the system eventually lost back-drivability. To retain back-drivability, the control gains k_k and k_h needed to be small, in which case the system failed to reduce the load on the wearer's shoulders. According to Figure 10, the foot force switch-based method was as good as the proposed method.
Figure 11 shows the horizontal front-back reaction force measured by the foot force sensors while the wearer walked forward under each control method. The reaction forces on the left leg under the proposed method and the force switch-based method were smaller than under the gravity compensation method. This was because the gravity compensation method did not consider the motion of the feet in the horizontal direction. The proposed method used the motion sensors to predict the posture of the powered exoskeleton and successfully reduced the reaction forces on the feet in the horizontal direction. The force switch-based method also reduced the reaction forces because its pre-defined motion moved the swinging leg forward and the standing leg backward. The reaction force on the right leg was comparatively small when gravity compensation was applied because of the wearer's gait.
Figure 12 shows the horizontal front-back force measured by the foot force sensors while the wearer walked backward under each control method. The proposed method showed the smallest force magnitudes during backward walking. This was because it appropriately recognized the wearer's motion and controlled the legs of the powered exoskeleton based on the estimated motion. The gravity compensation method yielded the highest resistance force on the wearer's legs because it did not consider the motion of the feet in the horizontal direction, as mentioned above. The force switch-based method failed to support backward walking because it could not distinguish between walking forward and backward, and its pre-defined motion was designed for forward walking. Consequently, the wearer had to push the swinging leg more strongly. The reaction force on the right foot was small because of the manner of the wearer's walk.
Figure 11 and Figure 12 show that there were differences in the gait frequencies of the walks. The wearer using the gravity compensation method (b) walked slower than with methods (a) and (c). The wearer using the gravity compensation method (b) needed to push the powered exoskeleton forward and backward using his legs because this method only compensates for the effect of gravity due to the powered exoskeleton and does not support the current human motion. On the other hand, the proposed method (a) and the force switch-based method (c) supported the walking motion actively, so that the wearer could walk faster.
To evaluate the usability of the proposed powered exoskeleton, we administered a questionnaire on the powered exoskeleton controlled by each method. Three users wore the powered exoskeleton controlled in turn by the proposed method, the gravity compensation method, and the force switch-based method. After walking forward, stopping, and walking backward, repeated a few times, they answered questions on the lightness of the shoulder load, the lightness of the reaction force on the feet, and how freely they could move. The users answered each question on a scale from 1 to 5, where 1 is the lowest rating and 5 is the highest.
Figure 13 shows the results of the questionnaire. According to Figure 13a, the users perceived the shoulder load as light when the proposed and force switch-based methods were applied, whereas the gravity compensation method failed to reduce the shoulder load. These answers are consistent with the discussion of Figure 10.
Figure 13b suggests that the proposed and force switch-based methods successfully supported the users' feet when they walked forward, but the force switch-based method failed to support them when they walked backward. The evaluation of the gravity compensation method depended on the user's preference. This result is also consistent with the discussion of Figure 11 and Figure 12.
According to Figure 13c, the gravity compensation method received high scores on how freely the users could move. The reason is that the gravity compensation method does not assist the motion actively and simply follows the motion of the user, whereas the other methods assist the motion actively and the assistance occasionally acts against the user's intention. The force switch-based method received a low evaluation from the users, especially when they walked backward, because it was designed for forward walking.
Figure 13 indicates that the overall evaluation of the proposed method was better than that of the other methods, although there is room to improve its power assist capability; doing so is part of our future work.
The experimental results show that the proposed method comprehensively outperformed the competing methods, as it enabled the wearer to walk faster with a smaller reaction force than the other methods.

6. Conclusions and Discussion

This study proposed a power assist control system based on the wearer's estimated motion using motion sensors for a powered exoskeleton without leg binding. It recognizes the wearer's motion using motion sensors, estimates appropriate joint angles for the powered exoskeleton based on a motion database compiled in advance, and assists the wearer's motion in real time. The experimental results demonstrated the effectiveness of the proposed assistive system.
The major feature of our powered exoskeleton is that it does not bind the legs of the wearer. This allows the wearer to move his/her legs freely at the beginning of a motion, even when the joints of the exoskeleton are fixed, because there is room for the knees to move. This feature enables us to use motion sensors to recognize the wearer's motion and feed the result back to the power assist. The exoskeleton actively supports only the hip and knee joints rotating about the lateral axis; the other joints, such as the hip joints rotating in other directions and the ankle joints, are passive. The wearer will need power assist at those joints as well if the load on the wearer increases further. Strengthening the existing active joints and replacing the passive joints with active ones is part of our future work on the mechanical design of the powered exoskeleton.
In principle, the proposed method simply replays the pre-recorded joint angles from the database. However, even if the walking speed changes, it tries to find the best-matching motion in the database to support it. If the wearer walks more slowly than in the pre-recorded walk, the system finds the best-matching phase of the walk and assists toward faster walking. If the wearer walks more quickly than in the pre-recorded walk, it assists toward slower walking but does not impede the human walk, because it always follows the walk to find the best-matching phase based on the database. Therefore, it can adapt to a certain degree. If the walk deviates too far from the pre-recorded datasets, the proposed method fails to support it and requires a new dataset recorded at a different speed. This will form part of our future work.
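The phase-following behavior described above can be sketched as a toy example. This is not the authors' implementation: the recorded gait cycle is reduced to one scalar feature per frame, and `match_phase` is a hypothetical helper that searches only a small window around the previous phase so the replay tracks the wearer instead of jumping arbitrarily:

```python
def match_phase(recorded, current, last_phase, window=5):
    """Return the index of the recorded frame closest to the current
    measurement, restricted to a window around the previous phase."""
    n = len(recorded)
    best, best_err = last_phase, float("inf")
    for k in range(last_phase - window, last_phase + window + 1):
        i = k % n                        # the gait cycle wraps around
        err = abs(recorded[i] - current)
        if err < best_err:
            best, best_err = i, err
    return best

# One toy gait cycle as scalar features; the wearer's measurements
# advance more slowly than the recording, so the matched phase lags.
recorded = [0, 2, 4, 6, 8, 10, 12, 14]
phase = 0
for measurement in [0, 1, 2, 3, 4, 5]:
    phase = match_phase(recorded, measurement, phase)
print(phase)  # 2: the phase advanced at the wearer's slower pace
```

In the real system, the matching would run over multi-axis motion-sensor vectors rather than a single scalar, but the windowed search captures why the replay neither races ahead of nor blocks the wearer.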
In this paper, we employed the k-nearest neighbors (k-NN) algorithm to handle the motion database, because it makes no specific assumptions about the motions of the human or the powered exoskeleton and is the simplest of the various machine learning techniques. However, other, more sophisticated algorithms could be employed, and we are investigating more effective algorithms for motion learning [27,28]. Another part of our future work in this area will involve increasing the variety of motions that can be assisted, that is, not only standing, walking, and forward-backward motions, but also sideways walking, squatting, swinging the body, climbing stairs, and so on. We also intend to investigate online updates of the motion database.
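A minimal sketch of a k-NN estimate of the next joint angle from such a database follows. The record layout and the helper `knn_next_angle` are illustrative assumptions; the actual system matches multi-axis motion-sensor states against the pre-recorded database:

```python
import math

def knn_next_angle(database, state, k=3):
    """Average the stored "next" joint angles of the k records whose
    sensor states are closest (Euclidean distance) to the current one."""
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(database, key=lambda rec: dist(rec[0], state))[:k]
    return sum(angle for _, angle in nearest) / k

# Each record pairs a (toy) sensor-state vector with the hip angle
# that followed it in the recorded motion.
database = [
    ((0.0, 0.1), 10.0),
    ((0.2, 0.3), 12.0),
    ((0.4, 0.5), 15.0),
    ((0.9, 0.8), 30.0),
]
print(knn_next_angle(database, (0.1, 0.2), k=3))  # mean of 10, 12, 15
```

Averaging over k neighbors instead of taking the single nearest record smooths over sensor noise; a distance-weighted average would be a natural refinement.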

Author Contributions

Conceptualization, S.N. and Y.T. (Yasutake Takahashi); methodology, S.N. and Y.T. (Yasutake Takahashi); software, S.N.; validation, S.N., K.S. and S.M.; formal analysis, S.N. and S.M.; investigation, S.N., K.S., S.M., and Y.T. (Yasutake Takahashi); resources, Y.T. (Yasutake Takahashi), M.K., Y.T. (Yoshiaki Taniai), T.N.; data curation, S.N., K.S. and S.M.; writing—original draft preparation, Y.T. (Yasutake Takahashi); writing—review and editing, Y.T. (Yasutake Takahashi); visualization, S.N., K.S. and S.M.; supervision, Y.T. (Yasutake Takahashi), Y.T. (Yoshiaki Taniai), and T.N.; project administration, M.K.; funding acquisition, M.K.

Funding

This work was supported by research grants from the Fukui Prefectural Government, Japan.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dollar, A.M.; Herr, H. Lower Extremity Exoskeletons and Active Orthoses: Challenges and State-of-the-Art. IEEE Trans. Robot. 2008, 24, 144–158.
  2. Satoh, H.; Kawabata, T.; Sankai, Y. Bathing Care Assistance with Robot Suit HAL. In Proceedings of the International Conference on Robotics and Biomimetics, Guilin, China, 19–23 December 2009; pp. 498–503.
  3. Kawamoto, H.; Hayashi, T.; Sakurai, T.; Eguchi, K.; Sankai, Y. Development of Single Leg Version of HAL for Hemiplegia. In Proceedings of the 31st Annual International Conference of the IEEE EMBS, Minneapolis, MN, USA, 3–6 September 2009; pp. 5038–5043.
  4. Kawabata, T.; Satoh, H.; Sankai, Y. Working Posture Control of Robot Suit HAL for Reducing Structural Stress. In Proceedings of the International Conference on Robotics and Biomimetics, Guilin, China, 19–23 December 2009; pp. 2013–2018.
  5. Lu, R.; Li, Z.; Su, C.Y.; Xue, A. Development and Learning Control of a Human Limb With a Rehabilitation Exoskeleton. IEEE Trans. Ind. Electron. 2014, 61, 3776–3785.
  6. Stegall, P.; Winfree, K.; Zanotto, D.; Agrawal, S.K. Rehabilitation Exoskeleton Design: Exploring the Effect of the Anterior Lunge Degree of Freedom. IEEE Trans. Robot. 2013, 29, 838–846.
  7. Zhang, X.; Xiang, Z.; Lin, Q.; Zhou, Q. The Design and Development of a Lower Limbs Rehabilitation Exoskeleton Suit. In Proceedings of the International Conference on Complex Medical Engineering, Beijing, China, 25–28 May 2013; pp. 307–312.
  8. Ma, W.; Zhang, X.; Yin, G. Design on Intelligent Perception System for Lower Limb Rehabilitation Exoskeleton Robot. In Proceedings of the International Conference on Ubiquitous Robots and Ambient Intelligence, Xi’an, China, 19–22 August 2016; pp. 587–592.
  9. Seungnam, Y.; Changsoo, H.; Ilje, C. Design considerations of a lower limb exoskeleton system to assist walking and load-carrying of infantry soldiers. Appl. Bionics Biomech. 2014, 11, 119–134.
  10. Gui, L.; Yang, Z.; Yang, X.; Gu, W.; Zhang, Y. Design Control Technique Research of Exoskeleton Suit. In Proceedings of the International Conference on Automation and Logistics, Jinan, China, 18–21 August 2007; pp. 541–546.
  11. Cyberdyne. HAL for Disaster Recovery <on the Research and Development Stage>. 2016. Available online: http://www.cyberdyne.jp/english/products/supporting.html (accessed on 21 December 2016).
  12. Activelink Co. Ltd. Powerloader Light “PLL”. 2014. Available online: http://activelink.co.jp/doc/668.html (accessed on 21 December 2016).
  13. Yamauchi, B. PackBot: A versatile platform for military robotics. In Proceedings of the Unmanned Ground Vehicle Technology VI, Orlando, FL, USA, 12–16 April 2004; Volume 5422, pp. 228–237.
  14. Fleischer, C.; Hommel, G. A Human-Exoskeleton Interface Utilizing Electromyography. IEEE Trans. Robot. 2008, 24, 872–882.
  15. Takahashi, J.; Meraz, N.S.; Suezawa, S.; Hasegawa, Y.; Sankai, Y. Alternative Interface System by Using Surface EMG of Residual Muscles for Physically Challenged Person. In Proceedings of the International Conference on Robotics and Biomimetics, Phuket, Thailand, 7–11 December 2011; pp. 1349–1354.
  16. Chandrapal, M.; Chen, X.; Wang, W. Preliminary evaluation of a lower-limb exoskeleton—Stair climbing. In Proceedings of the International Conference on Advanced Intelligent Mechatronics, Wollongong, NSW, Australia, 9–12 July 2013; pp. 1458–1463.
  17. Tsuji, M.; Kakamu, T.; Hayakawa, T.; Kumagai, T.; Hidaka, T.; Kanda, H.; Fukushima, T. Risk and preventive factors for heat illness in radiation decontamination workers after the Fukushima Daiichi Nuclear Power Plant accident. J. Occup. Health 2013, 55, 53–58.
  18. Kazerooni, H.; Racine, J.L.; Huang, L.; Steger, R. On the Control of the Berkeley Lower Extremity Exoskeleton (BLEEX). In Proceedings of the International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005; pp. 4353–4360.
  19. Huang, L.; Steger, J.R.; Kazerooni, H. Hybrid Control of the Berkeley Lower Extremity Exoskeleton (BLEEX). In Proceedings of the 2005 ASME International Mechanical Engineering Congress and Exposition, Orlando, FL, USA, 5–11 November 2005; pp. 1–8.
  20. Zoss, A.; Kazerooni, H.; Chu, A. Hybrid Control of the Berkeley Lower Extremity Exoskeleton (BLEEX). In Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, 2–6 August 2005; pp. 3132–3139.
  21. Steger, R.; Kim, S.H.; Kazerooni, H. Control Scheme and Networked Control Architecture for the Berkeley Lower Extremity Exoskeleton (BLEEX). In Proceedings of the 2006 IEEE International Conference on Robotics and Automation, Orlando, FL, USA, 15–19 May 2006; pp. 3469–3476.
  22. Sano, K.; Yagi, E.; Sato, M. Estimation of Walking Intention of Non-Handicapped Persons Using Foot Switches and Hip Joint Angles. J. Jpn. Soc. Mechan. 2013, 79, 3487–3500.
  23. Sano, K.; Yagi, E.; Sato, M. Development of a Wearable Assist Suit for Walking and Lifting-Up Motion Using Electric Motors. J. Robot. Mechatron. 2013, 25, 923–930.
  24. Liu, D.X.; Wu, X.; Wang, M.; Chen, C.; Zhang, T.; Fu, R. Non-Binding Lower Extremity Exoskeleton (NextExo) for Load-Bearing. In Proceedings of the IEEE Conference on Robotics and Biomimetics, Zhuhai, China, 6–9 December 2015; pp. 2312–2317.
  25. Wang, M.; Wu, X.; Liu, D.X.; Wang, C. A Human Motion Prediction Algorithm Based on HSMM for SIAT’s Exoskeleton. In Proceedings of the 35th Chinese Control Conference, Chengdu, China, 27–29 July 2016; pp. 3891–3896.
  26. Madgwick, S.O. An Efficient Orientation Filter for Inertial and Inertial/Magnetic Sensor Arrays; Report x-io and University of Bristol; University of Bristol: Bristol, UK, 2010; Volume 25, pp. 113–118.
  27. Ishizuka, Y.; Murai, S.; Takahashi, Y.; Kawai, M.; Taniai, Y.; Naniwa, T. Modeling Walking Behavior of Powered Exoskeleton based on Complex-Valued Neural Network. In Proceedings of the 2018 IEEE International Conference on Systems, Man, and Cybernetics, Miyazaki, Japan, 7–10 October 2018; pp. 1923–1928.
  28. Murata, F.; Takahashi, Y. Walking Motion Model based on Quaternion-valued Recurrent Neural Network for Powered Exoskeleton. In Proceedings of the 2018 Joint 10th International Conference on Soft Computing and Intelligent Systems and 19th International Symposium on Advanced Intelligent Systems, Toyama, Japan, 5–8 December 2018; pp. 1354–1359.
Figure 1. Configuration of powered exoskeleton. (a) Axes of force sensors and the degrees of freedom of the joints; (b) Side view and zero position of angles.
Figure 2. Motion sensors attached to the body of the wearer, and the definition of the axes.
Figure 3. The shoe for the wearer, the foot of the powered exoskeleton, and the force sensor attached to both.
Figure 4. Overview of proposed control system.
Figure 5. Estimation of the joint angle at time t + Δt, ϕ_{t+Δt}, based on k-NN.
Figure 6. Data acquisition for database construction: the wearer walks forward and backward wearing the powered exoskeleton while it is lifted by a gantry and pushes the gantry himself.
Figure 7. Results of motion estimation based on k-nearest neighbor algorithm.
Figure 8. Image sequences of walking forward based on each method: the sampling time of image capture was approximately 1.8 s.
Figure 9. Image sequences of walking backward based on each method: the sampling time of the image capture was approximately 5.4 s.
Figure 10. Average load on the shoulders while walking.
Figure 11. Reaction force on feet in horizontal front-back direction while walking forward based on each control method.
Figure 12. Reaction force on feet in horizontal front-back direction while walking backward based on each control method.
Figure 13. Answers to questionnaire on the powered exoskeleton controlled by the methods.
Table 1. Walking speed for each method.

Method                      | Walking Forward [km/h] | Walking Backward [km/h]
Proposed method             | 2.70                   | 2.16
Gravity compensation method | 2.37                   | 1.65
Force switch-based method   | 2.70                   | 0.75