Article

HRpI System Based on Wavenet Controller with Human Cooperative-in-the-Loop for Neurorehabilitation Purposes

by Juan Daniel Ramirez-Zamora 1, Omar Arturo Dominguez-Ramirez 1,*, Luis Enrique Ramos-Velasco 2, Gabriel Sepulveda-Cervantes 3, Vicente Parra-Vega 4, Alejandro Jarillo-Silva 5 and Eduardo Alejandro Escotto-Cordova 6

1 Basic Sciences and Engineering Institute, Autonomous University of the State of Hidalgo—UAEH, Mineral de la Reforma 42184, Hidalgo, Mexico
2 Aeronautical Engineering Department, Metropolitan Polytechnic University of Hidalgo—UPMH, Tolcayuca 43860, Hidalgo, Mexico
3 Center for Innovation and Technological Development in Computing, National Polytechnic Institute—CIDETEC-IPN, Mexico City 07700, Mexico
4 Robotics and Advanced Manufacturing Department, Research Center for Advanced Studies—CINVESTAV Saltillo, Ramos Arizpe 25900, Coahuila, Mexico
5 Institute of Informatics, University of the South Sierra—UNSIS, Miahuatlán de Porfirio Díaz 70800, Oaxaca, Mexico
6 Faculty of Higher Studies Zaragoza, National Autonomous University of Mexico—UNAM, Mexico City 09230, Mexico
* Author to whom correspondence should be addressed.
Sensors 2022, 22(20), 7729; https://doi.org/10.3390/s22207729
Submission received: 26 August 2022 / Revised: 16 September 2022 / Accepted: 5 October 2022 / Published: 12 October 2022
(This article belongs to the Special Issue Robot Assistant for Human-Robot Interaction and Healthcare)

Abstract: There exist several methods aimed at human–robot physical interaction (HRpI) to provide physical therapy to patients. The use of haptics has become an option to display forces along a given path so as to guide the physiotherapist's protocol. Critical in this regard is the motion control for haptic guidance to convey the specifications of the clinical protocol. Given the inherent patient variability, a conclusive demand on these HRpI methods is the need to modify their response online, neither rejecting nor neglecting interaction forces but processing them as patient interaction. In this paper, considering the nonlinear dynamics of the robot interacting bilaterally with a patient, we propose a novel adaptive control to guarantee stable haptic guidance by processing the causality of patient interaction forces, despite unknown robot dynamics and uncertainties. The controller implements a radial basis neural network with daughter RASP1 wavelet activation functions to identify the coupled interaction dynamics. For an efficient online implementation, an output infinite impulse response (IIR) filter prunes negligible signals and nodes to deal with overparametrization. This contributes to adapting online the feedback gains of a globally stable discrete PID regulator to yield stiffness control, so the user is guided within a perceptual force field. The effectiveness of the proposed method is verified in real-time bimanual human-in-the-loop experiments.

1. Introduction

1.1. Background and Motivation

Research on motor rehabilitation therapy has provided advanced strategies for the upper limb of people with cerebrovascular accident (CVA) [1,2], ranging from induced movement therapy [3] and electromechanically assisted training [4] to robot-based haptics [5,6]. The emerging technologies of virtual reality [7,8] feature rehabilitation by promoting repetition and task-oriented training in a ludic, motivating and playful environment [9], facilitating a functional, useful and improved experience. These benefit not only the user, but also the therapists who perform, evaluate and document the tasks online, with studies on upper limb motor recovery of CVA patients [10,11]. However, virtual body representation remains an involved issue [12], including the critical variable of patient interaction force [13], for viable and feasible interaction given the motor variability of each patient [14]. Platforms that integrate visual rendering with haptic stimuli convey to the user a multimodal perception for improved interaction [15]. Motor training and/or rehabilitation protocols have been studied that generate a novel environment by executing the temporal and spatial patterns of movement accurately through practice [16,17,18]. However, it remains unclear how to deal with volitional patient interaction force [19].
Neuropsychological rehabilitation also offers promising results based on programming specific tasks from the qualitative analysis of symptoms, some based on motor learning, i.e., changes in patient movements that reflect changes in the structure and function of the nervous system. In [20], an experimental study is presented to improve the prediction time and reduce the robot response time taken to reach a desired position, based on hand and eye-gaze motion determination, to foresee the point of user interaction in a virtual environment. However, the essential patient force interaction is not assessed. The development of haptic technology is incipient for rehabilitation purposes, though mature for computer and gaming interaction [21]. How to deal with patient interaction forces, given patients' deviated motor patterns, so as to promote their improvement under kinesthetic stimulus remains a subject of research in several fields, and clinical tests have been implemented [22].
The so-called Porteus Maze Test (PMT) consists of solving mazes ordered in a pattern of increasing difficulty; it has been studied for upper limb rehabilitation of stroke patients, though the PMT was originally proposed as a nonverbal psychometric test to measure psychological planning capacity and foresight (performance intelligence) [17]. The PMT is also currently used as a test associated with activation of the frontal region of the brain, involved in the planning factor and capable of detecting perseverative errors [17]. Thus, it can be extended to evaluate executive functioning, since it qualifies observable behaviors in neuropsychological rehabilitation [23] by inferring dysfunction of the Central Nervous System [24]. It therefore becomes a subject of interest to evaluate the PMT under haptic guidance.

1.2. Contribution

A self-tuning scheme based on neural network identification of nonlinear dynamics is presented to adapt the feedback gains of a discrete PID guidance controller. Since a PID structure can be abstracted as the addition of restitutive viscoelastic and memory generalized vector forces computed to converge to the nominal Cartesian task, the robot end-effector guides the human user's hand with such a vector force, which increases (diminishes) as the position error increases (diminishes). Real-time experiments with nine healthy volunteers are presented under a bilateral PMT using two Omni haptic interfaces. The user solves the virtual maze navigating under haptic guidance of the Omni at each hand under two modalities: (i) passive haptic guidance (PHG), where the user perceives a contact force each time he/she touches the maze boundaries, the fewer touches the better; (ii) active haptic guidance (AHG), where the user is guided continuously with a haptic force corresponding to how much he/she deviates from the nominal maze trajectory, the smaller the position error the better. Results show that AHG renders improved trajectory precision from the self-tuning adaptation of force feedback.

2. The Problem Formulation

2.1. The Dynamics of the HRpI System

Consider a human–robot physical interaction (HRpI) system equipped with two haptic robotic devices, one per hand (left and right), that exhibit high-end electromechanical performance, such as low friction, backdrivability and low inertia, with a high bandwidth to display force, whose nonlinear dynamic model is [25,26,27]
$H_\sigma(q_\sigma)\ddot{q}_\sigma + C_\sigma(q_\sigma,\dot{q}_\sigma)\dot{q}_\sigma + G_\sigma(q_\sigma) + \tau_{f\sigma} = \tau_\sigma,$  (1)
where $\sigma$ indicates the left ($l$) or right ($r$) haptic device; $q_\sigma \in \mathbb{R}^n$, $\dot{q}_\sigma \in \mathbb{R}^n$ are the generalized joint position and velocity coordinates, respectively; $H_\sigma(q_\sigma) \in \mathbb{R}^{n\times n}$ denotes the symmetric positive definite inertia matrix, $C_\sigma(q_\sigma,\dot{q}_\sigma) \in \mathbb{R}^{n\times n}$ represents the Coriolis and centripetal forces, $G_\sigma(q_\sigma) \in \mathbb{R}^n$ models the gravity loads, and $\tau_\sigma \in \mathbb{R}^n$ stands for the torque input. The term $\tau_{f\sigma} = f_{b\sigma}\dot{q}_\sigma + f_{c\sigma}\tanh(\gamma_\sigma \dot{q}_\sigma)$ stands for joint friction, where $f_{b\sigma}$, $f_{c\sigma}$ are positive definite $n\times n$ matrices modelling viscous damping and dry friction, respectively, with coefficient $\gamma_\sigma > 0$. When the human operator grasps the haptic device by placing a fingertip into its thimble, the dynamics change remarkably because the human exerts a torque $\tau_{h\sigma}$ on the device:
$H_\sigma(q_\sigma)\ddot{q}_\sigma + C_\sigma(q_\sigma,\dot{q}_\sigma)\dot{q}_\sigma + G_\sigma(q_\sigma) + \tau_{f\sigma} = \tau_\sigma + \tau_{h\sigma},$  (2)
where $\tau_{h\sigma}$ is assumed Lipschitz, giving rise to a human-in-the-loop configuration [26]. System (2) can be written in the continuous-time state-space representation with $x_\sigma = [x_{\sigma 1}\ x_{\sigma 2}]^T = [q_\sigma\ \dot{q}_\sigma]^T$ as follows
$\dot{x}_\sigma(t) = f_\sigma(x_\sigma(t)) + g_\sigma(x_\sigma(t))\,u_\sigma(t) + g_\sigma(x_\sigma(t))\,\tau_{h\sigma}(t)$  (3)
$y_\sigma(t) = c_\sigma x_\sigma(t)$  (4)
where $g_\sigma(x_\sigma(t))\tau_{h\sigma}(t)$ is the map of the exogenous, time-varying, unmeasurable human torque, and
$f_\sigma(x_\sigma(t)) = \begin{bmatrix} x_{\sigma 2} \\ -H_\sigma^{-1}\left[C_\sigma x_{\sigma 2} + G_\sigma + \tau_{f\sigma}\right] \end{bmatrix}, \qquad g_\sigma(x_\sigma(t)) = \begin{bmatrix} 0 \\ H_\sigma^{-1} \end{bmatrix}$  (5)
are unknown smooth functions, and $u_\sigma = \tau_\sigma$ is the control input.

2.2. Problem Statement

When the human operator has a motor disability, nonlinear robot controllers $u_\sigma$ have been proposed to assist motion [28,29,30], including a high-performance decentralized continuous nonlinear PID controller [31], however with constant feedback gains that do not adapt to changing conditions, such as the time-varying persistent human interaction term $g_\sigma(x_\sigma(t))\tau_{h\sigma}(t)$. Then, the problem can be stated as follows: assuming unknown $f_\sigma(x_\sigma(t))$ and unknown human interaction generalized force $g_\sigma(x_\sigma(t))\tau_{h\sigma}(t)$, design a model-free control $u_\sigma$ with feedback gains that adapt online for each patient, so that his/her performance improves when interacting with the robotic device under haptic guidance. Figure 1 shows a maze solution application.

3. Adaptive Interaction System

In this section, the adaptive interactive system is presented as shown in Figure 2. Each haptic device has a programmed wavenet PID controller, and the controllers communicate with each other through the computer where the algorithms run. The wavenet PID controller scheme is based on identification of the inverse dynamics of each haptic device and an IIR filter to tune the PID feedback gains, and it guarantees global regulation.
For this purpose, the control scheme shown in Figure 3 is used, where four main blocks can be observed: HRpI, wavenet identification, discrete PID controller and auto-tuning of gains. The following subsections describe each of them.

3.1. Input-Output Dynamics of the HRpI

It is well known that any sufficiently smooth continuous-time nonlinear system admits a discrete-time representation [32,33]. Then, (2), or (3) and (4), can be represented in discrete-time state space by assuming access to the full state at each instant, with a small enough sampling period $T > 0$, and provided that the input remains constant over the sampling interval $I_k = [kT, (k+1)T]$, $k \geq 0$. In this way, (3) is approximated with the first-order forward Euler difference $\dot{x}_\sigma \approx \frac{x_\sigma(t+T) - x_\sigma(t)}{T}$ as follows
$\dfrac{x_\sigma(t+T) - x_\sigma(t)}{T} = f_\sigma(x_\sigma(t)) + g_\sigma(x_\sigma(t))\,u_\sigma(t).$  (6)
Solving (6) for x σ ( t + T ) leads to
$x_\sigma(t+T) = x_\sigma(t) + f_\sigma(x_\sigma(t))\,T + g_\sigma(x_\sigma(t))\,T\,u_\sigma(t).$  (7)
Evaluating (7) and (4) at t = k T yields the following nonlinear discrete time system
$x_\sigma[(k+1)T] = x_\sigma[kT] + f_\sigma(x_\sigma[kT])\,T + g_\sigma(x_\sigma[kT])\,T\,u_\sigma[kT]$  (8)
$y_\sigma[(k+1)T] = c_\sigma\,x_\sigma[(k+1)T].$  (9)
Substituting (8) into (9), the discrete output at instant ( k + 1 ) T is given by
$y_\sigma[(k+1)T] = \underbrace{c_\sigma\big(x_\sigma[kT] + f_\sigma(x_\sigma[kT])\,T\big)}_{\Phi_\sigma[x_\sigma[kT],\,T]} + \underbrace{c_\sigma\big(g_\sigma(x_\sigma[kT])\,T\big)}_{\Gamma_\sigma[x_\sigma[kT],\,T]}\,u_\sigma[kT]$  (10)
where $\Phi_\sigma(\cdot)$ stands for the flow of the discrete dynamic system and $\Gamma_\sigma(\cdot)$ for the input matrix of the robot, with
$\Phi_\sigma[x_\sigma[kT],\,T] = c_\sigma\big(x_\sigma[kT] + f_\sigma(x_\sigma[kT])\,T\big)$  (11)
$\Gamma_\sigma[x_\sigma[kT],\,T] = c_\sigma\big(g_\sigma(x_\sigma[kT])\,T\big).$  (12)
Notice that (10) describes the input–output dynamics of the haptic device $\sigma = \{l, r\}$ at instant $k+1$, and that the input $u_\sigma(k)$ and the system output $y_\sigma(k)$ are the only available data. In this paper, we exploit the properties of wavenets to approximate the input–output dynamics (10) of each haptic device, but additionally we consider an IIR filter in the output layer to prune irrelevant signals and build an efficient identification scheme useful to tune the PID feedback gains [34].
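The discretization above can be sketched numerically. The following is a minimal sketch, assuming a hypothetical 1-DOF plant (a mass with viscous friction) in place of the full haptic-device dynamics; all numerical values are illustrative.

```python
import numpy as np

# Minimal sketch of the forward-Euler discretization of Eqs. (6)-(8),
# using a hypothetical 1-DOF plant (mass m with viscous friction fb)
# instead of the full haptic-device dynamics; values are illustrative.
m, fb = 0.1, 0.05   # assumed inertia and viscous-friction coefficient
T = 0.001           # sampling period, matching the 1 kHz haptic loop

def f(x):
    """Drift term f(x): [velocity, friction deceleration]."""
    q, qd = x
    return np.array([qd, -(fb / m) * qd])

def g(x):
    """Input map g(x): torque enters only the velocity state."""
    return np.array([0.0, 1.0 / m])

def step(x, u):
    """One step of Eq. (8): x[k+1] = x[k] + f(x[k]) T + g(x[k]) T u[k]."""
    return x + f(x) * T + g(x) * T * u

x = np.array([0.0, 0.0])
for k in range(1000):       # 1 s of motion under a constant small torque
    x = step(x, 0.01)
print(x)                    # position and velocity after 1 s, both positive
```

With the state held between samples and the input constant over each interval, this is exactly the zero-order-hold approximation the derivation assumes.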

3.2. Wavenet Identification

A scheme is proposed to approximately identify the inverse plant (HRpI system); to this end, a radial basis neural network is used. The activation functions $\psi(\tau_\sigma)$ are daughter wavelet functions $\psi_j(\tau_\sigma)$ of RASP1 type. To filter out neurons that contribute little to the identification process, three IIR filters are incorporated in cascade, using the least possible number of neurons and reducing the computational load of the learning process [35]. In Figure 4, the signal propagation and the general interconnection are presented, where
$\tau_{L\sigma} = \dfrac{u_\sigma(k) - b_{L\sigma}}{a_{L\sigma}}.$
The infinite impulse response (IIR) recurrent structure (Figure 5), in cascade, doubles the learning speed by pruning those nodes with insignificant information from the cross-contribution summation of daughter wavelets. Notice in the scheme the forward delayed structure modulated by the input, and the feedback loop modulated by the persistent signal, which allows sweeping a range of frequencies [36].
The mother wavelet function $\psi(k)$ generates daughter wavelets $\psi_L(\tau_{L\sigma})$ by its property of expansion or contraction and translation, represented as [37]:
$\psi_L(\tau_{L\sigma}) = \dfrac{1}{\sqrt{a}}\,\psi(\tau_{L\sigma})$
with $a \neq 0$; $a, b \in \mathbb{R}$ and
$\tau_{L\sigma} = \Big[\textstyle\sum_{j=1}^{p}(u_j - b_{L\sigma})^2\Big]^{1/2}\Big/\,a_{L\sigma}$
where $j$ indexes the scale variable; $a_{L\sigma}$ allows expansion and contraction, and $b_{L\sigma}$ stands for the $(L\sigma)$ translation variable at instant $k$, in the classical role of an RBF, with the advantage of further refinement through the daughter wavelets $\psi_L(\tau_{L\sigma})$. This last feature is essential in the present algorithm, together with the pruning capability of the IIR filter. As suggested in [37], the RASP1 wavelet is a singularity-free normalization of the wavelet argument
$\mathrm{RASP1}_\sigma = \dfrac{\tau_\sigma}{(\tau_\sigma^2 + 1)^2}$
whose partial derivative with respect to $b_{L\sigma}$ is
$\dfrac{\partial\,\mathrm{RASP1}_\sigma}{\partial b_{L\sigma}} = \dfrac{1}{a}\,\dfrac{3\tau_\sigma^2 - 1}{(\tau_\sigma^2 + 1)^3}.$
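As a sanity check on these expressions, the RASP1 wavelet and its translation derivative can be coded directly; the evaluation point below is illustrative, and with $\tau = (u - b)/a$ the chain rule gives $\partial\psi/\partial b = (1/a)(3\tau^2 - 1)/(\tau^2 + 1)^3$:

```python
import numpy as np

# RASP1 daughter wavelet psi(tau) = tau / (tau^2 + 1)^2 and its partial
# derivative with respect to the translation b, with tau = (u - b) / a.
def rasp1(tau):
    return tau / (tau**2 + 1)**2

def drasp1_db(u, a, b):
    tau = (u - b) / a
    return (1.0 / a) * (3 * tau**2 - 1) / (tau**2 + 1)**3

# Finite-difference check of the analytic derivative at an arbitrary point.
u, a, b = 0.7, 0.5, 0.2
h = 1e-6
numeric = (rasp1((u - (b + h)) / a) - rasp1((u - (b - h)) / a)) / (2 * h)
print(abs(numeric - drasp1_db(u, a, b)) < 1e-6)  # True: derivatives agree
```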
In this way, for the left or right haptic device, the $i$-th wavenet approximation signal with IIR filter can be calculated as:
$\hat{y}_{i\sigma}(k) = \sum_{q=1}^{p}\sum_{l=0}^{M} c_{i,l}\, z_{i\sigma}(k-l)\, u_{q\sigma}(k) + \sum_{j=1}^{N} d_{i,j}\, \hat{y}_{i\sigma}(k-j)\, v_\sigma(k)$  (17)
$z_{i\sigma}(k) = \sum_{l=1}^{L} w_{i,l}\, \psi_{l\sigma}(k)$  (18)
where $L$ stands for the number of daughter wavelets; $w_{i,l}$ are the weights of each neuron in the wavenet; $c_{i,l}$ and $d_{i,j}$ are the coefficients of the forward and backward IIR filter, respectively; and $M$ and $N$ are the number of forward and backward IIR$_\sigma$ filter coefficients, respectively. As can be seen, (17) has the following matrix structure
$\hat{y}_\sigma(k+1) = \hat{\Phi}_\sigma[y_\sigma(k), \Theta_{\Phi\sigma}] + \hat{\Gamma}_\sigma[y_\sigma(k), \Theta_{\Gamma\sigma}] \cdot u_\sigma(k).$  (19)
System (19) is estimated by two wavenet functions as follows
$\hat{\Phi}_\sigma[y_\sigma(k), \Theta_{\Phi\sigma}] = \sum_{j=1}^{N} d_{i,j}\, \hat{y}_{i\sigma}(k-j)\, v_\sigma(k)$  (20)
$\hat{\Gamma}_\sigma[y_\sigma(k), \Theta_{\Gamma\sigma}] = \sum_{l=0}^{M} c_{i,l}\, z_{i\sigma}(k-l)$  (21)
with adjustable parameters $\Theta_{\Phi\sigma}$ and $\Theta_{\Gamma\sigma}$. Therefore, since the wavenet functions $\hat{\Phi}_\sigma(k)$ and $\hat{\Gamma}_\sigma(k)$ estimate $\Phi_\sigma(k)$ and $\Gamma_\sigma(k)$ as $k \to \infty$, the error $e_{i\sigma}(k) = y_{i\sigma}(k) - \hat{y}_{i\sigma}(k) \to 0$ can be used as a Lebesgue measure useful to tune the feedback gains.

Wavenet Learning

The parameters of the neural network and the IIR filters in their matrix form are: the control signal $u_\sigma = [u_{1\sigma}, u_{2\sigma}, \ldots, u_{p\sigma}]^T$, the translation parameter $b_\sigma = [b_{1\sigma}, b_{2\sigma}, \ldots, b_{L\sigma}]^T$, the dilation parameter $a_\sigma = [a_{1\sigma}, a_{2\sigma}, \ldots, a_{L\sigma}]^T$, the daughter wavelets $\psi_\sigma = [\psi_1, \psi_2, \ldots, \psi_L]^T$, the neural network output $z_\sigma = [z_{1\sigma}, z_{2\sigma}, \ldots, z_{p\sigma}]^T$, the estimated position $\hat{y}_\sigma = [\hat{y}_{1\sigma}, \hat{y}_{2\sigma}, \ldots, \hat{y}_{p\sigma}]^T$, and the synaptic weight matrix $W_\sigma \in \mathbb{R}^{p\times L}$; the coefficient matrices $C_\sigma \in \mathbb{R}^{p\times M}$ and $D_\sigma \in \mathbb{R}^{p\times N}$ for the filters are:
$W_\sigma = \begin{bmatrix} w_{11} & w_{12} & \cdots & w_{1L} \\ w_{21} & w_{22} & \cdots & w_{2L} \\ \vdots & & & \vdots \\ w_{p1} & w_{p2} & \cdots & w_{pL} \end{bmatrix}, \quad C_\sigma = \begin{bmatrix} c_{10} & c_{11} & \cdots & c_{1M} \\ c_{20} & c_{21} & \cdots & c_{2M} \\ \vdots & & & \vdots \\ c_{p0} & c_{p1} & \cdots & c_{pM} \end{bmatrix}, \quad D_\sigma = \begin{bmatrix} d_{11} & d_{12} & \cdots & d_{1N} \\ d_{21} & d_{22} & \cdots & d_{2N} \\ \vdots & & & \vdots \\ d_{p1} & d_{p2} & \cdots & d_{pN} \end{bmatrix}.$
The output z σ ( k ) of the wavenet is given by
$z_\sigma(k) = u_\sigma^T(k)\, W_\sigma(k)\, \psi_\sigma^T(k),$  (23)
which is passed through the IIR filters to obtain the estimated position y ^ σ ( k ) ,
$\hat{y}_\sigma(k) = \hat{\Gamma}_\sigma(k) + D_\sigma\, \hat{Y}_\sigma(k)\, v_\sigma(k)$  (24)
where
$\hat{\Gamma}_\sigma(k) = C_\sigma\, z_\sigma(k),$  (25)
$\hat{Y}_\sigma(k) = \begin{bmatrix} \hat{y}_1(k-1) & \hat{y}_1(k-2) & \cdots & \hat{y}_1(k-N) \\ \hat{y}_2(k-1) & \hat{y}_2(k-2) & \cdots & \hat{y}_2(k-N) \\ \vdots & & & \vdots \\ \hat{y}_p(k-1) & \hat{y}_p(k-2) & \cdots & \hat{y}_p(k-N) \end{bmatrix}$  (26)
and $v_\sigma(k) = [v_{1\sigma}(k)\ \cdots\ v_{p\sigma}(k)]^T$ is the persistent filter signal.
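The forward pass of Eqs. (17)–(18) can be sketched for a single output channel as follows; the layer sizes, the random initial weights, and the constant persistent signal $v(k) = 1$ are all illustrative assumptions:

```python
import numpy as np
from collections import deque

# Sketch of the wavenet-with-IIR forward pass, Eqs. (17)-(18), for one
# output channel; sizes and initial values are illustrative assumptions.
L, M, N = 5, 3, 2                          # neurons, forward and backward taps
rng = np.random.default_rng(0)
w = rng.normal(size=L)                     # synaptic weights w_l
a = np.ones(L)                             # dilation parameters a_l
b = np.linspace(-1.0, 1.0, L)              # translation parameters b_l
c = rng.normal(size=M + 1)                 # forward IIR coefficients c_l
d = 0.1 * rng.normal(size=N)               # backward IIR coefficients d_j

z_hist = deque([0.0] * (M + 1), maxlen=M + 1)   # z(k), ..., z(k-M)
y_hist = deque([0.0] * N, maxlen=N)             # yhat(k-1), ..., yhat(k-N)

def rasp1(tau):
    return tau / (tau**2 + 1)**2

def forward(u, v=1.0):
    """One evaluation of yhat(k) given input u(k) and persistent signal v(k)."""
    z = np.dot(w, rasp1((u - b) / a))               # Eq. (18): wavelet layer
    z_hist.appendleft(z)
    y = (np.dot(c, np.array(z_hist)) * u            # forward IIR part, times u(k)
         + np.dot(d, np.array(y_hist)) * v)         # backward IIR part, times v(k)
    y_hist.appendleft(y)
    return y

for k in range(10):
    y = forward(np.sin(0.1 * k))
print(y)   # scalar estimate yhat(k) after 10 steps
```

The two deques play the role of the delay lines in Figure 5: the forward taps act on the wavelet output, the backward taps on past estimates.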
The wavenet parameters are optimized by a least-mean-square (LMS) algorithm, minimizing the convex, radially unbounded cost function $E_\sigma$ defined by
$E_\sigma(k) = \frac{1}{2}\sum_{i=1}^{p} e_{i\sigma}(k)^2.$  (27)
Let the estimation error between the wavenet output signal with IIR$_\sigma$ filter and the system output be
$e_{i\sigma}(k) = y_{i\sigma}(k) - \hat{y}_{i\sigma}(k).$  (28)
To minimize $E_\sigma(k)$, the steepest gradient-descent method is considered. To this end, the partial derivatives of $E_\sigma(k)$ with respect to $a_\sigma(k)$, $b_\sigma(k)$, $W_\sigma(k)$, $C_\sigma(k)$ and $D_\sigma(k)$ are required for each haptic device to update the incremental changes of each parameter along its negative gradient direction. That is,
$\Delta W_\sigma(k) = -\dfrac{\partial E_\sigma(k)}{\partial W_\sigma(k)}$  (29)
$\Delta a_\sigma(k) = -\dfrac{\partial E_\sigma(k)}{\partial a_\sigma(k)}$  (30)
$\Delta b_\sigma(k) = -\dfrac{\partial E_\sigma(k)}{\partial b_\sigma(k)}$  (31)
$\Delta C_\sigma(k) = -\dfrac{\partial E_\sigma(k)}{\partial C_\sigma(k)}$  (32)
$\Delta D_\sigma(k) = -\dfrac{\partial E_\sigma(k)}{\partial D_\sigma(k)}$  (33)
then, the parameter updates for each haptic device become:
$W_\sigma(k+1) = W_\sigma(k) + \mu_{W\sigma}\, \Delta W_\sigma(k)$  (34)
$a_\sigma(k+1) = a_\sigma(k) + \mu_{a\sigma}\, \Delta a_\sigma(k)$  (35)
$b_\sigma(k+1) = b_\sigma(k) + \mu_{b\sigma}\, \Delta b_\sigma(k)$  (36)
$C_\sigma(k+1) = C_\sigma(k) + \mu_{C\sigma}\, \Delta C_\sigma(k)$  (37)
$D_\sigma(k+1) = D_\sigma(k) + \mu_{D\sigma}\, \Delta D_\sigma(k)$  (38)
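A minimal numerical sketch of the steepest-descent updates (29) and (34), for the synaptic weights only and neglecting the IIR memory terms (a simplifying assumption): with $\hat{y} = c_0 z u$ and $z = \sum_l w_l \psi_l$, the gradient of $E = e^2/2$ with respect to $w_l$ is $-e\, c_0\, u\, \psi_l$, so the update moves $w$ along $e\, c_0\, u\, \psi_l$.

```python
import numpy as np

# LMS sketch of updates (29)/(34) for the weights W only, identifying a
# hypothetical plant y = 0.5 u; IIR memory is neglected for simplicity.
def rasp1(tau):
    return tau / (tau**2 + 1)**2

rng = np.random.default_rng(1)
L = 8
w = rng.normal(size=L)                     # synaptic weights
b = np.linspace(-2.0, 2.0, L)              # translations
a = np.ones(L)                             # dilations
c0, mu = 1.0, 0.05                         # forward IIR tap and learning rate

def yhat(u):
    return c0 * np.dot(w, rasp1((u - b) / a)) * u

target = lambda u: 0.5 * u                 # assumed plant to identify
errs = []
for epoch in range(200):
    for u in np.linspace(-1.0, 1.0, 21):
        e = target(u) - yhat(u)            # estimation error, Eq. (28)
        w += mu * e * c0 * u * rasp1((u - b) / a)   # steepest-descent step
    errs.append(sum((target(u) - yhat(u))**2 for u in np.linspace(-1.0, 1.0, 21)))
print(errs[0], errs[-1])   # the squared identification error decreases
```

The same chain-rule pattern extends to $a$, $b$, $C$ and $D$ in the full scheme; only the inner derivative changes.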

3.3. Discrete PID Controller for Each Haptic Device

The following discrete PID controller is proposed:
$u_\sigma(k+1) = u_\sigma(k) + k_{p\sigma}(k)\left[\varepsilon_\sigma(k) - \varepsilon_\sigma(k-1)\right] + k_{d\sigma}(k)\left[\varepsilon_\sigma(k) - 2\varepsilon_\sigma(k-1) + \varepsilon_\sigma(k-2)\right] + k_{i\sigma}(k)\,\varepsilon_\sigma(k)$  (39)
where $k_{p\sigma}(k)$, $k_{i\sigma}(k)$ and $k_{d\sigma}(k)$ stand for the strictly positive proportional, integral and derivative feedback gains, respectively; $u_\sigma(k)$ is the control at instant $k$, and the error is defined as
$\varepsilon_\sigma(k) = y_{\mathrm{ref}\sigma}(k) - y_\sigma(k)$  (40)
for $\sigma = \{l, r\}$. Each feedback gain is tuned according to the corresponding error term it multiplies in (39), modulated by $\hat{\Gamma}_\sigma$, the input matrix of (19).
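The velocity-form PID of Eq. (39) can be sketched as a small class; the gains below are fixed illustrative values, whereas in the paper they are auto-tuned online:

```python
# Sketch of the incremental (velocity-form) discrete PID of Eq. (39);
# the gain values are illustrative, not the auto-tuned ones of the paper.
class IncrementalPID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.u = 0.0       # u(k): the controller keeps its own past output
        self.e1 = 0.0      # eps(k-1)
        self.e2 = 0.0      # eps(k-2)

    def step(self, eps):
        """u(k+1) = u(k) + kp*[delta e] + kd*[delta^2 e] + ki*eps, Eq. (39)."""
        self.u += (self.kp * (eps - self.e1)
                   + self.ki * eps
                   + self.kd * (eps - 2.0 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, eps
        return self.u

pid = IncrementalPID(kp=2.0, ki=0.5, kd=0.1)
# Under a constant unit error, the integral term keeps growing the command.
u1, u2, u3 = pid.step(1.0), pid.step(1.0), pid.step(1.0)
print(u1, u2, u3)
```

Note that integrating the increments rather than the error itself is what makes the gains safely adjustable between samples.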

3.4. Auto-Tuning PID Gains

Since the gains $k_{p\sigma}$, $k_{i\sigma}$ and $k_{d\sigma}$ are considered within the cost function (27), they can be updated similarly to (34)–(38). Let
$k_{p\sigma}(k) = k_{p\sigma}(k-1) + \mu_p\, e_\sigma(k)\, \hat{\Gamma}_{i,q}(k)\left[\varepsilon_\sigma(k) - \varepsilon_\sigma(k-1)\right]$  (41)
$k_{i\sigma}(k) = k_{i\sigma}(k-1) + \mu_i\, e_\sigma(k)\, \hat{\Gamma}_{i,q}(k)\,\varepsilon_\sigma(k)$  (42)
$k_{d\sigma}(k) = k_{d\sigma}(k-1) + \mu_d\, e_\sigma(k)\, \hat{\Gamma}_{i,q}(k)\left[\varepsilon_\sigma(k) - 2\varepsilon_\sigma(k-1) + \varepsilon_\sigma(k-2)\right]$  (43)
where $\hat{\Gamma}_\sigma$ is defined by (21), and $0 < \mu < 1$ is the learning rate of the PID controller gains. Notice that the learning rates $\mu$ are designer parameters and are used for both controllers.
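A one-step numerical sketch of the gain-update laws (41)–(43); the error values, the estimated input gain $\hat{\Gamma}$, and the learning rates are all illustrative assumptions:

```python
# Sketch of the auto-tuning laws (41)-(43): each gain moves along the
# identification error e(k), scaled by the estimated input gain Gamma_hat
# and by the same error difference that gain multiplies in the PID law (39).
def tune_gains(kp, ki, kd, e, gamma_hat, eps, eps1, eps2,
               mu_p=1e-3, mu_i=1e-3, mu_d=1e-3):
    kp = kp + mu_p * e * gamma_hat * (eps - eps1)                 # Eq. (41)
    ki = ki + mu_i * e * gamma_hat * eps                          # Eq. (42)
    kd = kd + mu_d * e * gamma_hat * (eps - 2.0 * eps1 + eps2)    # Eq. (43)
    return kp, ki, kd

# One update with illustrative values; here all three increments are positive.
kp, ki, kd = tune_gains(2.0, 0.5, 0.1, e=0.2, gamma_hat=1.5,
                        eps=0.4, eps1=0.3, eps2=0.25)
print(kp, ki, kd)
```

As the identification error $e(k)$ vanishes, the increments vanish and the gains freeze, which is the intended self-tuning behavior.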

3.5. PID Wavenet Controller Algorithm

The proposed PID wavenet Algorithm 1 is summarized as follows:
Algorithm 1: PID Wavenet Controller
1: Read the wavenet parameters: number of neurons $L_\sigma$, learning rates ($\mu_{W\sigma}$, $\mu_{a\sigma}$, $\mu_{b\sigma}$), synaptic weight values $W_\sigma$ and the wavelet $\psi_\sigma$.
2: Read the IIR filter parameters: number of feed-forward and feedback coefficients, $M_\sigma$ and $N_\sigma$, respectively; learning rates ($\mu_{C\sigma}$, $\mu_{D\sigma}$), the IIR coefficient values ($C_\sigma$, $D_\sigma$) and the persistent signal $v_\sigma(k)$.
3: Read the PID controller parameters: gain values ($K_{p\sigma}$, $K_{i\sigma}$, $K_{d\sigma}$) and their update rates ($\mu_{p\sigma}$, $\mu_{i\sigma}$, $\mu_{d\sigma}$).
4: Read the operation parameters: number of epochs $epks$.
5: Initialize the internal parameters and set $k = 0$.
6: while it is working do
7:   for $epk = 0$; $epk \leq epks$; $epk{+}{+}$ do
8:     Get the control signal $u_\sigma(k)$ and the plant output $y_\sigma(k)$, Equations (39) and (9).
9:     Compute the vector $\tau_{L\sigma}(k)$ using (14).
10:    Evaluate the daughter wavelet $\psi_L(\tau_{L\sigma}(k))$ with (13).
11:    Compute the wavelet output $z_\sigma(k)$ with (23).
12:    Compute the estimate $\hat{y}_\sigma(k)$ using (24).
13:    Compute the estimation error using (28).
14:    Compute the energy function $E_\sigma(k)$ with (27).
15:    Compute the parameter gradients $\Delta W_\sigma(k)$, $\Delta a_\sigma(k)$, $\Delta b_\sigma(k)$, $\Delta C_\sigma(k)$, $\Delta D_\sigma(k)$, Equations (29)–(33).
16:    Update the parameter values using their learning rates: $W_\sigma(k+1)$, $a_\sigma(k+1)$, $b_\sigma(k+1)$, $C_\sigma(k+1)$, $D_\sigma(k+1)$; Equations (34)–(38).
17:  end for
18:  Get the parameter $\hat{\Gamma}_\sigma(k)$ and the tracking error $\varepsilon_\sigma(k)$, Equations (25) and (40).
19:  Tune the controller gains $K_{p\sigma}(k)$, $K_{i\sigma}(k)$, $K_{d\sigma}(k)$: Equations (41)–(43).
20:  Calculate the new control signal $u(k+1)$ using (39).
21:  Reassign the new values.
22:  Increase the while-loop counter, $k = k + 1$.
23: end while
The flowchart for the PID wavenet algorithm is illustrated in Figure 6.

4. The Experimental System

The goal of this section is to present the experimental platform as well as the design of experiments.

4.1. Experimental Platform

Consider a Geomagic Touch [38] as the haptic interface for each hand, modeled as a three-degrees-of-freedom nonlinear robot given by (3) and (4); see Figure 7. A PC equipped with an Intel(R) Core(TM) i7-4720HQ CPU at 2.60 GHz, 16 GB RAM and an NVIDIA GeForce GTX 980M graphics card, under Windows 10 and Unity3D (release 22 March 2020), is used. The system deploys a soft real-time thread that updates the whole haptic control loop at h = 1 ms, corresponding to a sampling frequency of 1 kHz, fast enough for the kinesthetic stimulus, while visual rendering is updated at 66 Hz.

4.2. Design of Experiments

The experiments aim at evaluating the competency of solving a maze with motor commands within a given order and precision, involving executive decision making and motor patterns, using the PMT protocol. It is surmised that such motion patterns lead to coordinated bimanual cooperation of both hands that improves under haptic guidance. Then, two experiments are considered: one providing haptic stimuli only when the user touches the boundaries of the maze, and the other with continuous haptic guidance, not only when the user deviates as far as touching the limits of the maze.
To this end, the user solves the maze by commanding the 3D haptic interface, which is represented in the virtual world as the pointer within the virtualized maze, as shown in Figure 8. The middle road of the maze is considered as the position reference; the task of the haptic control is then to converge to such a position reference path, however the user navigates to solve the maze. In this way, the novel paradigm of invariant motor learning is implemented in our scheme: the user tracks at his/her own pace and motor capacity the defined invariant position points $P_i$ shown in Figure 8, i.e., the algorithm does not impose a desired time base, since a desired velocity is neither enforced visually nor imposed through the control scheme. In this way, the user's intentional movement is deployed to solve the maze at will, which is essential for motor rehabilitation.
Two exercises are designed, instrumented by considering two levels of difficulty: (a) low difficulty, represented by a Simple Connection Maze (SCM), where a unique non-branching path solves the maze; and (b) medium difficulty, represented by a Multiple Connection Maze (MCM), where there exist multiple branches and dead ends requiring executive decision making to traverse until reaching the exit.

4.3. How the Haptic Control Occurs and How Human Is Guided Spatially

Figures 9 and 10 show the nominal trajectories for the left- and right-hand SCM and MCM, where $T_i$ represents the nominal transect segments. The ends of each $T_i$ are constant spatial points. Assuming the haptic device pointer is, at any given instant, at a spatial Cartesian location $\xi_r$, the closest $T_i$ is chosen, and the closest point $\xi_{T_i}$ on it is determined as the reference point at that instant, i.e., $y_{\sigma\mathrm{ref}} = \xi_{T_i}$. Then, the controller injects a torque $u_\sigma$ [Nm] into the haptic device to attract $\xi_r \to y_{\sigma\mathrm{ref}}$. In this way, wherever the pointer is, it is attracted to the closest point of transect $T_i$, independently of time and of how fast or slow the user's pointer moves. Since the human fingertip is inserted in the thimble of the haptic device, the human perceives such torque as a haptic force vector $f_h = J(q)^{-T} u_\sigma$ [N].
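The time-free reference selection described above (closest point on the closest transect) can be sketched as follows; the 2D via points and pointer position are illustrative, and the 3D case used on the platform is identical in form:

```python
import numpy as np

# Sketch of the spatial guidance reference of Sec. 4.3: given the pointer
# location xi_r and a polyline of via points P_i, pick the closest point on
# the closest transect T_i as the instantaneous reference (time-independent).
def closest_point_on_segment(p, a, b):
    """Orthogonal projection of p onto segment ab, clamped to its endpoints."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return a + t * ab

def reference_point(p, via_points):
    """y_ref: the closest point over all transects T_i of the polyline."""
    candidates = [closest_point_on_segment(p, via_points[i], via_points[i + 1])
                  for i in range(len(via_points) - 1)]
    return min(candidates, key=lambda c: np.linalg.norm(p - c))

P = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])   # toy path, 2 transects
print(reference_point(np.array([0.4, 0.3]), P))      # closest point lies on T_1
```

Because the reference depends only on the current pointer location, no time base or desired velocity is imposed, consistent with the invariant motor learning paradigm.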
For the MCM exercises, a maze of medium difficulty is shown in Figure 11, whose virtualization was programmed in Unity3D, with a unique solution for both the right and left haptic devices. Now, 20 via points are considered, defining 19 transects $T_i$ starting at $P_1$; see Figures 12 and 13 for the right and left haptic devices, respectively.

5. Experimental Results

A pre-training phase is performed to obtain the initial values of the neural network parameters; see Table 1 and Table 2. This phase is conducted in a human-in-the-loop configuration.

5.1. Experiment 1: Active Haptic Guidance with SCM and MCM

Figure 14 shows the initial position that the user must adopt when starting each of the experiments. Figure 15 shows the virtual navigation behavior of the user in the workspace to solve the SCM bimanually, with smooth position behaviour, as shown in Figure 16.
In this passive navigation configuration, the user exhibits low performance, since haptic guidance is not only scarce but intermittent (the user perceives a force at the fingertip only when touching the walls of the maze).

5.2. Experiment 2: Passive Haptic Guidance with SCM and MCM

The following exercise consists of the implementation and application of a control law for trajectory tracking, from the construction of a desired trajectory through motion planning (Figure 17), a different one for each of the haptic devices integrated in the platform (Figure 18). The experiment consists of each device performing tracking-based regulation with the user in the loop, giving the user visual and force feedback on the planned trajectory, where the applied controller guarantees position convergence; the goal is that all of this can be used for rehabilitation purposes. After the development of exercise 1, Figures 19 and 20 show the position errors of each haptic device, and Figures 21 and 22 show the control signal sent to each device to generate trajectory tracking.

5.3. Exercise 1: Passive Haptic Guidance without User in the Loop

For the next test, passive haptic guidance is applied on the device without user in the loop, Figure 23 shows the results in position convergence and energy exchange.

5.4. Exercise 1: Passive Haptic Guidance with User in the Loop and Disturbances

The following test was performed to check the effectiveness of the implemented controller in compensating the uncertainty and disturbances generated by the user when coupled with the device. Figure 24 shows the moments where disturbances occur, the same instants where there is an increase of energy, which the controller uses to compensate and redirect the device to the desired trajectory.

5.5. Exercise 2: Active Haptic Guidance

This subsection shows the experimental results for the case of active haptic guidance. Figure 25 shows the behavior of the two haptic devices in the workspace, and Figure 26 shows the operational position of the two haptic devices in the active haptic guidance task.

5.6. Exercise 3: Passive Haptic Guidance

The platform with two haptic devices (right hand and left hand) was evaluated in two different experiments (mazes with different levels of difficulty), each maze under two conditions (without control and with control), as defined below: (i) the user uses the two haptic devices to solve a maze in free motion (active haptic guidance), i.e., the user controls his/her own movements. In this condition (without interactive forces), the execution time reflects the performance of each user (different in each hand); visual feedback plays an important role. Compensation of the gravitational force vector is established. The optical encoders of the haptic device allow performance measurement by mapping the vector of joint variables to the operational space. (ii) The user interacts with the platform actively, that is, a tracking control law is implemented with the objective of teaching the user how to solve the maze. The control law has the goal of compensating for the uncertainties generated by the user when performing the task (disturbances and position errors). In these conditions (an adaptive force that conditions the guide in the operational space), it describes a kinesthetic learning task. The performance of each user in the task reflects involuntary deviations from the trajectory and establishes the energy requirement, defined adaptively by the control.
As a result of applying the exercise to the medium-difficulty maze, Figure 27 presents the trajectories in the workspace of the two haptic devices; Figure 28 shows the operational position on each axis (x, y, z) of both devices; and Figures 29 and 30 show the position errors generated from tracking the desired trajectory of the two haptic devices. These graphs show the performance of the controller in passive haptic guidance tasks, in position tracking and convergence.

5.7. PID Wavenet-IIR Parameters

The performance of the PID wavenet control was evaluated based on the convergence time to the desired trajectory. The following figures describe the behavior of the adaptive control implemented on the mazes of exercises 1 and 2. Figures 31 and 32 show the trajectory tracking in the workspace, the estimated response of the plant (haptic device), and the estimation error for exercise 1 and exercise 2, respectively.
Figure 33 and Figure 34 show: (a) the neural network weights W, (b) parameters a and b, where a is the scaling variable, which allows for dilations and contractions; and b is the translation variable, which allows for displacement at instant k, as well as (c) parameters C and D which are the forward and backward coefficients of the IIR filter respectively.
It is observed that all of them change their value in each instant of time of the exercise, as they evolve to the dynamics generated by the user and the region in which the haptic device is located within the workspace.
Figures 35 and 36 correspond to the behavior of the PID gains, auto-tuned online for each degree of freedom of the device.

6. Conclusions and Future Work

6.1. Conclusions

A novel identification and control scheme for 3D nonlinear haptic robotic devices is implemented efficiently based on a wavenet with an IIR filter; it identifies the inverse dynamics in order to tune the PID feedback gains, rather than approximating the dynamics as in usual neural-network-based control. Purposely, this scheme yields self-tuning of the feedback gains in reaction to human interaction and commanded forces, notably without any a priori knowledge of the haptic device, while guaranteeing global asymptotic convergence. Real-time human-in-the-loop bimanual experiments exhibit human cooperative decision making, since both hands maneuver in the same workspace. The proposed scheme is viable for practical implementation, where typically not only is the exact nonlinear dynamics unknown, but the controller must also account for a varying yet persistently exciting human interaction force. The patterns of a clinical test were implemented with a healthy volunteer to assess the usefulness of the platform under real conditions, showing its potential for patients.

6.2. Future Work

The platform was tested with a healthy subject exhibiting velocities and ranges of motion within the regimes expected of patients. The next step is to run a controlled, clinically supervised protocol with patients who have upper limb motor disabilities and require motor rehabilitation. Of particular interest are cerebrovascular accident patients, who also require motor and cognitive coordination, for which virtual maze tests are an option.

Author Contributions

Investigation, J.D.R.-Z., O.A.D.-R. and E.A.E.-C.; Methodology, L.E.R.-V., O.A.D.-R. and A.J.-S.; Software, G.S.-C. and J.D.R.-Z.; Writing—review & editing, V.P.-V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study due to the reason that no medical and psychological measurements were performed.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

All the data recorded during the tasks of active haptic guidance and passive haptic guidance on the human-robot interaction platform can be provided by the authors of this paper upon request.

Acknowledgments

The first author thanks the National Council of Science and Technology (CONACYT)- Mexico for grant Doctorate Scholarship No. 410284. The second author thanks the support of the research department of the Autonomous University of the State of Hidalgo.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Patients under haptic guidance carrying out a maze task. These studies suggest that adaptability of the control strategy promotes rehabilitation evidence of motor patterns.
Figure 2. Block diagram of the bimanual cooperative system, showing the multimodal stimuli to the human; consequently, the human decision-making process solves cooperativeness to issue motor commands simultaneously to each device.
Figure 3. PID Wavenet controller scheme, where σ can be left, l, or right, r, i.e., σ = {l, r}; y_ref^σ(k) is the reference signal, ε^σ(k) stands for the error signal, the control input is u^σ(k), r^σ(k) models the noise signal, y^σ(k) is the HRpI (human patient in the haptic loop) output with ŷ^σ(k) its estimate, e^σ(k) is the estimation error, and finally v^σ(k) stands for the persistence signal.
Figure 4. Diagram of a wavenet neural network with an IIR filter in cascade, where σ can be left, l, or right, r, i.e., σ = {l, r}.
Figure 5. IIR filter structure.
Figure 6. Flowchart of the proposed PID wavenet algorithm.
Figure 7. Experimental platform showing the user solving a virtual maze with his right hand, which commands the (right) haptic interface.
Figure 8. Multiple-branch maze, showing the left-hand (a) and right-hand (b) solution for the SCM exercise.
Figure 9. Transect T_i trajectories for the right haptic device for the MCM test.
Figure 10. Composite task from T_i trajectories for the left haptic device for the MCM test.
Figure 11. Solution to the MCM maze of the first exercise.
Figure 12. Composite task from T_i transects for the right haptic device of exercise 2 (MCM).
Figure 13. Composite task from T_i transects for the left haptic device of exercise 2 (MCM).
Figure 14. Initial operating position of the user.
Figure 15. Workspace trajectory of the two haptic devices in an active haptic guidance task for the SCM.
Figure 16. Operational position of the two haptic devices in an active haptic guidance task for the SCM.
Figure 17. Workspace trajectory of the two haptic devices in a passive haptic guidance task: exercise 1.
Figure 18. Operational position of the two haptic devices in a passive haptic guidance task: exercise 1.
Figure 19. Right haptic device position error: exercise 1.
Figure 20. Left haptic device position error: exercise 1.
Figure 21. Right haptic device control signal: exercise 1.
Figure 22. Left haptic device control signal: exercise 1.
Figure 23. Experimental results with the user in the loop: (a) workspace path, (b) operational position, (c) position error, (d) total energy in the task.
Figure 24. Experimental results with the user in the loop: (a) workspace path, (b) operational position, (c) position error, (d) total energy in the task.
Figure 25. Workspace trajectory of the two haptic devices in an active haptic guidance task: exercise 2.
Figure 26. Operational position of the two haptic devices in an active haptic guidance task: exercise 2.
Figure 27. Workspace trajectory of the two haptic devices in a passive haptic guidance task: exercise 3.
Figure 28. Operational position of the two haptic devices in a passive haptic guidance task: exercise 3.
Figure 29. Position error of the right haptic device: exercise 3.
Figure 30. Position error of the left haptic device: exercise 3.
Figure 31. Performance of the PID wavenet-IIR controller for exercise 1. In this case, the right robot was used, σ = r.
Figure 32. Performance of the PID wavenet-IIR controller for exercise 2, σ = r.
Figure 33. Behavior of the neural network parameters used in the labyrinth of exercise 1.
Figure 34. Behavior of the neural network parameters used in the labyrinth of exercise 2.
Figure 35. Self-tuning of the gains k_p, k_d, and k_i in exercise 1.
Figure 36. Self-tuning of the gains k_p, k_d, and k_i in exercise 2.
Table 1. Proposed values for the wavenet-IIR PID controller.

Parameter        | Value
Neurons, L       | 3
Feed-back, M     | 3
Feed-forward, N  | 2
Epochs, ep_k     | 50
μ_W^σ            | 0.5
μ_a^σ            | 0.5
μ_b^σ            | 0.5
μ_C^σ            | 0.5
μ_D^σ            | 0.5
v^σ(k)           | 0.5
μ_p              | [0.002, 0.002, 0.002]
μ_i              | [0.002, 0.002, 0.002]
μ_d              | [0.004, 0.004, 0.004]
K_p^σ(k)         | [200, 200, 200]
k_i^σ(k)         | [0.018, 0.015, 0.008]
k_d^σ(k)         | [0.3, 0.03, 0.03]
Table 2. Initial values of the wavenet and PID controller parameters.

Parameter | Value
W^σ       | [0.05, 0.05, 0.05; 0.05, 0.05, 0.05; 0.08, 0.08, 0.08]
a^σ       | [302, 55, 14.2]
b^σ       | [8, 9, 10; 8, 9, 10; 8, 9, 10]
C^σ       | [0.18, 0.576, 1.25; 1, 1, 5; 0.5, 0.5, 2.5]
μ_C       | 0.5
D^σ       | [0.43, 1.75; 0.43, 1.75; 0.43, 1.75]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
