Article

Exoscarne: Assistive Strategies for an Industrial Meat Cutting System Based on Physical Human-Robot Interaction

by Harsh Maithani 1,*, Juan Antonio Corrales Ramon 2, Laurent Lequievre 1, Youcef Mezouar 1 and Matthieu Alric 3

1 Université Clermont Auvergne, CNRS, Clermont Auvergne INP, Institut Pascal, F-63000 Clermont-Ferrand, France
2 Centro Singular de Investigación en Tecnoloxías Intelixentes (CiTIUS), Universidade de Santiago de Compostela, 15782 Santiago de Compostela, Spain
3 ADIV, ZAC des Gravanches, 10 Rue Jacqueline Auriol, 63100 Clermont-Ferrand, France
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(9), 3907; https://doi.org/10.3390/app11093907
Submission received: 28 March 2021 / Revised: 16 April 2021 / Accepted: 20 April 2021 / Published: 26 April 2021
(This article belongs to the Special Issue Robotic Platforms for Assistance to People with Disabilities)

Abstract

Musculoskeletal disorders of the wrist are common in the meat industry. We demonstrate a proof of concept of a physical human-robot interaction (pHRI)-based assistive strategy for an industrial meat cutting system, which can later be transferred to an exoskeleton. We discuss how a robot can assist a human in pHRI, specifically in the context of an industrial project for the meat cutting industry. We developed an impedance control-based system that enables a KUKA LWR robot to provide assistive forces to a professional butcher while simultaneously allowing motion of the knife (tool) in all degrees of freedom. We developed two assistive strategies, a force amplification strategy and an intent prediction strategy, and integrated them into an impedance controller.

1. Introduction and Motivation

The agri-food industry, and particularly the meat industry, is one of the most dangerous industries in terms of employee safety. Among its various occupations, slaughtering, cutting and meat processing operations require specific dexterity to handle sharp tools and dangerous machines. They also require physical strength, to carry heavy loads such as pallets, carcasses, quarters or muscles of meat, and to perform deboning tasks and cut quarters into pieces of meat, as well as repetitive movements and work in cold, humid refrigerated rooms. Accidents at work are common and can occur at any time; their rate and frequency are among the highest of all professions combined. In addition to work-related accidents, musculoskeletal disorders (MSDs) accounted for almost 91% of occupational illness cases in 2019 [1], with 842,490 days of temporary interruption of work across all sectors. A study carried out in the Brittany region of France showed that the agri-food industries are the riskiest sectors in terms of MSDs [1]. At the national level in France, 30% of declared MSDs are recorded in the meat sector.
Not only is this a major problem for employees, it is also a significant problem for companies and society at large. In 2019, compensation for MSDs generated two billion euros in fees (social security estimate) in France, with an average of more than 21,000 euros for each MSD stoppage, not counting daily allowances [2]. In some companies with 10 to 20% absenteeism, MSDs disrupt production and generate additional costs. In the meat sector, MSDs involve personnel at all stages: operators involved in meat cutting (boners, parers and slicers), as well as those at manufacturing and packaging stations, are affected [3]. MSDs affecting the wrist, hand and fingers represent approximately 50% of all MSDs in the meat industry.
The arduousness of tasks requiring physical effort, the repetitiveness of movements and the agri-food environment (cold, humidity and hygiene constraints) encountered in the meat sector deter the recruitment of young people, and ultimately few people initially trained for this sector remain in it. This difficulty in recruiting and retaining young people leads to a significant shortage of manpower. A technological evolution of certain workstations, which would on the one hand reduce arduousness, opening to women certain activities that were until now reserved for men, and on the other hand revalue the trades, could change how individuals perceive this sector, give it a positive image and promote its attractiveness.
Faced with these findings, the meat sector could soon integrate assistive and cobotic equipment to encourage companies to improve the quality of life at work of their employees. Cobotics, also known as collaborative robotics, is a technology that uses robotics, mechanics, electronics and cognitive science to assist humans in their tasks. For the meat sector, the advantages of cobotics are numerous:
  • Reduce occupational risks by reducing the arduousness of the operators’ tasks and improving their working conditions.
  • Overcome a major shortage of skilled labour by countering the sector’s poor image and lack of attractiveness, particularly among young people (hardship, low-value jobs, etc.), which will help maintain or even develop jobs in the sector.
  • Improve the competitiveness of companies by reducing the direct and indirect costs of work stoppages and by increasing productivity.
  • Improve product safety by reducing the direct handling of products by operators and by integrating cleaning systems (e.g., in-line sterilization of tools between operations).
In this paper, we propose the development of a proof-of-concept assistive strategy, implemented through a collaborative robot in meat cutting tasks, to reduce musculoskeletal disorders of the wrist in human operators working in the meat industry. The developed impedance control strategy enables a KUKA LWR robot to provide assistive forces to a professional butcher while simultaneously allowing motion of the knife (tool) in all degrees of freedom. Previous robotic systems for autonomous meat handling [4] required one or several robots for performing very specific meat cutting operations. For instance, the ARMS system [5,6,7] was based on the separation of beef shoulder muscles, the GRIBBOT system [8] was applied to chicken breast fillet harvesting, and the DEXDEB system [9,10] was used for ham deboning. These robotic systems could not be reused for other meat handling tasks, since their cutting quality was not sufficient for other types of meat pieces. The newly proposed robotic system (called Exoscarne) works on the principle of pHRI (physical Human-Robot Interaction) to overcome this lack of generalization. On the one hand, the expertise of the skilled butcher is retained, since the cutting trajectory is defined by the human operator, who holds the tool at the same time as the robot. On the other hand, the robot carries the load of the tool and increases the cutting force when touching the meat, so that the effort applied by the human is smaller. This results in greater flexibility for different meat cutting tasks, and the system can adapt itself to on-the-fly decisions made by the user.
Therefore, our new Exoscarne system, which will be described in the next sections (see Section 3 for its software components and Section 4 for its hardware components), is able to:
  • Compensate for the weight of the cutting tool.
  • Permit the user to move the tool in all 6 degrees of freedom.
  • Permit impedance shaping according to the operation being performed by the user.
  • Provide assistive forces during the meat cutting operations as per the user’s convenience.
  • Allow the user to perform the operations autonomously.

2. Background on Robot Assistance

One of the primary motivations for using robots in pHRI is their ability to share physical loads with their human partners. When the physical load of an object is shared or when the object is manipulated for a task, a natural division of effort between the human and the robot occurs: load sharing arises, for example, in human-robot cooperative manipulation [11] or during rehabilitation [12]. Mörtl et al. [13] proposed effort-sharing policies for the load sharing of an object between a human and multiple robots.
Some researchers have focused on the larger question of selecting an ‘assistance strategy’ for a pHRI task. Dumora et al. [14] considered large-object manipulation tasks in pHRI and proposed a library of robot assistances. Medina et al. [15] proposed dynamic selection between model-based and model-free strategies; the selection is based on the concept of disagreement between the human and the robot, which in turn depends on the interaction force.
In the literature, the only example of tool load sharing for a cooperative pHRI task similar to meat cutting is [16], in which a robot assisted manual welding by supporting the weight of the welding equipment. In fact, most existing IADs (Intelligent Assist Devices, i.e., active cobotic systems for human assistance) [17] are used in the automotive industry for the quasi-static collaborative transportation of heavy loads (e.g., motors, doors). However, these solutions are not suitable for the meat industry [18]: they can only assist along specific directions in the workspace [19] (while meat cutting requires complex 6D trajectories), they do not integrate safety solutions for handling dangerous tools (such as the meat cutting knife), and they do not take into account the important non-linear dynamic effects of meat cutting operations [20]. To the best of our knowledge, there is no prior pHRI work that uses a robot to assist a human in cutting meat by combining a classical force amplification strategy (such as in [20]) with an intent prediction module in order to reduce the final forces to be applied by the human operator.

3. Methodology

3.1. pHRI Assistive Strategies

The main interaction controllers for pHRI are impedance and admittance control (see Section 3.2 for their mathematical definitions). Controller stability issues are common with admittance control [21,22]: when a human holds the tool at the end-effector, the result is a coupled system that can lose stability if the human operator stiffens his arm muscles, leading to robot vibrations. Investigating the stability of admittance control is a field in itself, and several heuristic methods exist in the literature [20]. Hence, for this task we chose impedance control, which was possible because the robot has a torque sensor in each joint, enabling torque control; admittance control is preferred for robots that offer only position control, together with an external FT sensor. As the task involved motion in Cartesian space, we used the Cartesian impedance controller explained later. We devised two assistive strategies:
  • Force amplification strategy
In this strategy, we amplify the forces applied by the user on the knife’s handle (see Section 4 for the experimental setup), as detected by the FT sensor and fed into our control scheme. The control diagram is shown in Figure 1. The appropriate parameter $\eta$ has to be determined, as explained in Section 5.3.
  • Intent prediction strategy
In this strategy, we predict the forces to be applied by the user using the RNN-LSTM networks explained in Section 3.3. The control diagram is shown in Figure 2.
Figure 1 and Figure 2 are identical except for the module of their respective assistive strategies. In the force amplification strategy, the amplification module amplifies the human user’s force input $F_h$ at each time step, providing robot assistance $\eta F_h$. In the intent prediction strategy, a trained RNN-LSTM network takes the human user’s force input $F_h$ at each time step, anticipates the user’s input for the next time step $F_{pred}$, and provides this force as robot assistance. In both cases the user can haptically sense the assistance being provided by the robot.
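To make the data flow of the two strategies concrete, the following minimal Python sketch shows how each could map the measured wrench to the assistive wrench handed to the impedance controller as $F_{cmd}$. The function names, the predictor callable and the loop wiring are illustrative assumptions, not the project’s actual code.

```python
import numpy as np
from collections import deque

def amplification_assist(f_h, eta=15.0):
    """Force amplification strategy (Figure 1): assistive wrench = eta * F_h.
    eta is a placeholder value; its tuning is discussed in Section 5.3."""
    return eta * np.asarray(f_h)

def intent_assist(f_window, predictor):
    """Intent prediction strategy (Figure 2): a trained RNN-LSTM maps the
    recent force sequence to the anticipated next-step force F_pred."""
    return predictor(np.asarray(f_window))

def control_cycle(read_ft_wrench, send_feedforward, history, predictor=None):
    """One control cycle: sense F_h from the FT sensor, compute the assistive
    wrench with the selected strategy, and pass it to the Cartesian impedance
    controller as its feed-forward force term."""
    f_h = read_ft_wrench()            # 6D wrench measured below the joystick
    history.append(f_h)               # sliding window kept for the predictor
    if predictor is None:
        f_cmd = amplification_assist(f_h)
    else:
        f_cmd = intent_assist(history, predictor)
    send_feedforward(f_cmd)

# history = deque(maxlen=25)  # e.g., a 25-step window, as in Section 5.4
```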

3.2. Impedance Control

The forward kinematics of a robotic manipulator is written as [23]:
$$x(t) = f(q) \tag{1}$$

where $x(t) \in \mathbb{R}^n$ and $q \in \mathbb{R}^n$ are the pose (i.e., position/orientation) of the end-effector in the Cartesian space and the joint angle coordinates in the joint space, respectively. Differential kinematics is obtained by deriving (1) with respect to time:

$$\dot{x}(t) = J(q)\,\dot{q} \tag{2}$$

where $J(q) \in \mathbb{R}^{n \times n}$ is the Jacobian matrix. Differentiating (2) again results in the acceleration of the end-effector:

$$\ddot{x}(t) = \dot{J}(q)\,\dot{q} + J(q)\,\ddot{q} \tag{3}$$
The robot arm dynamics in the joint space is given by:
$$M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + G(q) = \tau - J^{T}(q)\,F \tag{4}$$

where $M(q) \in \mathbb{R}^{n \times n}$ is the symmetric positive-definite inertia matrix; $C(q,\dot{q})\,\dot{q} \in \mathbb{R}^n$ collects the Coriolis and centrifugal forces; $G(q) \in \mathbb{R}^n$ is the gravitational force; $\tau \in \mathbb{R}^n$ is the vector of control inputs; and $F \in \mathbb{R}^n$ denotes the net force exerted by the robot on the environment at the end-effector ($F = F_r - F_{ext}$). Here, $F_r \in \mathbb{R}^n$ is the force exerted by the robot on the environment at the end-effector, while $F_{ext} \in \mathbb{R}^n$ is the external force exerted by the environment on the robot at the end-effector. In our pHRI task, the environment is the human, specifically the human hand in contact with the robot. The robot dynamics can be written in the Cartesian space as:

$$M_x(q)\,\ddot{x} + C_x(q,\dot{q})\,\dot{x} + G_x(q) = u - F \tag{5}$$

where

$$M_x(q) = J^{-T}(q)\,M(q)\,J^{-1}(q), \qquad C_x(q,\dot{q}) = J^{-T}(q)\left(C(q,\dot{q}) - M(q)\,J^{-1}(q)\,\dot{J}(q)\right)J^{-1}(q),$$
$$G_x(q) = J^{-T}(q)\,G(q), \qquad u = J^{-T}(q)\,\tau, \qquad F = F_r - F_{ext}$$
As explained in Section 3.1, the most common interaction controllers for pHRI are admittance and impedance control. Both controllers are based on a target impedance model for the robot and they only differ in terms of input and output (see Figure 3 and Figure 4). Therefore, the target robot impedance model can be represented as a mass-damper-spring system:
$$M_d(\ddot{x}_d - \ddot{x}) + B_d(\dot{x}_d - \dot{x}) + K_d(x_d - x) = F_d = F_r \tag{6}$$

where $M_d$, $B_d$ and $K_d$ are the virtual inertia, damping and stiffness of the robot, respectively, $F_d$ is the desired force and $x_d$ can be interpreted as the rest position of this virtual mass-damper-spring system. For pHRI tasks in which the human touches the robot, the human limb (arm + hand) can also be modeled as a mass-damper-spring system:

$$M_h\,\ddot{x}_h + B_h\,\dot{x}_h + K_h(x_h - x_{hd}) = F_h \tag{7}$$

where $M_h$, $B_h$ and $K_h$ are the limb inertia, damping and stiffness, respectively, and $F_h$ is the force applied by the human to the robot. The limb impedance values are not fixed: they vary from person to person, as well as with the task being carried out. $x_h$ is the position of the human wrist in the robot frame and $x_{hd}$ can be interpreted as the desired target position. A discussion of the limb impedance parameters and their computation is beyond the purview of this paper. Investigating them would have been necessary had an admittance controller been used, to tackle its potential stability issues; since we use a torque-based impedance controller in our experiments, it does not face such issues. While real-time measurements of limb impedance could be used to interpret the intent of the human operator during the operation, this would have required attaching EMG sensors to the arms of the human operator, as in [24], reducing the practicality of this research for an industrial meat cutting scenario. In our experiments, the human intent is interpreted via the force measurements read by the FT sensor, which stays comfortable and intuitive for the human operator.

When no assistance strategy is required and we want the robot to freely follow the motion of the human, we can set $K_d = 0$ or $x = x_d$. This “direct teaching mode” is essentially based on the elimination of the spring component of the robot impedance model (the “minimum impedance” strategy of Section 5.4) and avoids any restoring forces. If the human holds the tool rigidly, we can assume that the forces are transmitted completely to the robot. If the hand is close enough to the end-effector, we can assume that their positions, velocities and forces are equivalent: $x_d = x_{hd}$, $\dot{x}_h = \dot{x}$ and $\ddot{x}_h = \ddot{x}$.

In admittance control (popularly called position-based impedance control), the input is the force applied by the environment and the output is a displacement/velocity, as shown in Figure 3. In pHRI tasks, the force is applied by the human on the tool at the end-effector ($F_{ext} = F_h$) and is measured by the FT sensor. The difference between the desired force for the robot ($F_d = F_r$) and this external force is applied as input to the impedance model in (6) in order to obtain the desired pose of the end-effector $x_d$ as output:

$$M_d(\ddot{x}_d - \ddot{x}) + B_d(\dot{x}_d - \dot{x}) + K_d(x_d - x) = F_d - F_h \tag{8}$$

On the contrary, impedance control (torque-based impedance control) takes the end-effector displacement/velocity as input and produces the desired force as output, as shown in Figure 4. In this case, the desired robot impedance model is the same as in (6), but the difference between the desired robot force and the external/human force is transformed into joint torques for the robot control.
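To illustrate the admittance data flow (force in, motion out), the following 1-DOF sketch integrates the relation (8) numerically; the gains, the time step and the Euler scheme are illustrative assumptions, with signs taken as written in (8).

```python
# Minimal 1-DOF numerical sketch of the admittance model (8): the measured
# human force F_h drives a virtual mass-damper-spring whose state is the
# desired end-effector motion x_d. Gains are illustrative, not the paper's.
M_d, B_d, K_d = 2.0, 20.0, 0.0   # K_d = 0 reproduces the "direct teaching" mode
DT = 0.001                        # 1 kHz, matching the controller rate

def admittance_step(x_d, v_d, x, v, a, f_h, f_d=0.0):
    """Solve (8) for the desired acceleration and integrate one step
    (semi-implicit Euler), with the sign convention of Equation (8)."""
    a_d = a + (f_d - f_h - B_d * (v_d - v) - K_d * (x_d - x)) / M_d
    v_d = v_d + a_d * DT
    x_d = x_d + v_d * DT
    return x_d, v_d
```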

3.3. Long Short-Term Memory Model

For the intent prediction module, we use RNN-LSTM units [25]. RNNs (Recurrent Neural Networks) are designed for processing sequential data, especially temporal data, as they have an internal memory; they are useful for making predictions from time-series data [23]. RNNs can be improved with so-called LSTM units in order to solve the vanishing gradient problem (i.e., gradients tend to vanish in RNNs when backpropagating errors through too-long sequences). Each LSTM unit has a special structure composed of 3 gates that control what information to keep and what to forget, so that more stable errors are backpropagated (see Figure 5):
  • An input gate (i).
  • An output gate (o).
  • A forget gate (f).
As a result, RNNs with LSTM units are able to learn long-term dependencies within data sequences that plain RNNs cannot capture. Given an input sequence $\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_t$, the LSTM unit maps it to a sequence of hidden states $h_1, h_2, \ldots, h_t$ (which are also the outputs) by passing information through a combination of gates (see Figure 6):
The Input gate (for updating the cell) is:
$$i_t = \sigma_g(W_i\,\bar{x}_t + R_i\,h_{t-1} + b_i) \tag{9}$$
The Forget gate (for resetting the cell/forgetting) is:
$$f_t = \sigma_g(W_f\,\bar{x}_t + R_f\,h_{t-1} + b_f) \tag{10}$$
The Cell candidate (for adding information to the cell) is:
$$g_t = \sigma_c(W_g\,\bar{x}_t + R_g\,h_{t-1} + b_g) \tag{11}$$

where $\sigma_c$ is the state activation function (here, the hyperbolic tangent: $\sigma_c = \tanh$).
The Output gate is:
$$o_t = \sigma_g(W_o\,\bar{x}_t + R_o\,h_{t-1} + b_o) \tag{12}$$

where $\sigma_g$ is the gate activation function (here, the sigmoid: $\sigma_g(x) = (1 + e^{-x})^{-1}$).
The Memory Cell state at timestep t is:
$$c_t = f_t \odot c_{t-1} + i_t \odot g_t \tag{13}$$

Here, $\odot$ is the Hadamard product (element-wise multiplication of vectors). The memory cell selectively retains information from previous timesteps by controlling what to remember via the forget gate $f_t$.
The Hidden state (also called Output state) at time step t is:
$$h_t = o_t \odot \sigma_c(c_t) \tag{14}$$
The hidden state is passed as input to the next timestep, which makes it possible to unroll numerous LSTM units (see Figure 6). $W_i, W_f, W_g, W_o$ are the learnable input weights; $R_i, R_f, R_g, R_o$ are the learnable recurrent weights; and $b_i, b_f, b_g, b_o$ are the learnable biases. By using memory cells and hidden states, LSTM units are able to retain information. The sigmoid function is a good activation function for the 3 gates (input, output and forget) since it outputs a value between 0 and 1. For the memory cell, however, the values should be able to increase or decrease, which is not possible with the sigmoid activation function, whose output is always non-negative. Therefore, the hyperbolic tangent ($\tanh$) is used as the activation function for the memory cell.
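The gate equations (9)–(14) translate directly into a single forward step. The NumPy sketch below is a didactic restatement of those formulas, not the trained network used in the experiments; the weight containers are assumed to be dictionaries keyed by gate name.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # gate activation sigma_g

def lstm_step(x_t, h_prev, c_prev, W, R, b):
    """One LSTM forward step implementing Equations (9)-(14). W, R and b hold
    the learnable input weights, recurrent weights and biases for the input
    (i), forget (f), cell candidate (g) and output (o) blocks."""
    i_t = sigmoid(W['i'] @ x_t + R['i'] @ h_prev + b['i'])   # (9)  input gate
    f_t = sigmoid(W['f'] @ x_t + R['f'] @ h_prev + b['f'])   # (10) forget gate
    g_t = np.tanh(W['g'] @ x_t + R['g'] @ h_prev + b['g'])   # (11) cell candidate
    o_t = sigmoid(W['o'] @ x_t + R['o'] @ h_prev + b['o'])   # (12) output gate
    c_t = f_t * c_prev + i_t * g_t                           # (13) memory cell
    h_t = o_t * np.tanh(c_t)                                 # (14) hidden state
    return h_t, c_t
```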

4. Experimental Setup

4.1. KUKA LWR Robot

The KUKA LWR is a torque-controlled flexible robot with 7 degrees of freedom. The design and control concepts of the robot are discussed in [26,27]. Two control loops run at different rates: the joint position control is implemented at a frequency of 3 kHz (decentralized control) [28], while the inverse kinematics and the Cartesian impedance control mode run at 1 kHz. The KUKA LWR has a torque sensor at each joint, enabling torque control and impedance control, and it has both motor-side and link-side position sensors. Due to friction, it is difficult for robots to implement torque control from motor current commands alone [28].
The default coordinate system of the KUKA LWR is a right-handed system, for which det(R) = 1. The dimensions of the robot can be retrieved from the official manufacturer documentation; using these dimensions we can obtain the DH parameters (see Table 1). To simulate the robot, an alternative representation, the URDF, is given in Table 2, and the corresponding 3D model is shown in Figure 7. The Unified Robot Description Format (URDF) is an XML file format used in ROS to describe all elements of a robot. The robot interacts with the external PC through the Fast Research Interface [29] via three modes: (a) joint position control, (b) joint impedance control and (c) Cartesian impedance control.
To fulfill our objectives we used the Cartesian impedance control mode available on the KUKA LWR robot. The KUKA manual [30] states that the control law of the Cartesian impedance controller is
$$\tau_{cmd} = J^T\left(K_c\,(x_{desired} - x_{current}) + F_{cmd}\right) + D(d_c) + f_{dynamics}(q, \dot{q}, \ddot{q}) \tag{15}$$
where $q \in \mathbb{R}^n$ is the joint position vector, $K_c$ is the stiffness matrix in the end-effector frame, $D(d_c)$ is the normalized damping term in the end-effector frame, and $x_{current}$ and $x_{desired}$ are the current and desired poses of the end-effector, respectively, in the global frame. The translational stiffnesses satisfy $K_x, K_y, K_z \in [0.01, 5000]$ N/m and the rotational stiffnesses $K_{Az}, K_{By}, K_{Cx} \in [0.01, 300]$ Nm/rad.
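For reference, the control law (15) amounts to one matrix expression per control cycle. The NumPy sketch below restates it; the damping and dynamics-compensation terms are opaque placeholders here, since the KUKA controller computes them internally.

```python
import numpy as np

def cartesian_impedance_torque(J, K_c, x_desired, x_current, F_cmd,
                               damping_term, dynamics_term):
    """Commanded joint torque per Equation (15):
    tau_cmd = J^T (K_c (x_desired - x_current) + F_cmd) + D(d_c) + f_dynamics.
    damping_term and dynamics_term stand in for D(d_c) and
    f_dynamics(q, dq, ddq), which the robot computes internally."""
    return J.T @ (K_c @ (x_desired - x_current) + F_cmd) \
           + damping_term + dynamics_term
```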
The KUKA LWR has an inbuilt external tool calibration functionality. Using this feature the robot can account for external tool dynamics as well, thereby enabling gravity compensation for the tool.
In this work we used the KUKA Fast Research Interface (FRI) [29], ROS [31], the Kinematics and Dynamics Library (KDL) [32], the MATLAB toolbox by Peter Corke (RVC) [33] and the MATLAB Robotics Toolbox (RTB) [34]. For the intent prediction strategy, the high-level program was written in MATLAB and connected to the network via ROS (Robot Operating System) using the MATLAB Robotics Toolbox.

4.2. ATI FT Sensors

For the robot to provide assistive forces according to user comfort, we had to measure how much force was being applied by the user. For this, we used two 6-axis ATI Gamma force-torque sensors (see Figure 8): one mounted below the joystick (sensor B) and another mounted on the end-effector of the arm (sensor A, see Figure 9). Both FT (Force-Torque) sensors provide a 6-dimensional wrench in the sensor frame at 1000 Hz.
The relations between the frames associated with these sensors (A and B), the tool (i.e., the tool center point, TCP, at the tip of the knife) and the robot world frame O are given in Table 3 and visualized in Figure 10. To use the impedance controller, all forces must be expressed in the same frame, which requires transforming the sensed forces from the sensor frame to the end-effector frame of the robot. The transformation of a wrench from one frame to another is:
$$\begin{bmatrix} {}^A F_A \\ {}^A M_A \end{bmatrix} = \begin{bmatrix} {}^A_B R & 0 \\ [{}^A t\times]\,{}^A_B R & {}^A_B R \end{bmatrix} \begin{bmatrix} {}^B F_B \\ {}^B M_B \end{bmatrix} \tag{16}$$

where $[t\times] = \begin{bmatrix} 0 & -t_z & t_y \\ t_z & 0 & -t_x \\ -t_y & t_x & 0 \end{bmatrix}$, or, in compact form,

$${}^A F_A = {}^A_B T_f\,{}^B F_B \tag{17}$$
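The transformation (16)–(17) can be sketched as follows in NumPy, assuming the rotation ${}^A_B R$ and the translation ${}^A t$ between the two frames are known from the tool geometry (Table 3):

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t x] used in Equation (16)."""
    return np.array([[0.0,  -t[2],  t[1]],
                     [t[2],  0.0,  -t[0]],
                     [-t[1], t[0],  0.0]])

def transform_wrench(R_ab, t_a, wrench_b):
    """Map a 6D wrench [F; M] from sensor frame B to frame A using the
    force transformation matrix of Equations (16) and (17)."""
    F_b, M_b = wrench_b[:3], wrench_b[3:]
    F_a = R_ab @ F_b
    M_a = skew(t_a) @ (R_ab @ F_b) + R_ab @ M_b
    return np.concatenate([F_a, M_a])
```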

4.3. Allen-Bradley Joystick

The joystick (i.e., the knife handle) is an Allen-Bradley 440J-N enabling switch. It has 8 electrical connections, and when the joystick switch is pushed or released, combinations of these connections are activated, as shown in Figure 11.
The 8 electrical wires from the joystick are connected to an Arduino Uno board, which is in turn connected to a laptop (see Figure 12 for the complete connection diagram). The rosserial ROS package and the ros_lib library are used to integrate the Arduino Uno into the robot network via ROS.
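On the ROS side, reading the decoded button states could look like the rospy sketch below. The topic name, message type and bit layout are illustrative assumptions, as the actual encoding depends on the Arduino firmware.

```python
import rospy
from std_msgs.msg import UInt8  # assumed message type for the button states

def on_joystick(msg):
    """Decode a hypothetical joystick state byte published via rosserial:
    button 1 position in the two low bits, button 2 state in bit 2."""
    button1_position = msg.data & 0b11        # 0, 1 or 2 (dead-man's switch)
    button2_state = (msg.data >> 2) & 0b1     # 0 or 1 (pushbutton)
    rospy.loginfo("B1=%d B2=%d", button1_position, button2_state)

rospy.init_node("joystick_listener")
rospy.Subscriber("/exoscarne/joystick", UInt8, on_joystick)  # hypothetical topic
rospy.spin()
```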
Using this joystick, the user is able to communicate his intention clearly with regard to the meat cutting operation. See Section 5.2 for a more detailed description of how the joystick is used for safe and unobtrusive physical human-robot interaction.

4.4. Meat Cutting Equipment

To fully understand the practical issues in implementing the technical aspects of the project, we first performed meat cutting trials with the robot in gravity compensation mode. A special holding tool was developed to accommodate both the joystick held by the user and the cutting tool. The project involved meat operations on chicken, pork and beef, for which special cutting tools were made. The joystick is an industrial model that we customized for the project, providing a very natural and intuitive user interface. The blades are professional butcher blades, customized for the project by machining in the laboratory. A customized pneumatic hanger was developed for holding the pork vertically. Specific cutting tools and holding apparatus were devised for each meat product or carcass to be cut, as shown in Figure 13.

5. Experiments

The complete network diagram is shown in Figure 14. All the PCs of this network communicate via ROS. The two assistive strategies are implemented on the right-hand laptop, which retrieves FT sensor information from the two middle PCs and sends robot commands to the left-hand PC.
We performed experiments to address the following questions:
  • Which FT sensor should be used as a source of input for the control scheme?
  • What should the impedance shaping strategy performed through the joystick buttons be?
  • What should the amplification factor be for the force amplification strategy?
  • How do the force amplification strategy and the intent prediction strategy compare?

5.1. Comparison of FT Sensors

As shown in Figure 10, there were two FT sensors on the cutting tool. We wanted to confirm whether two FT sensors provide an advantage over a single one, and to determine which sensor is better as a source of input for the control scheme.
A piece of thick foam was used as the material to be cut. This foam reproduces the same type of shearing forces as meat cutting. Two experiments were performed. In the first one the robot was commanded to apply forces gradually from 0 to 50 N in the Y-direction of the world frame, which was parallel to the cutting direction of the knife. In the second experiment, a human cut the foam multiple times in the Y-direction, as shown in Figure 15.
Based on the readings from both sensors, it was observed that for free motion in space (without any cutting), both sensors record the same values. For cutting motion, however, sensor A (see Figure 10) is blind to the cutting forces. Sensor B recorded forces when the robot cut the foam autonomously, but they were feeble, almost negligible compared with the forces commanded to the robot. When a human cut the foam using the impedance controller, sensor B registered adequate forces.
Hence, it is reasonable to consider that the forces sensed by sensor B (below the joystick) capture the forces applied by the human on the knife and thus the user’s intention. During the actual meat cutting experiments, the two sensors were alternated as controller inputs over numerous trials, and the user responded that he preferred the system behaviour with sensor B as the input.

5.2. Impedance Shaping for Cutting

While cutting meat, the user performs an active meat cutting stroke followed by an inactive repositioning of the knife for the next stroke. Furthermore, the user may want to use both hands for some other task and therefore needs the robot to hold its last position. The joystick has two buttons (see Figure 10): a lateral grey button with three possible positions (i.e., a dead-man’s switch) and an upper black button with two states (i.e., a pushbutton). This gives a total of 6 possible combinations, shown in Table 4. The corresponding electrical connections of both buttons are shown in Figure 11.
The button configurations from Table 4 are interpreted in the impedance shaping algorithm as shown in Table 5, where the last two rows are identical. Originally, we tried to use position 2 with an impedance relation (i.e., $K$ and $D$ varying with $F_h$); however, the user preferred to keep the interface simple and make position 1 and position 2 identical in operation. This shows that developing a user-friendly interface is essential for the adoption of pHRI over human-only or robot-only equipment. The constants of Table 5 are listed below (a minimal sketch of this mapping follows the list):
  • $K_{min} = [0.1,\ 0.1,\ 0.1,\ 0.1,\ 0.1,\ 0.1]$
  • $K_{max} = [5000,\ 5000,\ 5000,\ 300,\ 300,\ 300]$
  • $D_{min} = [0.01,\ 0.01,\ 0.01,\ 0.01,\ 0.01,\ 0.01]$
  • $D_{max} = [1.0,\ 1.0,\ 1.0,\ 1.0,\ 1.0,\ 1.0]$
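The mapping of Table 5 then reduces to a small lookup. The sketch below restates it in Python using the constants above; it is an illustration, not the project’s controller code.

```python
import numpy as np

K_MIN = np.array([0.1] * 6)
K_MAX = np.array([5000.0, 5000.0, 5000.0, 300.0, 300.0, 300.0])
D_MIN = np.array([0.01] * 6)
D_MAX = np.array([1.0] * 6)

def shape_impedance(button1_position, button2_state, eta):
    """Impedance shaping per Table 5: map the joystick state to stiffness K,
    damping D and amplification factor eta. Positions 1 and 2 are deliberately
    identical, as preferred by the user."""
    eta_eff = eta if button2_state == 1 else 0.0
    if button1_position == 0:          # robot holds its pose (stiff)
        return K_MAX, D_MAX, eta_eff
    return K_MIN, D_MIN, eta_eff       # free motion: positioning or cutting
```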

5.3. Tuning the Amplification Factor

In the force amplification strategy, we amplify the forces applied by the user on the joystick, as detected by the FT sensor and fed into our control scheme (see Figure 1). For the robot to provide assistive forces, we had to determine the amplification factor $\eta$ for each degree of freedom, as shown in Equation (18):
$$F_{cmd} = \begin{bmatrix} {}^e F_x \\ {}^e F_y \\ {}^e F_z \\ {}^e F_{Az} \\ {}^e F_{By} \\ {}^e F_{Cx} \end{bmatrix} = \operatorname{diag}\left(\eta_x,\ \eta_y,\ \eta_z,\ \eta_{Az},\ \eta_{By},\ \eta_{Cx}\right) \begin{bmatrix} {}^{Be} F_x \\ {}^{Be} F_y \\ {}^{Be} F_z \\ {}^{Be} \tau_z \\ {}^{Be} \tau_y \\ {}^{Be} \tau_x \end{bmatrix} \tag{18}$$
While a high amplification factor would ease the load on the user, it could also give the user the perception that he is no longer in control of the operation and reduce his comfort with the system (as we observed during the experiments). This is why the robot cannot simply apply the highest possible forces.
Tuning comfortable amplification factors was a continuous process in which several iterations of meat cutting were performed by a professional butcher. It was decided collectively that there would be only two amplification factors: a common $\eta_f$ for forces and a common $\eta_\tau$ for torques (i.e., $\eta_f = \eta_x = \eta_y = \eta_z$ and $\eta_\tau = \eta_{Az} = \eta_{By} = \eta_{Cx}$). The iterative tuning process finished when the butcher found the assistance behavior comfortable, as explained in Figure 16.
The meat cutting operation involves the application of both forces and torques; as such, the force amplification factor $\eta_f$ cannot be determined independently of the torque amplification factor $\eta_\tau$. In earlier experiments the user felt that $\eta_\tau = 3$ was ideal for him, and hence, to determine $\eta_f$, a series of consecutive experiments was performed, as listed in Table 6, enabling the user to make a subjective comparison.
At the end of the experiment, it was concluded that the user preferred $\eta_f \in [10, 20]$ and $\eta_\tau = 3$. With $\eta_f = 20$ and $\eta_\tau = 4$, the user found the system too reactive and felt he was no longer in control. These 6 experiments and the temporal evolution of the cutting forces applied by the user with these tuned amplification factors are shown in the next section.

5.4. Meat Cutting with Force Amplification Strategy

As a knife mounted on the end-effector of a robot is inherently dangerous, the knife was always covered with a sheath when not in use to avoid accidental injury. While experiments were being performed, even though the user could stop the robot at any time by releasing the button, a second person always kept a hand on the emergency stop switch to disable the robot in case something went wrong. The user also wore protective gear on his arms. In addition, as a precaution, the robot was wrapped in a plastic sheet to prevent small meat pieces from entering its internal structure.
Experiment 1—Cobot assisted pork cutting
Figure 17 and Figure 18 show the sequence of cuts of Experiment 1 and the corresponding temporal evolution of the XYZ and total forces applied by the user, as measured by sensor B. The terms A cut, B cut, etc. in these figures are author-defined and refer to the iteration of the cutting, not the cutting method. The images in Figure 17 therefore do not represent a single cut in progress but one image of each cut: they can be read as cut 1, cut 2, etc.
Experiment 2—Cobot assisted pork cutting
In Experiment 1, it took the butcher (the user) 4 cuts to make 1 slice. In Experiment 2 (see Figure 19), with the same section but from a different piece of meat, it took him 9 cuts to make 1 slice (see Figure 20 for the corresponding temporal force evolution). This is due to the natural variation in body composition from one animal to another, and also to how much force the user wanted to apply in a single cut.
Experiment 3—Cobot assisted pork cutting
Figure 21 shows the 5 cuts of Experiment 3 while Figure 22 represents the temporal evolution of the forces exerted by the human during those cuts while assisted by the force amplification strategy.
Experiment 4—Manual pork cutting
Figure 23 shows the 9 cuts of Experiment 4 and Figure 24 represents the corresponding temporal evolution of the forces applied by the human.
Experiment 5—Manual pork cutting
Figure 25 shows the 5 cuts of Experiment 5 and Figure 26 represents the corresponding temporal evolution of the forces applied by the human.
Experiment 6—Manual pork cutting
Finally, Figure 27 shows the 7 cuts of Experiment 6 and Figure 28 represents the corresponding temporal evolution of the forces applied by the human.
Experiment 7—Foam cutting with intent prediction module
For the previous meat cutting experiments with the force amplification strategy, the user was a professional butcher. For verifying the intent prediction strategy, however, we used a foam block and multiple users in order to train the LSTM network (see Figure 29 for the corresponding experimental setup). This foam reproduces the same type of shearing forces as meat cutting.
As explained earlier, our intent prediction module uses RNN-LSTM units. For each user, we performed sample trials with foam cutting to collect the training dataset. The LSTM network was trained on this dataset to predict force values, similar to the prediction of force values of the Natural Motion dataset (NM-F) in [23].
For each user, we took 90 percent of the sample dataset as the training dataset. The architecture consisted of 4 layers: an input layer, an LSTM layer, a fully connected layer and a regression layer. We tested the prediction accuracy with combinations of different hyperparameters, such as the number of epochs, the sequence length and the learning rate; the results were almost the same, i.e., a prediction accuracy of 0.3 (root mean square error).
The number of features was 1, as we had a single input variable, the applied force. Similarly, as only one output was expected, the number of responses was 1. The number of hidden units was 200 and the maximum number of epochs was set to 250. The sequence length for the input layer was 25 time steps (0.2 s). For training, we used the stochastic gradient descent algorithm with a learning rate of 0.01. As discussed in Section 3.3, the sigmoid function was used as the activation function for the three gates (input, output and forget) of the LSTM units, since it outputs a value between 0 and 1, while the hyperbolic tangent (tanh) was used as the activation function for the memory cell, whose values must be able to both increase and decrease.
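The network itself was built and trained in MATLAB; purely for illustration, an equivalent sketch with the same architecture and hyperparameters could look as follows in PyTorch (the synthetic tensors stand in for the per-user foam-cutting dataset).

```python
import torch
import torch.nn as nn

class ForcePredictor(nn.Module):
    """Input -> LSTM (200 hidden units) -> fully connected -> regression
    output, mirroring the 4-layer architecture described above."""
    def __init__(self, hidden=200):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)    # single response: next-step force

    def forward(self, x):                 # x: (batch, 25, 1) force sequences
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])     # predict the force at the next step

model = ForcePredictor()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # SGD, lr = 0.01
loss_fn = nn.MSELoss()   # RMSE is reported in the paper; MSE is minimized here

# Synthetic stand-in data; the real dataset is 90% of each user's trials.
seqs, targets = torch.randn(64, 25, 1), torch.randn(64, 1)

for epoch in range(250):                  # maximum number of epochs, as above
    optimizer.zero_grad()
    loss = loss_fn(model(seqs), targets)
    loss.backward()
    optimizer.step()
```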
Figure 30 shows the cutting forces applied by a user with and without the intent prediction module (for a 30 cm cut of the foam). When the intent prediction module was turned off, we had $K_{min}$ and $D_{min}$ only (i.e., minimum impedance). With the intent prediction module, the user applied only 20 percent of the forces needed when the module was turned off.
We also compared the force amplification strategy with the intent prediction strategy with 5 users, as shown in Figure 31. For each user, sample trials were conducted with the force amplification strategy so that they could decide which amplification factor they were comfortable with. All the users stated that the intent prediction module made the cutting more intuitive than the force amplification strategy.

6. Conclusions and Future Work

In this paper, we demonstrated a proof of concept of two pHRI-based assistive strategies for an industrial meat cutting system. It was determined that sensor B, below the joystick, gives better reactivity, and hence this sensor should be used as the controller input. From the experiments, it is evident that the forces applied by the user are approximately 30% smaller with the cobot and the force amplification strategy than with manual meat cutting. Blades with lengths of 20 cm and 10 cm were tested, and the 10 cm blade was judged to give better cutting performance because it was stiffer, with less bending and mechanical compliance.
Both the Cartesian impedance controller and the FT sensor run at 1000 Hz. At this frequency, the user found it intuitive to operate the tool for the meat cutting task without any issue. However, the human central nervous system operates at a lower frequency than a robot controller, and a user coupled to the system via a tool can ’perceive’ a loss of control if the system is too reactive; this observation capped the upper limit of the amplification in the force amplification strategy.
In the foam cutting experiment, it was shown that the intent prediction strategy was better than the force amplification strategy, especially with regard to intuitiveness. In a pHRI system such as this one, not only should the robot provide assistance as a machine, but the interface should also be intuitive, natural and easy to use.
Our contributions are:
  • We followed a systematic methodology to develop a user-friendly pHRI controller system for meat cutting.
  • We developed two assistive strategies: a force amplification strategy and an intent prediction strategy.
  • The developed system allowed the user to move the knife in all 6 degrees of freedom.
  • Using impedance shaping, the impedance values were altered between the cutting and the non-cutting repositioning movements of the knife, which the user found comfortable.
For future work, we would like to compare the force amplification strategy and the intent prediction strategy on meat cutting tasks. Furthermore, the current experiments involved only one professional butcher, and it would be interesting to see the experimental results with more professional users. The next step of the Exoscarne project will involve the design and development of a specific exoskeleton for meat cutting and the transfer of the controller developed in this work.

Author Contributions

Conceptualization: H.M., J.A.C.R., Y.M.; methodology: H.M., J.A.C.R., L.L., Y.M., M.A.; software, H.M., L.L.; validation and analysis: H.M., M.A.; writing—original draft preparation: H.M.; writing—review and editing: J.A.C.R., L.L., Y.M., M.A.; supervision: J.A.C.R., Y.M.; funding acquisition: J.A.C.R., Y.M., M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received funding from the French government research program Investissements d‘Avenir through the UMTs ACTIA Mécarnéo-AgRobErgo and the project Exoscarne (Call P3A-ICF2A-2I2A, FranceAgriMer) and from the European Union’s Horizon 2020 research and innovation programme under grant agreement nº 869855 (SoftManBot project).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
pHRI    Physical human-robot interaction
MSD     Musculoskeletal disorder
ARMS    A multi-arms Robotic system for Muscle Separation
RNN     Recurrent neural network
LSTM    Long Short-Term Memory
ROS     Robot Operating System
FT      Force-Torque
URDF    Unified Robot Description Format
FRI     Fast Research Interface
DH      Denavit-Hartenberg
LWR     Light Weight Robot
KDL     Kinematics and Dynamics Library
IAD     Intelligent Assist Device

References

  1. Statistiques Accidents du Travail et Maladies Professionnelles; Carsat Bretagne: Rennes, France, 2020.
  2. Rapport Annuel 2019, L’Assurance Maladie-Risques Professionels, Éléments Statistiques et Financiers; Caisse nationale de l’Assurance Maladie des travailleurs salariés: Paris, France, 2020.
  3. État de Santé des Salariés de la Filière Viande du Régime Agricole en Bretagne; Institut de Veille Sanitaire: Saint-Maurice, France, 2019.
  4. De Medeiros Esper, I.; From, P.J.; Mason, A. Robotisation and intelligent systems in abattoirs. Trends Food Sci. Technol. 2021, 108, 214–222. [Google Scholar] [CrossRef]
  5. Long, P.; Khalil, W.; Martinet, P. Modeling control of a meat-cutting robotic cell. In Proceedings of the 2013 16th International Conference on Advanced Robotics (ICAR), Montevideo, Uruguay, 25–29 November 2013; pp. 1–6. [Google Scholar] [CrossRef]
  6. Long, P.; Khalil, W.; Martinet, P. Robotic cutting of soft materials using force control image moments. In Proceedings of the 2014 13th International Conference on Control Automation Robotics Vision (ICARCV), Singapore, 10–12 December 2014; pp. 474–479. [Google Scholar] [CrossRef]
  7. Long, P.; Khalil, W.; Martinet, P. Force/vision control for robotic cutting of soft materials. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 4716–4721. [Google Scholar] [CrossRef] [Green Version]
  8. Misimi, E.; Øye, E.R.; Eilertsen, A.; Mathiassen, J.R.; Åsebø, O.B.; Gjerstad, T.; Buljo, J.; Skotheim, Ø. GRIBBOT—Robotic 3D vision-guided harvesting of chicken fillets. Comput. Electron. Agric. 2016, 121, 84–100. [Google Scholar] [CrossRef] [Green Version]
  9. Wei, G.; Stephan, F.; Aminzadeh, V.; Dai, J.S.; Gogu, G. DEXDEB—Application of DEXtrous Robotic Hands for DEBoning Operation. In Gearing up and Accelerating Cross-Fertilization Between Academic and Industrial Robotics Research in Europe; Röhrbein, F., Veiga, G., Natale, C., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 217–235. [Google Scholar]
  10. Alric, M.; Stephan, F.; Sabourin, L.; Subrin, K.; Gogu, G.; Mezouar, Y. Robotic solutions for meat cutting and handling. In European Workshop on Deformable Object Manipulation; Innorobo: Lyon, France, 2014. [Google Scholar]
  11. Lawitzky, M.; Mörtl, A.; Hirche, S. Load sharing in human-robot cooperative manipulation. In Proceedings of the 19th International Symposium in Robot and Human Interactive Communication, Viareggio, Italy, 13–15 September 2010; pp. 185–191. [Google Scholar] [CrossRef] [Green Version]
  12. Reinkensmeyer, D.J.; Wolbrecht, E.; Bobrow, J. A computational model of human-robot load sharing during robot-assisted arm movement training after stroke. In Proceedings of the 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Lyon, France, 22–26 August 2007; pp. 4019–4023. [Google Scholar]
  13. Mörtl, A.; Lawitzky, M.; Kucukyilmaz, A.; Sezgin, M.; Basdogan, C.; Hirche, S. The role of roles: Physical cooperation between humans and robots. Int. J. Robot. Res. 2012, 31, 1656–1674. [Google Scholar] [CrossRef] [Green Version]
  14. Dumora, J.; Geffard, F.; Bidard, C.; Aspragathos, N.A.; Fraisse, P. Robot assistance selection for large object manipulation with a human. In Proceedings of the 2013 IEEE International Conference on Systems, Man, and Cybernetics, Manchester, UK, 13–16 October 2013; pp. 1828–1833. [Google Scholar]
  15. Medina, J.R.; Lawitzky, M.; Molin, A.; Hirche, S. Dynamic strategy selection for physical robotic assistance in partially known tasks. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 1180–1186. [Google Scholar]
  16. Erden, M.S.; Marić, B. Assisting manual welding with robot. Robot. Comput. Integr. Manuf. 2011, 27, 818–828. [Google Scholar] [CrossRef]
  17. Colgate, J.E.; Peshkin, M.; Klostermeyer, S.H. Intelligent assist devices in industrial applications: A review. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), Las Vegas, NV, USA, 27–31 October 2003; 3, pp. 2516–2521. [Google Scholar] [CrossRef]
  18. Paxman, J.; Liu, D.; Wu, P.; Dissanayake, G. Cobotics for Meat Processing: An Investigation into Technologies Enabling Robotic Assistance for Workers in the Meat Processing Industry; Technical Report; University of Technology: Sydney, Australia, 2006. [Google Scholar]
  19. Campeau-Lecours, A.; Otis, M.J.D.; Gosselin, C. Modeling of physical human–robot interaction: Admittance controllers applied to intelligent assist devices with large payload. Int. J. Adv. Robot. Syst. 2016, 13. [Google Scholar] [CrossRef] [Green Version]
  20. Lamy, X.; Collédani, F.; Geffard, F.; Measson, Y.; Morel, G. Overcoming human force amplification limitations in comanipulation tasks with industrial robot. In Proceedings of the 2010 8th World Congress on Intelligent Control and Automation, Jinan, China, 7–9 July 2010; pp. 592–598. [Google Scholar] [CrossRef]
  21. Duchaine, V.; Gosselin, C.M. Investigation of human-robot interaction stability using Lyapunov theory. In Proceedings of the 2008 IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, 19–23 May 2008; pp. 2189–2194. [Google Scholar]
  22. Dimeas, F.; Aspragathos, N. Online stability in human-robot cooperation with admittance control. IEEE Trans. Haptics 2016, 9, 267–278. [Google Scholar] [CrossRef] [PubMed]
  23. Maithani, H.; Ramon, J.A.C.; Mezouar, Y. Predicting Human Intent for Cooperative Physical Human-Robot Interaction Tasks. In Proceedings of the 2019 IEEE 15th International Conference on Control and Automation (ICCA), Edinburgh, UK, 6–9 July 2019; pp. 1523–1528. [Google Scholar] [CrossRef] [Green Version]
  24. Grafakos, S.; Dimeas, F.; Aspragathos, N. Variable admittance control in pHRI using EMG-based arm muscles co-activation. In Proceedings of the 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, Hungary, 9–12 October 2016; pp. 001900–001905. [Google Scholar]
  25. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
  26. Albu-Schäffer, A.; Haddadin, S.; Ott, C.; Stemmer, A.; Wimböck, T.; Hirzinger, G. The DLR lightweight robot: Design and control concepts for robots in human environments. Ind. Robot. Int. J. 2007, 34, 376–385. [Google Scholar] [CrossRef] [Green Version]
  27. Bischoff, R.; Kurth, J.; Schreiber, G.; Koeppe, R.; Albu-Schäffer, A.; Beyer, A.; Eiberger, O.; Haddadin, S.; Stemmer, A.; Grunwald, G.; et al. The KUKA-DLR Lightweight Robot arm-a new reference platform for robotics research and manufacturing. In Proceedings of the ISR 2010 (41st International Symposium on Robotics) and ROBOTIK 2010 (6th German Conference on Robotics), VDE, Munich, Germany, 7–9 June 2010; pp. 1–8. [Google Scholar]
  28. Albu-Schaffer, A.; Hirzinger, G. Cartesian impedance control techniques for torque controlled light-weight robots. In Proceedings of the 2002 IEEE International Conference on Robotics and Automation (Cat. No. 02CH37292), Washington, DC, USA, 11–15 May 2002; 1, pp. 657–663. [Google Scholar]
  29. Schreiber, G.; Stemmer, A.; Bischoff, R. The fast research interface for the kuka lightweight robot. In Proceedings of the IEEE Workshop on Innovative Robot Control Architectures for Demanding (Research) Applications How to Modify and Enhance Commercial Controllers (ICRA 2010), Anchorage, AK, USA, 3–7 May 2010; pp. 15–21. [Google Scholar]
  30. KUKA LWR Document Manual; KUKA Roboter GmbH: Augsburg, Germany, 2010.
  31. Quigley, M.; Conley, K.; Gerkey, B.; Faust, J.; Foote, T.; Leibs, J.; Wheeler, R.; Ng, A.Y. ROS: An open-source Robot Operating System. In ICRA Workshop on Open Source Software; IEEE ICRA 2009: Kobe, Japan, 2009; Volume 3, p. 5. [Google Scholar]
  32. Orocos Kinematics and Dynamics Library (KDL) Documentation. Available online: http://docs.ros.org/indigo/api/orocos_kdl/html/ (accessed on 21 April 2021).
  33. Corke, P. Robotics, Vision and Control: Fundamental Algorithms in MATLAB® Second Edition, Completely Revised; Springer: Berlin/Heidelberg, Germany, 2017; Volume 118. [Google Scholar]
  34. MATLAB Robotics System Toolbox Documentation. Available online: https://in.mathworks.com/help/robotics/ (accessed on 21 April 2021).
Figure 1. Impedance controller diagram with amplification module for the force amplification strategy.
Figure 2. Impedance controller diagram with intent prediction module for the intent prediction strategy.
Figure 3. Admittance control (position-based impedance control).
Figure 4. Torque-based impedance control.
Figure 5. A single LSTM unit.
Figure 6. An unrolled Recurrent Neural Network with LSTM units.
Figure 7. Exoscarne 3D simulated model with the KUKA LWR arm in Home configuration.
Figure 8. Photos of the force-torque (FT) sensor.
Figure 9. Attaching the FT sensor A to the KUKA LWR 4+ robot end-effector.
Figure 10. Visualization of all frames relating the tool (knife) with the FT sensors A/B.
Figure 11. Allen-Bradley joystick used as the handle of the knife.
Figure 12. Arduino circuit diagram for connecting the joystick to a laptop.
Figure 13. Exoscarne equipment.
Figure 14. Network diagram of the Exoscarne system. From left to right: control box of the KUKA LWR 4+ robot connected to one PC with FRI/KDL libraries; data acquisition boxes for FT sensors A and B connected to two PCs; and an Arduino board for joystick acquisition connected to a laptop.
Figure 15. Comparison of sensor A and sensor B.
Figure 16. Iterative procedure for tuning the two amplification factors for forces and torques.
Figure 17. Experiment 1: cobot-assisted pork cutting.
Figure 18. Forces applied by the user in Experiment 1.
Figure 19. Experiment 2: cobot-assisted pork cutting.
Figure 20. Forces applied by the user in Experiment 2.
Figure 21. Experiment 3: cobot-assisted pork cutting.
Figure 22. Forces applied by the user in Experiment 3.
Figure 23. Experiment 4: manual pork cutting.
Figure 24. Forces applied by the user in Experiment 4.
Figure 25. Experiment 5: manual pork cutting.
Figure 26. Forces applied by the user in Experiment 5.
Figure 27. Experiment 6: manual pork cutting.
Figure 28. Forces applied by the user in Experiment 6.
Figure 29. Foam cutting along the global y-direction.
Figure 30. Cutting forces applied by the user using the intent prediction module for a 30 cm cut of the foam. The blue line is the force applied with $K_{min}$ and $D_{min}$ only (i.e., minimum impedance), while the red line is the force applied with $K_{min}$, $D_{min}$ and the intent prediction module.
Figure 31. Comparison of different strategies with 5 users.
Table 1. DH parameters of the KUKA LWR 4+ robot.

Joint (axis) | $d_i$ (m) | $q_i$ (rad) | $a_i$ | $\alpha_i$ (rad) | $q_{min}$ (deg) | $q_{max}$ (deg) | $\tau_{max}$ (Nm)
J1 (A1) | 0.3105 | $q_1$ | 0 | $\pi/2$ | −170 | 170 | 176
J2 (A2) | 0 | $q_2$ | 0 | $\pi/2$ | −120 | 120 | 176
J3 (E1) | 0.4 | $q_3$ | 0 | $\pi/2$ | −170 | 170 | 100
J4 (A3) | 0 | $q_4$ | 0 | $\pi/2$ | −120 | 120 | 100
J5 (A4) | 0.39 | $q_5$ | 0 | $\pi/2$ | −170 | 170 | 100
J6 (A5) | 0 | $q_6$ | 0 | $\pi/2$ | −120 | 120 | 38
J7 (A6) | 0.078 | $q_7$ | 0 | 0 | −170 | 170 | 38
Table 2. URDF description of the KUKA LWR 4+ robot.

Joint | x (m) | y (m) | z (m) | rpy | Axis
A1 | 0 | 0 | 0.11 | 0 0 0 | [0 0 1] (+z axis)
A2 | 0 | 0 | 0.2005 | 0 0 0 | [0 −1 0] (−y axis)
E1 | 0 | 0 | 0.2 | 0 0 0 | [0 0 1] (+z axis)
A3 | 0 | 0 | 0.2 | 0 0 0 | [0 1 0] (+y axis)
A4 | 0 | 0 | 0.2 | 0 0 0 | [0 0 1] (+z axis)
A5 | 0 | 0 | 0.19 | 0 0 0 | [0 −1 0] (−y axis)
A6 | 0 | 0 | 0.078 | 0 0 0 | [0 0 1] (+z axis)
Table 3. Joystick frames when the robot is in the Home configuration. Sensor A is always aligned with the robot end-effector frame.

Robot World Frame | Sensor A (FT13855) | Sensor B (FT13953) | Relation
$X_o$ | $X_A$ | $-Z_B$ | $X_o \parallel X_A \parallel -Z_B$
$Y_o$ | $Y_A$ | $X_B$ | $Y_o \parallel Y_A \parallel X_B$
$Z_o$ | $Z_A$ | $Y_B$ | $Z_o \parallel Z_A \parallel Y_B$
Table 4. States of the joystick buttons.

Button 1 \ Button 2 | State 0 | State 1
Position 0 | Robot is stiff + no amplification | Robot is stiff + amplification
Position 1 | Robot is free, no amplification (used for positioning the knife) | Robot is free + amplification (used for cutting)
Position 2 | Robot is free, no amplification (used for positioning the knife) | Robot is free + amplification (used for cutting)
Table 5. Impedance shaping using the joystick.

Button 1 \ Button 2 | State 0 | State 1
Position 0 | $K_{max}$, $D_{max}$, $\eta = 0$ | $K_{max}$, $D_{max}$, $\eta$
Position 1 | $K_{min}$, $D_{min}$, $\eta = 0$ (used for positioning the knife) | $K_{min}$, $D_{min}$, $\eta$ (used for cutting)
Position 2 | $K_{min}$, $D_{min}$, $\eta = 0$ (used for positioning the knife) | $K_{min}$, $D_{min}$, $\eta$ (used for cutting)
Table 6. Determining the force amplification factor $\eta_f$.

S.No. | Meat | Part | Side | Operation | FT Sensor Input | Force Amplification $\eta_f$ | Torque Amplification $\eta_\tau$
1 | Pork | Square | Right | Cutting | B | 20 | 3
2 | Pork | Square | Right | Cutting | B | 20 | 3
3 | Pork | Square | Right | Cutting | B | 15 | 3
4 | Pork | Square | Right | Cutting | B | 15 | 3
5 | Pork | Square | Right | Cutting | B | 10 | 3
6 | Pork | Square | Right | Cutting | B | 20 | 3
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
