Experimental Study on Tele-Manipulation Assistance Technique Using a Touch Screen for Underwater Cable Maintenance Tasks

Abstract: In underwater environments restricted from human access, many intervention tasks are performed using robotic systems such as underwater manipulators. Commonly, these robotic systems are tele-operated from operating ships; the operation tends to be inefficient because of restricted underwater information and complex operation methods. In this paper, an assistance technique for tele-manipulation is investigated and evaluated experimentally. The key idea behind the assistance technique is to operate the manipulator by touching several points on the camera images. To implement this idea, a position estimation technique utilizing the touch inputs is investigated. The assistance technique is simple but significantly helpful for increasing the temporal efficiency of tele-manipulation in underwater tasks. Using URI-T, a cable burying ROV (Remotely Operated Vehicle) developed in Korea, the performance of the proposed assistance technique is verified. The underwater cable gripping task, one of the cable maintenance tasks carried out by the cable burying ROV, is employed for the performance evaluation, and the experimental results are analyzed statistically. The results show that the assistance technique can improve the efficiency of tele-manipulation considerably in comparison with the conventional tele-operation method.


Introduction
In underwater environments restricted from human access, many tasks have been performed by using underwater robots [1,2]. The tasks involve the installation and repair of underwater cables, e.g., High Voltage Direct Current (HVDC) cables between the mainland and islands [3], underwater communication cables [4], and underwater cables for ocean energy developments such as offshore wind turbines [5]. When installed, the cables are commonly buried below the seabed for protection. For repair, the underwater cable is cut and recovered to the ship [6]. The tasks regarding underwater cables are usually performed in the deep sea, which is inaccessible to people, and require a high amount of power. Thus, large underwater robots for heavy-duty tasks are used.
URI-T is a heavy-duty Remotely Operated Vehicle (ROV) developed in South Korea for underwater cable burying and maintenance tasks [7]. As shown in Figure 1, URI-T has cable detecting systems and water-jetting systems for cable burying tasks. Equipped with manipulators and tools, URI-T is also used for cable maintenance tasks: cable cutting and cable gripping for recovery. Like other underwater ROVs, URI-T performs the tasks by tele-operation from the ship. Performing underwater tasks, especially manipulation tasks, by tele-operation tends to be inefficient for several reasons. One reason is the lack of underwater information: the operators have to understand the underwater situation from several camera images and sensors. The viewpoints of the cameras are limited and the cameras only provide 2D images, so the operators have to imagine the 3D situation from this limited information [2,8,9]. Another reason is the complexity of the operation. Commonly, underwater manipulators are tele-operated with joint-level commands: using the commanding device, the operators have to generate the command for every joint motion of the manipulator. Thus, skilled operators are required to perform underwater tasks by tele-operation.
Several research works have attempted to improve the efficiency of underwater manipulation [10,11]. Some of these works rely on autonomy: autonomous manipulation based on recognizing the object with several sensors. They focused on how to obtain the position of the object automatically in underwater situations, e.g., sonar sensor based techniques [12,13], vision based approaches [14,15], deep learning with vision images [16], and laser scanner based methods [17,18]. Schemes based on autonomy can make the operation more comfortable; however, such schemes may not be well accepted by operators, because underwater tasks demand high reliability and autonomy may involve risks of malfunction [19]. Other research works have sought to improve the efficiency of tele-operation by assisting with guidance techniques or perceptual improvements: augmented reality [20,21] or virtual reality [22] to improve the visual feedback or to guide the manipulation with virtual information, virtual guidance techniques to avoid collision with the environment [23], real-time collision detection algorithms to improve human perception in underwater environments with poor visibility [24], and shared tele-operation techniques using model based learning [25,26].
In this paper, an assistance technique to improve the efficiency of underwater manipulation is studied. The main idea behind the proposed technique is to tele-operate the manipulator by touching several points on the camera images via a touch screen, which helps alleviate the mental burden of the operators. The proposed technique can be distinguished from the autonomy in the previous research works because the initial information for the assistance is obtained reliably from the operators. The proposed technique focuses on assisting the gross motions of the manipulator, such as approaching the object, while the conventional tele-operation is still used for dexterous motions such as handling objects. The main design issues of the proposed assistance technique can be categorized as follows:
• object position estimation using inputs via the touch screen, and
• a control structure for assisted tele-operation utilizing the touch based position estimation.
Regarding the position estimation, a six degree-of-freedom (DOF) position estimation technique by utilizing several inputs on the camera images via touch screen is studied. The touching process can alleviate mental loads of operators from commanding every joint of the manipulators. Moreover, it is easy to guarantee the reliability on the estimation results of object positions because the initial information for the estimation is given by the operators. Thus, the proposed method can reduce the risk of malfunction that may be involved in the automatic estimation technique. To implement the assistance technique, an appropriate control structure is also investigated. The control structure involves the switching mode of the controller between the touch screen based operation and conventional tele-operation.
The performance of the proposed assistance technique is verified through experimental studies with tasks for underwater cable maintenance using URI-T. By performing the cable gripping experiment and statistical analysis, the validity of the assistance technique is compared with the conventional tele-operation.

Cable Maintenance Using URI-T
URI-T is a heavy-duty ROV for underwater cable burial and cable maintenance. As shown in Figure 1, URI-T is equipped with two foldable water-jetting arms and cable detection sensors, TSS350 and TSS440, for burying underwater cables and for verifying the cable burial state [27]. URI-T is designed to bury cables to a depth of 3.0 m using two 300 HP water pumps, at seabeds of up to 2500 m water depth.
URI-T is also equipped with manipulators and tools for the cable maintenance tasks, as shown in Figure 2. Cable maintenance tasks are performed by using the manipulators with appropriate tools. When performing cable gripping tasks, for example, the operators deploy the gripping tool on the cable by using the manipulator and execute the gripping tool to grip the cable. Figure 2 also shows a couple of snapshots of cable maintenance experiments using URI-T at the sea-ground. When performing cable maintenance tasks, the manipulators and the tools are tele-operated from the operating room on the ship. The tele-operation is a significant burden for the operators because they have to recognize the underwater situation from the limited information of cameras and sensors. As shown in Figure 3, there are several cameras in URI-T; however, since all cameras have to be installed on the robot itself, all camera views are restricted to the forward direction, and it is not possible to install cameras monitoring from the side or from behind. Moreover, using the commanding device, the operators have to generate every joint command to tele-operate the manipulator. As a result, the tele-operation demands considerable concentration from the operators; it tends to be inefficient and requires well-experienced operators.
For efficient tele-operation, in this paper, an assistance technique using a touch screen is studied. The purpose of the assistance technique is to alleviate the burden of the operators by providing a tele-operation method that requires only touching several points on the camera images.

Touch Screen Based Estimation of an Object Position
In this section, a position estimation technique for objects utilizing touch screen inputs is addressed. By utilizing several touch inputs given by the operators on the camera images, the technique can estimate the translation and orientation of the object. The technique is derived under a couple of assumptions:
• two cameras providing different viewpoints for the manipulation are installed on the ROV, and
• a touch screen is available as a commanding device for the operators.
The assumptions are practically reasonable because several cameras are commonly installed on ROVs for monitoring the underwater situation, and touch screens are nowadays widely used as commanding devices for operating ROVs. In the case of URI-T, there are 12 cameras on the ROV, including two cameras that provide stereo views for the manipulation. There are two touch screens in the operating room of URI-T: one for the main operator, and the other for the co-operator.

Touch Screen Inputs Acquisition
For the position estimation, six points in the two camera images (three points in each camera image) are gathered in total. The number of touched points is designed to be as small as possible while still estimating the object position, including the translation and the orientation. When one point is given in each camera image, one can estimate the translation of a point in 3D space; when three points are given, one can determine two direction vectors of the orientation. Since the direction vectors of an orientation matrix are orthogonal, the last direction vector can be determined from the other two. As shown in Figure 4, the authors utilize three touch inputs from each camera image as follows:
• a point at the center of the gripping position on the object, ^Ii P_Ti,
• a point lying on the approach direction of the object, ^Ii P_Xi, and
• a point lying on the normal direction of the object, ^Ii P_Yi,
where the superscript Ii signifies the camera image coordinate and i = 1, 2 the camera number. Note that each touched point carries 2D position information w.r.t. the camera image coordinate, Ii.
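The six inputs above can be organized as a small data structure, sketched below in Python (the names TouchInputs, P_T, P_X, and P_Y are illustrative, not part of URI-T's software):

```python
from dataclasses import dataclass
from typing import Tuple

Point2D = Tuple[float, float]   # pixel coordinate in one camera image

@dataclass
class TouchInputs:
    P_T: Point2D   # center of the gripping position on the object
    P_X: Point2D   # point lying on the approach direction
    P_Y: Point2D   # point lying on the normal direction

def gather(cam1: TouchInputs, cam2: TouchInputs):
    """Collect the 2 x 3 = 6 touch inputs used by the estimator; each 2D
    point later yields one back-projected ray per camera."""
    return [cam1, cam2]

views = gather(TouchInputs((320, 240), (350, 240), (320, 200)),
               TouchInputs((310, 250), (340, 255), (308, 210)))
```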

Position Estimation of the Object
By utilizing the touched point information, a position estimation technique, including translation and orientation, is studied. The estimation technique is derived based on the kinematic relationship among manipulator, cameras, and the object. In Figure 5, illustrating the translation estimation method, one can find the coordinate systems including the camera image coordinates, Ii, the camera focus coordinates, Ci, and the world coordinate, W.
As a preprocessing stage, the 2D information of the touched points is transformed into 3D information in the world coordinate. When the focal lengths of the camera are given, each touched point is expressed w.r.t. the camera focus coordinate, Ci, as

^Ci P = [ x_Ii / f_x, y_Ii / f_y, 1 ]^T,

where f_x and f_y denote the focal lengths, and (x_Ii, y_Ii) the touched position in the image coordinate, Ii. By utilizing the coordinate transform [28], one can obtain the touched point w.r.t. the world coordinate as follows:

^W P = ^W_Ci R ^Ci P + ^W P_Ci,

where ^W_Ci R denotes the rotation matrix of each camera focus coordinate, Ci, w.r.t. the world coordinate, W, and ^W P_Ci the position vector of each camera focus coordinate w.r.t. the world coordinate.
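Under a standard pinhole model, the back-projection of a touched pixel and its transform into the world coordinate can be sketched as follows (a minimal illustration; the function names and the identity camera pose are assumptions, and the touched coordinates are measured from the image center):

```python
def backproject(u, v, fx, fy):
    """Touched pixel (u, v), relative to the image center, mapped to a
    point on its viewing ray in the camera focus frame (unit depth)."""
    return (u / fx, v / fy, 1.0)

def to_world(p_cam, R_wc, t_wc):
    """Transform a point from camera focus frame C_i to world frame W:
    p_W = R_wc * p_cam + t_wc, with R_wc a 3x3 rotation (nested tuples)."""
    return tuple(sum(R_wc[r][c] * p_cam[c] for c in range(3)) + t_wc[r]
                 for r in range(3))

# with an identity camera pose the world point equals the ray point
I3 = ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))
p = to_world(backproject(100.0, 50.0, 500.0, 500.0), I3, (0.0, 0.0, 0.0))
```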
For the translation estimation of the object, the points, P_Ti and P_Ci, are utilized. As shown in Figure 5, one can determine the lines passing through the points, P_Ti and P_Ci; then, by finding the nearest point between the two lines, the translation of the object, T*, is obtained. The process to obtain the translation is given in Figure 6, in which the blocks are defined in Appendix A. Refer to [29] for the detailed procedure to obtain the translation.

The estimation technique for the orientation of the object is designed by utilizing all three touched inputs in Figure 4. In detail, the points, P_Ti and P_Xi, are utilized to find the approach vector of the rotation matrix, and P_Ti and P_Yi the normal vector. Then, one can find the sliding vector by taking the cross product of the two aforementioned vectors.
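The nearest point between the two back-projected lines can be computed as the midpoint of their common perpendicular. Below is a minimal sketch with hypothetical helper names; the paper's detailed procedure is given in [29]:

```python
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def nearest_point_between_lines(p1, d1, p2, d2):
    """Midpoint of the common perpendicular of lines p1 + t*d1 and
    p2 + s*d2, used here as the translation estimate T*."""
    r = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b          # zero only for parallel lines
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = add(p1, scale(d1, t))     # closest point on line 1
    q2 = add(p2, scale(d2, s))     # closest point on line 2
    return scale(add(q1, q2), 0.5)

# two intersecting lines: the x-axis and a vertical line through (1, 0, 0)
T = nearest_point_between_lines((0, 0, 0), (1, 0, 0), (1, 0, -1), (0, 0, 1))
```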
As depicted in Figure 7, the approach vector, n_X, is determined as follows. First, the two surfaces spanned by the lines, P_Xi P_Ci and P_Ti P_Ci (i = 1, 2), are obtained. Note that, from Figure 4, the points, P_Xi and P_Ti, lie on the approach vector, so each of the two surfaces contains the direction of the approach vector. Second, the approach vector is determined by finding the direction of the common line between the two surfaces. In Appendix A, the definitions of the blocks in Figure 7 are described. Refer to [29] for the detailed procedure. By replacing P_Xi with P_Yi and following the same procedure in Figure 7, one can find the normal vector, n_Y.
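The common-line construction can be sketched as follows: each camera contributes a plane normal (the cross product of its two rays), and the candidate approach direction is the cross product of the two plane normals. The geometry below is a toy example, and the sign of the result is resolved by the direction adjustment described in the text:

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def approach_vector(P_X, P_T, P_C):
    """P_X[i], P_T[i], P_C[i]: approach point, gripping point, and camera
    focus of camera i (i = 0, 1) in world coordinates. Each camera defines
    a plane through P_Ci spanned by the rays to P_Xi and P_Ti; the approach
    direction is the common line of the two planes."""
    normals = [cross(sub(P_X[i], P_C[i]), sub(P_T[i], P_C[i])) for i in (0, 1)]
    return normalize(cross(normals[0], normals[1]))

# toy geometry where the true approach direction is the world x-axis
n = approach_vector(P_X=[(1, 0, 0), (1, 0, 0)],
                    P_T=[(0, 0, 0), (0, 0, 0)],
                    P_C=[(0, 0, 5), (0, 5, 0)])
```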
The sliding vector of the rotation matrix can be determined by taking the cross product of the approach vector and the normal vector. Before taking the cross product, some practical treatments are applied to the vectors, n_X and n_Y. The first treatment adjusts the directions of the vectors: the approach vector must point from P_Ti to P_Xi, and the normal vector from P_Ti to P_Yi. The adjustment of the vector directions is accomplished by the following signum operation:

n_X ← sgn(n_X · (P_X − P_T)) n_X,  n_Y ← sgn(n_Y · (P_Y − P_T)) n_Y,

where sgn(·) denotes the signum function. The other treatment guarantees the orthogonality between n_X and n_Y. We assume that the larger the angle between the two surfaces defining a vector, the more reliable the vector. If, according to this assumption, the approach vector is more reliable than the normal vector, the vectors are modified as follows:

n_Y ← (n_Y − (n_X · n_Y) n_X) / ||n_Y − (n_X · n_Y) n_X||,

and vice versa. Finally, the rotation matrix as the result of the orientation estimation is determined as follows:

R* = [ n_X  n_Y  n_X × n_Y ].

Figure 7. Estimation process for the approach vector of an orientation matrix.
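The treatments above can be sketched as follows, assuming the approach vector is the more reliable one (the reliability test based on the surface angles is omitted for brevity; all names are illustrative):

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def orientation_from_vectors(n_X, n_Y, P_T, P_X, P_Y):
    # 1) sign adjustment: n_X must point from P_T toward P_X, n_Y toward P_Y
    sX = 1.0 if dot(n_X, sub(P_X, P_T)) >= 0 else -1.0
    sY = 1.0 if dot(n_Y, sub(P_Y, P_T)) >= 0 else -1.0
    n_X = tuple(sX * x for x in n_X)
    n_Y = tuple(sY * y for y in n_Y)
    # 2) orthogonality: keep n_X, project n_Y onto the plane orthogonal
    #    to n_X and renormalize
    n_Y = normalize(sub(n_Y, tuple(dot(n_X, n_Y) * x for x in n_X)))
    # 3) sliding vector and rotation matrix R* = [n_X  n_Y  n_Z] (columns)
    n_Z = cross(n_X, n_Y)
    return [[n_X[i], n_Y[i], n_Z[i]] for i in range(3)]

# a flipped approach vector is corrected, yielding the identity rotation
R = orientation_from_vectors(n_X=(-1.0, 0.0, 0.0), n_Y=(0.0, 1.0, 0.0),
                             P_T=(0, 0, 0), P_X=(1, 0, 0), P_Y=(0, 1, 0))
```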

Performance Evaluation of the Proposed Estimation Technique
In this subsection, the performance of the proposed position estimation technique is verified experimentally. The experiment is designed to estimate the positions of three points whose true positions are known in advance. The experimental setup is depicted in Figure 8, in which the true positions of the points are given. The positions of the three points in Figure 8 are estimated 15 times per point for the performance evaluation. Figure 9 shows the experimental results, and the average absolute errors (AAE) and the maximum absolute errors (MAE) for each axis are arranged in Table 1. As shown in Figure 9 and Table 1, the translation errors for each axis are under 71 mm, and the orientation errors for each axis are under 28 deg. The accuracy of the results may not be sufficient for dexterous manipulation. For gross manipulation, however, the accuracy is sufficient for moving the manipulator around the object.

Control Structure for Assisted Tele-Operation

In Figure 10, the control structure for the tele-operation with the proposed assistance technique is illustrated. As shown in Figure 10, the structure contains a couple of mode switching inputs. The mode selection input is for switching the control mode: using this input, the operator can select the appropriate control mode, either the conventional tele-operation mode or the proposed assistance mode, during tele-operating tasks. In the tele-operation mode, the operator tele-operates every joint of the manipulator by using the commanding device; in the assistance mode, in contrast, the operator can handle the manipulator by using the touch screen. The control structure in Figure 10 contains another mode switching input, the tool possession selection, which determines an appropriate offset distance to prevent collisions; its design method is explained in the next subsection.
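The mode-switching part of the structure can be sketched as a simple two-state controller (a minimal illustration; the class and method names are assumptions, not URI-T's actual software interface):

```python
from enum import Enum

class Mode(Enum):
    TELEOP = 0      # joint-level commands from the commanding device
    ASSIST = 1      # Cartesian goal from touch based position estimation

class SwitchingController:
    """Routes either joint commands or an estimated object goal to the
    manipulator, depending on the operator's mode selection input."""
    def __init__(self):
        self.mode = Mode.TELEOP

    def select_mode(self, mode):
        self.mode = mode

    def command(self, joint_cmd=None, object_goal=None):
        if self.mode is Mode.TELEOP:
            return ("joint", joint_cmd)        # pass joint commands through
        return ("cartesian", object_goal)      # feed the planner with a goal

ctrl = SwitchingController()
ctrl.select_mode(Mode.ASSIST)
kind, goal = ctrl.command(object_goal=(1.0, 0.2, 0.5))
```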
As illustrated in Figure 10, the controller for the assistance mode includes the motion planner, which generates a smooth motion trajectory to the object; the inverse kinematics, which transforms the trajectory from the Cartesian space to the joint space; and the joint motion controller, which is a low-level controller for joint position tracking of the manipulator.
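The motion planner's 5th order polynomial trajectory, with zero boundary velocity and acceleration, can be sketched per Cartesian component as follows (a minimal illustration; the normalized profile s(τ) = 10τ³ − 15τ⁴ + 6τ⁵ is the standard quintic for rest-to-rest motion):

```python
def quintic(x0, x1, T, t):
    """5th-order point-to-point trajectory from x0 to x1 over duration T,
    with zero velocity and acceleration at both ends."""
    tau = min(max(t / T, 0.0), 1.0)          # normalized time in [0, 1]
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (x1 - x0) * s

x_mid = quintic(0.0, 2.0, 4.0, 2.0)   # midpoint of a 0 -> 2 motion: 1.0
```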
The proposed assistance technique is implemented on an industrial PC with a touch screen. Thus, the touch inputs can be gathered by touching the screen directly or by using other conventional input devices such as a mouse.

Considering Offset Distance for Approaching the Object
If the manipulator moves to the exact object position, the end-effector of the manipulator will collide with the object. To prevent the collision, an adequate offset distance between the end-effector and the object has to be taken into account. As depicted as the offset setting in Figure 10, the offset distance should be selected differently according to whether the manipulator possesses a tool or not. When the manipulator does not possess any tool, the offset distance can be set by considering the safe distance between the end-effector and the object. When the manipulator possesses a tool, in contrast, one also needs to consider the length of the tool when determining the offset distance. As a result, the offset distance is designed as follows:

T_L = T_O

when the manipulator does not possess any tool, and

T_L = T_O + T_tool

when it does, where T_O denotes the offset distance between the end-effector and the object, and T_tool the length of the tool. Then, one can determine the final goal position in the Cartesian space as follows:

x_goal = T* + R* T_L,    (6)

where the offset vector, T_L = [−T_L, 0, 0]^T, is set along the negative direction of the approach vector in the object coordinate.
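The offset and goal-position computation can be sketched as follows (a minimal illustration in meters, assuming the approach direction is the first column of R*; the function name is hypothetical):

```python
def goal_position(T_star, R_star, T_O, T_tool=None):
    """Final Cartesian goal in front of the object: the end-effector stops
    short of T* by T_L along the negative approach direction, with
    T_L = T_O plus the tool length when a tool is possessed."""
    T_L = T_O + (T_tool if T_tool is not None else 0.0)
    offset = (-T_L, 0.0, 0.0)                  # offset vector in object frame
    # x_goal = T* + R* @ offset
    return tuple(T_star[r] + sum(R_star[r][c] * offset[c] for c in range(3))
                 for r in range(3))

I3 = ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))
g = goal_position((1.0, 0.0, 0.0), I3, 0.15)   # 150 mm safety offset, no tool
```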

Controller Design for the Assistance Technique
The motion planner in Figure 10 is designed using a 5th order polynomial trajectory to generate a smooth trajectory from the current position to the goal position in (6) [28]. For an appropriate moving speed, the time duration of the trajectory is designed as ∆t = max(x_goal − x_start)/v̄, where v̄ denotes the averaged velocity limit, which is designed by the user. When designing the inverse kinematics in Figure 10, the prevention of joint limit violations of the manipulator is considered, because, due to the nonlinear relationship between the Cartesian space motion and the joint space motion, a Cartesian trajectory can violate the joint limits of the manipulator at the position level or at the velocity level. The weighted damped least squares (WDLS) method is utilized for the inverse kinematics [30–33]. The Cartesian velocity of the manipulator is described from the joint velocity as follows [30]:

ẋ = J Θ̇ = J W^(−1/2) W^(1/2) Θ̇ = J_W Θ̇_W,    (7)

where ẋ ∈ R^n denotes the Cartesian velocity of the end-effector; Θ̇ ∈ R^m the joint velocity vector; J ∈ R^(n×m) the Jacobian matrix; W ∈ R^(m×m) the positive definite weight matrix; J_W = J W^(−1/2) the weighted Jacobian matrix; and Θ̇_W = W^(1/2) Θ̇ the weighted joint velocity vector. The inverse solution of (7) is obtained as follows [30]:

Θ̇_W = J_W^T (J_W J_W^T + λ² I)^(−1) ẋ + (I − J_W^T (J_W J_W^T + λ² I)^(−1) J_W) q̇_0,    (8)

where q̇_0 denotes a negative gradient vector of a cost function, h(Θ), for optimizing the null space of the weighted Jacobian, and λ a damping parameter. From (8) and (7), as a result, the kinematic solution of the WDLS is obtained as follows:

Θ̇ = W^(−1/2) Θ̇_W.    (9)

The weight matrix, W, is utilized to avoid reaching the joint limits [33]. Both the position limit and the velocity limit of each joint are taken into account; thus, the weight matrix is designed as W = W_PL W_VL, where W_PL ∈ R^(m×m) denotes the weight matrix for the joint position limits, and W_VL ∈ R^(m×m) that for the joint velocity limits. In addition, λ and h(Θ) are utilized to prevent other problems such as kinematic singularities.
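The core damped least squares step in (8) can be sketched for a 2-DOF Cartesian task as follows (a simplified illustration with W = I and no null-space term, i.e., plain DLS rather than the full WDLS; names are illustrative):

```python
def dls_step(J, xdot, lam):
    """One damped least squares step: qdot = J^T (J J^T + lam^2 I)^(-1) xdot,
    for a Jacobian J with 2 rows (n = 2 Cartesian DOF) and m columns."""
    m = len(J[0])
    # A = J J^T + lam^2 I  (2 x 2)
    A = [[sum(J[i][k] * J[j][k] for k in range(m))
          + (lam * lam if i == j else 0.0)
          for j in range(2)] for i in range(2)]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    Ainv = [[ A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det,  A[0][0] / det]]
    y = [sum(Ainv[i][j] * xdot[j] for j in range(2)) for i in range(2)]
    # qdot = J^T y
    return [sum(J[i][k] * y[i] for i in range(2)) for k in range(m)]

# with lam = 0 and a square, well-conditioned J this reduces to J^{-1} xdot
qd = dls_step([[1.0, 0.0], [0.0, 1.0]], [0.3, -0.1], 0.0)
```

A positive λ trades tracking accuracy for bounded joint velocities near singular configurations, which is the purpose of the damping in the text.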
Refer to [30] for the detailed design procedure of the inverse kinematics. For the joint motion controller, the controllers built into the manipulator are utilized. The manipulator of URI-T includes joint velocity controllers; by adding an external proportional feedback of the joint position error, we modified the controllers to work as joint position controllers.
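The external proportional loop can be sketched as follows (a minimal illustration with hypothetical gain and limit values; the velocity command is clipped to the joint's velocity limit):

```python
def position_loop(q_des, q, Kp, qdot_max):
    """Outer proportional loop around a joint velocity interface: command
    Kp times the position error, saturated at the joint velocity limit."""
    qdot_cmd = Kp * (q_des - q)
    return max(-qdot_max, min(qdot_max, qdot_cmd))

# large error: the command saturates at the 0.5 rad/s velocity limit
cmd = position_loop(1.0, 0.5, 2.0, 0.5)
```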

Task
Through the experiment using URI-T, the performance of the assistance technique is evaluated in comparison with the conventional tele-operation. The task for the evaluation is the cable gripping task, one of the cable maintenance tasks performed using URI-T. In the cable gripping task, the operator places the gripping tool on the cable using the manipulator and grips the cable tightly so that it can be towed to the place of repair. The procedure of the gripping task can be arranged into the following three steps:
• Approaching: moving the manipulator to the gripping tool in the bucket,
• Seizing: seizing the handle of the gripping tool with the jaw of the manipulator,
• Displacing: moving the gripping tool to the cable and executing the tool to grip the cable.
A detailed description of the task is arranged in Table 2. As described in Table 2, the assistance technique is only applied to the gross motions of the manipulator. For dexterous motions such as seizing the gripping tool, in contrast, the conventional tele-operation is still used to prevent accidents such as collisions.

Table 2. Detailed description of the task.

| Step of Task | Description | Control Mode (Conventional Tele-Op.) | Control Mode (Assisted Tele-Op.) |
|---|---|---|---|
| #0 Initial posture | Manipulator is in the initial posture (same posture as in Figure 11); jaw is closed | - | - |
| #1 Approaching | Moving the manipulator to the gripping tool | tele-operation | assistance |
| | Opening the jaw | tele-operation | tele-operation |
| #2 Seizing | Delicate positioning of the manipulator to seize the gripping tool | tele-operation | tele-operation |
| | Closing the jaw (seizing the tool) | tele-operation | tele-operation |
| #3 Displacing | Displacing the gripping tool to the cable | tele-operation | assistance |
| | Delicate positioning of the gripping tool on the cable | tele-operation | tele-operation |
| | Closing the gripping tool (gripping the cable) | tele-operation | tele-operation |

Experimental Setup
As illustrated in Figure 11, the equipment of URI-T is used for the experiment. The equipment includes a 7-function manipulator (UW3, KnR Systems), a gripping tool, and two cameras providing different viewpoints for the manipulation. The offset, T_O in (6), for the controller is set to 150 mm to provide an appropriate safety margin against the position estimation errors shown in Figure 9 and Table 1; T_L in (6) is designed based on the true length of the tool. To separate the steps of the task, the operations of the jaw and the gripping tool are utilized. For example, the approaching and seizing steps are distinguished by the opening operation of the jaw of the manipulator. Similarly, the seizing and displacing steps are divided by the jaw closing operation, and the end of the displacing step is determined by the closing operation of the gripping tool to grip the cable. The operations of the jaw and the gripping tool are appropriate indications for dividing the steps because they take a very short time, requiring only the toggling of switches.

Experimental Method
Through the experiment for the human based evaluation, the performance of the proposed assistance technique is compared with that of the conventional tele-operation. The performance index is the time duration taken for each step of the task. To minimize the learning effect during the experiments, experts with extensive experience in tele-operating the manipulator were recruited as subjects. As a result, two subjects participated in the experiment. One is an operator of URI-T with more than three years of experience, including several field evaluations and underwater construction operations using URI-T. The other is an engineer developing control algorithms at the company manufacturing the manipulator of URI-T. Both are males in their late 20s and are very familiar with operating the manipulator. Each subject performed six sets of tests, and a total of twelve test results were obtained and analyzed statistically. For the statistical analysis, the paired t-test was conducted to determine whether there were statistically significant differences in the time duration. An alpha level of 0.05 was taken to indicate statistical significance.
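The paired t-test on per-step durations can be sketched as follows (the timing data below are invented for illustration only, not the paper's measurements; for n = 6 paired samples, the two-tailed critical value at an alpha level of 0.05 with 5 degrees of freedom is 2.571):

```python
import math

def paired_t(a, b):
    """Paired t statistic for matched samples a and b:
    t = mean(d) / (sd(d) / sqrt(n)), with d = a - b elementwise."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in d) / (n - 1))   # sample sd
    return mean / (sd / math.sqrt(n))

# illustrative step durations in seconds, conventional vs. assisted mode
conv = [120.0, 131.0, 118.0, 125.0, 140.0, 122.0]
asst = [ 98.0, 104.0,  95.0, 101.0, 110.0,  99.0]
t = paired_t(conv, asst)
significant = abs(t) > 2.571   # two-tailed critical t, df = 5, alpha = 0.05
```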
The experiment was performed in an in-lab environment, which may differ somewhat from the underwater situation. However, the other aspects of the experiment are designed to match the real application: the equipment and the operating system of URI-T for underwater tasks are used for the experiment, and the experimental scenario is designed similarly to the underwater cable maintenance task. Regarding the touch input method, a mouse device is used instead of directly touching the screen; the subjects preferred the mouse because it is easier to indicate a point accurately with it than by touching the screen directly. Two movie clips of the experiments are available as Supplementary Materials, the links to which are provided at the end of the paper: one shows the case when the proposed assistance technique is applied, and the other the case when the conventional tele-operation is used. Refer to the movie clips for further understanding of the experimental method and environment.

Experimental Results

Table 3 shows the experimental results. Table 3 reveals that, in the approaching step, the operation time using the proposed assistance technique is decreased by 18.62% on average compared with the conventional tele-operation. The results have statistical significance (p < 0.05), implying the effectiveness of the assistance algorithm in improving the temporal efficiency of the tele-operation. In the case of the seizing step, the averaged time is reduced by 41.69% with statistical significance. Note that, in both tele-operation modes (the conventional tele-operation and the assisted tele-operation), the seizing task is performed by the conventional tele-operation method. It is quite an interesting result that the seizing step in the assisted tele-operation mode takes a shorter time than that in the conventional tele-operation mode. This is because, in the assisted tele-operation mode, the manipulator is placed in a better position to seize the gripping tool when the approaching step is finished.
In the conventional tele-operation, the operator decides the end of the approaching step based only on the camera images. When the approaching step is finished, the manipulator positions are not consistent between test sets; thus, it takes additional time to adjust the position so as to seize the tool properly. In the case of the displacing step, the averaged time is decreased by 19.64% with statistical significance. Note that the displacing step mixes the assistance mode (displacing the gripping tool) and the tele-operation mode (delicate positioning of the gripping tool); the experimental results show that the assistance technique is still effective at increasing the temporal efficiency. In summary, the total time duration for the task using the assistance technique is decreased by 22.41% compared with the conventional tele-operation.

Conclusions
In this paper, an assistance technique using a touch screen for underwater manipulation tasks is addressed. The technique is designed to provide an easy way to operate the manipulator by simply touching several points on the camera images. The technique involves a position estimator of objects, covering both the translation and the orientation, that utilizes the touched information. Via the touch screen, the point information is given by the operators to guarantee reliable estimation results; reliability is one of the most important issues in underwater tasks. An appropriate control structure for the assistance is also discussed. By switching the control mode between the assistance mode and the conventional tele-operation mode, the operator can perform manipulation tasks efficiently. The validity of the assistance technique is evaluated experimentally with a cable gripping task using URI-T, a cable burying ROV. The experimental results show that the proposed technique improves the temporal efficiency by around 20% compared with the conventional tele-operation method.
As future work, the proposed assistance technique will be integrated into the operating system of URI-T. The validity of the technique will then be verified through experimental studies in underwater environments such as water tanks. Finally, the technique will be applied to underwater construction operations, helping the operators of URI-T tele-operate the manipulation tasks and improving the operation efficiency.