Special Issue "Visual Servoing in Robotics"

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Systems & Control Engineering".

Deadline for manuscript submissions: closed (30 June 2019).

Special Issue Editor

Guest Editor
Dr. Jorge Pomares
Department of Physics, Systems Engineering and Signal Theory, University of Alicante, Alicante 03690, Spain
Interests: visual servoing; robot control; space robotics

Special Issue Information

Dear Colleagues,

Visual servoing is a well-established approach for guiding robots using visual information. Image processing, robotics, and control theory are combined to control the motion of a robot based on the visual information extracted from the images captured by one or several cameras. On the vision side, several problems are under active research, such as the use of different kinds of image features (or different kinds of cameras), high-speed image processing, and convergence properties. Furthermore, new control schemes allow the system to behave more robustly. Related issues, such as optimal and robust approaches, direct control, path tracking, and sensor fusion, allow visual servoing systems to be applied in a wide range of domains.

This Special Issue aims to cover the most recent advances in visual servoing, including industrial and service robotics. Novel theoretical approaches and practical applications of all aspects of visual servoing systems are welcome, as are reviews and surveys of the state of the art. Topics of interest include, but are not limited to, the following:

  • Path-planning in visual servoing
  • Navigation and localization using visual servoing
  • Dynamic and direct visual control of robotic systems
  • Robust and optimal control of robots
  • Intelligent control 
  • Deep learning and machine learning in visual servoing
  • Intelligent transportation using visual servoing
  • Non-linear visual control of robotic systems
  • Visual servoing in manipulation tasks
  • Visual servoing in field robotics
  • Space robotics and visual servoing
  • Real-time embedded visual control systems
  • Humanoid robots and visual servoing

Dr. Jorge Pomares
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Electronics is an international, peer-reviewed, open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Path-planning in visual servoing
  • Navigation and localization using visual servoing
  • Dynamic and direct visual control of robotic systems
  • Robust and optimal control of robots
  • Deep learning and machine learning in visual servoing
  • Non-linear visual control of robotic systems
  • Visual servoing in manipulation tasks
  • Visual servoing in field robotics
  • Space robotics and visual servoing
  • Real-time embedded visual control systems
  • Humanoid robots and visual servoing

Published Papers (8 papers)


Research

Open Access Article
Enhanced Switch Image-Based Visual Servoing Dealing with Features Loss
Electronics 2019, 8(8), 903; https://doi.org/10.3390/electronics8080903 - 15 Aug 2019
Abstract
In this paper, an enhanced switch image-based visual servoing controller for a six-degree-of-freedom (DOF) robot with a monocular eye-in-hand camera configuration is presented. The switch control algorithm separates the rotating and translational camera motions and divides the image-based visual servoing (IBVS) control into three distinct stages with different gains. In the proposed method, an image feature reconstruction algorithm based on the Kalman filter is proposed to handle the situation where the image features go outside the camera’s field of view (FOV). The combination of the switch controller and the feature reconstruction algorithm improves the system response speed and tracking performance of IBVS, while ensuring the success of servoing in the case of the feature loss. Extensive simulation and experimental tests are carried out on a 6-DOF robot to verify the effectiveness of the proposed method. Full article
(This article belongs to the Special Issue Visual Servoing in Robotics)
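The switch controller above builds on the classic point-feature IBVS law. The following is a generic textbook sketch of that law (not the authors' exact controller), with illustrative function names and a fixed scalar gain:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    # Interaction (image Jacobian) matrix for one normalized point
    # feature (x, y) at depth Z, mapping the 6-DOF camera velocity
    # to the feature's image-plane velocity.
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    # Classic IBVS law: v = -gain * L^+ (s - s*), where L stacks the
    # per-feature interaction matrices and s, s* are the current and
    # desired feature vectors.
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features, dtype=float)
         - np.asarray(desired, dtype=float)).ravel()
    return -gain * np.linalg.pinv(L) @ e
```

A switch scheme of the kind described in the abstract would apply this law in stages, isolating the rotational or translational columns of L in each stage and using stage-specific gains.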

Open Access Article
Visual Closed-Loop Dynamic Model Identification of Parallel Robots Based on Optical CMM Sensor
Electronics 2019, 8(8), 836; https://doi.org/10.3390/electronics8080836 - 26 Jul 2019
Abstract
Parallel robots present outstanding advantages compared with their serial counterparts; they have both a higher force-to-weight ratio and better stiffness. However, the closed-chain mechanism, with its highly coupled dynamics, makes it difficult to design control systems for practical applications. This paper focuses on the dynamic model identification of 6-DOF parallel robots for advanced model-based visual servoing control design purposes. A visual closed-loop output-error identification method based on an optical coordinate-measuring-machine (CMM) sensor for parallel robots is proposed. The main advantage, compared with conventional identification methods, is that joint torque measurements and exact knowledge of the built-in robot controllers are not needed. The time-consuming forward kinematics calculation employed in the conventional identification of parallel robots can be avoided thanks to the adoption of the optical CMM sensor for real-time pose estimation. A case study on a 6-DOF RSS parallel robot is carried out in this paper. The dynamic model of the parallel robot is derived based on the virtual work principle, and the built dynamic model is verified through Matlab/SimMechanics. By using an outer-loop visual servoing controller to stabilize both the parallel robot and the simulated model, a visual closed-loop output-error identification method is proposed and the model parameters are identified using a nonlinear optimization technique. The effectiveness of the proposed identification algorithm is validated by experimental tests. Full article
(This article belongs to the Special Issue Visual Servoing in Robotics)

Open Access Article
Picking Robot Visual Servo Control Based on Modified Fuzzy Neural Network Sliding Mode Algorithms
Electronics 2019, 8(6), 605; https://doi.org/10.3390/electronics8060605 - 29 May 2019
Cited by 1
Abstract
Through an analysis of the kinematic and dynamic relations between the target positioning and the manipulator joint angles of an apple-picking robot, the sliding-mode control (SMC) method is introduced into the robot's servo control according to the characteristics of servo control. However, the biggest problem of sliding-mode variable structure control is chattering, and speed, inertia, acceleration, the switching surface, and other factors must also be considered when approaching the sliding surface. Meanwhile, a neural network can approximate non-linear functions without depending on a mechanism model of the system. Therefore, the fuzzy neural network control algorithm can effectively solve the chattering problem caused by the variable structure of the sliding mode and improve the dynamic and static performance of the control system. A comparison experiment is carried out by applying the PID algorithm, the sliding-mode control algorithm, and the improved fuzzy neural network sliding-mode control algorithm on the picking robot system in a laboratory environment. The results verify that the intelligent algorithm can reduce the complexity of parameter adjustment and improve the control accuracy to a certain extent. Full article
(This article belongs to the Special Issue Visual Servoing in Robotics)
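The chattering problem mentioned in the abstract arises from the discontinuous switching term in the sliding-mode law. A minimal single-joint sketch of one standard remedy, smoothing the sign function inside a boundary layer, is shown below; this is a generic illustration, not the paper's fuzzy-neural controller, and all gains are illustrative:

```python
import numpy as np

def smc_control(q, dq, q_ref, dq_ref, lam=5.0, K=2.0, phi=0.05):
    # Tracking errors and sliding surface s = de + lam * e.
    e = q - q_ref
    de = dq - dq_ref
    s = de + lam * e
    # Replacing sign(s) with tanh(s / phi) inside a boundary layer of
    # width phi smooths the switching term and attenuates chattering.
    return -K * np.tanh(s / phi)
```

A fuzzy neural network, as in the paper, goes further by adapting the gains online instead of fixing them in advance.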

Open Access Feature Paper Article
Optimal Image-Based Guidance of Mobile Manipulators using Direct Visual Servoing
Electronics 2019, 8(4), 374; https://doi.org/10.3390/electronics8040374 - 27 Mar 2019
Abstract
This paper presents a direct image-based controller to perform the guidance of a mobile manipulator. An eye-in-hand camera is employed to guide a mobile differential platform with a seven degrees-of-freedom robot arm. The presented approach is based on an optimal control framework and is employed to control mobile manipulators during the tracking of image trajectories, taking robot dynamics into account. The direct approach allows us to take both the manipulator and base dynamics into account. The proposed image-based controllers optimize the motor signals sent to the mobile manipulator during the tracking of image trajectories by minimizing the control force and torque. As the results show, the proposed direct visual servoing system uses the eye-in-hand camera images to concurrently control both the base platform and the robot arm. The use of the optimal framework allows us to derive different visual controllers with different dynamical behaviors during the tracking of image trajectories. Full article
(This article belongs to the Special Issue Visual Servoing in Robotics)

Open Access Article
Robust Visual Compass Using Hybrid Features for Indoor Environments
Electronics 2019, 8(2), 220; https://doi.org/10.3390/electronics8020220 - 16 Feb 2019
Cited by 3
Abstract
Orientation estimation is a crucial part of robotics tasks such as motion control, autonomous navigation, and 3D mapping. In this paper, we propose a robust visual-based method to estimate robots’ drift-free orientation with RGB-D cameras. First, we detect and track hybrid features (i.e., plane, line, and point) from color and depth images, which provides reliable constraints even in uncharacteristic environments with low texture or no consistent lines. Then, we construct a cost function based on these features and, by minimizing this function, we obtain the accurate rotation matrix of each captured frame with respect to its reference keyframe. Furthermore, we present a vanishing direction-estimation method to extract the Manhattan World (MW) axes; by aligning the current MW axes with the global MW axes, we refine the aforementioned rotation matrix of each keyframe and achieve drift-free orientation. Experiments on public RGB-D datasets demonstrate the robustness and accuracy of the proposed algorithm for orientation estimation. In addition, we have applied our proposed visual compass to pose estimation, and the evaluation on public sequences shows improved accuracy. Full article
(This article belongs to the Special Issue Visual Servoing in Robotics)
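The per-frame rotation step described above, finding the rotation that best aligns tracked directions with those of a reference keyframe, can be sketched with the standard SVD-based Kabsch (orthogonal Procrustes) solution. This illustrates the generic least-squares alignment only, not the authors' hybrid-feature cost function:

```python
import numpy as np

def best_rotation(P, Q):
    # Least-squares rotation R with R @ P[i] ~= Q[i] for row-wise
    # 3-D direction sets P and Q (Kabsch / orthogonal Procrustes).
    H = P.T @ Q                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])       # guard against reflections
    return Vt.T @ D @ U.T
```

In a visual-compass setting, Q would hold reference-keyframe (or Manhattan World) directions and P the directions observed in the current frame.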

Open Access Article
Optimized Combination of Spray Painting Trajectory on 3D Entities
Electronics 2019, 8(1), 74; https://doi.org/10.3390/electronics8010074 - 09 Jan 2019
Cited by 1
Abstract
In this research, a novel method of spatial spraying trajectory optimization is proposed for 3D entity spraying. According to the particularity of the three-dimensional entity, a finite range model is set up, and the 3D entity is patched by a surface modeling method based on the FPAG (flat patch adjacency graph). After planning the spray path on each patch, the variance between the paint thickness at the discrete points and the ideal paint thickness is taken as the objective function, and the trajectory on each patch is optimized. An improved GA (genetic algorithm), ACO (ant colony optimization), and PSO (particle swarm optimization) are used to solve the TTOI (tool trajectory optimal integration) problem. The practicability of the three algorithms is verified by simulation experiments. Finally, the trajectory optimization algorithm enables the 3D entity spraying robot to improve its spraying efficiency. Full article
(This article belongs to the Special Issue Visual Servoing in Robotics)

Open Access Article
Estimated Reaction Force-Based Bilateral Control between 3DOF Master and Hydraulic Slave Manipulators for Dismantlement
Electronics 2018, 7(10), 256; https://doi.org/10.3390/electronics7100256 - 16 Oct 2018
Cited by 2
Abstract
This paper proposes a novel bilateral control design based on an estimated reaction force without a force sensor for a three-degree-of-freedom hydraulic servo system with master–slave manipulators. The proposed method is based upon sliding mode control with a sliding perturbation observer (SMCSPO) in a bilateral control environment. The sliding perturbation observer (SPO) estimates the reaction force at the end effector and second link without using any sensors. Sliding mode control (SMC) is used as a bilateral controller for robust position tracking and control of the slave device. A bilateral control strategy in a hydraulic servo system provides robust position and force tracking between master and slave. The difference between the reaction force of the slave, produced by the effect of the remote environment, and the operating force applied to the master by the operator is expressed in the target impedance model. The impedance model is applied to the master and allows the operator to feel the reaction force from the environment. This research experimentally verifies that the slave device can follow the trajectory of the master device using the proposed bilateral control strategy based on the estimated reaction force. This technique will be convenient for three or more degree-of-freedom (DOF) hydraulic servo systems used in dismantling nuclear power plants. It is worth mentioning that a camera is used for visual feedback on the safety of the environment and workspace. Full article
(This article belongs to the Special Issue Visual Servoing in Robotics)

Open Access Article
Time Sequential Motion-to-Photon Latency Measurement System for Virtual Reality Head-Mounted Displays
Electronics 2018, 7(9), 171; https://doi.org/10.3390/electronics7090171 - 01 Sep 2018
Cited by 1
Abstract
Because interest in virtual reality (VR) has increased recently, studies on head-mounted displays (HMDs) have been actively conducted. However, HMDs cause motion sickness and dizziness, to which motion-to-photon latency is the largest contributor. Therefore, equipment for measuring and quantifying this latency is necessary. This paper proposes a novel system to measure and visualize the time-sequential motion-to-photon latency in real time for HMDs. Conventional motion-to-photon latency measurement methods can measure the latency only at the beginning of the physical motion. In contrast, the proposed method can measure the latency in real time at every input time. Specifically, it generates rotation data from the intensity levels of pixels in the measurement area, and it can obtain motion-to-photon latency data over the full temporal range. Concurrently, encoders measure the actual motion from a motion generator designed to control the actual posture of the HMD device. The proposed system compares the two motions from the encoders and the output image on the display. Finally, it calculates the motion-to-photon latency for all time points. The experiment shows that the latency increases from a minimum of 46.55 ms to a maximum of 154.63 ms according to the workload level. Full article
(This article belongs to the Special Issue Visual Servoing in Robotics)
