Visual Servoing-Based Robotic Manipulation

A special issue of Robotics (ISSN 2218-6581). This special issue belongs to the section "Intelligent Robots and Mechatronics".

Deadline for manuscript submissions: 31 December 2025

Special Issue Editor


Dr. Naresh Marturi
Guest Editor
Extreme Robotics Laboratory (ERL), University of Birmingham, Birmingham, UK
Interests: visual servoing; robotic manipulation; grasping; telemanipulation

Special Issue Information

Dear Colleagues,

Visual servoing is an important area of research in robotics, driven by advancements in computer vision, machine learning and sensor technologies. As robots become increasingly autonomous and capable of executing complex tasks, the ability to accurately perceive and interact with their environment becomes indispensable. Visual servoing provides a powerful framework for achieving precise control and manipulation in dynamic and uncertain environments. Robots equipped with visual servoing capabilities excel in tasks such as pick-and-place operations, assembly and manipulation, and are applied across various domains, including industrial automation, healthcare and service robotics.

Despite the development of general-purpose visual servoing methods, the specific requirements of different applications often necessitate specialized approaches. These requirements are influenced by variations in tasks, environmental conditions and the unique challenges inherent in each application domain. While numerous visual servoing techniques and control strategies have been proposed, there remain significant opportunities for further innovation in creating solutions that meet the specific needs of various applications. Moreover, the integration of visual feedback systems with control strategies has the potential to enhance the performance and adaptability of robotic systems, contributing to the advancement of robotic manipulation technologies.
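To make the control framework concrete, the sketch below implements the classic image-based visual servoing (IBVS) law, in which the camera velocity is computed as v = -λL⁺(s − s*) from the error between current and desired image features. It is a minimal NumPy illustration of the textbook point-feature formulation (Chaumette and Hutchinson); the gain, feature coordinates and depth values are illustrative assumptions, not taken from any paper in this issue.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix for one normalized image point
    at depth Z; maps camera velocity (vx, vy, vz, wx, wy, wz) to the
    point's image-plane velocity (classic Chaumette-Hutchinson form)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(s, s_star, depths, gain=0.5):
    """Classic IBVS control law: v = -gain * pinv(L) @ (s - s_star)."""
    error = (s - s_star).reshape(-1)              # stacked 2N feature error
    L = np.vstack([interaction_matrix(x, y, Z)    # stacked 2N x 6 Jacobian
                   for (x, y), Z in zip(s, depths)])
    return -gain * np.linalg.pinv(L) @ error      # 6-DoF camera velocity command

# Four tracked points slightly offset from their desired image positions.
s = np.array([[0.11, 0.10], [-0.10, 0.12], [-0.09, -0.11], [0.10, -0.10]])
s_star = np.array([[0.10, 0.10], [-0.10, 0.10], [-0.10, -0.10], [0.10, -0.10]])
print(ibvs_velocity(s, s_star, depths=[0.5] * 4))  # commanded camera twist
```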

This Special Issue aims to consolidate the latest research and advancements in utilizing visual feedback for robotic manipulation tasks. In particular, it will explore state-of-the-art visual servoing technologies, their practical applications in robotic manipulation, and the challenges and opportunities these developments present for future research.

The scope of this Special Issue includes, but is not limited to, the following topics:

  • Adaptive and robust visual servoing control methods;
  • Deep learning and computer vision techniques for robotic manipulation;
  • Sensor fusion and multi-modal approaches for enhanced manipulation;
  • Applications of visual servoing in industrial automation;
  • Simulation, modeling and real-world implementation of visual servoing systems;
  • Human–robot interaction and collaboration using visual feedback.

Dr. Naresh Marturi
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Robotics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • visual servoing
  • image-based visual servoing (IBVS)
  • position-based visual servoing (PBVS)
  • 3D visual servoing
  • control algorithms
  • robotic manipulation
  • machine vision
  • adaptive visual servoing
  • robust control methods
  • sensor fusion
  • multi-sensor integration
  • deep learning for visual servoing
  • dynamic environments
  • uncalibrated visual servoing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

22 pages, 4481 KiB  
Article
Hybrid Deep Learning Framework for Eye-in-Hand Visual Control Systems
by Adrian-Paul Botezatu, Andrei-Iulian Iancu and Adrian Burlacu
Robotics 2025, 14(5), 66; https://doi.org/10.3390/robotics14050066 - 19 May 2025
Abstract
This work proposes a hybrid deep learning-based framework for visual feedback control in an eye-in-hand robotic system. The framework uses an early fusion approach in which real and synthetic images define the training data. The first layer of a ResNet-18 backbone is augmented to fuse interest-point maps with RGB channels, enabling the network to capture scene geometry better. A manipulator robot with an eye-in-hand configuration provides a reference image, while subsequent poses and images are generated synthetically, removing the need for extensive real data collection. The experimental results reveal that this enriched input representation significantly improves convergence accuracy and velocity smoothness compared to a baseline that processes real images alone. Specifically, including feature point maps allows the network to discriminate crucial elements in the scene, resulting in more precise velocity commands and stable end-effector trajectories. Thus, integrating additional, synthetically generated map data into convolutional architectures can enhance the robustness and performance of the visual servoing system, particularly when real-world data gathering is challenging. Unlike existing visual servoing methods, our early fusion strategy integrates feature maps directly into the network’s initial convolutional layer, allowing the model to learn critical geometric details from the very first stage of training. This approach yields superior velocity predictions and smoother servoing compared to conventional frameworks.
(This article belongs to the Special Issue Visual Servoing-Based Robotic Manipulation)
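As a rough illustration of the early fusion idea the abstract describes, the sketch below widens the first convolution of a torchvision ResNet-18 so it accepts RGB plus an extra interest-point-map channel, letting geometric cues enter the network at the very first layer. The channel count, weight initialization, and the 6-DoF velocity regression head are assumptions made for the example, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class EarlyFusionBackbone(nn.Module):
    """ResNet-18 whose first convolution takes RGB plus extra interest-point
    map channels; the fused input is processed from the first layer on."""

    def __init__(self, extra_channels=1):
        super().__init__()
        net = resnet18(weights=None)
        old = net.conv1
        # Rebuild conv1 with 3 + extra input channels, same geometry otherwise.
        net.conv1 = nn.Conv2d(3 + extra_channels, old.out_channels,
                              kernel_size=old.kernel_size, stride=old.stride,
                              padding=old.padding, bias=False)
        with torch.no_grad():
            net.conv1.weight[:, :3] = old.weight                      # keep RGB filters
            net.conv1.weight[:, 3:] = old.weight.mean(1, keepdim=True)  # init extra channel
        net.fc = nn.Linear(net.fc.in_features, 6)  # assumed 6-DoF velocity head
        self.net = net

    def forward(self, rgb, point_map):
        # Early fusion: concatenate image and interest-point map along channels.
        return self.net(torch.cat([rgb, point_map], dim=1))

model = EarlyFusionBackbone()
v = model(torch.randn(1, 3, 224, 224), torch.randn(1, 1, 224, 224))
print(v.shape)  # torch.Size([1, 6]) -- predicted camera velocity
```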

34 pages, 20593 KiB  
Article
Collision-Free Path Planning in Dynamic Environment Using High-Speed Skeleton Tracking and Geometry-Informed Potential Field Method
by Yuki Kawawaki, Kenichi Murakami and Yuji Yamakawa
Robotics 2025, 14(5), 65; https://doi.org/10.3390/robotics14050065 - 17 May 2025
Abstract
In recent years, the realization of a society in which humans and robots coexist has become highly anticipated. As a result, robots are expected to exhibit versatility regardless of their operating environments, along with high responsiveness, to ensure safety and enable dynamic task execution. To meet these demands, we design a comprehensive system composed of two primary components: high-speed skeleton tracking and path planning. For tracking, we implement a high-speed skeleton tracking method that combines deep learning-based detection with optical flow-based motion extraction. In addition, we introduce a dynamic search area adjustment technique that focuses on the target joint to extract the desired motion more accurately. For path planning, we propose a high-speed, geometry-informed potential field model that addresses four key challenges: (P1) avoiding local minima, (P2) suppressing oscillations, (P3) ensuring adaptability to dynamic environments, and (P4) handling obstacles with arbitrary 3D shapes. We validated the effectiveness of our high-frequency feedback control and the proposed system through a series of simulations and real-world collision-free path planning experiments. Our high-speed skeleton tracking operates at 250 Hz, which is eight times faster than conventional deep learning-based methods, and our path planning method runs at over 10,000 Hz. The proposed system offers both versatility across different working environments and low latencies. Therefore, we hope that it will contribute to a foundational motion generation framework for human–robot collaboration (HRC), applicable to a wide range of downstream tasks while ensuring safety in dynamic environments.
(This article belongs to the Special Issue Visual Servoing-Based Robotic Manipulation)
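For context on the path-planning side, the sketch below shows a bare-bones artificial potential field step in the classic Khatib style: an attractive pull toward the goal plus a repulsive push from nearby obstacles, recomputed every control cycle so a moving obstacle is handled by replanning at high rate. It treats obstacles as points and omits the paper's geometry-informed shaping and its local-minimum and oscillation countermeasures; the gains, influence radius, and example positions are assumptions.

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=0.2,
             influence=0.4, step=0.01):
    """One gradient-descent step of a classic artificial potential field:
    attractive force toward the goal, repulsive force from each obstacle
    within the influence radius (point-obstacle simplification)."""
    force = -k_att * (pos - goal)                      # attractive term
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if d < influence:                              # repulsion only nearby
            force += k_rep * (1.0 / d - 1.0 / influence) / d**3 * diff
    return pos + step * force

# Replan repeatedly as if tracking a moving obstacle (e.g., a wrist joint).
pos, goal = np.array([0.0, 0.0, 0.3]), np.array([0.5, 0.0, 0.3])
obstacle = np.array([0.25, 0.02, 0.3])
for _ in range(500):
    pos = apf_step(pos, goal, [obstacle])
print(pos)  # approaches the goal while skirting the obstacle
```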