Topic Editors

Dr. Yugang Liu
Department of Electrical and Computer Engineering, Royal Military College of Canada, Kingston, ON K7K 7B4, Canada
Prof. Dr. Sidney Givigi
School of Computing, Queen’s University, 557 Goodwin Hall, Kingston, ON K7L 2N8, Canada

Advances in Robot Vision Perception and Control Technology

Abstract submission deadline
31 January 2026
Manuscript submission deadline
30 April 2026

Topic Information

Dear Colleagues,

Over the past few decades, the robotics industry has witnessed incredible growth. The robotics market, and the autonomous robots market in particular, continues to expand at a remarkable pace. Unlike traditional robotic manipulators, which perform labor-intensive tasks in structured factory settings, modern robots are required to work alongside human beings. To succeed in these uncontrolled settings, modern robots must be able to understand their surrounding environment and control their actions without continuous human intervention. In other words, perception and control play a vital role for autonomous robots in unstructured human environments. Like human eyes, cameras provide a robot with abundant information, allowing it to determine its location, detect obstacles, find objects of interest, etc. While promising, robot vision perception and control remain underexplored, and numerous technical challenges have yet to be addressed. This Topic provides researchers with a platform to share their insights on the theoretical analysis and practical application of robot vision. Topics of interest include, but are not limited to:

  • Visual SLAM
  • Visual odometry
  • Visual servoing
  • Visual tracking
  • Vision-based object detection
  • Machine learning techniques with application to robot vision
  • Vision-based obstacle avoidance
  • Vision-based robotic manipulation
  • Computer vision with application to robotics

Dr. Yugang Liu
Prof. Dr. Sidney Givigi
Topic Editors

Keywords

  • visual SLAM
  • visual odometry
  • visual servoing
  • visual tracking
  • vision-based object detection

Participating Journals

Journal Name                 Impact Factor  CiteScore  Launched Year  First Decision (median)  APC
AI (ai)                      5.0            6.9        2020           20.7 days                CHF 1600
Applied Sciences (applsci)   2.5            5.5        2011           19.8 days                CHF 2400
Electronics (electronics)    2.6            6.1        2012           16.8 days                CHF 2400
Machines (machines)          2.5            4.7        2013           16.9 days                CHF 2400
Robotics (robotics)          3.3            7.7        2012           21.8 days                CHF 1800

Preprints.org is a multidisciplinary platform offering a preprint service designed to facilitate the early sharing of your research. It supports and empowers your research journey from the very beginning.

MDPI Topics is collaborating with Preprints.org and has established a direct connection between MDPI journals and the platform. Authors are encouraged to take advantage of this opportunity by posting their preprints at Preprints.org prior to publication:

  1. Share your research immediately: Disseminate your ideas prior to publication and establish priority for your work.
  2. Safeguard your intellectual contribution: Protect your ideas with a time-stamped preprint that serves as proof of your research timeline.
  3. Boost visibility and impact: Increase the reach and influence of your research by making it accessible to a global audience.
  4. Gain early feedback: Receive valuable input and insights from peers before submitting to a journal.
  5. Ensure broad indexing: Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (1 paper)

43 pages, 1528 KiB  
Article
Adaptive Sign Language Recognition for Deaf Users: Integrating Markov Chains with Niching Genetic Algorithm
by Muslem Al-Saidi, Áron Ballagi, Oday Ali Hassen and Saad M. Darwish
AI 2025, 6(8), 189; https://doi.org/10.3390/ai6080189 - 15 Aug 2025
Abstract
Sign language recognition (SLR) plays a crucial role in bridging the communication gap between deaf individuals and the hearing population. However, achieving subject-independent SLR remains a significant challenge due to variations in signing styles, hand shapes, and movement patterns among users. Traditional Markov Chain-based models struggle with generalizing across different signers, often leading to reduced recognition accuracy and increased uncertainty. These limitations arise from the inability of conventional models to effectively capture diverse gesture dynamics while maintaining robustness to inter-user variability. To address these challenges, this study proposes an adaptive SLR framework that integrates Markov Chains with a Niching Genetic Algorithm (NGA). The NGA optimizes the transition probabilities and structural parameters of the Markov Chain model, enabling it to learn diverse signing patterns while avoiding premature convergence to suboptimal solutions. In the proposed SLR framework, GA is employed to determine the optimal transition probabilities for the Markov Chain components operating across multiple signing contexts. To enhance the diversity of the initial population and improve the model’s adaptability to signer variations, a niche model is integrated using a Context-Based Clearing (CBC) technique. This approach mitigates premature convergence by promoting genetic diversity, ensuring that the population maintains a wide range of potential solutions. By minimizing gene association within chromosomes, the CBC technique enhances the model’s ability to learn diverse gesture transitions and movement dynamics across different users. This optimization process enables the Markov Chain to better generalize subject-independent sign language recognition, leading to improved classification accuracy, robustness against signer variability, and reduced misclassification rates. 
Experimental evaluations demonstrate a significant improvement in recognition performance, reduced error rates, and enhanced generalization across unseen signers, validating the effectiveness of the proposed approach.
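The abstract above describes evolving a Markov chain's transition probabilities with a genetic algorithm that preserves population diversity through clearing-based niching. The following is a minimal sketch of that general idea; every name, parameter value, and the toy gesture data are illustrative assumptions, not the paper's actual method or data.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES = 4            # assumed number of discrete gesture states
POP, GENS = 40, 60      # population size and number of generations

def random_chain():
    """One individual: a row-stochastic transition matrix."""
    m = rng.random((N_STATES, N_STATES))
    return m / m.sum(axis=1, keepdims=True)

def log_likelihood(m, seqs):
    """Fitness: log-likelihood of observed state sequences under the chain."""
    return sum(np.log(m[a, b] + 1e-12)
               for s in seqs for a, b in zip(s, s[1:]))

def clearing(pop, fits, sigma=0.1, kappa=3):
    """Clearing-based niching: within each radius-sigma niche (measured as
    mean absolute difference between transition matrices), only the best
    kappa individuals keep their fitness; the rest are 'cleared'."""
    order = np.argsort(fits)[::-1]
    cleared = np.array(fits, dtype=float)
    winners = []
    for i in order:
        near = [w for w in winners if np.abs(pop[i] - pop[w]).mean() < sigma]
        if len(near) >= kappa:
            cleared[i] = -np.inf     # dominated within its own niche
        else:
            winners.append(i)
    return cleared

def mutate(m, rate=0.1):
    """Perturb transition probabilities and re-normalize each row."""
    child = m + rate * rng.random(m.shape)
    return child / child.sum(axis=1, keepdims=True)

# Toy training data: short gesture-state sequences from two "signing styles".
seqs = [[0, 1, 2, 3, 0, 1, 2], [0, 2, 1, 3, 0, 2, 1]] * 5

pop = [random_chain() for _ in range(POP)]
for _ in range(GENS):
    fits = [log_likelihood(m, seqs) for m in pop]
    cleared = clearing(pop, fits)
    parents = np.argsort(cleared)[::-1][:POP // 2]   # select niche winners
    pop = [pop[p] for p in parents] + [mutate(pop[p]) for p in parents]

best = max(pop, key=lambda m: log_likelihood(m, seqs))
```

Selecting parents from the cleared fitness values, rather than the raw ones, is what keeps several distinct transition-probability patterns (here, the two signing styles) alive in the population instead of letting one pattern take over prematurely.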
