Search Results (355)

Search Parameters:
Keywords = indoor robot systems

19 pages, 7733 KiB  
Article
Assessing Geometry Perception of Direct Time-of-Flight Sensors for Robotic Safety
by Jakob Gimpelj and Marko Munih
Sensors 2025, 25(14), 4385; https://doi.org/10.3390/s25144385 - 13 Jul 2025
Viewed by 191
Abstract
Time-of-flight sensors have emerged as a viable solution for real-time distance sensing in robotic safety applications due to their compact size, fast response, and contactless operation. This study addresses one of the key challenges with time-of-flight sensors, focusing on how they perceive and evaluate the environment, particularly in the presence of complex geometries and reflective surfaces. Using a Universal Robots UR5e arm in a controlled indoor workspace, two different sensors were tested across eight scenarios involving objects of varying shapes, sizes, materials, and reflectivity. Quantitative metrics including the root mean square error, mean absolute error, area difference, and others were used to evaluate measurement accuracy. Results show that the sensor’s field of view and operating principle significantly affect its spatial resolution and object boundary detection, with narrower fields of view providing more precise measurements and wider fields of view demonstrating greater resilience to specular reflections. These findings offer valuable insights into selecting appropriate ToF sensors for integration into robotic safety systems, particularly in environments with reflective surfaces and complex geometries. Full article
(This article belongs to the Special Issue SPAD-Based Sensors and Techniques for Enhanced Sensing Applications)
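The first abstract evaluates ToF geometry perception with root mean square error and mean absolute error. A minimal sketch of those two metrics, with hypothetical sensor readings (the numbers are illustrative, not from the paper):

```python
import numpy as np

def rmse(measured, truth):
    """Root mean square error between measured and ground-truth distances."""
    return float(np.sqrt(np.mean((np.asarray(measured) - np.asarray(truth)) ** 2)))

def mae(measured, truth):
    """Mean absolute error between measured and ground-truth distances."""
    return float(np.mean(np.abs(np.asarray(measured) - np.asarray(truth))))

# Hypothetical ToF readings (metres) against a known target distance of 1.00 m.
truth = [1.00, 1.00, 1.00, 1.00]
measured = [1.02, 0.97, 1.01, 0.98]
print(mae(measured, truth))    # 0.02
print(rmse(measured, truth))   # ~0.0212 (RMSE penalizes the larger errors more)
```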

22 pages, 3045 KiB  
Article
Type-2 Fuzzy-Controlled Air-Cleaning Mobile Robot
by Chian-Song Chiu, Shu-Yen Yao and Carlo Santiago
Symmetry 2025, 17(7), 1088; https://doi.org/10.3390/sym17071088 - 8 Jul 2025
Viewed by 243
Abstract
This research presents the development of a type-2 fuzzy-controlled autonomous mobile robot specifically designed for monitoring and actively maintaining indoor air quality. The core of this system is the proposed type-2 fuzzy PID dual-mode controller used for stably patrolling rooms along the walls of the environment. The design method ingeniously merges the fast error correction capability of PID control with the robust adaptability of type-2 fuzzy logic control, which utilizes interval type-2 fuzzy sets. Furthermore, the type-2 fuzzy rule table of the right wall-following controller can be extended from the first designed fuzzy left wall-following controller in a symmetrical design manner. As a result, this study eliminates the drawbacks of excessive oscillations arising from PID control and sluggish response to large initial errors in typical traditional fuzzy control. Stable wall- and obstacle-following is achieved with assured accuracy and easy implementation, so that effective air quality monitoring and active PM2.5 filtering are carried out on the move. Furthermore, the augmented reality (AR) interface overlays real-time PM2.5 data directly onto a user’s visual field, enhancing situational awareness and enabling an immediate and intuitive assessment of air quality. Unlike traditional fixed sensor networks, this mobile approach achieves both broader area coverage and efficient air filtering. Finally, the experimental results demonstrate the controller’s superior performance and its potential to significantly improve indoor air quality. Full article
(This article belongs to the Special Issue Applications Based on Symmetry in Control Systems and Robotics)
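The dual-mode idea above, PID for fast correction of large errors and fuzzy control for smooth behaviour near the setpoint, can be sketched as a simple supervisory switch. This is an illustrative sketch, not the paper's interval type-2 inference; the gains, the threshold, and the `fuzzy_step` stand-in are all assumptions:

```python
import math

def pid_step(e, e_prev, integ, kp=1.2, ki=0.1, kd=0.3, dt=0.05):
    """One discrete PID step; returns (control, updated integral)."""
    integ += e * dt
    return kp * e + ki * integ + kd * (e - e_prev) / dt, integ

def fuzzy_step(e, de):
    """Stand-in for the fuzzy stage: a smooth, saturated correction."""
    return math.tanh(0.8 * e + 0.2 * de)

def dual_mode(e, e_prev, integ, threshold=0.5):
    """Large wall-distance error -> PID's fast correction;
    small error -> the gentler fuzzy output (avoids PID oscillation)."""
    if abs(e) > threshold:
        return pid_step(e, e_prev, integ)
    return fuzzy_step(e, e - e_prev), integ

u_big, integ = dual_mode(1.0, 0.9, 0.0)    # large error: PID branch
u_small, _ = dual_mode(0.1, 0.05, 0.0)     # small error: fuzzy branch
print(u_big, u_small)
```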

31 pages, 9881 KiB  
Article
Guide Robot Based on Image Processing and Path Planning
by Chen-Hsien Yang and Jih-Gau Juang
Machines 2025, 13(7), 560; https://doi.org/10.3390/machines13070560 - 27 Jun 2025
Viewed by 180
Abstract
While guide dogs remain the primary aid for visually impaired individuals, robotic guides continue to be an important area of research. This study introduces an indoor guide robot designed to physically assist a blind person by holding their hand with a robotic arm and guiding them to a specified destination. To enable hand-holding, we employed a camera combined with object detection to identify the human hand and a closed-loop control system to manage the robotic arm’s movements. For path planning, we implemented a Dueling Double Deep Q Network (D3QN) enhanced with a genetic algorithm. To address dynamic obstacles, the robot utilizes a depth camera alongside fuzzy logic to control its wheels and navigate around them. A 3D point cloud map is generated to determine the start and end points accurately. The D3QN algorithm, supplemented by variables defined using the genetic algorithm, is then used to plan the robot’s path. As a result, the robot can safely guide blind individuals to their destinations without collisions. Full article
(This article belongs to the Special Issue Autonomous Navigation of Mobile Robots and UAVs, 2nd Edition)
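The D3QN planner above combines a dueling network architecture with double Q-learning. The double-DQN target rule (the online net selects the next action, the target net evaluates it) fits in a few lines; the dueling part changes the network, not this rule, and the Q-values here are hypothetical:

```python
import numpy as np

def d3qn_target(q_online_next, q_target_next, reward, gamma=0.99, done=False):
    """Double-DQN bootstrap target: decouples action selection from
    evaluation, which reduces the overestimation bias of plain DQN."""
    if done:
        return reward
    a_star = int(np.argmax(q_online_next))       # online net picks the action
    return reward + gamma * q_target_next[a_star]  # target net scores it

# Hypothetical Q-values over three discrete moves for the next state.
q_online_next = np.array([0.2, 0.9, 0.4])
q_target_next = np.array([0.3, 0.7, 1.5])
print(d3qn_target(q_online_next, q_target_next, reward=1.0))  # 1 + 0.99 * 0.7
```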

25 pages, 7065 KiB  
Article
A Planar Moving Microphone Array for Sound Source Localization
by Chuyang Wang, Karhang Chu and Yatsze Choy
Appl. Sci. 2025, 15(12), 6777; https://doi.org/10.3390/app15126777 - 16 Jun 2025
Viewed by 507
Abstract
Sound source localization (SSL) equips service robots with the ability to perceive sound similarly to humans, which is particularly valuable in complex, dark indoor environments where vision-based systems may not work. From a data collection perspective, increasing the number of microphones generally improves SSL performance. However, a large microphone array such as a 16-microphone array configuration may occupy significant space on a robot. To address this, we propose a novel framework that uses a structure of four planar moving microphones to emulate the performance of a 16-microphone array, thereby saving space. Because of its unique design, this structure can dynamically form various spatial patterns, enabling 3D SSL, including estimation of angle, distance, and height. For experimental comparison, we also constructed a circular 6-microphone array and a planar 4 × 4 microphone array, both capable of rotation to ensure fairness. Three SSL algorithms were applied across all configurations. Experiments were conducted in a standard classroom environment, and the results show that the proposed framework achieves approximately 80–90% accuracy in angular estimation and around 85% accuracy in distance and height estimation, comparable to the performance of the 4 × 4 planar microphone array. Full article
(This article belongs to the Special Issue Noise Measurement, Acoustic Signal Processing and Noise Control)
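A common building block for microphone-array SSL like the above is time-difference-of-arrival estimation. Here is a sketch of GCC-PHAT on a simulated two-microphone delay; this is one standard SSL ingredient, not necessarily one of the three algorithms the paper compares:

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """Estimate the delay of sig relative to ref (seconds) via GCC-PHAT."""
    n = len(sig) + len(ref)
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    R /= np.abs(R) + 1e-12                 # phase transform: keep phase only
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (int(np.argmax(np.abs(cc))) - max_shift) / fs

fs = 16000
src = np.random.default_rng(0).standard_normal(800)  # broadband source burst
delay = 10                                           # samples (~0.000625 s)
mic1 = np.concatenate((np.zeros(delay), src))        # mic1 hears it later
mic2 = np.concatenate((src, np.zeros(delay)))
print(gcc_phat(mic1, mic2, fs))
```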

33 pages, 12896 KiB  
Article
A Bipedal Robotic Platform Leveraging Reconfigurable Locomotion Policies for Terrestrial, Aquatic, and Aerial Mobility
by Zijie Sun, Yangmin Li and Long Teng
Biomimetics 2025, 10(6), 374; https://doi.org/10.3390/biomimetics10060374 - 5 Jun 2025
Viewed by 708
Abstract
Biological systems can adaptively navigate multi-terrain environments via morphological and behavioral flexibility. While robotic systems increasingly achieve locomotion versatility in one or two domains, integrating terrestrial, aquatic, and aerial mobility into a single platform remains an engineering challenge. This work tackles this by introducing a bipedal robot equipped with a reconfigurable locomotion framework, enabling seven adaptive policies: (1) thrust-assisted jumping, (2) legged crawling, (3) balanced wheeling, (4) tricycle wheeling, (5) paddling-based swimming, (6) air-propelled drifting, and (7) quadcopter flight. Field experiments and indoor statistical tests validated these capabilities. The robot achieved a 3.7-m vertical jump via thrust forces counteracting gravitational forces. A unified paddling mechanism enabled seamless transitions between crawling and swimming modes, allowing amphibious mobility in transitional environments such as riverbanks. The crawling mode demonstrated the traversal on uneven substrates (e.g., medium-density grassland, soft sand, and cobblestones) while generating sufficient push forces for object transport. In contrast, wheeling modes prioritize speed and efficiency on flat terrain. The aquatic locomotion was validated through trials in static water, an open river, and a narrow stream. The flight mode was investigated with the assistance of the jumping mechanism. By bridging terrestrial, aquatic, and aerial locomotion, this platform may have the potential for search-and-rescue and environmental monitoring applications. Full article
(This article belongs to the Section Locomotion and Bioinspired Robotics)
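As rough context for the reported 3.7-m jump: the purely ballistic launch speed needed to reach that apex, ignoring drag and any thrust applied after lift-off (the robot's thrust assistance means the actual leg impulse can be smaller than this figure implies):

```python
import math

g = 9.81   # gravitational acceleration, m/s^2
h = 3.7    # reported jump apex, m

# v^2 = 2 g h at launch for a drag-free ballistic hop to height h.
v_launch = math.sqrt(2 * g * h)
print(round(v_launch, 2))   # ~8.52 m/s
```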

23 pages, 4909 KiB  
Article
Autonomous Navigation and Obstacle Avoidance for Orchard Spraying Robots: A Sensor-Fusion Approach with ArduPilot, ROS, and EKF
by Xinjie Zhu, Xiaoshun Zhao, Jingyan Liu, Weijun Feng and Xiaofei Fan
Agronomy 2025, 15(6), 1373; https://doi.org/10.3390/agronomy15061373 - 3 Jun 2025
Viewed by 672
Abstract
To address the challenges of low pesticide utilization, insufficient automation, and health risks in orchard plant protection, we developed an autonomous spraying vehicle using ArduPilot firmware and a robot operating system (ROS). The system tackles orchard navigation hurdles, including global navigation satellite system (GNSS) signal obstruction, light detection and ranging (LIDAR) simultaneous localization and mapping (SLAM) error accumulation, and lighting-limited visual positioning. A key innovation is the integration of an extended Kalman filter (EKF) to dynamically fuse T265 visual odometry, inertial measurement unit (IMU), and GPS data, overcoming single-sensor limitations and enhancing positioning robustness in complex environments. Additionally, the study optimizes PID controller derivative parameters for tracked chassis, improving acceleration/deceleration control smoothness. The system, composed of Pixhawk 4, Raspberry Pi 4B, Silan S2L LIDAR, T265 visual odometry, and a Quectel EC200A 4G module, enables autonomous path planning, real-time obstacle avoidance, and multi-mission navigation. Indoor/outdoor tests and field experiments in Sun Village Orchard validated its autonomous cruising and obstacle avoidance capabilities under real-world orchard conditions, demonstrating feasibility for intelligent plant protection. Full article
(This article belongs to the Special Issue Smart Pest Control for Building Farm Resilience)
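The EKF sensor fusion described above weighs each source by its covariance. A scalar-state sketch of one such fusion update (the values and variances are hypothetical, not the paper's tuning, and a real EKF fuses full state vectors):

```python
def fuse(est, var, z, r):
    """One scalar Kalman update: fuse estimate (est, var) with measurement (z, r)."""
    k = var / (var + r)                    # gain: trust ratio of prior vs sensor
    return est + k * (z - est), (1 - k) * var

# Hypothetical along-row position: odometry drifts, GPS is noisy but absolute.
pos, var = 10.0, 1.0                       # prior from visual odometry + IMU
pos, var = fuse(pos, var, 10.8, 4.0)       # weak GPS fix under canopy (high R)
print(pos, var)                            # nudged toward GPS, variance reduced
```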

26 pages, 5598 KiB  
Article
DeepLabV3+-Based Semantic Annotation Refinement for SLAM in Indoor Environments
by Shuangfeng Wei, Hongrui Tang, Changchang Liu, Tong Yang, Xiaohang Zhou, Sisi Zlatanova, Junlin Fan, Liping Tu and Yaqin Mao
Sensors 2025, 25(11), 3344; https://doi.org/10.3390/s25113344 - 26 May 2025
Cited by 1 | Viewed by 364
Abstract
Visual SLAM systems frequently encounter challenges in accurately reconstructing three-dimensional scenes from monocular imagery in semantically deficient environments, which significantly compromises robotic operational efficiency. While conventional manual annotation approaches can provide supplemental semantic information, they are inherently inefficient, procedurally complex, and labor-intensive. This paper presents an optimized DeepLabV3+-based framework for visual SLAM that integrates image semantic segmentation with automated point cloud semantic annotation. The proposed method utilizes MobileNetV3 as the backbone network for DeepLabV3+ to maintain segmentation accuracy while reducing computational demands. In this paper, we introduce a parameter-adaptive Density-Based Spatial Clustering of Applications with Noise (DBSCAN) clustering algorithm incorporating K-nearest neighbors and accelerated by KD-tree structures, effectively addressing the limitations of manual parameter tuning and erroneous annotations in conventional methods. Furthermore, a novel point cloud processing strategy featuring dynamic radius thresholding is developed to enhance annotation completeness and boundary precision. Experimental results demonstrate that our approach achieves significant improvements in annotation efficiency while preserving high accuracy, thereby providing reliable technical support for enhanced environmental understanding and navigation capabilities in indoor robotic applications. Full article
(This article belongs to the Special Issue Indoor Localization Technologies and Applications)
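The parameter-adaptive DBSCAN above motivates automating the choice of `eps`. One standard heuristic is the k-distance method; this brute-force sketch is not the authors' scheme (and a KD-tree, as the paper uses, would accelerate the neighbour queries):

```python
import numpy as np

def kdist_eps(points, k=4, pct=90):
    """k-distance heuristic for DBSCAN's eps: a high percentile of each
    point's distance to its k-th nearest neighbour."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)                  # column 0 is the zero self-distance
    return float(np.percentile(d[:, k], pct))

# One tight hypothetical point-cloud cluster: eps comes out small, as expected.
rng = np.random.default_rng(1)
cloud = rng.normal(0.0, 0.05, size=(200, 3))
eps = kdist_eps(cloud)
print(eps)
```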

24 pages, 9146 KiB  
Article
AI-Driven Dynamic Covariance for ROS 2 Mobile Robot Localization
by Bogdan Felician Abaza
Sensors 2025, 25(10), 3026; https://doi.org/10.3390/s25103026 - 11 May 2025
Cited by 2 | Viewed by 1134
Abstract
In the evolving field of mobile robotics, enhancing localization robustness in dynamic environments remains a critical challenge, particularly for ROS 2-based systems where sensor fusion plays a pivotal role. This study evaluates an AI-driven approach to dynamically adjust covariance parameters for improved pose estimation in a differential-drive mobile robot. A regression model was integrated into the robot_localization package to adapt the Extended Kalman Filter (EKF) covariance in real time, with experiments conducted in a controlled indoor setting over runs comparing AI-enabled dynamic covariance prediction against a static covariance baseline across Static, Moderate, and Aggressive motion dynamics. The AI-enabled system achieved a Mean Absolute Error (MAE) of 0.0061 for pose estimation and reduced median yaw prediction errors to 0.0362 rad (static) and 0.0381 rad (moderate) with tighter interquartile ranges (0.0489 rad, 0.1069 rad) compared to the baseline (0.0222 rad, 0.1399 rad). Aggressive dynamics posed challenges, with errors up to 0.9491 rad due to data distribution bias and Random Forest model constraints. Enhanced dataset augmentation, LSTM modeling, and online learning are proposed to address these limitations. Datalogging enabled iterative re-training, supporting scalable state estimation with future focus on online learning. Full article
(This article belongs to the Section Sensors and Robotics)
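The mechanism behind dynamic covariance adaptation above is that the filter's gain shrinks as the predicted measurement noise grows, so a sensor the model flags as unreliable moves the estimate less. A scalar illustration of that relationship (not the robot_localization internals):

```python
def kalman_gain(p, r):
    """Scalar Kalman gain: fraction of the innovation applied to the estimate."""
    return p / (p + r)

p = 0.5                        # current state variance
print(kalman_gain(p, 0.1))     # model predicts low noise  -> large gain
print(kalman_gain(p, 5.0))     # model predicts high noise -> small gain
```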

26 pages, 5464 KiB  
Article
An Innovative Indoor Localization Method for Agricultural Robots Based on the NLOS Base Station Identification and IBKA-BP Integration
by Jingjing Yang, Lihong Wan, Junbing Qian, Zonglun Li, Zhijie Mao, Xueming Zhang and Junjie Lei
Agriculture 2025, 15(8), 901; https://doi.org/10.3390/agriculture15080901 - 21 Apr 2025
Viewed by 443
Abstract
This study proposes an innovative indoor localization algorithm based on the base station identification and improved black kite algorithm–backpropagation (IBKA-BP) integration to address the problem of low positioning accuracy in agricultural robots operating in agricultural greenhouses and breeding farms, where the Global Navigation Satellite System is unreliable due to weak or absent signals. First, the density peaks clustering (DPC) algorithm is applied to select a subset of line-of-sight (LOS) base stations with higher positioning accuracy for backpropagation neural network modeling. Next, the collected received signal strength indication (RSSI) data are processed using Kalman filtering and Min-Max normalization, suppressing signal fluctuations and accelerating the gradient descent convergence of the distance measurement model. Finally, the improved black kite algorithm (IBKA) is enhanced with tent chaotic mapping, a lens imaging reverse learning strategy, and the golden sine strategy to optimize the weights and biases of the BP neural network, developing an RSSI-based ranging algorithm using the IBKA-BP neural network. The experimental results demonstrate that the proposed algorithm can achieve a mean error of 16.34 cm, a standard deviation of 16.32 cm, and a root mean square error of 22.87 cm, indicating its significant potential for precise indoor localization of agricultural robots. Full article
(This article belongs to the Section Digital Agriculture)
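RSSI ranging like the above is conventionally grounded in the log-distance path-loss model, which the paper's IBKA-BP network in effect learns a refined version of. A sketch with illustrative calibration constants (the reference power and path-loss exponent would normally be fitted per environment):

```python
def rssi_to_distance(rssi, rssi_at_1m=-45.0, n=2.2):
    """Invert the log-distance path-loss model to a range in metres.
    rssi_at_1m and exponent n are illustrative, environment-dependent values."""
    return 10 ** ((rssi_at_1m - rssi) / (10 * n))

print(rssi_to_distance(-45.0))   # at the 1-m reference power: 1.0 m
print(rssi_to_distance(-67.0))   # 22 dB weaker with n=2.2: 10.0 m
```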

26 pages, 14214 KiB  
Article
Stereo Visual Odometry and Real-Time Appearance-Based SLAM for Mapping and Localization in Indoor and Outdoor Orchard Environments
by Imran Hussain, Xiongzhe Han and Jong-Woo Ha
Agriculture 2025, 15(8), 872; https://doi.org/10.3390/agriculture15080872 - 16 Apr 2025
Viewed by 1011
Abstract
Agricultural robots can mitigate labor shortages and advance precision farming. However, the dense vegetation canopies and uneven terrain in orchard environments reduce the reliability of traditional GPS-based localization, thereby reducing navigation accuracy and making autonomous navigation challenging. Moreover, inefficient path planning and an increased risk of collisions affect the robot’s ability to perform tasks such as fruit harvesting, spraying, and monitoring. To address these limitations, this study integrated stereo visual odometry with real-time appearance-based mapping (RTAB-Map)-based simultaneous localization and mapping (SLAM) to improve mapping and localization in both indoor and outdoor orchard settings. The proposed system leverages stereo image pairs for precise depth estimation while utilizing RTAB-Map’s graph-based SLAM framework with loop-closure detection to ensure global map consistency. In addition, an incorporated inertial measurement unit (IMU) enhances pose estimation, thereby improving localization accuracy. Substantial improvements in both mapping and localization performance over the traditional approach were demonstrated, with an average error of 0.018 m against the ground truth for outdoor mapping and a consistent average error of 0.03 m for indoor trials, with a 20.7% reduction in visual odometry trajectory deviation compared to traditional methods. Localization performance remained robust across diverse conditions, with a low RMSE of 0.207 m. Our approach provides critical insights into developing more reliable autonomous navigation systems for agricultural robots. Full article
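The stereo depth estimation used above reduces, per matched pixel, to triangulation from disparity. A sketch with hypothetical calibration numbers (focal length and baseline are illustrative, not the paper's rig):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole stereo triangulation: Z = f * B / d.
    Nearby objects produce large disparity, distant ones small disparity."""
    return f_px * baseline_m / disparity_px

# Hypothetical calibration: 700 px focal length, 12 cm stereo baseline.
print(depth_from_disparity(700, 0.12, 42.0))   # 2.0 m
print(depth_from_disparity(700, 0.12, 8.4))    # 10.0 m
```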

24 pages, 10535 KiB  
Article
Enabling Navigation and Mission-Based Control on a Low-Cost Unitree Go1 Air Quadrupedal Robot
by Ntmitrii Gyrichidi, Mikhail P. Romanov, Yuriy Yu. Tsepkin and Alexey M. Romanov
Designs 2025, 9(2), 50; https://doi.org/10.3390/designs9020050 - 15 Apr 2025
Viewed by 1379
Abstract
Quadrupedal robots are now not just prototypes as they were a decade ago. This field now focuses on finding new application areas for robots rather than solving pure locomotion problems. Although the price of quadrupedal robots has decreased significantly during the last decade, it is still relatively high and can be considered as one of the limiting factors for research, especially for multi-agent scenarios involving multiple robots. This paper proposes a simple and easily reproducible approach to integrating the cheapest robot from the Unitree Go1 series with controllers running the ArduPilot firmware without disassembling the robot itself or modifying its hardware. Experimental studies show that the average latency introduced by the proposed control method over Wi-Fi is 206.7 ms, and its standard deviation is below 53 ms, which is suitable for following the mission route using the Global Navigation Satellite System (GNSS). At the same time, control using Ethernet reduces mean latency down to 78.3 ms and provides additional functionality (e.g., the ability to configure step height). Finally, in the range of standard Go1 speeds, both proposed control interfaces, based on Wi-Fi and Ethernet, are suitable for most practical indoor and outdoor tasks. Full article

26 pages, 13612 KiB  
Article
Central Dioptric Line Image-Based Visual Servoing for Nonholonomic Mobile Robot Corridor-Following and Doorway-Passing
by Chen Zhong, Qingjia Kong, Ke Wang, Zhe Zhang, Long Cheng, Sijia Liu and Lizhu Han
Actuators 2025, 14(4), 183; https://doi.org/10.3390/act14040183 - 9 Apr 2025
Viewed by 483
Abstract
Autonomous navigation in indoor environments demands reliable perception and control strategies for nonholonomic mobile robots operating under geometric constraints. While visual servoing offers a promising framework for such tasks, conventional approaches often rely on explicit 3D feature estimation or predefined reference trajectories, limiting their adaptability in dynamic scenarios. In this paper, we propose a novel nonholonomic mobile robot corridor-following and doorway-passing method based on image-based visual servoing (IBVS) by using a single dioptric camera. Based on the unifying central spherical projection model, we present the projection mechanism of 3D lines and properties of line images for two 3D parallel lines under different robot poses. In the normalized image plane, we define a triangle enclosed by two polar lines in relation to line image conic features, and adopt a polar representation for visual features, which will naturally become zero when the robot follows the corridor middle line. The IBVS control law for the corridor-following task does not need to pre-calculate expected visual features or estimate the 3D information of image features, and is extended to doorway-passing by simply introducing an upper door frame to modify visual features for the control law. Simulations including straight corridor-following, anti-noise performance, convergence of the control law, doorway-passing, and loop-closed corridor-following are conducted. We develop a ROS-based IBVS system on our real robot platform; the experimental results validate that the proposed method is suitable for the autonomous indoor visual navigation control task for a nonholonomic mobile robot equipped with a single dioptric camera. Full article
(This article belongs to the Section Actuators for Robotics)

15 pages, 10968 KiB  
Article
An Experimental Evaluation of Indoor Localization in Autonomous Mobile Robots
by Mina Khoshrangbaf, Vahid Khalilpour Akram, Moharram Challenger and Orhan Dagdeviren
Sensors 2025, 25(7), 2209; https://doi.org/10.3390/s25072209 - 31 Mar 2025
Cited by 2 | Viewed by 919
Abstract
High-precision indoor localization and tracking are essential requirements for the safe navigation and task execution of autonomous mobile robots. Despite the growing importance of mobile robots in various areas, achieving precise indoor localization remains challenging due to signal interference, multipath propagation, and complex indoor layouts. In this work, we present the first comprehensive study comparing the accuracy of Bluetooth low energy (BLE), WiFi, and ultra wideband (UWB) technologies for the indoor localization of mobile robots under various circumstances. In the performed experiments, the error margin of the WiFi-based systems reached 608.7 cm, which is not tolerable for most applications. As a commonly used technology in the existing tracking systems, the accuracy of BLE-based systems is at least 44.12% better than that of WiFi-based systems. The error margin of the BLE-based system in tracking static and mobile robots was 191.7 cm and 340.1 cm, respectively. The experiments showed that even with a limited number of UWB anchors, the system provides acceptable accuracy for tracking the mobile robots. Using only four UWB beacons in an environment of about 431 m2 area, the maximum error margin of detected positions by the UWB-based tracking system remained below 13.1 cm and 28.9 cm on average for the static and mobile robots, respectively. This error margin is 88.05% lower than that of the BLE-based system and 93.27% lower than that of the WiFi-based system on average. The high tracking precision, the need for a lower number of anchors, and the decreasing hardware costs point out that UWB will be the dominating technology in indoor tracking systems in the near future. Full article
(This article belongs to the Special Issue Multi‐sensors for Indoor Localization and Tracking: 2nd Edition)
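With a handful of anchors and measured ranges, a UWB position fix like those evaluated above can be computed by linearized least squares: subtracting the first anchor's range equation from the others removes the quadratic terms. A 2D sketch with a hypothetical room layout (not the paper's 431 m² test environment):

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Linear least-squares 2D position fix from >=3 anchors and ranges."""
    anchors = np.asarray(anchors, float)
    ranges = np.asarray(ranges, float)
    x0, r0 = anchors[0], ranges[0]
    A = 2 * (anchors[1:] - x0)
    b = (r0 ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(x0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Four hypothetical UWB anchors at the corners of a 10 m x 10 m room.
anchors = [(0, 0), (10, 0), (0, 10), (10, 10)]
true = np.array([3.0, 4.0])
ranges = [np.linalg.norm(true - a) for a in anchors]   # noise-free ranges
print(trilaterate(anchors, ranges))                    # recovers (3, 4)
```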

32 pages, 4385 KiB  
Article
Influence of Environmental Factors on the Accuracy of the Ultrasonic Rangefinder in a Mobile Robotic Technical Vision System
by Andrii Rudyk, Andriy Semenov, Serhii Baraban, Olena Semenova, Pavlo Kulakov, Oleksandr Kustovskyj and Lesia Brych
Electronics 2025, 14(7), 1393; https://doi.org/10.3390/electronics14071393 - 30 Mar 2025
Viewed by 930
Abstract
The accuracy of ultrasonic rangefinders is crucial for mobile robotic navigation systems, yet environmental factors such as temperature, humidity, atmospheric pressure, and wind conditions can influence ultrasonic speed in the air. The primary objective is to investigate how environmental factors influence the output signal of an ultrasonic emitter and to develop a method for improving the accuracy of distance measurements in both outdoor and indoor settings. The research employs a combination of theoretical modeling, statistical analysis, and experimental validation. The research employs an ultrasonic rangefinder integrated with environmental sensors (BME280, Bosch Sensortec GmbH, Kusterdingen, Germany) and wind sensors (WMT700, WINDCAP®, Vaisala Oyj, Vantaa, Finland) to account for environmental influences. Experimental studies were conducted using a prototype ultrasonic rangefinder, and statistical analysis (Student’s t-test) was performed on collected data. The results of estimation by Student’s t-test for 256 measurements demonstrate the maximum effect of air temperature and the minimum effect of relative air humidity on a piezoelectric emitter output signal both outdoors and indoors. In addition, wind parameters affect the rangefinder’s operation. The maximum range of obstacle detection depends on the reflection coefficient of the material that covers the obstacle. The results align with theoretical expectations for highly reflective surfaces. A cascade-forward artificial neural network model was developed to refine distance estimations. This study demonstrates the importance of considering environmental factors in ultrasonic rangefinder systems for mobile robots. By integrating environmental sensors and using statistical analysis, the accuracy of distance measurements can be significantly improved. The results contribute to the development of more reliable navigation systems for mobile robots operating in diverse environments. Full article
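The dominant temperature effect reported above enters through the speed of sound. A sketch of the textbook dry-air correction applied to an echo range (this is the standard model only, not the paper's neural-network refinement, and it ignores humidity, pressure, and wind):

```python
import math

def speed_of_sound(temp_c):
    """Approximate speed of sound in dry air as a function of temperature."""
    return 331.3 * math.sqrt(1 + temp_c / 273.15)

def echo_distance(tof_s, temp_c):
    """Range from an ultrasonic round-trip time of flight."""
    return speed_of_sound(temp_c) * tof_s / 2

print(round(speed_of_sound(20.0), 1))   # ~343.2 m/s at 20 degC
# Uncompensated, a 0 -> 35 degC swing shifts a ~1.7 m reading by ~0.10 m.
print(echo_distance(0.01, 35.0) - echo_distance(0.01, 0.0))
```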

19 pages, 24555 KiB  
Article
A Multi-Strategy Visual SLAM System for Motion Blur Handling in Indoor Dynamic Environments
by Shuo Huai, Long Cao, Yang Zhou, Zhiyang Guo and Jingyao Gai
Sensors 2025, 25(6), 1696; https://doi.org/10.3390/s25061696 - 9 Mar 2025
Cited by 2 | Viewed by 928
Abstract
Typical SLAM systems adhere to the assumption of environment rigidity, which limits their functionality when deployed in the dynamic indoor environments commonly encountered by household robots. Prevailing methods address this issue by employing semantic information for the identification and processing of dynamic objects in scenes. However, extracting reliable semantic information remains challenging due to the presence of motion blur. In this paper, a novel visual SLAM algorithm is proposed in which various approaches are integrated to obtain more reliable semantic information, consequently reducing the impact of motion blur on visual SLAM systems. Specifically, to accurately distinguish moving objects and static objects, we introduce a missed segmentation compensation mechanism into our SLAM system for predicting and restoring semantic information, and depth and semantic information is then leveraged to generate masks of dynamic objects. Additionally, to refine keypoint filtering, a probability-based algorithm for dynamic feature detection and elimination is incorporated into our SLAM system. Evaluation experiments using the TUM and Bonn RGB-D datasets demonstrated that our SLAM system achieves lower absolute trajectory error (ATE) than existing systems in different dynamic indoor environments, particularly those with large view angle variations. Our system can be applied to enhance the autonomous navigation and scene understanding capabilities of domestic robots. Full article
(This article belongs to the Section Sensors and Robotics)
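The absolute trajectory error used in the evaluation above is commonly computed as the RMSE of per-pose position differences, assuming the estimated and ground-truth trajectories are already time-aligned and registered. A sketch with a synthetic drifting trajectory:

```python
import numpy as np

def ate_rmse(estimated, ground_truth):
    """Absolute trajectory error: RMSE of per-pose position differences
    (trajectories assumed time-aligned and registered)."""
    diff = np.asarray(estimated) - np.asarray(ground_truth)
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))

gt = np.array([[0, 0], [1, 0], [2, 0], [3, 0]], float)   # straight 2D path
est = gt + np.array([[0.0, 0.1]] * 4)                    # constant 10 cm drift
print(ate_rmse(est, gt))
```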