Search Results (75)

Search Parameters:
Keywords = omnidirectional camera

36 pages, 895 KB  
Review
Robotic Motion Techniques for Socially Aware Navigation: A Scoping Review
by Jesus Eduardo Hermosilla-Diaz, Ericka Janet Rechy-Ramirez and Antonio Marin-Hernandez
Future Internet 2025, 17(12), 552; https://doi.org/10.3390/fi17120552 - 1 Dec 2025
Viewed by 559
Abstract
The increasing presence of robots in social spaces requires continuous improvement of the behavioral strategies that robots follow. Although these strategies mainly target operational efficiency, other aspects should be considered to provide reliable interaction in terms of sociability (e.g., methods for detecting and interpreting human behaviors, how and where human–robot interaction is performed, and participant evaluation of robot behavior). This scoping review aims to answer seven research questions related to robotic motion in socially aware navigation, covering aspects such as the types and characteristics of robots used, the sensors used to detect human behavioral cues, and the types of environments and situations considered. Articles were collected from the ACM Digital Library, Emerald Insight, IEEE Xplore, ScienceDirect, MDPI, and SpringerLink databases, and the searches were conducted following the PRISMA-ScR protocol. Selected articles met the following inclusion criteria: (1) published between January 2018 and August 2025, (2) written in English, (3) published in journals or conference proceedings, (4) focused on social robots, (5) addressed Socially Aware Navigation (SAN), and (6) involved volunteers in the experiments. As a result, 22 studies were included; 77.27% of them employed mobile wheeled robots, and differential and omnidirectional drive systems were each used in 36.36% of the articles. Half of the studies (50%) used a functional robot appearance, compared with bio-inspired appearances in 31.80% of the cases. Among the sensors used to collect data from participants, vision-based technologies were the most common (monocular cameras and 3D-vision systems were each reported in 7 articles). Processing was mainly performed on board the robot (50%). A total of 59.1% of the studies were performed in real-world environments rather than simulations (36.36%), and a few were performed in hybrid environments (4.54%). Regarding robot interactive behaviors, physical behaviors were present in all experiments, whereas visual behaviors were employed in only 2 studies. In just over half of the studies (13), participants were asked to provide post-experiment feedback. Full article
(This article belongs to the Special Issue Mobile Robotics and Autonomous System)

17 pages, 490 KB  
Article
Knowledge-Guided Symbolic Regression for Interpretable Camera Calibration
by Rui Pimentel de Figueiredo
J. Imaging 2025, 11(11), 389; https://doi.org/10.3390/jimaging11110389 - 2 Nov 2025
Viewed by 773
Abstract
Calibrating cameras accurately requires the identification of projection and distortion models that effectively account for lens-specific deviations. Conventional formulations, like the pinhole model or radial–tangential corrections, often struggle to represent the asymmetric and nonlinear distortions encountered in complex environments such as autonomous navigation, robotics, and immersive imaging. Although neural methods offer greater adaptability, they demand extensive training data, are computationally intensive, and often lack transparency. This work introduces a symbolic model discovery framework guided by physical knowledge, where symbolic regression and genetic programming (GP) are used in tandem to identify calibration models tailored to specific optical behaviors. The approach incorporates a broad class of known distortion models, including Brown–Conrady, Mei–Rives, Kannala–Brandt, and double-sphere, as modular components, while remaining extensible to any predefined or domain-specific formulation. Embedding these models directly into the symbolic search process constrains the solution space, enabling efficient parameter fitting and robust model selection without overfitting. Through empirical evaluation across a variety of lens types, including fisheye, omnidirectional, catadioptric, and traditional cameras, we show that our method produces results on par with or surpassing those of established calibration techniques. The outcome is a flexible, interpretable, and resource-efficient alternative suitable for deployment scenarios where calibration data are scarce or computational resources are constrained. Full article
(This article belongs to the Special Issue Celebrating the 10th Anniversary of the Journal of Imaging)
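
As a point of reference for the distortion families named above, the following is a minimal sketch (not taken from the paper) of the Brown–Conrady radial–tangential model that such a symbolic search could include as one modular component; the parameter values and function name are illustrative.

```python
import numpy as np

def project_brown_conrady(X, fx, fy, cx, cy, k1, k2, p1, p2):
    """Project 3D camera-frame points with a pinhole model plus
    Brown-Conrady radial-tangential distortion."""
    x, y = X[:, 0] / X[:, 2], X[:, 1] / X[:, 2]        # normalized coordinates
    r2 = x**2 + y**2
    radial = 1 + k1 * r2 + k2 * r2**2
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
    y_d = y * radial + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    return np.stack([fx * x_d + cx, fy * y_d + cy], axis=1)

# Toy usage: reproject two synthetic points with made-up intrinsics
pts = np.array([[0.1, -0.05, 1.0], [0.3, 0.2, 2.0]])
print(project_brown_conrady(pts, 800, 800, 320, 240, -0.2, 0.05, 1e-3, -5e-4))
```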

22 pages, 59687 KB  
Article
Multi-View Omnidirectional Vision and Structured Light for High-Precision Mapping and Reconstruction
by Qihui Guo, Maksim A. Grigorev, Zihan Zhang, Ivan Kholodilin and Bing Li
Sensors 2025, 25(20), 6485; https://doi.org/10.3390/s25206485 - 20 Oct 2025
Viewed by 1317
Abstract
Omnidirectional vision systems enable panoramic perception for autonomous navigation and large-scale mapping, but physical testbeds are costly, resource-intensive, and carry operational risks. We develop a virtual simulation platform for multi-view omnidirectional vision that supports flexible camera configuration and cross-platform data streaming for efficient processing. Building on this platform, we propose and validate a reconstruction and ranging method that fuses multi-view omnidirectional images with structured-light projection. The method achieves high-precision obstacle contour reconstruction and distance estimation without extensive physical calibration or rigid hardware setups. Experiments in simulation and the real world demonstrate distance errors within 8 mm and robust performance across diverse camera configurations, highlighting the practicality of the platform for omnidirectional vision research. Full article
(This article belongs to the Section Navigation and Positioning)
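
For readers unfamiliar with structured-light ranging, the sketch below shows the generic laser-plane/camera-ray triangulation idea in its simplest form; it is not the authors' multi-view pipeline, and the plane parameters and helper name are invented for illustration.

```python
import numpy as np

def ray_plane_distance(pixel_dir, plane_n, plane_d):
    """Intersect a camera ray (unit direction in the camera frame) with the
    structured-light plane n.X = d and return the range along the ray."""
    denom = plane_n @ pixel_dir
    if abs(denom) < 1e-9:
        return None              # ray is parallel to the light plane
    t = plane_d / denom          # X = t * pixel_dir lies on the plane
    return t if t > 0 else None

# Toy usage: a light plane 0.5 m ahead of the camera, tilted 20 degrees about x
n = np.array([0.0, np.sin(np.radians(20)), np.cos(np.radians(20))])
ray = np.array([0.05, -0.02, 1.0]); ray /= np.linalg.norm(ray)
print(ray_plane_distance(ray, n, 0.5))
```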

30 pages, 7765 KB  
Article
Self-Controlled Autonomous Mobility System with Adaptive Spatial and Stair Recognition Using CNNs
by Hayato Mitsuhashi, Hiroyuki Kamata and Taku Itami
Appl. Sci. 2025, 15(20), 10978; https://doi.org/10.3390/app152010978 - 13 Oct 2025
Cited by 1 | Viewed by 661
Abstract
The aim of this study is to develop a next-generation, fully autonomous electric wheelchair capable of operating in diverse environments. This study proposes a self-controlled autonomous mobility system that integrates monocular camera and laser-based 3D spatial recognition, convolutional neural network-based obstacle recognition, shape measurement, and stair structure recognition. Obstacle recognition and shape measurement are performed by analyzing the surrounding space using convolutional neural networks and distance calculation methods based on laser measurements. The stair structure recognition technology exploits the stair-step characteristics of the laser's irradiation pattern, enabling the detection of distance information not captured by the camera. The principal analysis and algorithm development were conducted on a small-scale autonomous mobility system, and feasibility was determined by applying the method to an omnidirectional self-controlled autonomous electric wheelchair. Using the autonomous robot, we successfully demonstrated an obstacle-avoidance program based on obstacle recognition and shape measurement that is independent of environmental illumination. Additionally, 3D analysis of the number of stair steps, their height, and their depth was achieved. This study enhances mobility in complex environments under varying lighting conditions and lays the groundwork for inclusive mobility solutions in a barrier-free society. When the proposed method was applied to an omnidirectional self-controlled electric wheelchair, it accurately detected the distance to obstacles, their shapes, and the height and depth of stairs, with a maximum error of 0.8 cm. Full article
(This article belongs to the Section Robotics and Automation)
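
The stair-structure idea (counting steps and measuring rise and run from a laser profile) can be illustrated with a toy calculation; this is a simplified stand-in, not the paper's algorithm, and the threshold and sampling values are arbitrary.

```python
import numpy as np

def stair_geometry(profile, min_rise=0.05):
    """Estimate step count, mean rise and mean run from an ordered
    (horizontal distance, height) laser profile of a staircase.
    A step edge is declared where the height jumps by more than min_rise."""
    d, h = profile[:, 0], profile[:, 1]
    jumps = np.abs(np.diff(h))
    edges = np.where(jumps > min_rise)[0]
    n_steps = len(edges)
    mean_rise = jumps[edges].mean() if n_steps else 0.0
    mean_run = np.abs(np.diff(d[edges])).mean() if n_steps > 1 else 0.0
    return n_steps, mean_rise, mean_run

# Toy profile: three 15 cm x 30 cm steps sampled every 5 cm
d = np.arange(0, 1.0, 0.05)
h = 0.15 * np.floor(d / 0.30)
print(stair_geometry(np.stack([d, h], axis=1)))   # -> (3, 0.15, 0.30)
```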

22 pages, 6827 KB  
Article
Metaheuristics-Assisted Placement of Omnidirectional Image Sensors for Visually Obstructed Environments
by Fernando Fausto, Gemma Corona, Adrian Gonzalez and Marco Pérez-Cisneros
Biomimetics 2025, 10(9), 579; https://doi.org/10.3390/biomimetics10090579 - 2 Sep 2025
Viewed by 646
Abstract
Optimal camera placement (OCP) is a crucial task for ensuring adequate surveillance of both indoor and outdoor environments. While several solutions to this problem have been documented in the literature, research gaps remain regarding the maximization of surveillance coverage, particularly the optimal placement of omnidirectional camera (OC) sensors in indoor and partially occluded environments via metaheuristic optimization algorithms (MOAs). In this paper, we present a study centered on several popular MOAs and their application to OCP for OC sensors in indoor environments. For our experiments, we considered two layouts, each consisting of a deployment area and visual obstructions, as well as two different omnidirectional camera models. The tested MOAs include popular algorithms such as PSO, GWO, SSO, GSA, SMS, SA, DE, GA, and CMA-ES. The experimental results suggest that success in MOA-based OCP is strongly tied to the specific search strategy applied by the metaheuristic method, making certain approaches preferable to others for this kind of problem. Full article
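
As an illustration of what MOA-based OCP involves, here is a toy particle swarm optimization (one of the MOAs listed) that places omnidirectional cameras on a small gridded floor plan with a single rectangular obstruction; the room size, sensing radius, swarm settings, and coverage objective are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Points to monitor: a 20 x 20 grid over a 10 m x 10 m room, with one obstruction.
pts = np.stack(np.meshgrid(np.linspace(0, 10, 20), np.linspace(0, 10, 20)), -1).reshape(-1, 2)
obst = (3.0, 5.0, 4.0, 6.0)          # xmin, xmax, ymin, ymax of the obstruction
R = 4.0                              # assumed sensing radius of one omnidirectional camera

def blocked(cam, p, rect, n=20):
    """Line-of-sight test: sample the segment cam -> p and flag it if any
    sample falls inside the rectangular obstruction."""
    seg = cam + np.linspace(0, 1, n)[:, None] * (p - cam)
    return np.any((seg[:, 0] > rect[0]) & (seg[:, 0] < rect[1]) &
                  (seg[:, 1] > rect[2]) & (seg[:, 1] < rect[3]))

def coverage(x):
    """Fraction of grid points seen (in range and unobstructed) by at least
    one of the cameras encoded in the flat vector x."""
    cams = x.reshape(-1, 2)
    seen = np.zeros(len(pts), dtype=bool)
    for c in cams:
        for i, p in enumerate(pts):
            if not seen[i] and np.linalg.norm(p - c) <= R and not blocked(c, p, obst):
                seen[i] = True
    return seen.mean()

# Plain particle swarm optimization over the positions of 2 cameras (4 variables).
dim, n_part, iters = 4, 12, 15
x = rng.uniform(0, 10, (n_part, dim))
v = np.zeros_like(x)
pbest, pval = x.copy(), np.array([coverage(xi) for xi in x])
gbest = pbest[pval.argmax()]
for _ in range(iters):
    r1, r2 = rng.random((2, n_part, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 10)
    val = np.array([coverage(xi) for xi in x])
    better = val > pval
    pbest[better], pval[better] = x[better], val[better]
    gbest = pbest[pval.argmax()]
print("best coverage:", round(pval.max(), 3), "camera layout:", gbest.reshape(-1, 2))
```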

32 pages, 1435 KB  
Review
Smart Safety Helmets with Integrated Vision Systems for Industrial Infrastructure Inspection: A Comprehensive Review of VSLAM-Enabled Technologies
by Emmanuel A. Merchán-Cruz, Samuel Moveh, Oleksandr Pasha, Reinis Tocelovskis, Alexander Grakovski, Alexander Krainyukov, Nikita Ostrovenecs, Ivans Gercevs and Vladimirs Petrovs
Sensors 2025, 25(15), 4834; https://doi.org/10.3390/s25154834 - 6 Aug 2025
Cited by 1 | Viewed by 4618
Abstract
Smart safety helmets equipped with vision systems are emerging as powerful tools for industrial infrastructure inspection. This paper presents a comprehensive state-of-the-art review of such VSLAM-enabled (Visual Simultaneous Localization and Mapping) helmets. We surveyed the evolution from basic helmet cameras to intelligent, sensor-fused inspection platforms, highlighting how modern helmets leverage real-time visual SLAM algorithms to map environments and assist inspectors. A systematic literature search was conducted targeting high-impact journals, patents, and industry reports. We classify helmet-integrated camera systems into monocular, stereo, and omnidirectional types and compare their capabilities for infrastructure inspection. We examine core VSLAM algorithms (feature-based, direct, hybrid, and deep-learning-enhanced) and discuss their adaptation to wearable platforms. Multi-sensor fusion approaches integrating inertial, LiDAR, and GNSS data are reviewed, along with edge/cloud processing architectures enabling real-time performance. This paper compiles numerous industrial use cases, from bridges and tunnels to plants and power facilities, demonstrating significant improvements in inspection efficiency, data quality, and worker safety. Key challenges are analyzed, including technical hurdles (battery life, processing limits, and harsh environments), human factors (ergonomics, training, and cognitive load), and regulatory issues (safety certification and data privacy). We also identify emerging trends, such as semantic SLAM, AI-driven defect recognition, hardware miniaturization, and collaborative multi-helmet systems. This review finds that VSLAM-equipped smart helmets offer a transformative approach to infrastructure inspection, enabling real-time mapping, augmented awareness, and safer workflows. We conclude by highlighting current research gaps, notably in standardizing systems and integrating with asset management, and provide recommendations for industry adoption and future research directions. Full article

30 pages, 63876 KB  
Article
A Low-Cost 3D Mapping System for Indoor Scenes Based on 2D LiDAR and Monocular Cameras
by Xiaojun Li, Xinrui Li, Guiting Hu, Qi Niu and Luping Xu
Remote Sens. 2024, 16(24), 4712; https://doi.org/10.3390/rs16244712 - 17 Dec 2024
Cited by 4 | Viewed by 5734
Abstract
The cost of indoor mapping methods based on three-dimensional (3D) LiDAR can be relatively high, and they lack environmental color information, thereby limiting their application scenarios. This study presents an innovative, low-cost, omnidirectional 3D color LiDAR mapping system for indoor environments. The system consists of two two-dimensional (2D) LiDARs, six monocular cameras, and a servo motor. The point clouds are fused with imagery using a pixel-spatial dual-constrained depth gradient adaptive regularization (PS-DGAR) algorithm to produce dense 3D color point clouds. During fusion, the point cloud is reconstructed inversely based on the predicted pixel depth values, compensating for areas of sparse spatial features. For indoor scene reconstruction, a globally consistent alignment algorithm based on particle filter and iterative closest point (PF-ICP) is proposed, which incorporates adjacent frame registration and global pose optimization to reduce mapping errors. Experimental results demonstrate that the proposed density enhancement method achieves an average error of 1.5 cm, significantly improving the density and geometric integrity of sparse point clouds. The registration algorithm achieves a root mean square error (RMSE) of 0.0217 and a runtime of less than 4 s, both of which outperform traditional iterative closest point (ICP) variants. Furthermore, the proposed low-cost omnidirectional 3D color LiDAR mapping system demonstrates superior measurement accuracy in indoor environments. Full article
(This article belongs to the Special Issue New Perspectives on 3D Point Cloud (Third Edition))
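
For context on the registration step, the sketch below is a plain point-to-point ICP with an SVD (Kabsch) pose update; it illustrates the baseline that PF-ICP builds on rather than the authors' particle-filter-assisted variant, and the synthetic data and parameters are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30):
    """Plain point-to-point ICP: repeatedly match nearest neighbours and
    solve the best rigid transform with SVD (Kabsch)."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)
        matched = dst[idx]
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])   # avoid reflections
        R_step = Vt.T @ D @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    rmse = np.sqrt(np.mean(np.sum((cur - dst[tree.query(cur)[1]])**2, axis=1)))
    return R, t, rmse

# Toy usage: recover a small known rotation/translation
rng = np.random.default_rng(1)
dst = rng.uniform(-1, 1, (500, 3))
ang = np.radians(5)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0], [np.sin(ang), np.cos(ang), 0], [0, 0, 1]])
src = (dst - np.array([0.05, -0.02, 0.01])) @ R_true   # inverse transform of dst
print(icp(src, dst)[2])                                # RMSE should approach zero
```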

18 pages, 7693 KB  
Article
Contributions to the Development of Tetrahedral Mobile Robots with Omnidirectional Locomotion Units
by Anca-Corina Simerean and Mihai Olimpiu Tătar
Machines 2024, 12(12), 852; https://doi.org/10.3390/machines12120852 - 26 Nov 2024
Cited by 2 | Viewed by 1327
Abstract
In this paper, the authors present the process of modeling, building, and testing two prototypes of tetrahedral robots with omnidirectional locomotion units. The paper begins with a detailed description of the first tetrahedral robot prototype, highlighting its strengths as well as the limitations that led to the need for improvements. The robot’s omnidirectional movement allowed it to move in all directions, but certain challenges related to stability and adaptability were identified. The second prototype is presented as an advanced and improved version of the first model, integrating significant modifications in both the structural design and the robot’s functionality. The authors emphasize how these optimizations were achieved, detailing the solutions adopted and their impact on the robot’s overall performance. This paper includes an in-depth comparative analysis between the two prototypes. The analysis highlights the considerable advantages of the second prototype, demonstrating its superiority. The conclusions of the paper summarize the main findings of the research and emphasize the significant progress made from the first to the second prototype. Finally, future research directions are discussed, which include refining control algorithms, miniaturizing the robot, improving structural performance by integrating shock-absorbing dampers, and integrating lighting systems and video cameras. Full article
(This article belongs to the Special Issue Biped Robotics: Bridging the Gap Between Humans and Machines)

15 pages, 5588 KB  
Article
Rolling Shutter-Based Underwater Optical Camera Communication (UWOCC) with Side Glow Optical Fiber (SGOF)
by Jia-Fu Li, Yun-Han Chang, Yung-Jie Chen and Chi-Wai Chow
Appl. Sci. 2024, 14(17), 7840; https://doi.org/10.3390/app14177840 - 4 Sep 2024
Cited by 4 | Viewed by 2092
Abstract
Nowadays, a variety of underwater activities, such as underwater surveillance and marine monitoring, are becoming crucial worldwide. Underwater sensors and autonomous underwater vehicles (AUVs) are widely adopted for underwater exploration, but underwater communication via radio frequency (RF) or acoustic waves suffers from high transmission loss and limited bandwidth. In this work, we present and demonstrate a rolling shutter (RS)-based underwater optical camera communication (UWOCC) system utilizing a long short-term memory neural network (LSTM-NN) with side glow optical fiber (SGOF). The SGOF is made of poly-methyl methacrylate (PMMA); it is lightweight and flexibly bendable. Most importantly, SGOF is water resistant, so it can be installed underwater to provide 360° “omni-directional” uniform radial light emission around its circumference. This large field of view facilitates optical detection in turbulent underwater environments. The proposed LSTM-NN has time-memorizing characteristics that enhance UWOCC signal decoding. It is also compared with other decoding methods in the literature, such as the PPB-NN, and the experimental results demonstrate that the LSTM-NN outperforms the PPB-NN in the UWOCC system. A data rate of 2.7 kbit/s is achieved, satisfying the pre-forward error correction (FEC) condition (i.e., bit error rate, BER ≤ 3.8 × 10−3). We also found that the thin fiber allows spatial multiplexing to further enhance transmission capacity. Full article
(This article belongs to the Section Optics and Lasers)
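
To make the decoding idea concrete, here is a toy PyTorch LSTM that maps a rolling-shutter frame, read as a sequence of per-row intensities, to per-row bit probabilities; the layer sizes, input representation, and class name are assumptions for illustration and do not reproduce the paper's LSTM-NN.

```python
import torch
import torch.nn as nn

class RSDecoder(nn.Module):
    """Toy LSTM decoder for rolling-shutter OCC: each camera frame is read
    as a sequence of per-row mean intensities, and the network predicts a
    bit probability for every row (illustrative sizes, not the paper's)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, rows):                              # rows: (batch, n_rows, 1)
        h, _ = self.lstm(rows)
        return torch.sigmoid(self.head(h)).squeeze(-1)    # per-row bit probability

# Toy usage: 8 frames of 480 rows each, thresholded at 0.5 to recover bits
model = RSDecoder()
rows = torch.rand(8, 480, 1)
bits = (model(rows) > 0.5).int()
print(bits.shape)          # torch.Size([8, 480])
```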

23 pages, 19898 KB  
Article
Optimizing an Autonomous Robot’s Path to Increase Movement Speed
by Damian Gorgoteanu, Cristian Molder, Vlad-Gabriel Popescu, Lucian Ștefăniță Grigore and Ionica Oncioiu
Electronics 2024, 13(10), 1892; https://doi.org/10.3390/electronics13101892 - 11 May 2024
Cited by 1 | Viewed by 2465
Abstract
The goal of this study is to address the challenges of identifying and planning a mobile land robot's path so as to optimize its speed in a stationary environment. Our focus was on devising routes that navigate around obstacles in various spatial arrangements. To achieve this, we employed MATLAB R2023b for trajectory simulation and optimization. Data processing was conducted on board, while obstacle detection relied on the omnidirectional video processing system integrated into the robot. Odometry was provided by motor encoders and optical flow sensors, and an external video system was used to verify the experimental data on the robot's movement. Finally, the algorithms and hardware used enabled the robot to follow the path at higher speeds and to avoid obstacles while limiting the time and energy required to travel. Full article
(This article belongs to the Special Issue Control Systems for Autonomous Vehicles)

14 pages, 6996 KB  
Article
A Multilayer Perceptron-Based Spherical Visual Compass Using Global Features
by Yao Du, Carlos Mateo and Omar Tahri
Sensors 2024, 24(7), 2246; https://doi.org/10.3390/s24072246 - 31 Mar 2024
Cited by 2 | Viewed by 1672
Abstract
This paper presents a visual compass method utilizing global features, specifically spherical moments. One of the primary challenges faced by photometric methods employing global features is the variation in the image caused by the appearance and disappearance of regions within the camera’s field of view as it moves. Additionally, modeling the impact of translational motion on the values of global features poses a significant challenge, as it is dependent on scene depths, particularly for non-planar scenes. To address these issues, this paper combines the utilization of image masks to mitigate abrupt changes in global feature values and the application of neural networks to tackle the modeling challenge posed by translational motion. By employing masks at various locations within the image, multiple estimations of rotation corresponding to the motion of each selected region can be obtained. Our contribution lies in offering a rapid method for implementing numerous masks on the image with real-time inference speed, rendering it suitable for embedded robot applications. Extensive experiments have been conducted on both real-world and synthetic datasets generated using Blender. The results obtained validate the accuracy, robustness, and real-time performance of the proposed method compared to a state-of-the-art method. Full article
(This article belongs to the Section Sensors and Robotics)
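
A minimal sketch of the kind of global feature involved: the first-order photometric spherical moment of an equirectangular image, optionally restricted by a mask so that different image regions yield separate rotation cues. This is a generic illustration, not the authors' implementation; the image size and mask are arbitrary.

```python
import numpy as np

def spherical_moment(img, mask=None):
    """First-order photometric spherical moment of an equirectangular image:
    the intensity-weighted mean of the unit viewing directions (a global
    feature; masked variants give one rotation cue per image region)."""
    h, w = img.shape
    theta = (np.arange(h) + 0.5) / h * np.pi               # polar angle
    phi = (np.arange(w) + 0.5) / w * 2 * np.pi             # azimuth
    st, ct = np.sin(theta)[:, None], np.cos(theta)[:, None]
    dirs = np.stack([st * np.cos(phi), st * np.sin(phi), ct * np.ones_like(phi)], -1)
    weight = img * st                                      # solid-angle weighting
    if mask is not None:
        weight = weight * mask
    return (weight[..., None] * dirs).sum((0, 1)) / weight.sum()

# Toy usage: the moment rotates with the scene, so comparing moments from
# two masked regions gives two independent heading estimates.
img = np.random.rand(64, 128)
print(spherical_moment(img), spherical_moment(img, mask=(np.arange(128) < 64)[None, :]))
```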

22 pages, 7518 KB  
Article
Omni-OTPE: Omnidirectional Optimal Real-Time Ground Target Position Estimation System for Moving Lightweight Unmanned Aerial Vehicle
by Yi Ding, Jiaxing Che, Zhiming Zhou and Jingyuan Bian
Sensors 2024, 24(5), 1709; https://doi.org/10.3390/s24051709 - 6 Mar 2024
Viewed by 2271
Abstract
Ground target detection and positioning systems based on lightweight unmanned aerial vehicles (UAVs) are of increasing value for aerial reconnaissance and surveillance. However, current methods for estimating a target's position are limited by the field-of-view angle, making it challenging to meet the demands of real-time omnidirectional reconnaissance. To address this issue, we propose an Omnidirectional Optimal Real-Time Ground Target Position Estimation System (Omni-OTPE) that utilizes a fisheye camera and LiDAR sensors. The object of interest is first identified in the fisheye image, and the image-based target position is then obtained from the fisheye projection model together with a target center extraction algorithm based on the detected edge information. Next, the LiDAR's real-time point cloud data are filtered using position–direction constraints derived from the image-based target position, which isolates the point cloud clusters relevant to characterizing the target's position. Finally, the target positions obtained from the two sensing paths are fused using an optimal Kalman fuser to obtain the optimal target position estimate. To evaluate the positioning accuracy, we designed a hardware and software setup, mounted it on a lightweight UAV, and tested it in a real scenario. The experimental results validate that our method exhibits significant advantages over traditional methods and achieves real-time, high-performance ground target position estimation. Full article
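
The final fusion step can be illustrated with the standard minimum-variance (Kalman-style) combination of two position estimates and their covariances; the sketch below is generic, with made-up numbers, and is not the paper's optimal Kalman fuser.

```python
import numpy as np

def fuse(z_cam, P_cam, z_lidar, P_lidar):
    """Fuse two position estimates of the same target given their covariances
    (the standard minimum-variance / Kalman fusion of two measurements)."""
    K = P_cam @ np.linalg.inv(P_cam + P_lidar)     # gain weighting the LiDAR term
    z = z_cam + K @ (z_lidar - z_cam)
    P = (np.eye(len(z_cam)) - K) @ P_cam
    return z, P

# Toy usage: the camera estimate is noisier laterally, the LiDAR noisier in range
z, P = fuse(np.array([10.2, 3.1]), np.diag([0.5, 0.1]),
            np.array([10.0, 3.4]), np.diag([0.05, 0.4]))
print(z, np.diag(P))
```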

18 pages, 10954 KB  
Article
Using a Robot for Indoor Navigation and Door Opening Control Based on Image Processing
by Chun-Hsiang Hsu and Jih-Gau Juang
Actuators 2024, 13(2), 78; https://doi.org/10.3390/act13020078 - 16 Feb 2024
Cited by 2 | Viewed by 2734
Abstract
This study used real-time image processing to realize obstacle avoidance and indoor navigation with an omnidirectional wheeled mobile robot (WMR). The distance between an obstacle and the WMR was obtained using a depth camera. Real-time images were used to control the robot’s movements. The WMR can extract obstacle distance data from a depth map and apply fuzzy theory to avoid obstacles in indoor environments. A fuzzy control system was integrated into the control scheme. After detecting a doorknob, the robot could track the target and open the door. We used the speeded up robust features matching algorithm to recognize the WMR’s movement direction. The proposed control scheme ensures that the WMR can avoid obstacles, move to a designated location, and open a door. Like humans, the robot performs the described task only using visual sensors. Full article
(This article belongs to the Special Issue Actuators in Robotic Control—2nd Edition)
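
As a rough illustration of fuzzy obstacle avoidance, the sketch below maps an obstacle distance to a forward speed with three triangular membership functions and centroid defuzzification; the membership breakpoints, rule set, and speeds are invented for illustration and are not the controller described in the paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a to b and falling to c."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_speed(dist):
    """Map obstacle distance (m) to a forward speed (m/s) with three rules:
    near -> stop, medium -> slow, far -> cruise (centroid defuzzification)."""
    mu = np.array([tri(dist, -0.8, 0.0, 0.8),    # near
                   tri(dist, 0.4, 1.0, 1.8),     # medium
                   tri(dist, 1.2, 3.0, 4.8)])    # far
    speeds = np.array([0.0, 0.2, 0.6])           # rule consequents (m/s)
    return float((mu * speeds).sum() / (mu.sum() + 1e-9))

for d in (0.3, 1.0, 2.5):
    print(f"distance {d:.1f} m -> speed {fuzzy_speed(d):.2f} m/s")
```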

23 pages, 27063 KB  
Article
A Smart Cane Based on 2D LiDAR and RGB-D Camera Sensor-Realizing Navigation and Obstacle Recognition
by Chunming Mai, Huaze Chen, Lina Zeng, Zaijin Li, Guojun Liu, Zhongliang Qiao, Yi Qu, Lianhe Li and Lin Li
Sensors 2024, 24(3), 870; https://doi.org/10.3390/s24030870 - 29 Jan 2024
Cited by 16 | Viewed by 13894
Abstract
In this paper, an intelligent blind guide system based on 2D LiDAR and RGB-D camera sensing is proposed and mounted on a smart cane. The guide system relies on a 2D LiDAR, an RGB-D camera, an IMU, GPS, a Jetson Nano B01, an STM32, and other hardware. Its main advantage is that the distance between the smart cane and obstacles can be measured by the 2D LiDAR using the Cartographer algorithm, thereby achieving simultaneous localization and mapping (SLAM). At the same time, an improved YOLOv5 algorithm quickly and effectively identifies pedestrians, vehicles, pedestrian crosswalks, traffic lights, warning posts, stone piers, tactile paving, and other objects in front of the visually impaired user. Laser SLAM and improved YOLOv5 obstacle identification tests were carried out inside a teaching building on the campus of Hainan Normal University and on a pedestrian crossing on Longkun South Road in Haikou City, Hainan Province. The results show that the system can drive the omnidirectional wheels at the bottom of the smart cane, giving it a self-leading guide function similar to a “guide dog”: it effectively guides the visually impaired around obstacles to their predetermined destination and quickly identifies obstacles along the way. The mapping and positioning accuracy of the laser SLAM is 1 m ± 7 cm, and its speed is 25~31 FPS, enabling short-distance obstacle avoidance and navigation in both indoor and outdoor environments. The improved YOLOv5 identifies 86 types of objects; the recognition rates for pedestrian crosswalks and vehicles are 84.6% and 71.8%, respectively, the overall recognition rate for the 86 object types is 61.2%, and the obstacle recognition speed of the system is 25–26 FPS. Full article
(This article belongs to the Section Remote Sensors)
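
For orientation, this is how a stock YOLOv5 model can be run from the official ultralytics/yolov5 repository via torch.hub (weights are downloaded on first use); it is the unmodified baseline detector, not the improved YOLOv5 described in the paper, and the image path is a placeholder.

```python
import torch

# Load the stock pretrained YOLOv5s model from the official repository
# (requires an internet connection the first time the weights are fetched).
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

# 'street.jpg' is a placeholder path; any RGB image works here.
results = model('street.jpg')

# Tabulate detections: class label, confidence, and bounding-box corners.
print(results.pandas().xyxy[0][['name', 'confidence', 'xmin', 'ymin', 'xmax', 'ymax']])
```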

16 pages, 9434 KB  
Article
Omnidirectional-Sensor-System-Based Texture Noise Correction in Large-Scale 3D Reconstruction
by Wenya Xie and Xiaoping Hong
Sensors 2024, 24(1), 78; https://doi.org/10.3390/s24010078 - 22 Dec 2023
Cited by 1 | Viewed by 1781
Abstract
The evolution of cameras and LiDAR has propelled the techniques and applications of three-dimensional (3D) reconstruction. However, due to inherent sensor limitations and environmental interference, the reconstruction process often entails significant texture noise, such as specular highlights, color inconsistency, and object occlusion. Traditional methodologies struggle to mitigate such noise, particularly in large-scale scenes, because of the voluminous data produced by imaging sensors. In response, this paper introduces an omnidirectional-sensor-system-based texture noise correction framework for large-scale scenes that consists of three parts. First, we obtain a colored point cloud with luminance values by organizing the LiDAR points and RGB images. Next, we apply a voxel hashing algorithm during geometry reconstruction to accelerate computation and save memory. Finally, we propose the key innovation of our paper, the frame-voting rendering and neighbor-aided rendering mechanisms, which effectively eliminate the aforementioned texture noise. The experimental results show a processing rate of one million points per second, demonstrating real-time applicability, and the texture-optimized outputs exhibit a significant reduction in texture noise. These results indicate that our framework achieves advanced performance in correcting multiple types of texture noise in large-scale 3D reconstruction. Full article
(This article belongs to the Special Issue Sensing and Processing for 3D Computer Vision: 2nd Edition)
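
The frame-voting idea can be caricatured as a robust per-point vote over all frames that observe the point; the sketch below drops over-exposed observations and takes a per-channel median, which is only a loose, generic stand-in for the paper's frame-voting and neighbor-aided rendering mechanisms (threshold and data are invented).

```python
import numpy as np

def frame_vote(colors, lum, lum_hi=0.9):
    """Aggregate the colour of one point observed in several frames:
    drop over-exposed (specular) observations by a luminance threshold, then
    take the per-channel median of what remains (a simple robust vote)."""
    colors, lum = np.asarray(colors, float), np.asarray(lum, float)
    keep = lum < lum_hi
    if not keep.any():               # every observation saturated: keep them all
        keep[:] = True
    return np.median(colors[keep], axis=0)

# Toy usage: two consistent observations and one specular highlight
obs = [[0.31, 0.40, 0.52], [0.33, 0.41, 0.50], [0.95, 0.97, 0.99]]
print(frame_vote(obs, lum=[0.41, 0.42, 0.97]))
```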
