Search Results (7)

Search Parameters:
Keywords = vision-guided walking

26 pages, 13220 KiB  
Article
YOLOv8-Based XR Smart Glasses Mobility Assistive System for Aiding Outdoor Walking of Visually Impaired Individuals in South Korea
by Incheol Jeong, Kapyol Kim, Jungil Jung and Jinsoo Cho
Electronics 2025, 14(3), 425; https://doi.org/10.3390/electronics14030425 - 22 Jan 2025
Cited by 1 | Viewed by 3210
Abstract
This study proposes an eXtended Reality (XR) glasses-based walking assistance system to support independent and safe outdoor walking for visually impaired people. The system leverages the YOLOv8n deep learning model to recognize walkable areas, public transport facilities, and obstacles in real time and provide appropriate guidance to the user. The core components of the system are Xreal Light Smart Glasses and an Android-based smartphone, operated through a mobile application developed with the Unity game engine. The system divides the user's field of vision into nine zones, assesses the level of danger in each zone, and guides the user along a safe walking path. The YOLOv8n model was trained to recognize sidewalks, pedestrian crossings, bus stops, subway exits, and various obstacles; running on a smartphone connected to the XR glasses, it demonstrated an average processing time of 583 ms and an average memory usage of 80 MB, making it suitable for real-time use. Experiments conducted on a 3.3 km route around Bokjeong Station in South Korea confirmed that the system works effectively in a variety of walking environments, while also revealing the need to improve performance in low-light conditions and to conduct further testing with visually impaired users. By proposing an innovative walking assistance system that combines XR technology and artificial intelligence, this study is expected to contribute to improving the independent mobility of visually impaired people. Future research will further validate the effectiveness of the system by integrating real-time public transport information and conducting extensive experiments with users with varying degrees of visual impairment.
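The nine-zone danger assessment lends itself to a compact implementation. The sketch below illustrates the idea under stated assumptions: a 3×3 grid over the camera frame, hypothetical per-class danger weights, and a generic detection structure standing in for the YOLOv8n outputs. The paper's exact zoning and scoring scheme may differ.

```python
# Minimal sketch of a nine-zone danger assessment. The 3x3 grid,
# per-class danger weights, and detection format are illustrative
# assumptions, not the paper's exact scheme.
from dataclasses import dataclass

@dataclass
class Detection:
    x1: float; y1: float; x2: float; y2: float  # pixel bounding box
    label: str                                   # e.g. "obstacle", "crosswalk"

# Hypothetical danger weight per recognized class.
DANGER = {"obstacle": 3, "crosswalk": 1, "bus_stop": 1, "sidewalk": 0}

def zone_of(cx: float, cy: float, w: int, h: int) -> int:
    """Map an image point to one of nine zones (0..8), row-major 3x3 grid."""
    col = min(int(3 * cx / w), 2)
    row = min(int(3 * cy / h), 2)
    return 3 * row + col

def assess(dets: list[Detection], w: int, h: int) -> list[int]:
    """Accumulate a danger score for each of the nine zones."""
    scores = [0] * 9
    for d in dets:
        cx, cy = (d.x1 + d.x2) / 2, (d.y1 + d.y2) / 2
        scores[zone_of(cx, cy, w, h)] += DANGER.get(d.label, 2)
    return scores

def safest_heading(scores: list[int]) -> int:
    """Pick the least dangerous bottom-row zone (the ground just ahead)."""
    bottom = scores[6:9]
    return 6 + bottom.index(min(bottom))
```

Restricting the heading choice to the bottom row reflects the assumption that the lower third of the frame corresponds to the walkable ground immediately ahead of the user.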

22 pages, 8875 KiB  
Article
Cognitive IoT Vision System Using Weighted Guided Harris Corner Feature Detector for Visually Impaired People
by Manoranjitham Rajendran, Punitha Stephan, Thompson Stephan, Saurabh Agarwal and Hyunsung Kim
Sustainability 2022, 14(15), 9063; https://doi.org/10.3390/su14159063 - 24 Jul 2022
Cited by 1 | Viewed by 2121
Abstract
India has an estimated 12 million visually impaired people, the largest number of any country in the world. Smart walking stick devices use various technologies, including machine vision and different sensors, to improve the safe movement of visually impaired persons. In machine vision, accurately recognizing nearby objects remains a challenging task. This paper provides a system that enables safe navigation and guidance for visually impaired people by implementing an object recognition module in the smart walking stick that uses a local feature extraction method to recognize objects under different image transformations. To provide stability and robustness, the Weighted Guided Harris Corner Feature Detector (WGHCFD) method is proposed to extract feature points from the image. WGHCFD discriminates image features competently and is suitable for different real-world conditions. The WGHCFD method was evaluated on the popular Oxford benchmark datasets, where it achieved greater repeatability and higher matching scores than existing feature detectors. In addition, the proposed WGHCFD method was tested with a smart stick and achieved a 99.8% recognition rate under different transformation conditions for the safe navigation of visually impaired people.
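The paper's exact weighting scheme is not reproduced here, but the general idea of combining guided filtering with a Harris corner response can be sketched as follows. This sketch uses `cv2.ximgproc.guidedFilter` (from the opencv-contrib package) for edge-preserving smoothing and a simple contrast-based weight as an assumed stand-in for the paper's weighting; the radius, eps, and threshold values are illustrative.

```python
# Sketch in the spirit of a guided-filter-weighted Harris detector.
# Requires opencv-contrib-python for cv2.ximgproc. The weighting here
# is an assumption for illustration, not the WGHCFD formulation.
import cv2
import numpy as np

def weighted_guided_harris(gray: np.ndarray, k: float = 0.04):
    """gray: uint8 grayscale image. Returns a list of (x, y) corners."""
    gray_f = np.float32(gray)
    # Edge-preserving smoothing: the image guides its own filtering,
    # suppressing noise before the Harris response is computed.
    guided = cv2.ximgproc.guidedFilter(guide=gray, src=gray, radius=4, eps=100.0)
    harris = cv2.cornerHarris(np.float32(guided), blockSize=2, ksize=3, k=k)
    # Weight the response by local contrast against the smoothed image
    # (assumed stand-in for the paper's weighting scheme).
    weight = cv2.normalize(np.abs(gray_f - guided), None, 0.5, 1.5, cv2.NORM_MINMAX)
    response = harris * weight
    # Threshold and keep local maxima (non-maximum suppression via dilation).
    mask = (response > 0.01 * response.max()) & (response == cv2.dilate(response, None))
    ys, xs = np.nonzero(mask)
    return list(zip(xs.tolist(), ys.tolist()))
```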

16 pages, 1090 KiB  
Article
Training Computers to See the Built Environment Related to Physical Activity: Detection of Microscale Walkability Features Using Computer Vision
by Marc A. Adams, Christine B. Phillips, Akshar Patel and Ariane Middel
Int. J. Environ. Res. Public Health 2022, 19(8), 4548; https://doi.org/10.3390/ijerph19084548 - 9 Apr 2022
Cited by 28 | Viewed by 3645
Abstract
The study purpose was to train and validate a deep learning approach to detect microscale streetscape features related to pedestrian physical activity. This work innovates by combining computer vision techniques with Google Street View (GSV) images to overcome impediments to conducting audits (e.g., time, safety, and expert labor cost). The EfficientNetB5 architecture was used to build deep learning models for eight microscale features guided by the Microscale Audit of Pedestrian Streetscapes Mini tool: sidewalks, sidewalk buffers, curb cuts, zebra and line crosswalks, walk signals, bike symbols, and streetlights. We used a train–correct loop, whereby models were trained on a training dataset, evaluated on a separate validation dataset, and trained further until acceptable performance metrics were achieved. We then used the trained models to audit participant (N = 512) neighborhoods in the WalkIT Arizona trial. Correlations were explored between microscale features and GIS-measured and participant-reported neighborhood macroscale walkability. Classifier precision, recall, and overall accuracy were all above 84%. The total microscale score was associated with overall macroscale walkability (r = 0.30, p < 0.001). Positive associations were found between model-detected and self-reported sidewalks (r = 0.41, p < 0.001) and sidewalk buffers (r = 0.26, p < 0.001). The computer vision model results suggest an alternative to trained human raters, allowing audits of hundreds or thousands of neighborhoods for population surveillance or hypothesis testing.
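A per-feature EfficientNetB5 classifier of the kind the study describes can be set up compactly in Keras. The head layout, input size, frozen backbone, and training settings below are assumptions for illustration; only the EfficientNetB5 backbone and the eight MAPS-Mini feature categories come from the abstract.

```python
# Sketch of one binary microscale-feature classifier (e.g. "is a
# sidewalk present in this GSV image?") built on EfficientNetB5.
# Head, input size, and compile settings are illustrative assumptions.
import tensorflow as tf

def build_feature_classifier(img_size: int = 456) -> tf.keras.Model:
    base = tf.keras.applications.EfficientNetB5(
        include_top=False, weights="imagenet",
        input_shape=(img_size, img_size, 3), pooling="avg")
    base.trainable = False  # unfreeze later during the train-correct loop
    out = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["precision", "recall", "accuracy"])
    return model

# One model per audited feature, mirroring the eight MAPS-Mini items.
# (Instantiating all eight downloads weights and is memory-heavy.)
FEATURES = ["sidewalk", "sidewalk_buffer", "curb_cut", "zebra_crosswalk",
            "line_crosswalk", "walk_signal", "bike_symbol", "streetlight"]
models = {name: build_feature_classifier() for name in FEATURES}
```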

22 pages, 57613 KiB  
Article
Vision-Guided Six-Legged Walking of Little Crabster Using a Kinect Sensor
by Jung-Yup Kim, Min-Jong Park, Sungjun Kim and Dongjun Shin
Appl. Sci. 2022, 12(4), 2140; https://doi.org/10.3390/app12042140 - 18 Feb 2022
Cited by 2 | Viewed by 1885
Abstract
A conventional blind walking algorithm has low walking stability on uneven terrain because the robot cannot respond quickly to height changes in the ground, given the limited information available from foot force sensors. To cope with rough terrain, it is essential to obtain 3D ground information. This paper therefore proposes a vision-guided six-legged walking algorithm for stable walking on uneven terrain. We obtained noise-filtered 3D ground information using a Kinect sensor and experimentally derived the coordinate transformation between the Kinect sensor and the robot body. While generating landing positions for the six feet from predefined walking parameters, the proposed algorithm modifies the landing positions for reliability and safety using the obtained 3D ground information. For continuous walking, we also propose a ground merging algorithm, and we successfully validate the performance of the proposed algorithms through walking experiments on a treadmill with obstacles.
(This article belongs to the Section Robotics and Automation)
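The landing-position modification step can be sketched as a local search over the filtered height map: if the nominal foothold from the gait parameters sits on unreliable or uneven ground, shift it to a nearby cell that is flat and well-observed. The grid representation, search radius, and thresholds below are illustrative assumptions, not the paper's values.

```python
# Sketch of foothold adjustment on a Kinect-derived height map. The
# cell size, search radius, and thresholds are illustrative assumptions.
import numpy as np

def adjust_foothold(height_map: np.ndarray, valid: np.ndarray,
                    nominal: tuple[int, int], radius: int = 3,
                    max_step: float = 0.05) -> tuple[int, int] | None:
    """Return a safe (row, col) cell near `nominal`, or None if none exists.

    height_map: filtered ground heights in meters.
    valid:      boolean mask of cells with reliable depth data.
    max_step:   maximum tolerated height spread within a foothold patch (m).
    """
    rows, cols = height_map.shape
    best, best_cost = None, float("inf")
    r0, c0 = nominal
    for r in range(max(1, r0 - radius), min(rows - 1, r0 + radius + 1)):
        for c in range(max(1, c0 - radius), min(cols - 1, c0 + radius + 1)):
            if not valid[r - 1:r + 2, c - 1:c + 2].all():
                continue  # unreliable depth around this cell
            patch = height_map[r - 1:r + 2, c - 1:c + 2]
            roughness = patch.max() - patch.min()
            if roughness > max_step:
                continue  # too steep or step-like to land on
            # Prefer flat cells close to the nominal landing position.
            cost = roughness + 0.01 * ((r - r0) ** 2 + (c - c0) ** 2)
            if cost < best_cost:
                best, best_cost = (r, c), cost
    return best
```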

14 pages, 3251 KiB  
Article
Object Identification and Safe Route Recommendation Based on Human Flow for the Visually Impaired
by Yusuke Kajiwara and Haruhiko Kimura
Sensors 2019, 19(24), 5343; https://doi.org/10.3390/s19245343 - 4 Dec 2019
Cited by 5 | Viewed by 3757
Abstract
It is difficult for visually impaired people to move around indoors and outdoors. In 2018, the World Health Organization (WHO) reported that there were about 253 million people around the world who were moderately visually impaired in distance vision. Navigation systems that combine positioning and obstacle detection have been actively researched and developed. However, the accuracy of these obstacle detection methods drops significantly in high-traffic passages, where many pedestrians cause an occlusion problem that obstructs the shape and color of obstacles. To solve this problem, we developed an application called "Follow me!". The application recommends a safe route by applying machine learning to the gaits and walking routes of many pedestrians, obtained from the monocular camera images of a smartphone. In our experiments, pedestrians walking in the same direction as the visually impaired user, oncoming pedestrians, and steps were identified with an average accuracy of 0.92 based on the gaits and walking routes of pedestrians acquired from monocular camera images. Furthermore, based on these identification results, visually impaired participants were guided to a safe route with 100% accuracy. In addition, by walking along the recommended route, visually impaired participants avoided obstacles that required a detour, such as construction zones and signage.
(This article belongs to the Section Physical Sensors)
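One ingredient of such a system is labeling moving pedestrians as same-direction or oncoming from monocular frames. The paper does this with machine learning on gait and route features; the sketch below is a deliberately simplified stand-in using dense optical flow and a crude vertical-motion heuristic, offered only to make the classification step concrete.

```python
# Simplified stand-in for same-direction vs. oncoming classification.
# The paper learns this from gait and route features; dense optical
# flow and the vertical-motion heuristic here are assumptions.
import cv2
import numpy as np

def classify_crowd_motion(prev_gray: np.ndarray, gray: np.ndarray,
                          min_mag: float = 1.0) -> str:
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5,
                                        poly_sigma=1.2, flags=0)
    mag = np.linalg.norm(flow, axis=2)
    moving = mag > min_mag
    if not moving.any():
        return "clear"
    # Crude heuristic: pedestrians walking away tend to drift upward in
    # the image as they recede; oncoming ones expand downward.
    mean_dy = float(flow[..., 1][moving].mean())
    return "same_direction" if mean_dy < 0 else "oncoming"
```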

24 pages, 11899 KiB  
Article
Multimedia Vision for the Visually Impaired through 2D Multiarray Braille Display
by Seondae Kim, Eun-Soo Park and Eun-Seok Ryu
Appl. Sci. 2019, 9(5), 878; https://doi.org/10.3390/app9050878 - 1 Mar 2019
Cited by 7 | Viewed by 4576
Abstract
Visual impairments cause severely limited and low vision, leading to difficulties in processing information such as obstacles, objects, multimedia contents (e.g., video, photographs, and paintings), and text in outdoor and indoor environments. Assistive devices and aids therefore exist for visually impaired (VI) people. In general, such devices provide guidance or supportive information and can be used alongside guide dogs, walking canes, and braille devices. However, these devices have functional limitations; for example, they cannot help in the processing of multimedia contents such as images and videos. Additionally, most of the braille displays available to the VI represent text as a single line of several braille cells. Although these devices are sufficient for reading and understanding text, they have difficulty converting multimedia contents or large volumes of text to braille. This paper describes a methodology to effectively convert multimedia contents to braille using a 2D braille display. Furthermore, this research proposes the transformation of the Digital Accessible Information SYstem (DAISY) and electronic publication (EPUB) formats onto a 2D braille display, and reviews related research on efficient communication for the VI. The study thus proposes an eBook reader application for DAISY and EPUB formats that can correctly render and display text, images, audio, and video on a 2D multiarray braille display. This approach is expected to provide a better braille service for the VI once implemented and verified in real time.
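The core of rendering images on a 2D multiarray braille display is mapping pixel blocks to 8-dot cells. The sketch below demonstrates the idea by emitting Unicode braille characters (U+2800 block) for a binarized image; a real device would receive the same dot patterns over its own protocol, which is not shown here.

```python
# Sketch: map each 2x4 pixel block of a binary image to an 8-dot
# braille cell. Unicode braille output stands in for a hardware
# display protocol, which this sketch does not model.
import numpy as np

# Bit of each dot within a cell: (row, col) -> Unicode braille dot bit.
DOT_BITS = {(0, 0): 0x01, (1, 0): 0x02, (2, 0): 0x04, (3, 0): 0x40,
            (0, 1): 0x08, (1, 1): 0x10, (2, 1): 0x20, (3, 1): 0x80}

def image_to_braille(binary: np.ndarray) -> str:
    """binary: 2D array of 0/1 pixels; height % 4 == 0, width % 2 == 0."""
    h, w = binary.shape
    lines = []
    for top in range(0, h, 4):
        cells = []
        for left in range(0, w, 2):
            bits = 0
            for (r, c), bit in DOT_BITS.items():
                if binary[top + r, left + c]:
                    bits |= bit
            cells.append(chr(0x2800 + bits))
        lines.append("".join(cells))
    return "\n".join(lines)

# Example: an 8x8 checkerboard becomes a 2x4 grid of braille cells.
img = np.indices((8, 8)).sum(axis=0) % 2
print(image_to_braille(img))
```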

21 pages, 607 KiB  
Article
Non-Linearity Analysis of Depth and Angular Indexes for Optimal Stereo SLAM
by Luis M. Bergasa, Pablo F. Alcantarilla and David Schleicher
Sensors 2010, 10(4), 4159-4179; https://doi.org/10.3390/s100404159 - 26 Apr 2010
Cited by 4 | Viewed by 11661
Abstract
In this article, we present a real-time 6DoF egomotion estimation system for indoor environments using a wide-angle stereo camera as the only sensor. The stereo camera is carried in hand by a person walking at normal walking speeds (3–5 km/h). We present the basis for a vision-based system that would assist the navigation of the visually impaired by either providing information about their current position and orientation or guiding them to their destination through different sensing modalities. Our system combines two types of feature parametrization, inverse depth and 3D, in order to provide orientation and depth information at the same time. Natural landmarks are extracted from the image and are stored as 3D or inverse depth points, depending on a depth threshold. This depth threshold is used to switch between the two parametrizations, and it is computed by means of a non-linearity analysis of the stereo sensor. The main steps of our approach are presented, together with an analysis of the optimal way to calculate the depth threshold. When each landmark is initialized, the normal of the patch surface is computed using the information from the stereo pair. To improve long-term tracking, patch warping is performed using the normal vector information. Experimental results in indoor environments and conclusions are presented.
(This article belongs to the Special Issue Motion Detectors)
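The motivation for a depth threshold follows from stereo geometry: depth is z = f·b/d (focal length f, baseline b, disparity d), so a disparity error propagates to a depth error that grows quadratically, σ_z ≈ z²·σ_d/(f·b). Beyond some depth the 3D point estimate becomes too non-linear and the inverse-depth parametrization is preferable. The sketch below derives a switching depth from a relative-error criterion; the 5% figure and half-pixel disparity noise are illustrative assumptions, not the paper's analysis.

```python
# Sketch of the parametrization switch. The 5% relative-error criterion
# and 0.5 px disparity noise are illustrative assumptions.

def depth_sigma(z: float, focal_px: float, baseline_m: float,
                sigma_disp_px: float = 0.5) -> float:
    """First-order depth uncertainty of a stereo match at depth z (m)."""
    return (z ** 2) * sigma_disp_px / (focal_px * baseline_m)

def depth_threshold(focal_px: float, baseline_m: float,
                    sigma_disp_px: float = 0.5,
                    max_rel_error: float = 0.05) -> float:
    """Depth beyond which the relative error exceeds max_rel_error."""
    # sigma_z / z = z * sigma_d / (f * b) <= max_rel_error
    return max_rel_error * focal_px * baseline_m / sigma_disp_px

def parametrize(z: float, z_thresh: float) -> str:
    """Store near landmarks as 3D points, far ones as inverse depth."""
    return "3d_point" if z < z_thresh else "inverse_depth"

# Example: f = 500 px, b = 0.12 m -> switch at 6.0 m for 5% error.
print(depth_threshold(500.0, 0.12))
```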