Article

Enhancing Driving Safety of Personal Mobility Vehicles Using On-Board Technologies

by Eru Choi, Tuan Anh Dinh and Min Choi *
Department of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(3), 1534; https://doi.org/10.3390/app15031534
Submission received: 28 November 2024 / Revised: 22 January 2025 / Accepted: 31 January 2025 / Published: 3 February 2025
(This article belongs to the Special Issue Recent Advances in Internet of Things and System Design)

Abstract

Accidents involving electric wheelchairs are a growing concern, with users frequently encountering obstacles that lead to collisions, tipping, or loss of balance. These incidents underscore the need for advanced safety technologies tailored to electric wheelchair users. This research addresses this need by developing a driving assistance system to prevent accidents and enhance user safety. The system incorporates ultrasonic sensors and a front-facing camera to detect obstacles and provide real-time warnings. The proposed system operates independently of stable server communication and employs embedded hardware for fast object detection and environmental recognition, ensuring immediate guidance in various scenarios. In this research, we used the existing YOLOv8 model as is but improved inference performance through hardware acceleration of convolutional neural networks, supporting layers such as convolution, deconvolution, pooling, and batch normalization. The YOLO model was thus accelerated during inference on specialized hardware in our experiments. Performance was evaluated in diverse environments to assess usability. The results demonstrated high accuracy in detecting obstacles and providing timely warnings. Leveraging hardware acceleration for YOLOv8 delivers faster, more scalable, and more robust object detection, making it a strong platform for enhancing driving safety on edge and embedded devices. These findings provide a solid foundation for future advancements in safety assistance systems for electric wheelchairs and other mobility devices. Future research will focus on enhancing system performance and integrating additional features to create a safer environment for electric wheelchair users.

1. Introduction

The demand for electric wheelchairs has steadily increased in response to the aging population. According to a report from the Korea Consumer Agency (Figure 1), 35.5% of users of motorized assistive devices (including electric wheelchairs and scooters) have experienced accidents [1,2]. These accidents stem from a variety of causes, including catching on curbs or obstacles (42.2%), collisions with external obstacles (36.3%), unexpected halting during operation (32.4%), collisions with vehicles (24.5%), collisions with pedestrians (22.5%), loss of balance due to wheelchair tilt (12.7%), entrapment in device components (10.8%), and mechanical malfunctions or fires (8.8%). Such accidents can lead to severe injury and inconvenience to users. In fact, recent statistics on assistive device accidents indicate that the issue receives comparatively little attention relative to other areas of safety equipment [2,3].
Personal mobility devices can have various sensors and components, including ultra-small front/rear/side cameras, impact sensors, communication modules, and microcontrollers. When applied, this system enables an electric kickboard to detect its surroundings autonomously, identify abnormal signs, adjust driving speed, or report damage or accidents to a control center in real time. To improve personal mobility safety, this study incorporates a vision recognition function that detects and recognizes the surroundings using an ultra-small camera. This feature can be enhanced to recognize pedestrians ahead, execute an emergency stop, or detect and reduce speed when operating on sidewalks rather than dedicated roads. In addition, integrating a GPS receiver can enable automatic speed adjustments when entering child and elderly protection zones through more precise location tracking. Furthermore, combining the vision recognition function with impact sensors allows comprehensive road condition assessment and automatic speed adjustments when necessary.
As shown in Figure 2 below, the steady increase in the use of personal mobility devices has led to a rise in accident rates. Users of electric wheelchairs are more prone to accidents due to low postural stability and greater exposure to external impacts, which increases their physical and psychological burdens. However, institutional measures regarding support policies for electric mobility aids, accident prevention strategies, and protective equipment remain insufficient.
To prevent these accidents and ensure the safety of electric wheelchair users, two technologies play key roles in this paper: obstacle recognition-based warning and sensing-based emergency stop. Accordingly, our research aims to develop a system that enhances the safety of electric wheelchairs using these two key technologies.
This study achieves fast performance without relying on high-performance computation or API calls to large external servers for two reasons. First, it is implemented on an embedded board with on-device AI technology that runs AI applications on a GPU. Second, it uses a hardware accelerator for deep-learning inference, which performs computationally intensive tasks such as convolution far more efficiently than a CPU.
The contribution of this paper to safety-enhancing techniques for personal mobility vehicles (PMVs) is that the driving safety enhancement device we developed performs object detection on the embedded board itself using hardware acceleration, and it is also equipped with control logic for emergency stops. Given the recent increase in accidents involving personal mobility device users, we believe the results of this research and development are both timely and competitive. Moreover, we employ techniques that increase performance via a specialized hardware component optimized for deep-learning inference, which accelerates the processing of the various layers, such as convolution, deconvolution, pooling, ReLU, and comparison, that a neural network executes during inference.
The remainder of this paper is structured as follows. Section 2 introduces related research, and Section 3 explains the proposed system architecture, design, and implementation. Section 4 covers experimental results and analysis. Finally, Section 5 presents the conclusion of this paper.

2. Related Works

Previous research in the field of object detection can be broadly categorized into two main areas. The first category focuses on fundamental obstacle detection technologies, while the second category encompasses real-time situational awareness and warning systems that use on-device AI and object detection. This study reviews previous research with an emphasis on these two areas.

2.1. Driving Safety Enhancement Techniques by Obstacle Detection

Various studies have investigated obstacle detection technology to enhance the safety of electric wheelchairs and personal mobility devices. Kim, Y.-P. designed and evaluated controllers and drive mechanisms to improve safe navigation while considering user body types and achieving performance goals such as continuous driving time, turning radius, and maximum load capacity during driving tests [4]. Despite these achievements, the authors noted that further testing is required to verify the stability of the system in outdoor environments. Additionally, Seo and Kim developed an obstacle detection system for electric wheelchair operation using a 3D depth camera [5]. Their system employs a KINECT depth camera, enabling real-time obstacle detection and avoidance. However, system performance can be affected by the environmental limitations of camera sensors, such as lighting conditions.
Recent studies have concentrated on enhancing the safety and performance of electric wheelchairs through advanced detection technologies employing various approaches. Several investigations have examined the development and improvement of electric wheelchair systems, including user posture change functionalities and requirements analysis [6,7,8]. Innovations in specialized detection systems for electric wheelchairs encompass the development of management applications [9], torque ripple reduction in BLDC traction motors for improved ride comfort [10], and independent safety measures for wheelchair users in automated vehicles [11]. Ji et al. developed an intelligent wheelchair system utilizing situation awareness and obstacle detection [12], while other studies have explored safety enhancements for wheelchair users within autonomous vehicle environments [13].
Furthermore, previous research [14,15] has focused on developing obstacle avoidance systems, incorporating depth cameras, fuzzy logic control, and SLAM-based navigation techniques. Additional efforts have explored hybrid approaches that integrate multiple technologies, such as the implementation of electric wheelchairs using hybrid energy storage devices [16] and the development of weakly supervised object detection models for smart city applications [17], as well as deep-learning-based 3D multi-object tracking using multi-modal fusion in smart cities [18].

2.2. Driving Safety Enhancement Techniques by On-Device AI

Research on the application of on-device AI and object detection technology is also an active area of investigation. The development of solar-powered electric wheelchairs with foldable panels [19] has been explored. Recent developments include the SPPT: Siamese Pyramid Pooling Transformer for visual object tracking [20] and the impact of noise on YOLO-based object detection systems [21]. Kim and Cho developed a smart integrated control board to improve the safe driving of personal electric wheelchairs [22], while Lin et al. proposed a lightweight method for automatic road damage detection based on deep learning [23]. Additionally, Alam et al. introduced a smart electric wheelchair with a multipurpose health monitoring system [24].
Significant advancements have been made in wheelchair-specific technologies. Yang et al. introduced the Siamese Pyramid Pooling Transformer approach for visual object tracking [20], while Oh et al. focused on safety improvements by developing standard specifications [25]. The integration of AI has also been advanced through research on an electric wheelchair-driving robotic module [26] and obstacle detection and unmanned driving management systems [27].
Advanced control systems have also been developed, including EEG-based control [28] and health monitoring systems [24]. Research has further explored specialized features such as moving obstacle avoidance [29] and smart-integrated control boards [22]. Recent studies have focused on high-performance electric wheelchairs with postural adjustment functions [30] and tailored systems designed for users with upper limb disabilities [28]. Zhang et al. proposed a multi-scale key-points-feature fusion network for 3D object detection from point clouds, further contributing to advancements in object detection technology [31].

2.3. Situational Awareness and Warning System

Significant developments have been made in fast monitoring and safety systems for electric wheelchairs. Several studies have focused on improving autonomous navigation and incorporating robust safety features [32,33,34,35,36,37]. Recent research efforts include the development of solar-powered solutions [38] and robotic modules for standardized circuits [39]. For example, Heo developed a management application for powered wheelchairs [40], while other studies have focused on improving ride comfort by reducing the torque ripple [41,42,43,44]. Additional challenges addressed in recent studies include step detection and ascent capabilities [8], as well as the integration of SSVEP-BCI systems [32].
Current research trends indicate a growing focus on integrating multiple technologies, as demonstrated by studies that combine various detection methods [28,35]. These technological advancements are crucial for improving the accessibility and safety of electric wheelchairs while maintaining cost-effectiveness for users. The integration of these solutions aims to develop third-party devices that can be retrofitted to existing electric wheelchairs, thus providing advanced safety features without requiring new systems.

2.4. Additional Considerations for Electric Wheelchair Safety and Performance

Recent studies further highlight the importance of advanced design strategies and integrated solutions to enhance the safety and ride quality of personal mobility vehicles. For instance, Haraguchi and Kaneko [42] proposed a PMV design with an inward tilt mechanism to reduce steering disturbances caused by uneven road surfaces, effectively improving vehicle stability and reducing user fatigue. Similarly, Omori et al. [43] developed an autonomous navigation approach for PMVs that accounts for passenger tolerance to approaching pedestrians, ensuring both safety and comfort in congested environments.
Jian et al. [44] introduced a concept of federated personal mobility services in autonomous transportation systems, enabling real-time communication between vehicles, infrastructure, and control modules. This framework could facilitate safer, more efficient journeys by optimizing the interaction between users and transportation networks.
Beyond navigation and control, energy management also plays a critical role in ensuring consistent performance and safety. Kim et al. [16] presented an electric wheelchair that utilized hybrid energy storage devices. It was designed to maintain reliable power output under various operating conditions. By integrating these advanced technologies, future personal mobility vehicles can significantly improve both accessibility and safety while remaining cost-effective. In this way, recent developments—from tilt-based stabilization to hybrid power systems—illustrate a growing emphasis on holistic, user-centered design in electric wheelchair research. Through such innovations, the next generation of electric wheelchairs is poised to offer enhanced safety, stability, and convenience for a wide range of users.

3. System Architecture, Design, and Implementation

Building on the recognized need for enhanced safety measures and the limitations of current support policies, this study aims to develop a fast obstacle detection and warning system for electric wheelchairs. The proposed system incorporates multiple sensors and a front-facing camera to detect obstacles and provide timely warnings, thereby enhancing user safety in diverse environments. Furthermore, employing on-device processing minimizes reliance on external server communication, ensuring stable operation even in connectivity-limited scenarios.
Figure 3 illustrates the architecture of a web application developed using the Spring Framework, incorporating key components such as Spring Boot, Spring Security, Spring Data JPA, Thymeleaf, Stomp, and REST API. Spring Boot provides the foundational structure, and Spring Security manages authentication and authorization. The Spring Controller processes user requests, and Thymeleaf and HTML5 render the user interface. Data management is performed via JPA and supported by technologies such as MariaDB, Hibernate, and QueryDSL. The Stomp protocol manages fast communication with devices such as cameras and PLCs, thereby enabling efficient data processing between web clients and the server and enhancing the system’s scalability and maintainability.
To improve the driving safety of personal mobility devices, we designed the algorithm depicted in Figure 4. The personal mobility device has a front camera and an ultrasonic sensor to recognize obstacles, and an on-board computing unit processes the recognition results and executes the algorithm. While the user operates the device, the front camera identifies distant obstacles and the ultrasonic sensor detects nearby ones. The main computer relays recognition results to the user via TTS. Additionally, if the user does not stop despite obstacle detection, the system activates an automatic brake function to stop the device, ensuring safety.
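A minimal sketch of this decision loop is shown below, assuming hypothetical helper functions (detect_far_obstacles, read_ultrasonic_cm, speak, engage_brake) standing in for the camera model, ultrasonic sensor, TTS output, and brake actuator; it illustrates the flow of Figure 4 rather than the exact on-board code.

import time

# Hypothetical hardware/model interfaces; the module and function names are
# illustrative only, standing in for the camera, sensor, TTS, and brake code.
from mobility_io import detect_far_obstacles, read_ultrasonic_cm, speak, engage_brake

NEAR_THRESHOLD_CM = 20   # close-range alert distance used in this study
STOP_GRACE_S = 1.0       # time the user is given to react before auto-stop

def safety_loop():
    warned_at = None
    while True:
        # 1) The front camera identifies distant obstacles; results go to TTS.
        for label, conf in detect_far_obstacles():
            speak(f"{label} ahead")
        # 2) The ultrasonic sensor covers nearby obstacles.
        if read_ultrasonic_cm() < NEAR_THRESHOLD_CM:
            if warned_at is None:
                speak("Obstacle very close, please stop")
                warned_at = time.monotonic()
            elif time.monotonic() - warned_at > STOP_GRACE_S:
                engage_brake()   # 3) user did not stop: automatic brake
                warned_at = None
        else:
            warned_at = None
        time.sleep(0.05)         # roughly a 20 Hz control loop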

3.1. Enhancing Driving Safety by Forward Obstacle Detection

Object detection models can be broadly categorized into one-stage and two-stage detection methods. Two-stage methods, such as Faster Region-Based Convolutional Neural Network (Faster R-CNN), first propose regions where objects are likely located and then perform detailed detection within those regions. Although this approach achieves high accuracy, its computational complexity and multi-step processing make it unsuitable for real-time applications.
In contrast, one-stage detection methods process an entire image in a single computational step while simultaneously predicting object locations and classifications, providing a significant speed advantage. Prominent one-stage models include the Single Shot Multibox Detector (SSD), EfficientDet, RetinaNet, and You Only Look Once (YOLO) models. However, SSD's limitations in detecting small objects pose challenges for forward obstacle detection, where identifying small and dynamic objects is essential.
EfficientDet employs a Bi-directional Feature Pyramid Network (BiFPN) to enhance multi-scale information transfer and balance computational efficiency and detection performance. Although EfficientDet performs effectively in general-purpose applications, its capability in detecting small objects is inferior to that of YOLOv8, particularly in environments requiring high accuracy under diverse conditions. RetinaNet, equipped with Focal Loss to address class imbalance, excels in detecting objects of varying sizes. However, its slower processing speeds relative to other one-stage models make it less suitable for real-time applications, such as obstacle detection for electric wheelchairs, where rapid responses are crucial.
In this paper, we used the existing YOLOv8 model as is but exploited hardware acceleration techniques to boost performance. Specifically, we reduced the overhead of inference through hardware acceleration of convolutional neural networks, covering layers such as convolution, deconvolution, pooling, and batch normalization. As a result, some components of the YOLO model, particularly the colored components in Figure 5, were accelerated in hardware in our experiments.
As shown in Figure 5 and Figure 6, the YOLOv8 architecture comprises three key components: the backbone, neck, and detection head, each optimized for efficient and accurate object detection. The backbone is based on CSPDarknet53 and incorporates the C2F module with dense and residual connections to enhance gradient flow and feature representation while maintaining a lightweight design; it downsamples the input image to retain essential features and prepares it for subsequent stages. The neck employs a Feature Pyramid Network (FPN) that upsamples features into a multi-scale pyramid, and the resulting hierarchical feature fusion improves small-object detection and enables robust detection across objects of varying scales. Finally, the detection head replaces traditional anchors with a task-aligned assigner, improving accuracy and robustness by better aligning predictions with ground truths; it outputs bounding boxes, class probabilities, and confidence scores, refining accuracy through its loss functions. This backbone-neck-head structure enables YOLOv8 to achieve multi-scale detection at high speed, making it suitable for real-time applications such as obstacle detection in electric wheelchairs.
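To make the neck's feature-fusion step concrete, the sketch below shows schematic FPN-style fusion in PyTorch: a deep, low-resolution feature map is upsampled and merged with a shallower, high-resolution one. It illustrates the mechanism only, not YOLOv8's actual modules; the channel sizes are arbitrary.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFPNNeck(nn.Module):
    # Schematic FPN-style fusion: upsample semantically strong deep features
    # and merge them with detail-rich shallow features (illustration only).
    def __init__(self, c_deep=256, c_shallow=128, c_out=128):
        super().__init__()
        self.lateral = nn.Conv2d(c_deep, c_out, kernel_size=1)    # align channels
        self.shallow = nn.Conv2d(c_shallow, c_out, kernel_size=1)
        self.fuse = nn.Conv2d(2 * c_out, c_out, kernel_size=3, padding=1)

    def forward(self, deep, shallow):
        up = F.interpolate(self.lateral(deep), scale_factor=2, mode="nearest")
        return self.fuse(torch.cat([up, self.shallow(shallow)], dim=1))

# Toy multi-scale maps: a deep (20x20) and a shallow (40x40) feature grid.
deep, shallow = torch.randn(1, 256, 20, 20), torch.randn(1, 128, 40, 40)
print(TinyFPNNeck()(deep, shallow).shape)   # torch.Size([1, 128, 40, 40])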
YOLOv8 was selected as the optimal model for this study due to its exceptional combination of speed and accuracy. The innovative architecture of YOLOv8—comprising Backbone, Neck, and Head modules—enables rapid and precise detection by simultaneously predicting bounding boxes and class probabilities. Optimized with TensorRT on AGX Orin hardware, YOLOv8 demonstrated exceptional performance in small object detection and inference speed, meeting the stringent requirements of forward obstacle detection in real-world scenarios. The experimental results confirmed YOLOv8’s superiority over other models, making it a highly effective solution for enhancing situational awareness in electric wheelchairs.
Various object detection models were evaluated to optimize the performance of the forward situational awareness system. Due to the need for high processing speed and accuracy in real-time awareness, the assessment focused on one-stage models, including YOLOv8, SSD, EfficientDet, RetinaNet, and the two-stage model Faster R-CNN. As shown in Figure 7, the object detection models were evaluated in an experimental setup using a camera and an embedded board.
The efficiency and accuracy of object detection are both important. An object detector is a model that performs an image recognition task: it takes an image as input and predicts bounding boxes and class probabilities for each object in the image. Object detection technology plays a key role in enhancing driving safety in our research. Current state-of-the-art real-time object detectors are mainly based on YOLO-like convolutional one-stage algorithms; for this reason, this study also pursued near-real-time performance by using YOLO, a one-stage algorithm, for object detection.
After evaluating the performance and efficiency of the real-time object detection models, we selected YOLOv8 as the final model. As discussed earlier, two-stage models, such as Faster R-CNN, offer high accuracy through a refined multi-step process. However, their computational complexity makes them unsuitable for real-time applications. In contrast, one-stage models, such as YOLO, process an entire image in a single step, providing the speed required for real-time environments while maintaining a balance between accuracy and efficiency. Models such as SSD, EfficientDet, and RetinaNet were also considered. However, they did not meet the requirements of this study. Despite its speed advantages, SSD was excluded due to its low accuracy in detecting small objects. Although EfficientDet offers exceptional computational efficiency, its speed and small object detection performance are inferior to that of YOLOv8. Similarly, while RetinaNet effectively detects objects of various sizes, its slower processing speed makes it unsuitable for real-time applications.
To ensure an informed decision, we reviewed the performance data from Ultralytics and conducted comparative experiments with YOLOv5, a widely used model in various applications. The experimental results indicate that YOLOv8 significantly outperformed YOLOv5 in terms of both accuracy and inference speed. YOLOv8 excels at small object detection and offers faster processing, making it suitable for real-time applications such as forward obstacle detection for electric wheelchairs.
Additionally, YOLOv8 is highly compatible with modern hardware and operates efficiently on low-spec embedded systems, making it versatile across various environments. These advantages led to the selection of YOLOv8 as the optimal model for this study. A comprehensive analysis of YOLOv8’s performance, including a comparative evaluation with YOLOv5, is provided in the performance evaluation section.
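As a minimal illustration of how the selected model is invoked through the public Ultralytics Python API (a sketch of the detection call only, with an illustrative image path, not the full on-board pipeline):

from ultralytics import YOLO

# Load a pretrained YOLOv8 model (the medium variant used in our experiments).
model = YOLO("yolov8m.pt")

# Run detection on a single frame from the front camera (path is illustrative).
results = model.predict("front_frame.jpg", conf=0.25)

# Each result carries bounding boxes, class indices, and confidence scores.
for box in results[0].boxes:
    label = results[0].names[int(box.cls)]
    print(f"{label}: conf {float(box.conf):.2f}, bbox {box.xyxy.tolist()}")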

3.2. Enhancing Driving Safety by Ultrasonic Sensors and Emergency Stop

Personal mobility devices have low postural stability, and the rider is exposed to the environment, so there is a risk of overturning or falling accidents. As shown in Figure 8, the device developed in this research can be manufactured in a form that attaches to a personal mobility device. This device analyzes the information acquired through the front camera and the front and rear sensors in the on-device controller and provides a text-to-speech service to the driver. Thus, by using this device, the driver of a personal mobility device can receive additional voice guidance while keeping an eye on the situation ahead. The following sensing and emergency stop technologies were applied to help riders of personal mobility devices recognize dangerous situations and to notify them of detected obstacles, thereby preventing accidents.
As a specific implementation method, sensor modules (such as ultrasonic sensors) are connected to a microprocessor and are used to trigger an emergency stop of the personal mobility device when an obstacle is detected.
The ultrasonic sensors detect obstacles ahead, and the controller board processes their signals, issuing a forward collision warning when an obstacle is within a predetermined range. The device can stop by itself when approaching a recognized obstacle and can request help from people nearby in various ways through a smartphone application.
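The trigger/echo timing behind this measurement is straightforward. The sketch below shows it in Python using the RPi.GPIO-style interface (Jetson.GPIO exposes the same API on Jetson boards); the prototype delegates this task to an Arduino, and the pin numbers here are assumptions.

import time
import RPi.GPIO as GPIO   # Jetson.GPIO provides the same interface on Jetson

TRIG_PIN, ECHO_PIN = 23, 24       # assumed wiring; adjust to the actual pins
SPEED_OF_SOUND_CM_PER_S = 34300

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG_PIN, GPIO.OUT)
GPIO.setup(ECHO_PIN, GPIO.IN)

def read_distance_cm():
    # HC-SR04-style measurement: a 10 us trigger pulse starts a cycle, then
    # the echo pin stays high for the ultrasonic pulse's round-trip time.
    GPIO.output(TRIG_PIN, True)
    time.sleep(10e-6)
    GPIO.output(TRIG_PIN, False)

    pulse_start = pulse_end = time.monotonic()
    while GPIO.input(ECHO_PIN) == 0:      # wait for the echo to begin
        pulse_start = time.monotonic()
    while GPIO.input(ECHO_PIN) == 1:      # wait for the echo to end
        pulse_end = time.monotonic()

    round_trip_s = pulse_end - pulse_start
    return round_trip_s * SPEED_OF_SOUND_CM_PER_S / 2   # one-way distance

if __name__ == "__main__":
    if read_distance_cm() < 20:           # the 20 cm alert range used here
        print("Obstacle within alert range: trigger warning / emergency stop")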
As shown in Figure 9, we designed and implemented a system to enhance the safety of personal mobility devices, such as electric wheelchairs, by integrating a camera with various sensors. The main components of the proposed system include an ultrasonic sensor, a front camera, an embedded platform (NVIDIA Orin board), Arduino, and multiple sensor modules. The system detects objects, predicts potential hazards using sensor data and front camera images, and alerts users with visual impairment.
An emergency stop is a safety mechanism that shuts down machinery in an emergency when other methods are inadequate; it is also known as a kill switch, emergency brake, emergency off, or emergency power off. In our context, it is a technique that allows an attendant to stop a powerchair via a controller depending on the situation. This is particularly useful for carers and parents of young children or elderly users, or in situations where a carer needs to step in to prevent an accident or injury, giving carers and attendants greater peace of mind when out with a wheelchair user.
When an emergency stop is triggered, the emergency stop control device latches in and must be manually released. The emergency stop function should take precedence over all other functions and should shut off the energy supply to hazardous drives as quickly as possible.
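The latch-until-released rule can be captured in a small state machine. The following sketch, with a hypothetical cut_drive_power hook, illustrates the precedence and latching behavior described above:

class EmergencyStop:
    # Latching emergency stop: once tripped it stays engaged until an
    # explicit manual release, and it overrides all other drive commands.
    def __init__(self, cut_drive_power):
        self._cut_drive_power = cut_drive_power   # hypothetical actuator hook
        self.engaged = False

    def trip(self):
        self.engaged = True
        self._cut_drive_power()    # shut off energy to the drive immediately

    def release(self):
        self.engaged = False       # manual release only; nothing auto-clears

    def filter_command(self, drive_command):
        # The e-stop takes precedence over every other function.
        return 0.0 if self.engaged else drive_command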
The server system architecture is designed to run universally across platforms. As shown in Figure 10, the smartphone application was built using Progressive Web App (PWA) technology on a React and Vite stack and provides a native app-like experience with offline functionality.
The on-device object recognition device (see Figure 11) was mounted on an electric wheelchair to detect forward obstacles and provide voice guidance. This setup enhances the safe mobility of visually impaired users and improves the quality of life of various vulnerable social groups.
To perform object detection on embedded devices, we train the model on a server rather than on the embedded board and then deploy it to edge hardware with limited computing resources. On conventional embedded boards, it is advantageous to use 8-bit integer computation for the weights of the neural network. However, if the weights are simply rounded during the conversion from floating point to integer, accuracy and performance may degrade when the dynamic range of the weights is wide.
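The effect of a wide dynamic range on naive float-to-integer conversion can be seen in a few lines of NumPy. The sketch below contrasts plain rounding with the symmetric scale-based quantization that calibration tools apply; it is a simplified illustration, not TensorRT's actual algorithm.

import numpy as np

rng = np.random.default_rng(0)
# Weights with a wide dynamic range: mostly small values plus a few outliers.
w = np.concatenate([rng.normal(0.0, 0.05, 1000), [3.0, -2.5]])

# Naive conversion: round each weight to the nearest integer. Nearly every
# small weight collapses to 0, destroying the layer's behavior.
naive = np.round(w).astype(np.int8)

# Scale-based symmetric quantization: map [-max|w|, max|w|] onto [-127, 127].
scale = np.abs(w).max() / 127.0
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
dequantized = q.astype(np.float32) * scale

print("naive mean error: ", np.abs(w - naive).mean())        # large
print("scaled mean error:", np.abs(w - dequantized).mean())  # much smaller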
In this paper, we also make use of YOLOv8 to implement forward obstacle detection in near-real time. Moreover, we also focused on enhancing performance with a specialized hardware component optimized for deep-learning inference. This accelerates the processing in various layers, such as convolution, deconvolution, pooling, ReLU, comparison, etc., that a neural network performs during inference.
Figure 12 shows a prototype of an electric wheelchair equipped with the forward situational awareness function developed in this study. As shown in Figure 12, the electric wheelchair is equipped with a processing board with computing capabilities, a sensor, and a camera that captures the forward scene. The monitor at the top of Figure 12 was installed for debugging purposes, to confirm that situational awareness works properly in the experimental environment; it is not required in real deployments.
Figure 13 shows object recognition running in real time on the processing board using images from the front camera to perform forward situation recognition. In the experimental environment, a person located in front is correctly recognized, and a bounding box is drawn at the accurate location on the monitor screen. At the same time, the speaker connected to the processing board informs the electric wheelchair rider that there is a person ahead.

3.3. Data Acquisition and Statistics

For this study, the KITTI Dataset and a fusion sensor dataset from an autonomous wheelchair guidance system for individuals with disabilities were selected after reviewing several options. These datasets were deemed most appropriate for aligning with the study’s objectives and environment.
As shown in Table 1, the KITTI Dataset, which is widely used in autonomous driving research, provides high-resolution video and LiDAR data across diverse road environments for obstacle recognition, thereby enhancing the model’s generalizability in the complex road conditions encountered by electric wheelchairs. Additionally, the autonomous wheelchair fusion sensor dataset, which reflects real-world operational data, provides critical validation for obstacle recognition technology performance.
Large datasets, such as Cityscapes, BDD100K, and A2D2, were reviewed for broader testing. The Cityscapes dataset supports urban scene understanding and obstacle detection, while BDD100K enhances the model’s adaptability with its diverse weather and road conditions. A2D2, developed by Audi, supports improved obstacle detection near roadways. Finally, the KITTI and autonomous wheelchair datasets were selected to align with the requirements of this study, ensuring robust performance evaluation in real-world settings for electric wheelchair applications.

4. Experimental Results

As described above, we designed and implemented the forward situational detection and awareness system using object detection. In this experiment, we ran inference on a Jetson AGX Orin board with a 2048-core 1.3 GHz GPU with 64 Tensor cores, a 12-core 2.2 GHz Arm Cortex-A78 CPU, and 64 GB of LPDDR5 memory with a throughput of 204.8 GB/s. In this environment, we ran the YOLOv8 model with TensorRT and measured its performance, leveraging the board's fixed-function hardware for accelerating deep-learning workloads. Specifically, we used the Deep-Learning Accelerator (DLA) [45] to optimize the performance of YOLOv8m on the Jetson AGX Orin. We trained the model, exported it to the ONNX format, and converted it into a TensorRT engine optimized for the hardware. When the TensorRT engine was loaded in the YOLO framework, supported operations, especially convolution and ReLU, were automatically assigned to the DLA, significantly accelerating the inference process. The DLA works seamlessly with TensorRT for additional model optimization, including quantization and layer fusion, further improving the speed of YOLOv8.
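The train-export-deploy flow described above can be expressed with the Ultralytics Python API roughly as follows. The DLA device string and engine options follow Ultralytics' published TensorRT export interface, but exact flags vary by version, and the dataset YAML name is illustrative; treat this as a sketch of the workflow rather than a verbatim script.

from ultralytics import YOLO

# 1) Train (or fine-tune) on the server, not on the embedded board.
model = YOLO("yolov8m.pt")
model.train(data="wheelchair_obstacles.yaml", epochs=100, imgsz=640)

# 2) Export to a TensorRT engine targeting the Deep-Learning Accelerator.
#    Supported layers (convolution, ReLU, ...) run on the DLA; unsupported
#    ones fall back to the GPU.
model.export(format="engine", device="dla:0", half=True)

# 3) On the Jetson AGX Orin, load the optimized engine for inference.
trt_model = YOLO("yolov8m.engine")
results = trt_model.predict("front_frame.jpg")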

4.1. Confusion Matrix

In this section, we present a confusion matrix to analyze the forward situational awareness performance of the system in detail through Figure 14. The figure shows the correlation between the predicted value and the actual value for all classes as a ratio. This allows us to quantitatively evaluate the prediction accuracy of the model for each class, as well as its reliability and consistency for a specific class. For example, the dark color on the diagonal indicates that the model achieved high accuracy in the corresponding class; that is, the system developed in this study has a high rate of correctly identifying forward obstacles and objects. On the other hand, the values outside the diagonal show where confusion occurred, suggesting which inter-class confusions arose when the prediction was incorrect.
Figure 15 shows an unnormalized confusion matrix, which presents the prediction performance for each class in absolute values. This is useful for detecting data imbalance and model performance degradation in a specific class. For example, if there are few data samples for a specific class, the prediction frequency may appear relatively low. These visualizations help us evaluate whether the model is relatively better trained on a class or whether additional data acquisition or training optimization is needed. Together, the two visualizations serve as important tools for evaluating the strengths and weaknesses of the YOLOv8 model in detecting and classifying various objects. In particular, they provide meaningful information for diagnosing the model's suitability and potential for improvement in real-time applications where reliability and accuracy are essential, such as obstacle detection.
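The relationship between the two matrices is simple normalization of the raw counts. The sketch below illustrates this, assuming the Ultralytics convention of predicted classes on rows and true classes on columns; it is a toy example, not our evaluation code.

import numpy as np

def normalize_confusion(counts: np.ndarray) -> np.ndarray:
    # Convert a raw-count confusion matrix (rows = predicted class,
    # columns = true class) into per-true-class ratios, as in Figure 14.
    col_sums = counts.sum(axis=0, keepdims=True)
    return counts / np.maximum(col_sums, 1)   # guard against empty classes

# Toy 3-class example: a strong diagonal means mostly correct predictions.
counts = np.array([[50,  2,  1],
                   [ 3, 40,  5],
                   [ 1,  4, 30]])
print(normalize_confusion(counts).round(2))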

4.2. System Implementation and Experiments for Auto-Stop and TTS

The techniques used in this research to improve the driving safety of personal mobility devices include forward obstacle recognition using a vision camera, TTS technology that provides voice guidance based on the forward recognition results, and an automatic stop function using an ultrasonic sensor when a forward collision is imminent.
First, our system was developed to analyze image data collected from personal mobility devices via an Object Detection service. This application recognizes target objects and integrates with electric wheelchairs and personal mobility devices, offering Text-to-Speech (TTS)-based voice guidance and warnings.
// Build the options for the fetch request to the external TTS REST endpoint.
// (This fragment runs inside an Express route handler; api_url, data, and
// res are provided by the surrounding server code.)
const options = {
        headers: {
              "content-type": "application/json; charset=UTF-8",
        },
        body: JSON.stringify(data),
        method: "POST",
};
fetch(api_url, options)
        .then((response) => {
              if (!response.ok) {
                      throw new Error("Error with Text to Speech conversion");
              }
              response.json().then((data) => {
                      const audioContent = data.audioContent; // base64-encoded audio
                      const audioBuffer = Buffer.from(audioContent, "base64");
                      res.send(audioBuffer); // relay the decoded audio bytes to the client
              });
        })
        .catch((error) => {
              res.status(500).send({ error: error.message });
        });
// Perform the Text-to-Speech (TTS) API call and play the resulting audio.
function ttsApi() {
        fetch("/tts")
              .then((response) => {
                      if (!response.ok) {
                              throw new Error(response.statusText);
                      }
                      return response.arrayBuffer();
              })
              .then((arrayBuffer) => {
                      const audioContext = new (window.AudioContext || window.webkitAudioContext)();
                      const source = audioContext.createBufferSource();
                      // The server already decoded the base64 payload, so the raw
                      // ArrayBuffer can be passed straight to decodeAudioData.
                      audioContext.decodeAudioData(arrayBuffer, function (buffer) {
                              source.buffer = buffer;
                              source.connect(audioContext.destination);
                              source.start(0);
                      });
              })
              .catch((error) => {
                      console.error("Error:", error);
              });
}
Although the TTS engine itself was not implemented in this study, a typical TTS library API was used. Considering that the user of a personal mobility device cannot continuously watch a monitor while driving, a user guidance function based on a TTS service is necessary. As shown in Figure 16, a demonstration of the text-to-speech implementation highlights its potential benefits for user guidance while driving.
Likewise, the implementation and demonstration of the user guidance function using TTS service through forward obstacle recognition can be found at the following URL link: https://youtu.be/ObcSwF8kDp8 (accessed on 15 January 2025).
Next, Figure 17 describes the feature that automatically stops the personal mobility device when it approaches an obstacle ahead. While driving a personal mobility device, the user's vision must remain focused on the road ahead and behind, and both hands must remain on the controls, so the means of conveying visual information to the user are limited. In this study, therefore, the results of the object recognition process are delivered to the user through an alarm sound and TTS guidance. In other words, the results of forward situation recognition, processed through the vision camera and embedded board, are provided to the user while driving via warning sounds and voice guidance.
However, if the vehicle does not stop despite the warning sound and voice guidance, it can be determined that the user has neglected to look forward and rearward, or that the user is in a situation where it is difficult to operate quickly. Therefore, in such cases, the controller operates an automatic stop to ensure the safety of the user of the personal mobility device. As shown in Figure 18, after the auto-stop function is activated, the vehicle comes to a halt in front of an obstacle. A demonstration of the automatic stop function in the event of a front obstacle approach can be found at the following URL link: https://youtu.be/Qn0PP426HS4 (accessed on 15 January 2025).
The ultrasonic sensor, specifically the HC-SR04 model, detects obstacles within a close range (less than 20 cm) and issues an alert when objects approach this distance. The performance of the HC-SR04 model was evaluated in both indoor and outdoor environments, including simulated rainy conditions. In the indoor setting, plastic objects within 150 cm were detected with an average error of 3 mm, whereas metal objects were detected with slightly higher accuracy, with an average error of 2 mm. An average error of approximately 4 mm was observed within a 120 cm range for irregularly shaped objects such as wood. In outdoor environments, the performances varied under different weather conditions. Detection errors increased by approximately 20% in humid conditions due to signal interference, while stable detection up to 200 cm was maintained in clear weather.
However, obstacles with soft surfaces, such as fabric or rubber, posed challenges because these materials absorbed or distorted the ultrasonic signal, resulting in over 50% detection failure. To address these limitations, the proposed system integrates both a camera and an ultrasonic sensor, leveraging the visual data from the camera to enhance overall reliability. The HC-SR04 sensor's effective range was limited to 200 cm due to increased error rates beyond this distance, making it most suitable for close-range detection. Within this range, the sensor exhibited high reliability in indoor and outdoor environments, achieving an average error of approximately 3 mm.
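One simple way to combine the two modalities, reflecting the failure modes measured above, is to trust the ultrasonic reading only within its reliable 200 cm envelope and use camera detections to cover soft-surfaced or distant obstacles. The sketch below is an illustrative fusion rule, not the exact on-board logic; the 0.5 confidence cut-off is an assumption.

ULTRASONIC_RELIABLE_CM = 200   # beyond this, HC-SR04 error rates rise sharply
ALERT_CM = 20                  # close-range alert threshold used in this study

def fuse(ultrasonic_cm, camera_detections):
    # camera_detections: list of (label, confidence) pairs from YOLOv8.
    camera_obstacle = any(conf >= 0.5 for _, conf in camera_detections)
    if ultrasonic_cm <= ALERT_CM:
        return "emergency"     # trusted close-range reading: stop now
    if camera_obstacle:
        return "warn"          # camera covers soft or distant obstacles
    if ultrasonic_cm <= ULTRASONIC_RELIABLE_CM:
        return "monitor"       # something in range but not yet critical
    return "clear"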

4.3. Forward Obstacle Detection Performance

We chose YOLOv8 for forward situational awareness due to its exceptional balance between speed and accuracy. A detailed performance evaluation using metrics such as mAP50, mAP50-95, precision, and recall confirmed YOLOv8's suitability for forward situational awareness. The experimental results demonstrated that YOLOv8 achieved a mAP50 score of 74%, which indicates reliability in detecting small objects. In addition, with a mAP50-95 score of 50.6%, YOLOv8 exhibited consistent performance across objects of varying sizes and positions. The model achieved a precision of 80.8%, indicating an excellent ability to reduce false detections, and a recall of 63.3%, ensuring adequate sensitivity for real-time detection.
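For reference, these metrics follow the standard detection definitions, where TP, FP, and FN denote true positives, false positives, and false negatives at a given IoU threshold and p(r) is the class-wise precision-recall curve:

\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad \mathrm{Recall} = \frac{TP}{TP + FN}, \qquad \mathrm{AP} = \int_0^1 p(r)\, dr

\mathrm{mAP@50} = \frac{1}{N} \sum_{c=1}^{N} \mathrm{AP}_c \Big|_{\mathrm{IoU}=0.5}, \qquad \mathrm{mAP@50\text{-}95} = \frac{1}{10} \sum_{t \in \{0.50,\, 0.55,\, \ldots,\, 0.95\}} \mathrm{mAP@}t

Thus, the reported 74% averages per-class AP at a 0.5 IoU threshold, while the 50.6% mAP50-95 score additionally averages over progressively stricter localization thresholds.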
The train/box_loss graph represents the loss incurred by the model when predicting object locations during training. As training progressed, the loss decreased, indicating that the model's ability to accurately predict object positions improved.
The train/box_loss graph in Figure 19 shows a consistent decrease in loss as training progressed. The sharp decline in loss during the initial epochs indicates the rapid learning of the basics of predicting object locations, while the gradual reduction in later stages indicates continued refinement of the model’s localization accuracy. This stable training progression highlights YOLOv8’s effectiveness in achieving precise obstacle detection, which meets the objectives of this study.
The train/cls_loss graph in Figure 20 illustrates the model’s consistent improvement in object classification during training. The sharp decline in the initial loss, followed by a steady reduction, indicates effective learning and refinement. This trend highlights YOLOv8’s capability to distinguish between various obstacle types, making it well-suited for real-time applications, such as electric wheelchair navigation, where accurately categorizing different obstacle types is essential.
As shown in Figure 21, Distribution Focal Loss (DFL) evaluates the model’s accuracy in predicting object locations and sizes. The consistent decrease in loss during training indicates improved accuracy in predicting object boundaries. This capability is particularly critical for detecting small obstacles or objects in complex backgrounds, thereby ensuring safer navigation in scenarios where such obstacles pose safety risks.
The Metrics/Precision graph in Figure 22 demonstrates YOLOv8’s effectiveness in identifying true positives while minimizing false positives. The consistently high precision throughout the training process demonstrates the model’s reliability in accurately detecting obstacles without generating excessive false alerts. This capability is essential for providing reliable obstacle warnings to users.
The Metrics/Recall graph in Figure 23 evaluates the model’s sensitivity in detecting obstacles. Although recall remained high throughout the training process, minor fluctuations indicated potential areas for optimization, particularly for detecting small or partially occluded objects. Future work could focus on enhancing recall to improve the detection of such challenging obstacles.
The Val/Box Loss graph in Figure 24 indicates the model’s effectiveness in generalizing bounding box predictions on unseen data. The reduction in the loss in the validation data highlights YOLOv8’s strong generalizability, ensuring reliable performance in real-world scenarios. This robustness is critical for reliable obstacle detection in different environments.
The Val/Cls Loss graph in Figure 25 illustrates the model’s accuracy in classifying objects in the validation dataset. The stabilization of loss at a low value indicates reliable identification of various obstacle types, demonstrating the proposed model’s ability to maintain accurate classification even with new data, which is essential for reliable real-world performance.
The Val/DFL Loss graph in Figure 26 shows the model’s consistency in predicting precise object boundaries within the validation data. The downward trend in loss confirms that the model effectively applies knowledge from training to unseen data, which ensures stable performance and detection of small obstacles.
The mAP@50 metric in Figure 27 measures the average precision at an IoU threshold of 0.5. YOLOv8 achieved a score of 74%, indicating high reliability, particularly in detecting small objects. The high mAP@50 score underscores the accuracy of YOLOv8, making it suitable for obstacle-detection applications that require precise object detection.
The mAP@50-95 metric in Figure 28 evaluates precision over a range of IoU thresholds from 0.5 to 0.95. YOLOv8 achieved a score of 50.6%, and it exhibited consistent performance across different object sizes and positions. This result highlights the model’s ability to maintain accurate detection despite differences in size and location, which is an essential factor for reliable performance in diverse environments.
In summary, YOLOv8 exhibited exceptional performance in both precision and mAP50, confirming its suitability for real-time forward situational awareness in electric wheelchairs. Although recall can benefit from further optimization, the high accuracy and inference speed of YOLOv8 make it a strong candidate for the proposed real-time obstacle detection system.

4.4. Situational Obstacle Detection Performance Analysis

To evaluate the real-time obstacle detection performance of the system in electric wheelchair driving environments, experiments were conducted using images captured in various road and intersection scenarios. The analysis evaluated the proposed system’s ability to recognize obstacles, pedestrians, vehicles, and stationary objects commonly encountered during driving. This assessment measured the proposed system’s object recognition accuracy in real-world situations faced by electric wheelchair users.
Figure 29 shows the obstacles detected on the road ahead, including a cyclist, a bench, and parked vehicles. The YOLOv8 model accurately identified various obstacles, including pedestrians, stationary objects (e.g., a bench), and vehicles, with confidence levels ranging from 0.26 to 0.44, although vehicles were detected at a slightly lower confidence level. These results highlight the model’s ability to identify critical objects, such as stationary obstacles or vehicles that electric wheelchair users must avoid on sidewalks or pathways. Promptly detecting obstacles on the driving path and alerting users enhances navigation safety. However, further research is required to improve the reliability of detecting dynamic objects, such as moving vehicles.
Figure 30 illustrates the results from an intersection crowded with pedestrians and vehicles, a common scenario for electric wheelchair users. The model detected pedestrians with high confidence (0.61 to 0.77), successfully identifying most individuals despite their large number and varied positions. This capability enables wheelchair users to anticipate pedestrian movements for safe navigation.
Vehicles at the intersection were detected with a lower confidence level (0.26) than pedestrians, varying with road conditions, vehicle size, and position. For wheelchair users near roads, prompt vehicle identification and appropriate responses are essential for maintaining safety, particularly in assessing vehicle positions and maintaining a safe distance when crossing roads.
Since intersections are the environments where pedestrians and vehicles coexist, assessing the model’s performance in such scenarios is vital. The results indicate that the proposed model effectively detected multiple objects even in crowded situations, demonstrating its potential to enhance the safe navigation of electric wheelchairs in complex urban environments.
Figure 31 shows the detection results for vehicles, pedestrians, and traffic lights on the highway. Vehicles were detected with confidence levels up to 0.89, while pedestrians and traffic lights were also identified appropriately. The high confidence level in vehicle detection emphasizes the system’s ability to prevent collisions on roadways, especially near intersections or adjacent areas where electric wheelchairs may operate. Rapid and accurate vehicle detection ensures safe navigation, and traffic light detection improves safety when crossing intersections. Pedestrians, which represent a crucial detection target on roads and intersections, may require additional training to improve detection performance under diverse conditions.

4.5. Experimental Tests Under Weather Conditions, Lighting Conditions, and Road Types

The experimental results demonstrate that the model effectively recognizes obstacles encountered in electric wheelchair-driving environments. The proposed model detected key objects, including pedestrians, vehicles, and stationary items (e.g., benches and bicycles), on both roads and sidewalks, thereby underscoring its ability to provide swift and accurate obstacle detection.
However, certain cases exhibited slightly lower confidence levels, especially for essential objects, such as pedestrians and stationary obstacles on sidewalks. This issue was more pronounced in environments in which multiple objects were present simultaneously. Improving pedestrian detection, which is a critical aspect of wheelchair safety, and increasing the recognition accuracy of stationary obstacles, such as benches and streetlights, remain key areas for improvement.
To this end, we conducted additional experiments under various weather conditions, lighting conditions, and road types, and reflected the results in the paper as follows.
(1) Additional Experiments Based on Lighting Conditions
Figure 32 shows that, under nighttime lighting conditions, the following characteristics emerge in contrast to daytime operation. In both the first and second images, vehicles were clearly detected, with confidence levels reaching up to 0.93. In particular, the second image demonstrates that vehicles at various angles and positions were accurately identified.
As such, the system’s high detection confidence even in low-light environments highlights its ability to prevent collisions in nighttime traffic conditions. This is particularly significant in areas prone to accidents, such as intersections or adjacent areas where electric wheelchairs may operate.
(2) Additional Experiments Based on Road Type
As shown in Figure 33, differences in the performance of the object detection system are observed depending on the road type, whether it is a sidewalk or a roadway. In the first image, vehicles and trucks located on the sidewalk were detected with confidence levels of 0.65 and 0.46, respectively. This demonstrates that even smaller objects can be detected in environments like sidewalks.
In the second image, a vehicle on the roadway was detected with a high confidence level of 0.92, indicating that object detection tends to perform more clearly in roadway environments. In the third image, vehicles were accurately detected with a confidence level of 0.83, even in an alleyway with mixed shadows and bright sunlight.
In conclusion, when the road type is a sidewalk, the relatively simple surroundings allow for the detection of various objects. On the other hand, on roadways, larger objects such as vehicles can be detected with high confidence. These results demonstrate the system’s ability to perform effectively regardless of road type.
(3) Additional Experiments Based on Weather Conditions
As shown in Figure 34, differences in the object detection system’s performance are observed under varying weather conditions, such as clear and snowy weather. In the first and second images, taken during nighttime with no precipitation, vehicles and trucks were detected with confidence levels of up to 0.86 and 0.71, respectively. These results indicate that the system performs well even in low-light environments when the weather is clear.
In the third image, captured during a cloudy day, multiple objects, such as persons, skateboards, and benches, were detected. Although the confidence levels varied (e.g., people detected with confidence levels up to 0.69), the system successfully recognized a wide range of objects in an urban environment with diffused lighting conditions.
Overall, the system demonstrated higher accuracy under clear weather conditions, where visibility is better. While object detection is still possible under adverse weather conditions like snow, the accuracy and confidence levels may decrease slightly due to occlusions and reflections caused by the snow. These findings highlight the need for further optimization to improve detection performance under challenging weather conditions.
As a result, the following conclusions can be drawn. First, detection performance is better on clear days than in rainy or snowy conditions, as the system demonstrates higher accuracy and confidence under favorable weather with improved visibility. Second, there is no significant difference in obstacle detection performance based on road type, although the system shows slightly better results on roadways than on sidewalks. Lastly, the detection system performs more effectively during the daytime than at night, as improved lighting conditions naturally enhance its accuracy.

5. Conclusions

This study integrates information and communication technologies (ICT) into personal mobility devices, such as electric wheelchairs, to enhance the quality of life of vulnerable populations, including the elderly, visually impaired, and youth. The proposed object recognition system employs sensors and cameras to enhance navigation safety by detecting obstacles in real time and providing immediate Text-to-Speech (TTS) guidance. The YOLOv8 model demonstrated robust performance in recognizing various obstacles in diverse driving environments, including pedestrians, vehicles, and stationary objects. However, our experiments revealed limitations in terms of detecting essential objects, such as pedestrians and stationary obstacles (e.g., benches and streetlights). These limitations were more pronounced in environments with multiple overlapping objects, which reduced the confidence levels. Addressing these issues is critical for improving the safety and usability of the proposed system for wheelchair users.
To enhance system performance, future research will optimize pedestrian detection algorithms and improve the accuracy of stationary obstacle recognition. Combining datasets, such as the KITTI Dataset and AI HUB's autonomous wheelchair sensor data, will further refine detection capabilities under diverse conditions. Additionally, integrating a multi-ultrasonic sensor array and infrared sensors will address the challenges of detecting soft or irregular objects such as fabrics or rubber. Future research will also enhance inference speed and accuracy under adverse weather and nighttime conditions to ensure practical usability. These enhancements will deliver safer and more reliable navigation for personal mobility devices, ultimately improving the quality of life of diverse social groups. Future work also includes extending the system's applicability to other equipment, such as agricultural machinery.

Author Contributions

Conceptualization, M.C.; funding acquisition, M.C.; investigation and methodology, M.C., T.A.D. and E.C.; project administration, M.C.; resources, M.C.; supervision, M.C.; writing (original draft), E.C. and T.A.D.; writing (review and editing), M.C.; software, E.C., T.A.D. and M.C.; validation, E.C., T.A.D. and M.C.; formal analysis, E.C. and M.C.; data curation, E.C., T.A.D. and M.C.; visualization, E.C., T.A.D. and M.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Innovative Human Resource Development for Local Intellectualization program through the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (IITP-2025-RS-2020-II201462).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Korea Consumer Agency. Survey on the Usage of Motorized Assistive Devices (Electric Wheelchairs, Electric Scooters); Korea Consumer Agency: Seoul, Republic of Korea, 2015.
2. Lee, S.H. The Importance of Electric Wheelchairs and Assistive Devices. Kyunghyang Daily Article. Available online: https://www.khan.co.kr/economy/market-trend/article/201604191200001 (accessed on 19 April 2016).
3. Kim, M.S. The Need for Accident Prevention in Electric Wheelchairs. Seoul Daily Article. Available online: https://www.seoul.co.kr/news/society/2022/01/29/20220129500061 (accessed on 29 January 2022).
4. Kim, Y.-P.; Ham, H.-J.; Hong, S.-H.; Ko, S.-C. Design and Manufacture of Improved Obstacle-Overcoming Type Indoor Moving and Lifting Electric Wheelchair. J. Korea Acad.-Ind. Coop. Soc. 2020, 21, 851–860.
5. Seo, J.; Kim, C.W. 3D Depth Camera-Based Obstacle Detection in the Active Safety System of an Electric Wheelchair. J. Inst. Control. Robot. Syst. 2016, 22, 552–556.
6. Kang, J.S.; Hong, E.-P.; Chang, Y. Development of a High-Performance Electric Wheelchair: Electric Wheelchair Users' Posture Change Function Usage and Requirements Analysis. J. Rehabil. Welf. Eng. Assist. Technol. 2024, 18, 102–112.
7. Kim, D.; Lee, W.-Y.; Shin, J.-W.; Lee, E.-H. A Study on the Assistive System for Safe Elevator Get on of Wheelchair Users with Upper Limb Disability. In Proceedings of the 2023 International Conference on Electronics, Information, and Communication (ICEIC), Singapore, 5–8 February 2023; pp. 1–4.
8. Na, R.; Hu, C.; Sun, Y.; Wang, S.; Zhang, S.; Han, M.; Yin, W.; Zhang, J.; Chen, X.; Zheng, D. An Embedded Lightweight SSVEP-BCI Electric Wheelchair with Hybrid Stimulator. Digit. Signal Process. 2021, 116, 103101.
9. Heo, D.G. Developments of Indoor Auto-Driving System and Safety Management Smartphone App for Powered Wheelchair User. Master's Thesis, Department of Rehabilitation Welfare Engineering, Daegu University, Gyeongsan, Republic of Korea, 2017. Available online: https://www.riss.kr/link?id=T14604844 (accessed on 15 January 2025).
10. Dae-Kee, K.; Dong-Min, K.; Jin-Cheol, P.; Soo-Gyung, L.; Jihyung, Y.; Myung-Seop, L. Torque Ripple Reduction of BLDC Traction Motor of Electric Wheelchair for Ride Comfort Improvement. J. Electr. Eng. Technol. 2022, 17, 351–360.
11. Klinich, K.D.; Orton, N.R.; Manary, M.A.; McCurry, E.; Lanigan, T. Independent Safety for Wheelchair Users in Automated Vehicles; Technical Report; UMTRI: Ann Arbor, MI, USA, 2023.
12. Ji, Y.; Hwang, J.; Kim, E.Y. An Intelligent Wheelchair Using Situation Awareness and Obstacle Detection. Procedia Soc. Behav. Sci. 2013, 97, 620–628.
13. Lee, Y.R.; Yang, M.H.; Yu, Y.T.; Choi, M.J.; Kwak, J.H.; Lee, S.J. Improving Public Acceptance of Autonomous Vehicles Based on Explainable Artificial Intelligence (XAI): Developing a Real-Time Road Obstacle Detection Model Using YOLOv5 and Grad-CAM. In Proceedings of the KOR-KST Conference, Seoul, Republic of Korea, 27 March 2024; pp. 220–224.
14. Torres-Vega, J.G.; Cuevas-Tello, J.C.; Puente, C.; Nunez-Varela, J.; Soubervielle-Montalvo, C. Towards an Intelligent Electric Wheelchair: Computer Vision Module. In Proceedings of the Intelligent Sustainable Systems; Nagar, A.K., Singh Jat, D., Mishra, D.K., Joshi, A., Eds.; Springer Nature: Singapore, 2023; pp. 253–261.
15. Erturk, E.; Kim, S.; Lee, D. Driving Assistance System with Obstacle Avoidance for Electric Wheelchairs. Sensors 2024, 24, 4644.
16. Kim, S.-J.; Lee, Y.-B.; Lee, M.-H.; Kim, J.-M.; Choi, S.-M.; Rho, D.-S. Implementation of the Electric Wheelchair Using Hybrid Energy Storage Devices. J. Korea Acad.-Ind. Coop. Soc. 2024, 25, 1–9.
17. Jianguo, J.; Shiyi, X.; Jinsheng, X.; Qingshan, H.; Bosheng, L.; Jiao, S. A Weakly Supervised Object Detection Model for Cyborgs in Smart Cities. Hum.-Centric Comput. Inf. Sci. 2023, 13, 57.
18. Li, H.; Liu, X.; Jia, H.; Ahanger, T.A.; Xu, L.; Alzamil, Z.; Li, X. Deep Learning-Based 3D Multi-Object Tracking Using Multi-modal Fusion in Smart Cities. Hum.-Centric Comput. Inf. Sci. 2024, 14, 47.
19. Sanghv, J.J.; Shah, M.Y.; Fofaria, J.K. Solar Electric Wheelchair with a Foldable Panel. Epra Int. J. Res. Dev. 2021, 6.
20. Yang, F.; Bailian, X.; Bingbing, J.; Xuhui, K.; Yan, L. SPPT: Siamese Pyramid Pooling Transformer for Visual Object Tracking. Hum.-Centric Comput. Inf. Sci. 2023, 13, 59.
21. Kim, R.Y.; Cha, H.-J.; Kang, A.R. A Study on the Impact of Noise on YOLO-Based Object Detection in Autonomous Driving Environments. J. Korea Soc. Comput. Inf. 2024, 29, 69–75.
22. Kim, J.; Cho, Y.-B. Research of Smart Integrated Control Board Function Improvement for Personal Electric Wheelchair's Safe Driving. J. Digit. Contents Soc. 2018, 19, 1507–1514.
23. Lin, L.; Zhi, X.; Zhang, Z. A Lightweight Method for Automatic Road Damage Detection Based on Deep Learning. In Proceedings of the 2023 3rd International Conference on Electronic Information Engineering and Computer Science (EIECS), Changchun, China, 22–24 September 2023; pp. 81–85.
24. Alam, M.N.; Rian, S.H.; Rahat, I.A.; Ahmed, S.S.; Akhand, M.K.H. A Smart Electric Wheelchair with Multipurpose Health Monitoring System. In Proceedings of the 2022 3rd International Conference for Emerging Technology (INCET), Belgaum, India, 27–29 May 2022; pp. 1–6.
25. Oh, J.; Park, Y.H.; Hwang, I.H.; Park, S.G. Safety Improvement by Comparison of Standard Specifications for Electric Wheelchairs. J. Rehabil. Welf. Eng. Assist. Technol. 2024, 18, 85–89.
26. Leblong, E.; Fraudet, B.; Devigne, L.; Babel, M.; Pasteau, F.; Nicolas, B.; Gallien, P. SWADAPT1: Assessment of an Electric Wheelchair-Driving Robotic Module in Standardized Circuits: A Prospective, Controlled Repeated Measure Design Pilot Study. J. Neuroeng. Rehabil. 2021, 18, 140.
27. Kim, B.-J.; Jeon, H.-G.; Lee, K.-H. Obstacle Detection and Unmanned Driving Management System in Drivable Area. Proc. 2023 Summer Conf. Korea Soc. Comput. Inf. 2023, 31, 287–289.
28. Ngo, B.-V.; Nguyen, T.-H.; Tran, D.-K.; Vo, D.-D. Control of a Smart Electric Wheelchair Based on EEG Signal and Graphical User Interface for Disabled People. In Proceedings of the 2021 International Conference on System Science and Engineering (ICSSE), Ho Chi Minh City, Vietnam, 26–28 August 2021; pp. 257–262.
29. Matsuura, J.; Nakamura, H. Moving Obstacle Avoidance of Electric Wheelchair by Estimating Velocity of Point Cloud. In Proceedings of the 2021 International Automatic Control Conference (CACS), Chiayi, Taiwan, 3–6 November 2021; pp. 1–6.
30. Botta, A.; Bellincioni, R.; Quaglia, G. Autonomous Detection and Ascent of a Step for an Electric Wheelchair. Mechatronics 2022, 86, 102838.
31. Zhang, X.; Bai, L.; Zhang, Z.; Li, Y. Multi-Scale Keypoints Feature Fusion Network for 3D Object Detection from Point Clouds. Hum.-Centric Comput. Inf. Sci. 2022, 12, 373–387.
32. Gwangmin, Y.; Kim, N.-H.; Choi, G.-M. Implementation of an Integrated Management System Using Forklift-Type Autonomous Transport Robots and YOLOv8 Object Detection Technology. J. Digit. Contents Soc. 2024, 25, 3013–3019.
33. Kim, B.-J.; Lee, H.-E.; Yang, Y.-H.; Kang, S.-G. Feature Attention-Based Region Proposal Augmentation and Teacher-Student Method-Based Object Detection Model. J. KIIT 2024, 22, 35–41.
34. An Improved Performance Radar Sensor for K-Band Automotive Radars. Available online: https://www.mdpi.com/1424-8220/23/16/7070 (accessed on 27 November 2024).
35. Object Detection Capabilities and Performance Evaluation of 3D LiDAR Systems in Urban Air Mobility Environments. J. Adv. Navig. Technol. Available online: https://koreascience.kr/article/JAKO202421243329132.page (accessed on 27 November 2024).
36. Ding, J.; Xue, N.; Xia, G.-S.; Bai, X.; Yang, W.; Yang, M.Y.; Belongie, S.; Luo, J.; Datcu, M.; Pelillo, M.; et al. Object Detection in Aerial Images: A Large-Scale Benchmark and Challenges. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 7778–7796.
37. Fajrin, H.R.; Bariton, S.; Irfan, M.; Rachmawati, P. Accelerometer Based Electric Wheelchair. In Proceedings of the 2020 1st International Conference on Information Technology, Advanced Mechanical and Electrical Engineering (ICITAMEE), Yogyakarta, Indonesia, 13–14 October 2020; pp. 199–203.
38. Sun, J.; Yu, X.; Cao, X.; Kong, X.; Gao, P.; Luo, H. SLAM Based Indoor Autonomous Navigation System for Electric Wheelchair. In Proceedings of the 2022 7th International Conference on Automation, Control and Robotics Engineering (CACRE), Xi'an, China, 14–16 July 2022; pp. 269–274.
39. Rabhi, Y.; Tlig, L.; Mrabet, M.; Sayadi, M. A Fuzzy Logic Based Control System for Electric Wheelchair Obstacle Avoidance. In Proceedings of the 2022 5th International Conference on Advanced Systems and Emergent Technologies (IC_ASET), Hammamet, Tunisia, 22–25 March 2022; pp. 313–318.
40. Ko, H.; Lee, J.; Choi, H.; Koo, K.H.; Kim, H. Lightweight Method for Road Obstacle Detection Model Based on SSD Using IQR Normalization. In Proceedings of the 2024 Summer Conference of the Korean Institute of Electronics Engineers, Jeju, Republic of Korea, 24 June 2024; pp. 2427–2429.
41. Lee, Y.-J.H.; Choi, D.-S. Development of Crash Prevention System for Electric Scooter Using Depth Camera and Deep Learning. In Proceedings of the KIIT Conference, Jeju, Republic of Korea, 23 November 2023; pp. 330–332.
42. Haraguchi, T.; Kaneko, T. Design Requirements for Personal Mobility Vehicle (PMV) with Inward Tilt Mechanism to Minimize Steering Disturbances Caused by Uneven Road Surface. Inventions 2023, 8, 37.
43. Omori, M.; Yoshitake, H.; Shino, M. Autonomous Navigation for Personal Mobility Vehicles Considering Passenger Tolerance to Approaching Pedestrians. Appl. Sci. 2024, 14, 11622.
44. Jian, W.; Chen, K.; He, J.; Wu, S.; Li, H.; Cai, M. A Federated Personal Mobility Service in Autonomous Transportation Systems. Mathematics 2023, 11, 2693.
45. NVDLA: The NVIDIA Deep Learning Accelerator. Available online: https://en.wikipedia.org/wiki/NVDLA (accessed on 14 January 2025).
Figure 1. Types of accidents experienced by motorized assistive device users [1].
Figure 2. Supply trends of electric and manual wheelchairs (2017–2023).
Figure 3. Server and client (device) architecture.
Figure 4. Strategy for enhancing the safety of personal mobility vehicles.
Figure 5. YOLOv8 and its accelerating architecture exploited in this research.
Figure 6. One-stage object detection algorithm exploited on our platform.
Figure 7. Experimental setup for object detection using a camera and embedded board.
Figure 8. Schematics for sensing forward obstacles and for the automatic brake system.
Figure 9. Hardware components and control interface of the motorized wheelchair system.
Figure 10. Front-end operation screen.
Figure 11. Embedded evaluation board and front camera mounted on the electric wheelchair.
Figure 12. A prototype of the electric wheelchair equipped with the forward situational awareness function developed in this study.
Figure 13. Forward situational awareness operation by the object recognition technique.
Figure 14. Correlation between the predicted and actual values for all classes, expressed as a ratio.
Figure 15. Unnormalized confusion matrix.
Figure 16. Text-to-speech implementation and demonstration.
Figure 17. Motorized wheelchair with the integrated obstacle detection system.
Figure 18. After auto-stop is activated in front of an obstacle.
Figure 19. Train/box loss (bounding box loss for training data).
Figure 20. Train/cls loss (class loss for training data).
Figure 21. Train/DFL loss (DFL loss for training data).
Figure 22. Metrics/precision.
Figure 23. Metrics/recall.
Figure 24. Val/box loss (bounding box loss for validation data).
Figure 25. Val/cls loss (class loss for validation data).
Figure 26. Val/DFL loss (DFL loss for validation data).
Figure 27. Metrics/mAP50 (mean average precision at IoU 0.5).
Figure 28. Metrics/mAP50–95 (mean average precision from IoU 0.5 to 0.95).
Figure 29. Obstacle detection on a roadway.
Figure 30. Obstacle detection on a crowded pedestrian road.
Figure 31. Obstacle detection on a highway.
Figure 32. Object detection performance under nighttime lighting conditions.
Figure 33. Additional experiments based on road type.
Figure 34. Additional experiments based on weather conditions.
Table 1. System components for evaluation.

Sensor | Class | Training | Validation | Test
Image  | Left  | 115,200  | 14,400     | 14,400
Image  | Right | 115,200  | 14,400     | 14,400
Total  | Image | 230,400  | 28,800     | 28,800
LiDAR  | LiDAR | 115,200  | 14,400     | 14,400
Total  | LiDAR | 115,200  | 14,400     | 14,400