1. Introduction
According to the World Health Organization (WHO), at least 2.2 billion people have some form of visual impairment, either near or far [1]. Population growth and aging are expected to increase the risk of visual impairment for more and more people. Between 2015 and 2050, the proportion of the world’s population over the age of 60 will nearly double, from 12% to 22% [2]. Therefore, there is a great need to develop robotic systems that can help blind and elderly people to be independent and feel autonomous.
Eighty percent of the information we need for our daily lives comes from our eyes. This means that most of the skills we possess, the knowledge we acquire, and the activities we perform are based on visual information. Vision is fundamental to the autonomy and development of every human being. For blind people, the white cane is today the most effective and widely used mobility tool. This mechanical device detects obstacles on the ground, uneven surfaces, holes, steps, and other hazards. However, it does not provide the information necessary to determine how to reach a destination from a given position and orientation. Similarly, guide dogs are a very valuable support for the mobility of blind people, as they not only help them navigate but also provide emotional support. However, their use is limited by economic factors, the availability of trained dogs, and the effort required to care for and train them. In response to these limitations, several technological solutions have been proposed, such as smart canes with ultrasonic sensors, smart glasses, haptic devices, and robotic guides. These tools can warn of obstacles at different heights and improve awareness of the environment. However, many of these technologies still have limitations in terms of accessibility, cost, and ease of use. The three-wheeled RG-I robot proposed in [3] can navigate indoors efficiently thanks to its laser rangefinder and radio frequency identification (RFID) sensors. In [4], the GuideCane robot is presented as a lightweight design with which users can interact. The robot incorporates ultrasonic sensors that detect obstacles and provide a safe path. In [5], the robotic system is equipped with a visual sensor, a laser rangefinder, and a speaker to provide blind people with information about their environment. Clustering techniques analyze the laser data to detect obstacles, steps, and stairs. In [6], the iPath intelligent guidance robot is presented. It has the capability to follow a marked path on the floor and detect user movements and obstacles using infrared and ultrasonic sensors. In [7], the Co-Robotic Cane, a new robotic navigation aid that uses a three-dimensional camera to estimate position and recognize objects in indoor environments, is presented. In [8], a robot is designed to help blind people navigate unfamiliar terrain safely and independently. It uses ultrasonic, LIDAR, and infrared sensors, as well as a camera system, to perceive and analyze its dynamic environment. The robot is controlled by voice commands from a smartphone application. Other related designs are proposed in [9,10,11,12]. The use of legged robots to guide blind people has been limited due to the complexity of their control systems, resulting in a paucity of literature in this area. As described in [13], the HexGuide is a hexapod guide robot designed to help blind people navigate challenging environments, such as airports and intersections. It accomplishes this using an inertial measurement unit (IMU), a depth camera, a LIDAR system, and voice commands. The Mini Cheetah quadruped robot [14], used in [15], incorporates a LiDAR sensor for localization and a depth camera to detect obstacles and guide an assisted person using a leash through confined indoor spaces. Similarly, the Laikago quadruped robot system [16] is used in [17], incorporating a LiDAR, a camera, and a controllable traction device. This device is capable of adjusting the length and strength of the interaction between the robot and the blind person. This ensures safe and comfortable assistance in confined indoor environments. Nippon Seiko Kabushiki-gaisha (NSK) developed a robotic guide dog with wheels at the end of each leg. It uses these wheels to move around and its mechanical legs to climb obstacles, such as stairs. A distance image sensor on its head allows it to detect steps, and proximity sensors obtain information from its legs [18]. In [19], a mobile guide robot designed for outdoor environments is presented. The robot plans its path using data from ultrasonic radar and IMU sensors. Its vision system identifies obstacles and their distance using a deep learning and binocular ranging algorithm, then informs the blind person via voice. In [20], a wheeled hexapod robot with the capability to detect traffic lights and adjust its speed according to the distance between itself and the blind person is proposed. In terms of commercial solutions, Lysa [21] is the first robotic guide dog designed specifically to assist blind people. It has two motors, eight infrared sensors, a LiDAR sensor, and a camera to detect potholes, obstacles, and collision risks at different heights. In addition, Lysa provides warnings through recorded voice messages and can analyze the environment to find safe alternative routes. Other solutions that have been developed to assist blind people in different environments are wearable devices, such as those presented in [22,23,24].
Global demographic change is a reality. As a result, the number of older people is increasing in both developed and developing countries. As this population segment grows, so does the demand for health and social services, such as elderly care. This raises a number of social and economic issues. Many elderly people prefer to live alone rather than with family members who could care for them. Robots can improve elderly care by providing assistance, companionship, and monitoring services. Several types of service robots have been designed to assist the elderly in their daily lives, improve their quality of life, and ensure their autonomy. Companion and emotional support robots have been designed. For example, Paro is shaped like a seal and responds to touch and voice. Paro is designed to have three types of effects: psychological (e.g., relaxation and motivation), physiological (e.g., improved vital signs), and social (e.g., facilitating communication between patients and caregivers) [25]. There are also home assistance robots. For example, Roomba is an autonomous vacuum cleaner that helps keep homes clean without the intervention of elderly people [26]. Health monitoring and medication reminder robots are also available. One example is Mabu, an interactive robot that reminds users to take their medication and collects health information to share with caregivers or doctors [27]. Other solutions are mobility and physical support robots, such as LEA (Lean Empowering Assistant) [28]. LEA is specifically designed to help the elderly and people with reduced mobility move around and assist them in becoming more independent. Telepresence robots, such as the Beam robot, allow family members to interact with elderly people via videoconferencing, giving them a sense of companionship [29]. There are also cognitive assistance robots, such as Elli-Q [30]. Elli-Q remembers tasks, helps with routines, and suggests activities to promote a healthy lifestyle, such as reading or walking. Emergency assistance robots have also been proposed. For example, Temi is designed to detect falls and send alerts to family members or emergency services [31]. There are several rehabilitation robots available. One example is the Armeo, which is specifically designed to assist in the rehabilitation of the arms and hands [32]. It is particularly useful for elderly people who have suffered a stroke, had neurological injuries, or have a condition that affects the mobility of their upper extremities. Other robots designed to assist the elderly with movement include Kompaï Assist [33], Care-O-bot [34], and Zora [35]. The Kompaï robot provides multiple services, including day and night surveillance, mobility assistance, fall detection, shopping list management, agenda organization, social connectivity, cognitive stimulation, and health monitoring [36]. It features speech recognition and can navigate autonomously in unfamiliar environments while detecting obstacles and assessing risks. Its user interface combines touchscreen controls with voice commands, and it has a handle to make standing up easier. On the other hand, the Care-O-bot is a versatile robotic system that can perform various functions, such as picking up and delivering objects, supporting independent living, assisting in security and surveillance, and welcoming and accompanying people in public spaces, like shops and museums [37]. Similar to Kompaï, Care-O-bot allows interaction via touch screen and voice commands and has a handle for ease of use. Finally, Zora is a humanoid robot equipped with sensors for visual and auditory perception. It is mainly used in rehabilitation environments and for social and leisure activities for elderly people [38].
However, the use of current guide robots is limited by several factors. The high cost resulting from advanced technologies, such as sensors, artificial intelligence (AI), and specialized materials, limits their accessibility. In addition, human-robot interaction is a barrier, as many blind and elderly people may lack experience with advanced technological devices, which hinders their use and acceptance. The configuration and customization of these robots often require specialized technical assistance, which increases the difficulty of their implementation. Moreover, the mobility and navigation capabilities of these robots are limited, especially in dynamic, unstructured, or crowded environments. These factors highlight the need to improve their accessibility, reduce their cost, and facilitate their interaction with users so that these robots can have a real impact on the daily lives of blind and elderly people once proper user-oriented operation modes are established.
This paper presents the development of a guide robot designed to assist blind and elderly people in their mobility. The needs of these user groups for assistive solutions are analyzed, with key requirements including safe navigation, reliable human–robot interaction, adaptability to environmental variability, and affordability. Existing solutions often rely on either purely wheeled or purely legged locomotion, which can limit terrain adaptability or energy efficiency.
Based on an analysis of these requirements, an innovative guide robot was designed featuring a hybrid locomotion system. The initial prototype consisted of a hexagonal platform with four symmetrically arranged wheels (two motorized at the rear and two passive) and two front legs equipped with linear actuators and wheels. This configuration aims to combine the efficiency of wheeled locomotion with the adaptability of leg-assisted motion to handle minor unevenness and improve user safety.
Following the fabrication of the first prototype, preliminary tests were performed in indoor environments, including straight-line displacement and obstacle-overcoming trials. These tests revealed several limitations related to load distribution and stability, such as excessive loading of the central wheels, loss of platform horizontality, and insufficient rear wheel traction. To address these issues, the robot was structurally redesigned. The revised prototype adopts a triangular stability configuration with a single central wheel and dual rear wheels, resulting in improved static and dynamic stability and better overall load distribution.
This paper describes the design process and experimental evaluation of the prototype, as well as the insights gained from the redesign. The results demonstrate the feasibility of the hybrid locomotion approach and lay the groundwork for further development toward a fully functional assistive guide robot.
The manuscript is organized as follows. Section 2 discusses the problems and requirements for the design of a guide robot for blind and elderly people; it also presents the conceptual design and modeling using CAD tools. Section 3 presents the results of the design and evaluation of the prototype. Section 4 provides a critical discussion of these results, analyzing their significance and possible implications. Finally, Section 5 presents the conclusions of the work.
3. Results
To analyze the feasibility of the designed guide robot during rectilinear displacement, a numerical simulation was carried out using Autodesk Inventor Professional (Version 2025). A constant angular velocity input of 15.38 rad/s was applied to the rear drive wheels, which have a diameter of 65 mm. Gravity was considered in the −Z direction, and the solver was set to an accuracy of 1 × 10⁻⁶. Two reference points were selected for the kinematic and dynamic analyses: point A, located at the geometric center of the hexagonal platform constituting the robot’s main body, with an initial position of (−314.522, −20.414, 54.473) mm, and point B, located at the right front wheel support, with an initial position of (63.896, −63.023, 175.473) mm. The motion of point B enables a direct evaluation of the active adjustment mechanism of the right front leg, whose initial length and angle were set to 12 mm and 173.31°, respectively, while the displacement of point A allows an evaluation of the overall displacement of the main platform. The trajectory of the center of mass (CoM) is analyzed as a global indicator of robot stability. The combination of points A and B and the CoM gives a global evaluation of the motion behavior of the leg–wheel assemblies and of the overall platform. To ensure realistic and stable behavior, the contact between the wheels and the ground was modeled using 3D contacts with a friction coefficient of 0.6, a stiffness of 1,000,000 N/mm, and a damping of 1000 N·s/mm. The friction and stiffness parameters of the wheel–ground contact were selected from standard values available in the literature, such as [47], with the aim of modeling realistic behavior on a hard surface, such as asphalt, without direct empirical validation. The stiffness value was chosen to model the wheels as rigid bodies, which is appropriate for the scope of the simulation. A robot mass of 1.9 kg and a payload of 2 kg for onboard equipment were considered.
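As a quick consistency check of these inputs, the commanded wheel speed can be converted into the expected forward velocity of the platform. The short Python sketch below reproduces this calculation; the variable names are ours and are not part of the simulation setup.

```python
# Consistency check: commanded wheel speed -> expected forward velocity
wheel_diameter_mm = 65.0   # rear drive wheel diameter from the simulation setup
omega_rad_s = 15.38        # commanded angular velocity of the drive wheels

wheel_radius_m = (wheel_diameter_mm / 1000.0) / 2.0
v_m_s = omega_rad_s * wheel_radius_m                  # pure rolling, no slip assumed
print(f"Expected forward speed: {v_m_s:.2f} m/s")     # ~0.50 m/s
print(f"Distance covered in 10 s: {v_m_s * 10:.2f} m")
```

The resulting value of about 0.5 m/s corresponds to the lower bound of the target speed range considered in the design requirements.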
Figure 7 shows a snapshot of the simulated rectilinear motion of the designed guide robot.
The feasibility of the proposed design in Figure 5 and Figure 6 and its expected operating performance were evaluated from the outputs of the simulated motion summarized in Table 5. On this basis, the construction of a prototype, shown in Figure 8, was undertaken.
Figure 8 shows the prototype of the guide robot built at LARM2 (https://larm2.ing.uniroma2.it/, visited on 15 September 2025). The central platform structure, made of plastic with a rectangular geometry, has been adapted with square elements that give it a shape similar to the hexagonal geometry of the conceptual design. This structure houses the electronic components and the battery. The prototype’s layout shows the spatial distribution of the subsystems and the feasibility of the hybrid locomotion concept.
It should be noted that the physical prototype differs slightly from the initial CAD model. While the CAD design employed a hexagonal platform with six wheels, the built prototype adopted a five-wheel configuration with a central wheel and dual rear wheels arranged in a triangular stability layout. This modification was introduced to improve static and dynamic operation following testing feedback. In addition, minor adjustments in dimensions and mounting details were made during prototype assembly to accommodate available components and to simplify construction, without altering the fundamental locomotion principle of the proposed hybrid wheel–leg system.
After assembling the prototype shown in Figure 8, a critical defect was identified in the design of the front leg mechanism. The mechanism, which consists of a linear actuator, a leg motor housing, a front wheel motor mount, and a front wheel motor, generates a combined torque that exceeds the capacity of the MG996R servo motor that actuates it. This resulted in two issues: (1) an insufficient driving force, which prevented lifting movements and caused the leg to fall back spontaneously under gravity, and (2) a forward-shifted weight distribution, which caused instability and a risk of rear wheel lift.
To mitigate these issues, the L12-100-50-6R actuator was replaced with the L16-50-63-6-R linear actuator [48]. This modification was crucial, as the shorter stroke of the L16 linear actuator (50 mm) resulted in a shorter actuator and, therefore, a shorter leg mechanism, reducing the torque required for its movement. Changing the actuator solved the problem of excessive torque, ensuring stable and reliable leg-lifting movement.
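To illustrate the order of magnitude involved, the sketch below estimates the gravitational holding torque at the leg pivot for two lever-arm lengths and compares it with the roughly 1 N·m stall torque typical of an MG996R-class servo. The mass and lever-arm values are illustrative assumptions, not measured properties of the prototype.

```python
# Illustrative estimate of the gravitational torque that the leg servo must
# overcome. Mass and lever-arm values are assumptions for illustration only.
G = 9.81  # gravitational acceleration, m/s^2

def holding_torque(mass_kg: float, lever_arm_m: float) -> float:
    """Torque (N*m) needed to hold the leg assembly horizontal at its pivot."""
    return mass_kg * G * lever_arm_m

SERVO_STALL_TORQUE = 1.0  # N*m, approximate MG996R rating cited in the text

# Longer assembly around the L12-100-50-6R (assumed 0.35 kg at a 0.30 m arm)
print(holding_torque(0.35, 0.30))  # ~1.03 N*m -> at or above the servo limit
# Shorter assembly around the L16-50-63-6-R (assumed 0.30 kg at a 0.20 m arm)
print(holding_torque(0.30, 0.20))  # ~0.59 N*m -> within the servo capacity
```

Under these assumed values, shortening the leg mechanism brings the required torque comfortably below the servo limit, which is consistent with the observed improvement in leg-lifting reliability.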
After an experimental evaluation of the initial prototype shown in Figure 8 through straight-line displacement and obstacle-overcoming tests, stability issues related to the mechanical configuration were identified. The tests revealed that the system’s center of gravity was displaced towards the front leg relative to the geometric center of the platform. This misalignment resulted in three main operational limitations, namely the following:
Overloading of the central wheels: This significantly restricted the robot’s freedom of rotation.
Loss of platform horizontality: Relying solely on support from the center and rear wheels compromised the effectiveness of the front leg adjustment when overcoming obstacles.
Insufficient rear wheel traction: Poor load distribution limited the robot’s ability to propel itself forward.
To address these deficiencies, the prototype underwent a structural redesign, as shown in Figure 9b. Modifications included strategically repositioning the center wheels and adopting a triangular wheel configuration. This new configuration consists of a single center wheel paired with a set of dual rear wheels to improve the system’s static and dynamic stability.
Table 6 shows the main characteristics of the designed five-wheeled guide robot prototype.
The prototype was experimentally characterized in terms of speed, autonomy, and terrain-handling capability. Straight-line tests over a measured distance revealed an average cruising speed of approximately 0.1 m/s. Although this value is below the target design range of 0.5 to 1.5 m/s, it confirms the feasibility of basic movement and highlights the stability of the hybrid wheel–leg configuration. The limited speed of the prototype is primarily due to conservative control settings and the limited power of the current servomotors. These aspects will be improved in future iterations of the design. In terms of energy autonomy, the platform uses a 14.8 V, 2.6 Ah lithium polymer battery that provides about 38.5 Wh of available energy. Based on a measured average power consumption of 6 W, the theoretical autonomy is approximately 6.4 h. Considering conversion inefficiencies and actual operating conditions, the practical autonomy can be estimated at 4.5 to 5 h.
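The autonomy figures follow directly from these battery and consumption values. The short sketch below reproduces the arithmetic; the 70–80% conversion-efficiency range used for the derating step is an assumption, not a measured quantity.

```python
# Battery autonomy estimate from the values reported above
voltage_V = 14.8
capacity_Ah = 2.6
avg_power_W = 6.0

energy_Wh = voltage_V * capacity_Ah             # ~38.5 Wh
ideal_autonomy_h = energy_Wh / avg_power_W      # ~6.4 h
print(f"Ideal autonomy: {ideal_autonomy_h:.1f} h")

# Derating for conversion losses and real operating conditions;
# the 70-80% efficiency range is assumed for illustration.
for efficiency in (0.70, 0.80):
    print(f"At {efficiency:.0%} efficiency: {ideal_autonomy_h * efficiency:.1f} h")
```

The derated values of roughly 4.5 to 5.1 h match the practical autonomy estimate reported above.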
The total cost of prototype assembly, including mechanical and electronic components, is about 255 euros, well below the target threshold of 500 euros. This cost includes expenditures for actuators, sensors, structure materials, and control unit, demonstrating that the proposed design can be realized with affordable, readily available market components.
The designed test protocol includes repeated trials of straight-line motion, turning maneuvers, and obstacle overcoming. The robot prototype demonstrated proper and consistent performance in both linear displacement and rotational movements, confirming the efficiency of the locomotion and control strategy. In the obstacle-overcoming tests, the results showed a 100% success rate when negotiating steps of up to 30 mm, while higher obstacles revealed limitations due to the maximum torque of the leg servomotors (about 1 N·m), which prevented reliable clearance.
Validation tests confirmed that the new design meets the considered functional requirements. The new design demonstrates improved obstacle negotiation and more balanced mass distribution.
Figure 9b shows the redesigned prototype overcoming a 40-mm-high step.
The main improvements introduced by this redesign are the following:
Optimization of mass distribution through geometric reconfiguration of the chassis.
Implementation of a triangular support configuration, improving stability both in motion and at rest.
Maintenance of the platform’s horizontal orientation during front leg adjustment.
Reduced rear wheel slip due to improved traction distribution.
This redesign effectively addresses the shortcomings observed in the initial prototype in Figure 9a, while maintaining the guide robot’s fundamental functional objectives, particularly its ability to navigate autonomously around obstacles.
Figure 10 shows snapshots from an experimental test in which the redesigned prototype successfully overcomes a 20-mm-high step. The first image shows the robot approaching the step while keeping its platform horizontal. The second and third images depict the coordinated deployment of the front legs’ linear actuators, which raise the front wheels and position them over the obstacle. The fourth image shows the rest of the robot body ascending stably, with the platform preserving its horizontal orientation throughout the maneuver. This performance is attributed to the optimized mass distribution, the triangular support configuration, and the improved wheel traction, which ensure continuous movement and robust stability when overcoming obstacles.
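The maneuver described above can be summarized as a simple sequence of coordinated actions. The sketch below outlines that sequence in Python; the robot interface, method names, and speed values are hypothetical placeholders and do not correspond to the prototype’s actual firmware.

```python
# Hypothetical outline of the step-overcoming maneuver illustrated in Figure 10.
# All method names, speeds, and thresholds are illustrative placeholders.

RELIABLE_STEP_LIMIT_MM = 30  # height cleared with a 100% success rate in the tests

def climb_step(robot, step_height_mm: float) -> bool:
    if step_height_mm > RELIABLE_STEP_LIMIT_MM:
        return False                          # beyond actuator torque/kinematic limit
    robot.drive_forward(speed_m_s=0.1)        # approach with the platform level
    robot.extend_front_actuators()            # raise the front wheels above the step edge
    robot.drive_forward(speed_m_s=0.05)       # place the front wheels on the step
    robot.retract_front_actuators()           # let the body rise while staying level
    robot.drive_forward(speed_m_s=0.1)        # bring the central and rear wheels over
    return True

class _MockRobot:
    """Minimal stand-in so the outline can be executed for illustration."""
    def drive_forward(self, speed_m_s): print(f"drive at {speed_m_s} m/s")
    def extend_front_actuators(self): print("extend front linear actuators")
    def retract_front_actuators(self): print("retract front linear actuators")

print(climb_step(_MockRobot(), 20))  # -> True for the 20 mm step of Figure 10
```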
Figure 9b shows the robot overcoming a 40 mm high step, which corresponds to the kinematic limit of the leg–wheel assembly and of the available actuator torque. Beyond this height, slippage and loss of stability were observed, preventing the step from being overcome. In Figure 10, the prototype successfully overcomes a 20 mm step with a success rate of 100% over 10 tests, while the 40 mm step in Figure 9b was overcome with a success rate of only 40% over 10 tests.
Test results show that the prototype meets the objectives that are defined in the design requirements regarding weight, cost, and autonomy, while speed optimization and improved obstacle-handling capacity will be addressed in future work.
The sensor equipment has been selected to meet the user’s main needs and requirements, following the conceptual design in Figure 11. It allows users to be aware of their environment while moving and to move safely, interacting with the guide robot through voice feedback. The guide robot’s autonomous navigation capability is based on proper path planning and obstacle avoidance algorithms, as well as appropriate sensors. The IMU will be installed in the central part of the robot’s body, close to its main processing unit. This position enables the robot to take precise measurements of its inclination, acceleration, and orientation in real time. The LiDAR sensor, mounted on the front of the robot, performs 360-degree scanning of the surrounding environment. It generates a three-dimensional map and detects obstacles at varying distances and heights, enabling precise localization and dynamic path planning. A front-facing camera complements the LiDAR by capturing visual information in the direction of travel. This allows the robot to recognize objects and visual landmarks, detect humans, and perform real-time scene analysis to enhance the user’s situational awareness. In addition to the primary sensors, a distributed array of infrared (IR) sensors is strategically placed around the platform. These sensors provide short-range obstacle detection and edge-following capabilities, which are especially useful for avoiding low-lying objects and detecting changes in terrain, such as curbs and stair edges. An ultrasonic sensor is installed near the front of the robot to enhance obstacle detection in areas where IR or LiDAR might struggle, such as with transparent surfaces or soft materials. This sensor improves the robot’s ability to perceive its surroundings under various environmental conditions. To support global localization and outdoor navigation, the robot is equipped with a GPS module, enabling geolocation and route tracking in open spaces. To support effective user interaction, the robot incorporates directional microphones and a voice command interface, placed at the top section of the robot. These components enable speech recognition for receiving verbal instructions and provide voice feedback for auditory guidance, supporting inclusive and adaptive human–robot interaction. Together, these sensors allow the guide robot to operate effectively in dynamic environments, maintain situational awareness, and provide intelligent, personalized assistance to blind and elderly users.
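As an illustration of how readings from this planned sensor suite could be combined for basic short-range navigation decisions, a minimal sketch is given below. The sensor interface, thresholds, and decision labels are hypothetical assumptions; as noted in Section 4, these perception modules are not yet implemented on the prototype.

```python
# Hypothetical fusion of the planned sensors for a basic stop/slow/continue
# decision. Interfaces and thresholds are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class SensorReadings:
    lidar_min_front_m: float    # closest LiDAR return in the travel direction
    ultrasonic_front_m: float   # covers glass/soft surfaces that LiDAR may miss
    ir_edge_drop: bool          # True if an IR sensor detects a curb or stair edge

def navigation_decision(r: SensorReadings) -> str:
    if r.ir_edge_drop:
        return "stop_and_warn"              # drop-off ahead: stop and warn the user by voice
    clearance = min(r.lidar_min_front_m, r.ultrasonic_front_m)
    if clearance < 0.5:
        return "stop_and_warn"
    if clearance < 1.5:
        return "slow_and_replan"            # obstacle ahead: reduce speed and replan the path
    return "continue"

print(navigation_decision(SensorReadings(2.0, 1.8, False)))  # -> continue
```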
Figure 11 aims to illustrate the layout and physical integration of different sensors in the prototype design. However, details on accuracy data for the LiDAR and IMU are not included, since the scope of the paper focuses on the mechanical design and feasibility of the hybrid locomotion system.
In view of the intended use with blind and elderly users, particular attention is given to safety considerations. While the presented prototype focuses on validating design feasibility, the final design will incorporate several key safety features, including a physical emergency stop button, pre-programmed speed limits to prevent sudden movements, and fail-safe modes that ensure the robot enters a safe state in the event of system failure. In addition, sensor redundancy will be implemented for addressing critical situations, such as obstacle detection and operation in complex environments. Furthermore, the design and operation will be analyzed through a formal risk assessment according to the relevant safety standards for service robots.
4. Discussion
The current prototype has a total mass of 1.9 kg, which is slightly below the 2–3 kg target range defined in the design requirements. This limited weight ensures portability and ease of handling. However, future iterations will reinforce certain structural elements to align more closely with the specified range. With a cost of about 255 euros, the prototype is well below the established threshold of 500 euros, confirming the affordability of the proposed design. Experimental tests indicated a practical endurance of approximately 4.5–5 h, which is consistent with the design objective. Though the measured average speed of 0.1 m/s is below the initial specification range, it is sufficient for demonstrating the proper operation of the hybrid locomotion system. Future work will focus on improving the drive system and control strategy to increase cruising speed. Additionally, battery discharge tests will be performed under realistic load conditions to validate endurance in operational scenarios.
The current prototype integrates two inertial measurement units (IMUs) as its primary sensing elements. One IMU is installed near the geometric center of the platform, close to the main processing unit, where it provides real-time measurements of inclination, acceleration, and orientation for maintaining stable locomotion and correcting the robot’s trajectory. The second IMU is mounted directly on the right front wheel motor to monitor the movement of the corresponding leg. These measurements are used to analyze the kinematics of the leg–wheel assembly during obstacle-overcoming tests and to ensure stable operation.
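As a generic illustration of how the inclination estimates mentioned above are typically derived from raw IMU data, the sketch below implements a standard complementary filter. It is a textbook example under assumed sampling parameters, not the prototype’s actual onboard software.

```python
import math

def complementary_filter(pitch_prev, accel, gyro_rate, dt, alpha=0.98):
    """Generic pitch estimate fusing accelerometer and gyroscope data.

    accel: (ax, ay, az) in m/s^2; gyro_rate: pitch rate in rad/s.
    This is a standard textbook filter, not the prototype firmware.
    """
    ax, ay, az = accel
    pitch_acc = math.atan2(-ax, math.hypot(ay, az))   # gravity-referenced pitch
    pitch_gyro = pitch_prev + gyro_rate * dt          # gyro-integrated pitch
    return alpha * pitch_gyro + (1 - alpha) * pitch_acc

# Example: level platform with a small gyro reading, 50 Hz sampling assumed
pitch = complementary_filter(0.0, (0.0, 0.0, 9.81), 0.01, dt=0.02)
print(math.degrees(pitch))   # small angle close to 0 degrees
```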
Additional perception and interaction modules—including LiDAR for environmental mapping, a front-facing camera for object detection, infrared and ultrasonic sensors for short-range obstacle detection, and a GPS module for outdoor navigation—are not yet implemented but are planned for future iterations. Their integration will enable autonomous navigation, robust localization, and operation in more complex environments.
Likewise, the voice interaction interface, consisting of directional microphones and speech synthesis for feedback, is still at the design stage. Its implementation, together with structured human–robot interaction (HRI) trials using standard metrics such as guidance accuracy, perceived safety, and user workload, is planned for future work.
Although limited, the sensorization implemented in the prototype ensures that the robot’s expected operation and basic sensing have been validated first, providing a reliable foundation for the subsequent integration of navigation and interaction capabilities.
Simulation work was carried out both to define the design solution and to check its feasibility, whereas the performance characteristics were determined through experimental tests, including more demanding situations. For this reason, a direct comparison between simulation computations and test results was neither possible nor particularly useful for characterizing the prototype performance; nevertheless, the design model can be considered validated with respect to the design requirements.
The experimental results from the built prototype identified problems in the mechanical design affecting the mobility and stability of the proposed guide robot, which were addressed with the modifications discussed with reference to Figure 8 and Figure 9. The implementation of the new design configuration improved the robot’s ability to maintain its horizontal orientation during complex maneuvers and enabled a more efficient action of the front legs, which are equipped with linear actuators. Experimental validation of the redesigned prototype demonstrated that the robot can overcome obstacles without compromising its stability. In addition, the platform was verified to remain level during the process, which is important to protect the electronics and ensure user safety. Wheel slippage was also eliminated, which improved trajectory control during travel.
The proposed mobile guide robot can be considered a mobile platform housing sensorization and human-machine interaction units to provide an intelligent, responsive service robot for assisting blind and elderly people, with the following features:
The lightweight, compact design ensures portability, so that users can even handle the mobile guide robot for easy transportation.
The user-oriented design solution based on off-the-shelf market components provides a low-cost system with low-cost operation and maintenance.
The modular design enables customization, since the on-board equipment can be selected and adjusted as a function of the specific needs and capabilities of each user.
The experimental protocol included repeated straight-line motion tests, turning maneuvers, and obstacle-crossing trials in indoor environments. These trials confirmed the design feasibility of the hybrid locomotion system and highlighted the robot’s ability to reliably overcome steps of up to 30 mm. However, the current evaluation does not cover outdoor surfaces, variable slopes, multiple obstacle heights beyond this threshold, or operation under different payload conditions. These aspects will be addressed in future testing campaigns to provide a broader and more structured validation of performance.
Quantitative experimental evidence is presented with the reported test results to validate the feasibility of the proposed design as a proof of concept and to characterize the performance of the prototype with respect to the planned targets in terms of locomotion. Table 6, which summarizes the characteristics of the prototype, shows that the proposed solution satisfactorily meets the requirements in Table 2.
Future developments will focus on improvements in design and operation aspects, based on the results of field testing of the built prototype in home and outdoor environments by potential users with their differentiated characteristics.