1. Introduction
Electric golf carts are widely used for transportation within large but restricted areas, such as hospital grounds and university campuses, due to their energy efficiency and compact design. These vehicles are particularly useful for carrying passengers and luggage, offering a practical alternative to conventional cars. Additionally, since they are electrically powered, they produce no tailpipe emissions, making them an environmentally friendly choice for public spaces.
Effective transportation management is crucial in ensuring both efficiency and passenger safety. Studies indicate that human factors, such as driver fatigue and reduced attentiveness, are the primary causes of scheduling errors and accidents. This problem is intensified when drivers work irregular shifts, such as alternating between day and night, which elevates their workloads and safety risks. Moreover, controlling the vehicle speed and steering according to the road conditions is vital for passenger comfort and system reliability. Distracted driving has been shown to increase crash risks and impair driver performance [
1]. To reduce the impacts of human error, the cited report highlights interventions such as automated speed enforcement and vehicle safety technologies, including electronic stability control and advanced braking systems, which can support drivers and mitigate the consequences of unsafe behavior. Furthermore, recent studies on trust assessment in autonomous golf carts emphasize the importance of user confidence and interaction in the deployment of such systems [
2].
Regarding the above challenges, autonomous electric golf carts present a viable solution. In this research, we used the Club Car Precedent four-seat electric golf cart (as shown in
Figure 1) as the base platform for the development of an autonomous system. Through the integration of an optimized transportation system, the vehicle can enhance both time management and safety while minimizing human-related risks. Autonomous vehicle systems are designed to sense, interpret, and act with minimal or no human intervention. According to the SAE J3016 standard, their levels of automation range from fully manual (level 0) to fully autonomous (level 5) [
3]. As these systems evolve, they are typically structured around three main modules: perception, planning, and control [
4]. A notable example of full autonomy is represented by the successful 103 km journey of the Mercedes-Benz S-Class S 500 INTELLIGENT DRIVE along the historic Bertha Benz Memorial Route in Germany [
5]. Moreover, several recent developments demonstrate the implementation of autonomous navigation and control systems in electric golf carts, such as the intelligent driving assistance system proposed by Liu et al. [
6], the self-driving CART platform using LiDAR and stereo vision presented by AlSamhouri et al. [
7], and the deep learning-based navigation approach described by Panomruttanarug et al. [
8].
Sensor technology plays a fundamental role in the perception systems of autonomous vehicles. According to Hirz et al. [
9], sensor selection must align with the functional requirements defined according to the levels of automated driving, with current and emerging technologies evaluated for their suitability for object detection and classification. Cameras and LiDAR are widely used; cameras assist with object recognition, while LiDAR offers precise depth and spatial measurements that are independent of lighting. Chen et al. [
10] proposed a multi-view 3D detection framework that fuses LiDAR point clouds and RGB images to enhance the accuracy of 3D object detection. Similarly, many autonomous systems adopt sensor fusion strategies to improve perception reliability. Recent studies highlight the effectiveness of multi-sensor fusion and segmentation using deep reinforcement learning and DQN-based frameworks [
11], as well as enhanced obstacle avoidance through adaptive fusion algorithms combining LiDAR, radar, and cameras [
12]. For cooperative environments, LiDAR–depth camera fusion has also been applied to improve safety in human–robot interaction systems [
13]. The emergence of neural rendering and robust 3D object detection techniques, such as SplatAD [
14], robustness-aware models [
15], and advanced voxel-based approaches [
16], has led to further enhancements in environmental understanding. For autonomous golf carts, LiDAR remains the preferred sensing modality due to its robustness under variable lighting conditions [
17].
In addition to real-time sensing, high-definition (HD) maps are essential for localization, route planning, and situational awareness in autonomous systems. Poggenhans et al. [
18] introduced Lanelet2, an open-source HD map framework designed for a wide range of applications in highly automated driving, including but not limited to localization and motion planning. To achieve accurate localization, various map representations, such as Gaussian mixture models and 3D point cloud maps, are aligned with real-time sensor data using registration algorithms such as Iterative Closest Point (ICP) or Normal Distributions Transform (NDT). Wolcott and Eustice [
19] demonstrated robust LiDAR-based localization using multi-resolution Gaussian mixture maps. Liu et al. [
20] employed the NDT within a SLAM framework to reconstruct high-precision point cloud maps. More recent advancements, including LiDAR–IMU fusion with uncertainty estimation [
21], long-term odometry and mapping systems [
22], and integrated GNSS/IMU/LiDAR mapping [
23], have demonstrated centimeter-level precision. A comprehensive review by Fan et al. [
24] further categorized multi-sensor fusion SLAM systems into LiDAR–IMU, visual–IMU, LiDAR–visual, and LiDAR–IMU–visual combinations and emphasized the advantages of hybrid localization methods for autonomous platforms.
Object detection is critical for autonomous vehicles to navigate safely in dynamic environments. You Only Look Once (YOLO) is a widely used real-time algorithm for 2D object detection that processes images using a single neural network for fast and efficient identification [
25]. For 3D perception, models such as MV3D integrate RGB images with LiDAR point clouds to generate accurate 3D bounding boxes, enhancing the spatial understanding of surrounding objects [
10]. Recent research has extended this capability with YOLOv11 for high-precision vehicle detection [
26] and object recognition in complex environments [
27].
In this study, all key modules, including perception, localization, planning, and control, were integrated into the Club Car Precedent electric golf cart platform. A LiDAR sensor and camera were installed, along with onboard CPU and GPU units for real-time processing. Localization was achieved using HD maps based on point cloud data, with alignment performed through the Normal Distributions Transform (NDT) algorithm. Both 2D and 3D object detection techniques were incorporated into the system to identify static and dynamic obstacles. The final system provides self-driving capabilities, including automated acceleration, braking, steering, and collision avoidance, making it suitable for low-speed environments such as campuses and public facilities. This paper evaluates the performance of the autonomous golf cart system in controlled campus environments to validate its operational accuracy and reliability.
2. Electric Golf Cart and Sensor Testing
The steer-by-wire mechanism integrates an electric motor and an encoder, using electrical signals to regulate the cart’s steering angle. The throttle-by-wire system controls the electrical voltage supplied to the drive motor. Both systems are governed by closed-loop control, with an Arduino UNO R3 performing the PID computation. Meanwhile, the brake-by-wire system manages the fluid pressure in the motor brake system. The throttle, brake, and steering wheel setup used in this research is illustrated in
Figure 2. These systems, designed for the study of PID control [
28], collectively enhance the electric golf cart’s driving control.
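For illustration, the following Arduino-style sketch shows one way such a PID steering loop can be structured. It is a minimal sketch, not the firmware used on the cart: the pin assignments, gains, loop period, serial protocol, and the readEncoderAngle() helper are all hypothetical.

```cpp
// Hypothetical sketch of an Arduino-based PID steering loop of the kind
// described above. Pins, gains, and the encoder helper are illustrative.
const int PWM_PIN = 9;   // motor driver PWM input (assumed wiring)
const int DIR_PIN = 8;   // motor driver direction input (assumed wiring)
const float DT = 0.01f;  // 10 ms control period (assumed)

float kp = 2.0f, ki = 0.1f, kd = 0.05f;  // placeholder PID gains
float integral = 0.0f, prevError = 0.0f;
float targetAngle = 0.0f;                // commanded steering angle (degrees)

// Stub: replace with actual quadrature-encoder decoding.
float readEncoderAngle() { return 0.0f; }

void setup() {
  pinMode(PWM_PIN, OUTPUT);
  pinMode(DIR_PIN, OUTPUT);
  Serial.begin(115200);  // receives steering setpoints from the onboard PC
}

void loop() {
  if (Serial.available() > 0) {
    targetAngle = Serial.parseFloat();  // new steering setpoint (degrees)
  }
  float error = targetAngle - readEncoderAngle();
  integral += error * DT;
  float derivative = (error - prevError) / DT;
  prevError = error;

  float u = kp * error + ki * integral + kd * derivative;  // control effort
  digitalWrite(DIR_PIN, u >= 0 ? HIGH : LOW);              // motor direction
  analogWrite(PWM_PIN, constrain((int)fabs(u), 0, 255));   // motor magnitude
  delay(10);
}
```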
The electric golf cart is equipped with a Logitech C920 webcam and a Velodyne Puck VLP-16 LiDAR (as shown in
Figure 3). To ensure efficient real-time processing of both the camera and LiDAR data, a Mini-ITX PC with an Intel Core i7-11700K and an Nvidia GeForce RTX 3080 Ti serves as the central processing unit and graphics processing unit, respectively; it is installed at the rear of the cart, as shown in
Figure 4. The LiDAR operates on a 12 Vdc supply from the battery, while the cart’s battery provides 48 Vdc to the processor unit. The webcam is connected to the central processor unit via USB.
The camera and LiDAR components require dedicated software to process the generated data. The camera transmits data as a three-channel (RGB) image array, while the LiDAR provides point cloud scanning data. The camera, however, requires calibration to compensate for errors and distortions in the acquired images. This is achieved through intrinsic camera calibration, which involves capturing images of a checkerboard at various angles (demonstrated in
Figure 5). The data from these images are then used to compute the camera’s intrinsic matrix and distortion coefficients, allowing the precise adjustment and correction of the camera’s output.
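The checkerboard procedure can be illustrated with a short OpenCV sketch. This is a generic example, not the calibration code used in this study: the board dimensions (9 × 6 inner corners), square size, image count, and file names are assumptions.

```cpp
// Minimal intrinsic-calibration sketch (OpenCV); board geometry and file
// names are illustrative placeholders.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
  const cv::Size board(9, 6);   // inner corners per row/column (assumed)
  const float square = 0.025f;  // square edge length in meters (assumed)

  std::vector<std::vector<cv::Point3f>> objectPoints;
  std::vector<std::vector<cv::Point2f>> imagePoints;

  // Template of 3D corner positions on the planar board (z = 0).
  std::vector<cv::Point3f> objTemplate;
  for (int r = 0; r < board.height; ++r)
    for (int c = 0; c < board.width; ++c)
      objTemplate.emplace_back(c * square, r * square, 0.0f);

  cv::Size imageSize;
  for (int i = 0; i < 20; ++i) {  // ~20 views captured at varied angles
    cv::Mat img = cv::imread("calib_" + std::to_string(i) + ".png");
    if (img.empty()) continue;
    imageSize = img.size();
    std::vector<cv::Point2f> corners;
    if (cv::findChessboardCorners(img, board, corners)) {
      cv::Mat gray;
      cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
      cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                       cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.001));
      imagePoints.push_back(corners);
      objectPoints.push_back(objTemplate);
    }
  }

  cv::Mat K, dist;  // intrinsic matrix and distortion coefficients
  std::vector<cv::Mat> rvecs, tvecs;
  double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                   K, dist, rvecs, tvecs);
  std::cout << "RMS reprojection error: " << rms << "\nK = " << K << std::endl;
  // cv::undistort(raw, corrected, K, dist); then corrects live frames.
  return 0;
}
```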
4. Autonomous Driving System
Badue et al. [
31] stated that the architecture of the autonomous system in a self-driving car is typically divided into two main components: the perception system and the decision-making system. In this section, the development of the autonomous system is described in the context of this structure, relying on the robot operating system (ROS) and utilizing prepared data obtained from sensors combined with high-definition map data. The acquired data are subsequently processed to facilitate the effective control and handling of the golf cart.
4.1. Localization
The use of the Normal Distributions Transform (NDT) algorithm for autonomous vehicle localization has been explored in several studies. The authors of [
32] generated a 3D point cloud map of the Gwangju Institute of Science and Technology using a VLP-32 LiDAR and demonstrated that NDT enables real-time localization. Likewise, Liu et al. [
33] proposed using NDT as an alternative to GPS in environments where satellite signals are obstructed by buildings. They showed that the mean square error in NDT localization was comparable to that of GPS, supporting its effectiveness in GPS-denied environments.
Building upon this foundation, our research employs NDT localization within a controlled service area using a point cloud map generated from LiDAR data. The localization process begins with configuring commands in the robot operating system (ROS) to acquire both the pre-built HD map and real-time LiDAR scan data. These datasets are processed using the NDT algorithm, as demonstrated in the indoor testing illustration shown in
Figure 6.
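For readers unfamiliar with NDT registration, the following PCL-based sketch shows a single scan-to-map alignment of the kind performed continuously during localization. It is a minimal sketch under assumed parameters (leaf size, ND-grid resolution, iteration limit, and file names), not the configuration tuned for this study.

```cpp
// Sketch of one NDT scan-to-map alignment step (PCL); parameter values are
// typical defaults, not the values used on the cart.
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/filters/approximate_voxel_grid.h>
#include <pcl/registration/ndt.h>
#include <iostream>

int main() {
  using Cloud = pcl::PointCloud<pcl::PointXYZ>;
  Cloud::Ptr map(new Cloud), scan(new Cloud), filtered(new Cloud);
  pcl::io::loadPCDFile("hd_map.pcd", *map);  // pre-built map (assumed file)
  pcl::io::loadPCDFile("scan.pcd", *scan);   // one VLP-16 scan (assumed file)

  // Downsample the live scan so registration runs in real time.
  pcl::ApproximateVoxelGrid<pcl::PointXYZ> voxel;
  voxel.setLeafSize(0.2f, 0.2f, 0.2f);
  voxel.setInputCloud(scan);
  voxel.filter(*filtered);

  pcl::NormalDistributionsTransform<pcl::PointXYZ, pcl::PointXYZ> ndt;
  ndt.setTransformationEpsilon(0.01);  // convergence threshold
  ndt.setStepSize(0.1);                // More-Thuente line search step
  ndt.setResolution(1.0);              // voxel size of the ND grid (m)
  ndt.setMaximumIterations(35);
  ndt.setInputTarget(map);
  ndt.setInputSource(filtered);

  // Initial guess, e.g. the previous pose or the manually set origin (0, 0, 0).
  Eigen::Matrix4f guess = Eigen::Matrix4f::Identity();
  Cloud aligned;
  ndt.align(aligned, guess);
  Eigen::Matrix4f pose = ndt.getFinalTransformation();  // cart pose in map frame
  std::cout << "converged: " << ndt.hasConverged()
            << " score: " << ndt.getFitnessScore() << "\n" << pose << std::endl;
  return 0;
}
```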
During localization, the system first initializes its position by assigning an origin point, typically (0, 0, 0), within the map’s coordinate frame. To evaluate the localization performance, twelve reference points were selected along the golf cart’s path. Coordinates for these points were extracted from the point cloud map using CloudCompare and compared with manually measured positions. At each reference point, the cart’s reference position was defined as the midpoint between its rear tires. These physical locations were marked on the testing ground using tape, as illustrated in
Figure 7. The calibration positions are presented in
Figure 8.
4.2. Two-Dimensional Object Detection
The training data were collected using the golf cart, aiming to capture images containing humans, cars, and motorbikes. YOLOv11 was selected as the model in this work because it is the latest version in the YOLO series, featuring architectural improvements such as C3k2 blocks, Spatial Pyramid Pooling Fast (SPPF), and C2PSA attention mechanisms. These enhancements improve the detection performance for small and partially occluded objects, allowing the earlier and more reliable recognition of pedestrians, vehicles, and motorcycles, which may appear smaller or partly hidden in the scene. Such capabilities are particularly valuable for autonomous driving systems, where rapid detection in complex outdoor environments supports safe navigation and collision avoidance. After model selection, the dataset was annotated using Supervisely and divided into training and testing subsets, representing 90% and 10% of the total images, respectively. The model was trained and tested using data captured under daytime conditions, aligning with the golf cart’s intended inter-building operation during daylight hours.
To quantify the object detection performance on the test set, the mean average precision (mAP) is calculated, serving as a metric to assess the overall effectiveness of the YOLOv11 model. The results are compared with those of a YOLOv11 model trained on the COCO dataset to assess the benefit of training on the environment encountered by the golf cart. The model’s weights are further tested at varying times of day to assess its positioning accuracy and its precision in estimating the size of detected objects.
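As a reminder of how mAP at an IoU of 0.5 scores each detection, the sketch below shows the underlying intersection-over-union test: a prediction counts toward precision only when it overlaps a same-class ground-truth box with IoU of at least 0.5. The (x, y, w, h) box layout is an assumption for illustration.

```cpp
// Minimal IoU helper of the kind used when scoring detections at mAP@0.5.
#include <algorithm>

struct Box { float x, y, w, h; };  // top-left corner plus width and height

float iou(const Box& a, const Box& b) {
  float x1 = std::max(a.x, b.x);
  float y1 = std::max(a.y, b.y);
  float x2 = std::min(a.x + a.w, b.x + b.w);
  float y2 = std::min(a.y + a.h, b.y + b.h);
  float inter = std::max(0.0f, x2 - x1) * std::max(0.0f, y2 - y1);
  float uni = a.w * a.h + b.w * b.h - inter;
  return uni > 0.0f ? inter / uni : 0.0f;
}

bool isTruePositive(const Box& pred, const Box& gt) {
  return iou(pred, gt) >= 0.5f;  // IoU threshold used for mAP@0.5
}
```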
Figure 9 shows how the person and golf cart classes are labeled, and
Figure 10 shows the labeling of the car and motorcycle classes.
4.3. Point Cloud Clustering
Euclidean clustering is employed to segment point cloud data into distinct groups representing individual objects in the environment. The authors of [
34] demonstrated the effectiveness of this method for real-time object identification around autonomous vehicles. In their approach, point cloud data obtained from LiDAR were first filtered using a voxel grid filter and then divided into smaller sections through slicing. The ground plane was removed using the RANSAC algorithm, which enabled the identification and exclusion of points lying on the same plane. The remaining non-ground points were then processed using Euclidean clustering, grouping nearby points and identifying them as individual objects.
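A PCL-based sketch of this filter, ground-removal, and clustering pipeline is given below; the leaf size, plane distance threshold, and cluster tolerances are plausible placeholders rather than the values tuned for the cart.

```cpp
// Voxel-grid downsampling, RANSAC ground removal, and Euclidean clustering
// (PCL); all thresholds are illustrative.
#include <pcl/point_types.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/segmentation/extract_clusters.h>
#include <pcl/search/kdtree.h>
#include <vector>

std::vector<pcl::PointIndices> clusterScan(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& raw) {
  using Cloud = pcl::PointCloud<pcl::PointXYZ>;

  // 1. Downsample with a voxel grid filter.
  Cloud::Ptr down(new Cloud);
  pcl::VoxelGrid<pcl::PointXYZ> voxel;
  voxel.setInputCloud(raw);
  voxel.setLeafSize(0.1f, 0.1f, 0.1f);
  voxel.filter(*down);

  // 2. Remove the ground plane with RANSAC.
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  pcl::PointIndices::Ptr ground(new pcl::PointIndices);
  pcl::ModelCoefficients::Ptr coeff(new pcl::ModelCoefficients);
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.15);  // points within 15 cm of the plane
  seg.setInputCloud(down);
  seg.segment(*ground, *coeff);

  Cloud::Ptr objects(new Cloud);
  pcl::ExtractIndices<pcl::PointXYZ> extract;
  extract.setInputCloud(down);
  extract.setIndices(ground);
  extract.setNegative(true);       // keep everything except the ground
  extract.filter(*objects);

  // 3. Group the remaining points into per-object clusters.
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(objects);
  std::vector<pcl::PointIndices> clusters;
  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(0.5);     // max gap between points of one object (m)
  ec.setMinClusterSize(20);
  ec.setMaxClusterSize(25000);
  ec.setSearchMethod(tree);
  ec.setInputCloud(objects);
  ec.extract(clusters);
  return clusters;
}
```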
In this study, the Euclidean clustering method is implemented by scripting commands to extract LiDAR data and initiate the clustering process. The primary objective is to support three-dimensional object detection. After clustering, the results are evaluated in terms of the object positioning accuracy and the precision of object size estimation. The results of the point cloud clustering process are illustrated in
Figure 11.
4.4. Three-Dimensional Object Detection
In this work, 3D object detection in the autonomous driving system uses the robot operating system to process the clustered point cloud data obtained from the surroundings of the golf cart. These clusters represent both objects and the ambient environment and are associated with the 2D bounding boxes generated by the YOLOv11 object detection model. The approach takes the middle point of the clustered point cloud associated with each object and compares it with the middle pixel of the object within the detected image. This comparison relies on extrinsic camera–LiDAR calibration.
This process involves the creation of 3D bounding boxes encapsulating each group of point cloud data. These bounding boxes serve a dual purpose: elucidating the dimensions of the detected object and contributing to the localization of this object within a grid map. The 3D object detection process is demonstrated in
Figure 12.
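The centroid-to-pixel comparison can be made concrete with a short sketch of the projection step, assuming a standard pinhole model and a known extrinsic calibration; the calibration symbols (R, t, fx, fy, cx, cy) and the half-box matching tolerance are illustrative assumptions.

```cpp
// Project a LiDAR cluster centroid into the image with the extrinsic (R, t)
// and intrinsic (fx, fy, cx, cy) calibration, then test it against the
// center of a 2D detection box. Calibration values are placeholders.
#include <Eigen/Dense>
#include <cmath>

struct Intrinsics { float fx, fy, cx, cy; };

// Returns true if the centroid projects inside the 2D box (x, y, w, h).
bool matchesDetection(const Eigen::Vector3f& centroidLidar,
                      const Eigen::Matrix3f& R, const Eigen::Vector3f& t,
                      const Intrinsics& K,
                      float bx, float by, float bw, float bh) {
  Eigen::Vector3f p = R * centroidLidar + t;  // LiDAR frame -> camera frame
  if (p.z() <= 0.0f) return false;            // behind the camera
  float u = K.fx * p.x() / p.z() + K.cx;      // pinhole projection
  float v = K.fy * p.y() / p.z() + K.cy;
  // Compare with the detection's middle pixel; half-box tolerance assumed.
  float du = std::fabs(u - (bx + 0.5f * bw));
  float dv = std::fabs(v - (by + 0.5f * bh));
  return du <= 0.5f * bw && dv <= 0.5f * bh;
}
```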
4.5. Grid Map Generation
Autonomous driving operations are managed through decision-making based on a 2D grid map. Here, the grid map is generated using the high-definition map, point cloud data, traffic path information, cart localization data, 3D object detection results, and point cloud scanning data from the cart. The data are processed at a grid resolution of 0.05 m.
In grid map generation, each cell is assigned a value of either 1, indicating the position of an object, or 0, indicating a feasible path for the cart. Precision in positioning is achieved by integrating data from the point cloud scan and the point cloud map. Additionally, the height of each object is considered in this process, and restrictions are imposed so that the cart is not permitted to traverse areas identified as occupied in both the point cloud scan and the point cloud map data.
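A minimal sketch of such a binary occupancy-grid update is shown below, assuming the 0.05 m cell size stated above; the map extent, origin, and the height band used to reject ground and overhead points are hypothetical.

```cpp
// Binary occupancy grid (1 = obstacle, 0 = feasible path) at 0.05 m
// resolution; dimensions, origin, and height limits are illustrative.
#include <cmath>
#include <cstdint>
#include <vector>

struct GridMap {
  static constexpr float kResolution = 0.05f;  // meters per cell
  int width = 2000, height = 2000;             // 100 m x 100 m area (assumed)
  float originX = -50.0f, originY = -50.0f;    // map origin in meters
  std::vector<uint8_t> cells = std::vector<uint8_t>(width * height, 0);

  // Mark a point as occupied if it is tall enough to obstruct the cart.
  void markPoint(float x, float y, float z) {
    if (z < 0.1f || z > 2.0f) return;  // skip ground/overhead points (assumed band)
    int cx = static_cast<int>(std::floor((x - originX) / kResolution));
    int cy = static_cast<int>(std::floor((y - originY) / kResolution));
    if (cx < 0 || cy < 0 || cx >= width || cy >= height) return;
    cells[cy * width + cx] = 1;        // 1 = obstacle, 0 = feasible path
  }
};
```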
4.5.1. Path Planning
Sedighi et al. (2019) [
35] proposed the use of the A* algorithm for autonomous vehicle path planning in parking lot scenarios. Their simulations, conducted over 1000 runs under various conditions, showed that the A* algorithm enabled the efficient identification of the shortest path and the successful avoidance of collisions with both static and dynamic obstacles. Considering a different domain, Liu et al. (2019) [
20] addressed the limitations of the traditional A* algorithm in maritime navigation, where additional factors such as obstacle risks, traffic separation rules, vessel maneuverability, and water currents must be considered. They proposed an improved A* algorithm that integrates these risk models to balance the path length with navigation safety, demonstrating its effectiveness through simulations and real-world scenarios.
Building on these insights, our path planning system applies the A* algorithm using data extracted from the grid map, enabling the identification of feasible paths for the golf cart, in combination with information from the high-definition map. The data are processed to generate a collision-free path, using A* to evaluate multiple route options and selecting the one with the lowest cost. This approach ensures that the golf cart navigates safely around obstacles while adhering to defined traffic lanes.
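For illustration, a compact A* search over the binary grid described above might look as follows; the 4-connected motion model and Manhattan heuristic are simplifying assumptions, and the actual planner additionally weighs lane information from the high-definition map.

```cpp
// Compact A* sketch over a binary grid (0 = free, 1 = occupied).
#include <climits>
#include <cstdint>
#include <cstdlib>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// Returns the cost of the shortest free path between two cells, or -1.
int aStar(const std::vector<uint8_t>& grid, int w, int h,
          int sx, int sy, int gx, int gy) {
  auto idx = [w](int x, int y) { return y * w + x; };
  auto heur = [&](int x, int y) { return std::abs(x - gx) + std::abs(y - gy); };

  std::vector<int> g(w * h, INT_MAX);
  using Node = std::pair<int, int>;  // (f = g + h, cell index)
  std::priority_queue<Node, std::vector<Node>, std::greater<Node>> open;

  g[idx(sx, sy)] = 0;
  open.push({heur(sx, sy), idx(sx, sy)});
  const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};

  while (!open.empty()) {
    auto [f, cur] = open.top(); open.pop();
    int x = cur % w, y = cur / w;
    if (x == gx && y == gy) return g[cur];   // goal reached
    if (f > g[cur] + heur(x, y)) continue;   // stale queue entry
    for (int k = 0; k < 4; ++k) {
      int nx = x + dx[k], ny = y + dy[k];
      if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
      if (grid[idx(nx, ny)] == 1) continue;  // occupied cell: not traversable
      int ng = g[cur] + 1;
      if (ng < g[idx(nx, ny)]) {
        g[idx(nx, ny)] = ng;
        open.push({ng + heur(nx, ny), idx(nx, ny)});
      }
    }
  }
  return -1;  // no collision-free path found
}
```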
4.5.2. Path Tracking Control
Path tracking control involves two widely used approaches: pure pursuit control and model predictive control (MPC). Rokonuzzaman et al. (2021) [
4] compared both methods and found that pure pursuit is more suitable for low-speed autonomous vehicles due to its simplicity, lower computational demands, and reliable tracking performance when the vehicle starts on-path. It operates by generating a curved path toward a target point at a specified look-ahead distance using the vehicle’s kinematic model. In contrast, MPC evaluates the system’s future states based on a dynamic model and updates the control commands in real time, offering higher accuracy and adaptability but at the cost of greater computational resource consumption.
In this study, pure pursuit (as shown in
Figure 13) is implemented, leveraging path data derived either from the path planning module or from center-lane tracking. The algorithm processes these data to control the vehicle’s steering based on the golf cart’s kinematic model. The main output is the steering angle command, a critical element of low-level control. The proper tuning of the look-ahead distance is essential to optimize system performance, enabling the vehicle to anticipate and respond to path changes effectively while maintaining stability and smooth navigation.
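The steering command itself follows the standard pure pursuit geometry, a sketch of which is given below; the wheelbase value is an assumed placeholder, while the 2.25 m look-ahead mirrors the value used in the self-driving test (Section 5.6.1).

```cpp
// Pure pursuit steering law from the kinematic bicycle model:
// delta = atan(2 L sin(alpha) / Ld). Wheelbase is an assumed placeholder.
#include <cmath>

// alpha: angle between the cart's heading and the line to the look-ahead
// point (rad); returns the front-wheel steering angle command (rad).
double purePursuitSteer(double alpha,
                        double wheelbase = 1.65,     // assumed wheelbase (m)
                        double lookahead = 2.25) {   // look-ahead distance (m)
  return std::atan2(2.0 * wheelbase * std::sin(alpha), lookahead);
}
```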
4.5.3. Curve Speed Control
The forces acting along the lateral and longitudinal axes play a critical role in determining vehicle stability and passenger comfort during autonomous operation. According to Gill et al. [
36], an understanding of these directional forces is essential in designing effective control mechanisms, particularly in systems involving four-wheel drive dynamics. Building on this, Bae et al. [
37] found that, for optimal control and ride comfort, the acceleration values along both axes should remain within a bounded comfort range. Moreover, their study highlighted that human sensitivity to vibrations is highest within the frequency range of 4 to 16.5 Hz in both the lateral and longitudinal directions. Maintaining the control inputs within these thresholds ensures smoother navigation and reduces discomfort caused by excessive motion or vibrations.
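The dependence of lateral acceleration on curve speed, which motivates these tests, follows the standard circular-motion relation below; the 5 m radius in the worked example is an assumption, since the curve geometry of the test route is not specified here.

```latex
a_{\mathrm{lat}} = \frac{v^{2}}{R},
\qquad
v_{\max} = \sqrt{a_{\mathrm{lat,max}}\,R}
```

With an assumed radius of R = 5 m, driving at 5 km/h (v ≈ 1.39 m/s) gives a_lat ≈ 0.39 m/s², whereas 10 km/h (v ≈ 2.78 m/s) quadruples this to roughly 1.55 m/s², which is why the curve-entry speed must be limited.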
In alignment with the above, the current research incorporates the Xsens MTi-30 inertial sensor (Xsens Technologies B.V., Enschede, The Netherlands), installed at the point nearest to the center of gravity of the golf cart (illustrated in
Figure 14). Experimental trials were conducted on a designated testing route, wherein the autonomous driving cart maintained a constant speed. Acceleration measurements in the lateral and longitudinal axes were recorded as the vehicle entered curves within the testing area. The tests were divided into five phases, varying the speed between 5 and 10 km per hour. The tests were intended to ascertain the most suitable speed for entering curves in the testing area, subsequently informing the optimization of the autonomous driving system.
5. Results and Discussion
5.1. Generation of High-Definition Map
A point cloud map of the testing area was generated using Normal Distributions Transform (NDT) mapping. To evaluate the system, the golf cart was driven for three laps along a designated lane, resulting in three separate 3D maps. Calibration was performed by comparing distances on the point cloud maps with the actual physical area: eight reference points on the point cloud maps were measured using the CloudCompare software and compared with real-world measurements.
Figure 15 shows the top view of the high-definition map generated from the point cloud data, while
Figure 16 illustrates the locations where measurements were taken for comparison. The localization errors calculated from this comparison are summarized in
Table 1, with distance errors of 1.15%, 1.17%, and 1.28% for maps 1, 2, and 3, respectively. Localization accuracy tests yielded mean absolute error (MAE) values of 0.082 m, 0.078 m, and 0.075 m for tests 1, 2, and 3, respectively.
Map 1 exhibited the smallest maximum error of the three maps, with the errors remaining at the centimeter level; its largest observed error was only 0.32 m, corresponding to 2.57%. Therefore, map 1 was selected as the high-definition map for this research.
The centimeter-level mapping errors demonstrate that the NDT-based mapping system provides sufficient accuracy for low-speed autonomous navigation, where deviations below ±0.5 m are generally acceptable for safe path tracking. The slightly higher variation among maps is likely caused by LiDAR alignment offsets, surface reflectivity differences, and environmental lighting during data collection. Compared with previous research on low-speed autonomous platforms, the obtained 0.32 m maximum error indicates a reliable mapping precision within the operational threshold, confirming the system’s readiness for subsequent localization and testing in autonomous driving systems.
5.2. Localization System
The localization system was evaluated using an autonomous golf cart operating on the roadway in front of the test building, which featured sufficiently wide paths. The cart’s estimated positions, derived by applying the Normal Distributions Transform (NDT) to the point cloud data, were compared against the actual positions measured at 12 designated test points. The positional errors along the x- and y-axes are presented in the bar charts shown in
Figure 17 and
Figure 18, respectively.
The average localization errors were 0.082 m, 0.078 m, and 0.075 m for tests 1, 2, and 3, respectively, demonstrating a consistent accuracy across trials. The distribution of the x-axis errors varied slightly among the test points, mainly influenced by small differences in the cart’s manual initialization and the iterative alignment characteristic of the NDT algorithm. Because the vehicle’s starting pose was manually adjusted before each run, even minor offsets in orientation or position could affect the scan matching. Likewise, the probabilistic surface-fitting process in the NDT can result in convergence to slightly different local minima depending on the point density, resulting in small positional shifts. The highest x-axis error of 0.184 m occurred at point 2 in test 1, while the overall average x-axis error remained at 0.079 m, confirming the stable centimeter-level precision across tests.
Similarly, the y-axis error distribution was uneven but exhibited comparable average errors across all tests. The maximum y-axis error of 0.190 m occurred at point 10 in test 2, with an overall average error value of 0.078 m. These findings indicate that the localization system maintained a stable and comparable performance in both the x and y directions.
The localization accuracy results show that the NDT-based approach provided stable and precise position estimation across all three tests. The x- and y-axis errors remained below 0.2 m, with averages of 0.079 m and 0.078 m, respectively, indicating a high level of consistency in both directions. These results confirm that the system could accurately align the LiDAR-derived point clouds with the high-definition map, ensuring reliable position updates during operation. The small variations observed among the test points mainly resulted from LiDAR alignment offsets, NDT computation delays, and manual initialization during testing.
5.3. Results of Two-Dimensional Object Detection
The data consisted of 1726 images, containing 1064 human targets, 2312 car targets, 874 motorcycle targets, and 586 golf cart targets. The dataset was split according to a 9:1 ratio, with 1554 images for training and 172 images for testing. Using a batch size of 64 and running 100,100 iterations, the loss at the final iteration was 0.3176.
The evaluation metrics are presented in
Table 2. The mean average precision (mAP) at a 0.5 intersection over union (IoU) threshold exceeded 70% for each class. This performance surpasses that of the YOLOv11 model trained on the COCO dataset (mAP of 55.3%), mainly because the model used in this study was trained specifically for autonomous golf cart operation, using a dataset captured in the target environment that contained fewer object classes and more consistent contextual features. This environment-specific dataset allowed the detector to focus on relevant visual patterns, resulting in higher detection accuracy within the golf cart’s operational area. Examples of detections are illustrated in
Figure 19 and
Figure 20.
The results confirm that the environment-trained YOLOv11 model provides sufficient accuracy for real-time perception in low-speed autonomous driving. The simplified class set and consistent scene context improved detection reliability compared with generic models trained on large, diverse datasets. Minor accuracy reductions were observed under intense sunlight or partial occlusion, indicating that performance could be further improved by incorporating additional lighting conditions in future datasets.
5.4. Results of Three-Dimensional Object Detection
The 2D object detection model described in the previous subsection was integrated with point cloud segmentation data to enable 3D object detection and localization. This evaluation focused specifically on the 3D detection and localization of the human class, with the results summarized in
Table 3.
Testing revealed that the effective operational range of the 3D detection system was limited to 2–9 m. Objects closer or farther than this range could not be accurately localized in 3D. At close distances, large objects produced dense point cloud data that could not be reliably mapped to the corresponding 2D pixel coordinates; as a result, when the 2D detection centroid pointed to uncalibrated point cloud data, 3D localization failed, even though 2D detection and point cloud clustering remained possible. At distances beyond 9 m, 2D detection still succeeded, but the point cloud data became too sparse for consistent grouping, because the spacing between LiDAR points grows with range as a consequence of the sensor’s angular scanning pattern.
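The sparsity effect can be quantified from the sensor’s angular resolution, since point spacing grows roughly linearly with range. Using the VLP-16’s nominal resolutions (2.0° vertical; about 0.2° horizontal at 10 Hz) as datasheet-level assumptions:

```latex
s \approx d\,\Delta\theta
\quad\Longrightarrow\quad
s_{\mathrm{vert}}(9\,\mathrm{m}) \approx 9 \times 0.0349 \approx 0.31\,\mathrm{m},
\qquad
s_{\mathrm{horiz}}(9\,\mathrm{m}) \approx 9 \times 0.0035 \approx 0.031\,\mathrm{m}
```

At 9 m, adjacent scan rings are therefore about 0.31 m apart vertically, which approaches typical Euclidean clustering tolerances and helps explain why grouping breaks down beyond this range.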
The 2–9 m effective range corresponds to the practical detection zone of the LiDAR sensor, which supports collision avoidance in combination with the YOLOv11 model. Since YOLOv11 can detect small objects and pedestrians in 2D images, the system uses this visual information to anticipate potential obstacles before they enter the critical LiDAR range. Within the 2–9 m distance range, the LiDAR provides a sufficient point cloud density for accurate 3D localization and distance estimation, enabling timely braking or avoidance. This range is therefore suitable for low-speed golf cart operation at 5–10 km/h, where it aligns with the vehicle’s braking distance and safety margins. Future work could integrate the 2D detection output for predictive collision forecasting, allowing the system to initiate early responses before an object enters the LiDAR detection range.
5.5. Results of Curve Speed Control
The curve navigation performance of the autonomous golf cart was evaluated at constant speeds ranging from 5 to 10 km/h. The actual trajectory was compared with the planned path, as shown in
Figure 21 and
Figure 22. The results indicate that higher speeds reduced the vehicle’s ability to follow the curved path due to the limited response time of the low-level steer-by-wire system. At 5 km/h, the cart accurately followed the intended path, but the deviations increased with greater speeds.
The steering lag was mainly caused by actuator response delays and control loop latency. The steering motor has a finite rise time, and the control signal passes through multiple stages of feedback and communication, including PID processing and CAN bus transfer, resulting in a total delay of about 150–200 ms. At higher speeds, this delay leads to greater lateral deviations before correction, while minor backlash in the steering linkage adds a slight delay in the angular response when encountering sharp curves.
Ride comfort was evaluated based on lateral acceleration, interpreted through the ISO 2631-1 whole-body vibration framework [
39]. According to this standard, the boundary between reduced and unacceptable comfort corresponds to weighted RMS accelerations of approximately 0.58–0.9 m/s² for lateral motion. Therefore, a threshold of ±0.9 m/s² was adopted as the upper comfort limit for low-speed curve driving. At 5 km/h, the lateral acceleration remained well below this limit across all curves, while 6 km/h produced a few peaks near the threshold at curves 2 and 3. Considering both trajectory accuracy and ride comfort, 5 km/h was identified as the optimal speed for autonomous curve navigation.
The results confirm that the steer-by-wire system exhibits a stable performance and ride comfort at 5 km/h, consistent with the golf cart’s normal inter-building operation scenario. The observed 150–200 ms response delay indicates a control limitation that could be mitigated by using faster actuators or predictive steering algorithms for higher-speed applications. This response time is critical to the overall autonomous driving system, as accurate steering control directly affects the path tracking precision and the vehicle’s ability to execute collision avoidance maneuvers. A stable steering performance at the tested speed range therefore ensures safe and reliable operation within the intended use of the autonomous golf cart.
5.6. Autonomous Driving
The evaluation of the overall autonomous driving system was divided into three parts: the self-driving test, the self-braking test, and the collision avoidance test. These tests collectively assessed the vehicle’s navigation accuracy, braking performance, and obstacle avoidance capabilities under autonomous operation.
5.6.1. Self-Driving System Test
The self-driving test evaluated the system’s ability to autonomously navigate from a starting point to a designated destination using a high-definition map. The golf cart was driven along a predefined route at 5 km/h on curves and 10 km/h on straight segments, as shown in
Figure 23. Steering control was managed using the pure pursuit algorithm with a look-ahead distance of 2.25 m, allowing the vehicle to follow the planned trajectory while adjusting for curvature.
The system successfully completed the route, and, when deviations occurred, corrective steering realigned the vehicle to the intended path. On straight segments, an average deviation of about 0.4 m was observed, primarily due to steering response delays and slight wheel misalignment. Additional test results showing consistent path following behavior are presented in
Appendix A (
Figure A4 and
Figure A5), confirming stable operation and a repeatable performance across runs.
The results demonstrate that the self-driving system maintains accurate path tracking within the centimeter-level localization precision achieved earlier. The small deviation observed corresponds to the combined influence of steering delays and localization offsets, both remaining within the ±0.5 m tolerance typical for low-speed autonomous navigation. This control accuracy ensures safe and reliable movement along inter-building routes and provides a strong foundation for coordination with higher-level modules such as those for autonomous driving control and collision avoidance.
5.6.2. Self-Braking System Test
The self-braking test evaluated the system’s ability to detect obstacles and perform automatic braking during autonomous driving. A stationary obstacle was introduced into the golf cart’s path while operating at 10 km/h, as shown in
Figure 24. The obstacle was detected within the LiDAR’s effective range, triggering the braking sequence through the drive-by-wire controller. The vehicle decelerated smoothly and came to a complete stop at an average distance of 5.63 m from the obstacle. Additional trials showing consistent behavior are presented in
Figure 25 and
Figure 26.
The 0.63 m difference from the 5.0 m target distance remains acceptable for low-speed operation. This variation results from small localization offsets and the discrete control behavior of the brake-by-wire actuator near zero velocity. The recorded braking sequence shows a total response time of approximately 1.8 s, giving an average deceleration of 1.55 m/s², which satisfies the comfort and safety limits for passenger transport. Overall, the braking module demonstrates consistent stopping capabilities within the LiDAR’s 2–9 m detection range, ensuring a sufficient margin to account for control and sensing delays.
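As a consistency check, the stated deceleration follows directly from the recorded response time, assuming an approximately constant deceleration from the 10 km/h operating speed:

```latex
a = \frac{\Delta v}{\Delta t}
  = \frac{10\,\mathrm{km/h}}{1.8\,\mathrm{s}}
  = \frac{2.78\,\mathrm{m/s}}{1.8\,\mathrm{s}}
  \approx 1.55\,\mathrm{m/s^{2}}
```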
5.6.3. Collision Avoidance System Test
The collision avoidance test evaluated the system’s ability to detect and avoid obstacles within a designated lane using the 3D object detection and A* path planning modules. A human subject acted as a stationary obstacle, placed near or directly within the golf cart’s planned path, as shown in
Figure 27. Once the obstacle was detected within the LiDAR’s 2–9 m range, the system generated an alternative local path that preserved the lane boundaries while maintaining safe clearance from the obstacle.
In the first test (
Figure 28), the obstacle was positioned at approximately x = 21 m and y = –2.2 m relative to the high-definition map. The golf cart deviated by about 1 m toward the –y direction to pass safely on the side with greater clearance. In the second test (
Figure 29), a moving obstacle crossed the lane from –y to +y, prompting two avoidance maneuvers. The planner first chose a near-center path and then recalculated with a wider deviation as the obstacle advanced, before returning smoothly to the original route. The third test (
Figure 30) produced a mirrored response, where the vehicle avoided a central obstacle by deviating toward the +y side.
Across all trials, the A* planner consistently selected the optimal avoidance direction and maintained smooth trajectories within the lane boundaries. The vehicle preserved sufficient clearance during each maneuver without compromising path stability or passenger comfort. The coordination of 3D perception, planning, and steering control enabled real-time obstacle avoidance without abrupt motion or a loss of localization accuracy. These results confirm that the system can safely navigate around stationary and moving obstacles in confined lanes, demonstrating its readiness for mixed pedestrian environments typical of inter-building transport.
The results obtained for all modules confirm that the autonomous golf cart system achieves reliable and coordinated operation under its defined conditions. The high-definition map provided centimeter-level spatial accuracy, while localization showed mean errors below 0.08 m, ensuring precise vehicle positioning. The perception system demonstrated the effective detection of pedestrians and vehicles through YOLOv11 and 3D LiDAR integration, with an operational range of 2–9 m, suitable for low-speed environments. Self-driving tests showed an average deviation of 0.4 m, remaining within the ±0.5 m tolerance for safe autonomous driving. The self-braking module consistently stopped the vehicle at 5.63 m from detected obstacles, maintaining a safety margin against the 5.0 m target, and the collision avoidance system successfully performed smooth avoidance maneuvers without lane departure. Collectively, these results demonstrate that the system can perform autonomous navigation, braking, and obstacle avoidance smoothly and safely within inter-building routes. The overall performance indicates the system’s readiness for real-world deployment in controlled campus or facility environments, with future work planned to focus on improving its robustness under variable lighting and surface conditions.
6. Conclusions
This paper describes the development of an autonomous driving system designed for electric golf carts operating in inter-building environments. The system integrates data from both camera and LiDAR sensors to perform localization, object detection, path planning, path tracking, and collision avoidance. The primary goal was to establish a reliable and safe autonomous transportation solution for semi-open environments.
A high-definition map, incorporating point cloud data and path information from the testing area, was evaluated to determine the mapping accuracy. The best map test showed a maximum positioning error of 0.32 m (2.57%), demonstrating centimeter-level accuracy and suitability for autonomous golf cart navigation.
The localization system, based on the Normal Distributions Transform (NDT) and LiDAR scanning data, achieved average positional errors of 0.079 m and 0.078 m along the x- and y-axes, respectively. This level of accuracy is adequate for maintaining stable autonomous operation.
The 2D object detection system achieved a mean average precision (mAP) exceeding 70% at an intersection over union (IoU) threshold of 0.5 across all object classes. The 3D object detection and localization system performs effectively within a range of 2 to 9 m. For objects beyond 9 m, the system continues to function by relying on accumulated point cloud data to ensure environmental awareness.
The complete autonomous driving system enables self-driving speeds between 5 and 10 km/h, with 5 km/h identified as optimal for curve navigation based on trajectory accuracy and ride comfort. Using the pure pursuit steering method, the vehicle follows the predefined path with an average lateral deviation of 0.4 m. The self-braking mechanism allows the golf cart to decelerate and stop safely, with an average braking distance of 4.3 m and a final distance of 5.63 m from the detected obstacle. Furthermore, the collision avoidance system effectively adjusts the vehicle’s path to navigate around both stationary and moving obstacles within the defined lane, ensuring safe and reliable autonomous operation.