Article

ROS 2-Driven Navigation and Sensor Platform for Quadruped Robots

1 Department of Engineering Sciences, University of Agder, 4879 Grimstad, Norway
2 Mechatronics Innovation Lab, 4879 Grimstad, Norway
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Robotics 2026, 15(4), 70; https://doi.org/10.3390/robotics15040070
Submission received: 2 February 2026 / Revised: 21 March 2026 / Accepted: 24 March 2026 / Published: 26 March 2026
(This article belongs to the Section Sensors and Control in Robotics)

Abstract

This paper presents an open-source ROS 2 navigation and sensor platform for quadruped robots, demonstrated on Boston Dynamics Spot in a laboratory environment. The platform integrates SLAM Toolbox for mapping and localisation, Navigation2 with the MPPI controller and the Smac Hybrid-A* global planner, and a frontier-based autonomous exploration module with practical handling of unreachable frontiers. The paper validates and verifies current open-source algorithms deployed on off-the-shelf hardware. A greedy wavefront-based frontier selection method is presented that prioritizes Time-to-Closest-Viable-Frontier (TCVF) by terminating the search as soon as a feasible frontier is identified. On a real robot dataset replayed across five goal scenarios, the method reduces median selection latency from 94.31 ms to 51.08 ms (95th percentile: 109.54 ms to 56.99 ms), corresponding to a 1.85-times improvement in compute time compared to a standard implementation. The system also employs Zenoh middleware and Foxglove for remote monitoring and control, enabling flexible, high-bandwidth operation. The platform, including configuration files and launch scripts, is released openly to support future research and deployment on quadruped robots.

1. Introduction

Quadruped robots have become increasingly relevant in industrial inspection, environmental monitoring, and search and rescue due to their mobility in complex terrain and unstructured environments [1]. Despite recent progress, reliable autonomous navigation in cluttered indoor spaces remains challenging, as it requires robust simultaneous localization and mapping (SLAM) as well as dependable local and global planning [2,3].
The growing practical deployment of commercial quadrupeds highlights their value for inspection and data collection tasks [4]. However, many deployed systems still rely on predefined routes, which limits adaptability when the environment changes or when prior maps are unavailable [5]. This motivates an open-source and modular autonomy stack that can be deployed on commercially available quadrupeds using off-the-shelf sensors and components, enabling reproducible experimentation and easier adaptation across use cases.
Recent reviews of field robotics and inspection platforms highlight substantial potential, but also conclude that broader real-world deployment will require a higher degree of autonomy [6,7]. These reviews further emphasize the need for improved sensor and SLAM accuracy, as well as more dependable autonomous inspection capabilities. In response, many recent works focus on increasing robustness and precision through advanced algorithms, sensor fusion, and improved path-planning methods, typically evaluated within a specific application scenario [8,9,10]. While these approaches demonstrate measurable performance gains, they are often validated on custom-built platforms tailored to the target use case, rather than on commercially available quadruped or robot systems [9,10]. Consequently, there remains limited evaluation of widely used open-source solutions when deployed on off-the-shelf components and quadruped robots.
Recent surveys of quadruped robotics note that, despite substantial progress, advances are frequently driven by subfield-specific contributions with limited integration or generalization [1]. In addition, quadruped platforms continue to face practical challenges in deployment. Locomotion introduces instability and disturbance effects that can degrade state estimation, and the resulting motion can reduce perception quality relative to more mechanically stable wheeled platforms [11,12]. These observations motivate the need for a general, reusable sensing and navigation platform for quadrupeds that lowers the barrier to experimental evaluation and supports future research and system development.
To address these gaps, this paper presents an open-source, ROS 2-based navigation and sensing platform for a commercially available quadruped robot. ROS 2 provides a modular software architecture with broad community support and is widely used for integrating SLAM, path planning, and perception components [13]. The system combines SLAM Toolbox for mapping, the Nav2 stack for navigation, and Zenoh middleware for communication, targeting reliable operation and remote monitoring in practical deployments. The hardware configuration uses interchangeable, off-the-shelf components and sensors, making drop-in replacements possible, provided that suitable ROS 2 drivers are available. This facilitates quick deployment and encourages future research and development for quadruped platforms.
Another key component of the platform is a custom ROS 2 package for frontier-based autonomous exploration. Frontier methods guide robots in unknown environments by selecting navigation targets at frontiers, which are boundaries between known free space and unknown space [14]. The implementation in this work builds on Wavefront Frontier Detection (WFD), which extracts frontier regions from occupancy grids using a wavefront search over reachable free space [15].
Classical frontier exploration originates from Yamauchi and later refinements such as WFD [14,15]. Recent literature continues to advance frontier exploration mainly through algorithmic directions, including learned frontier prioritization, utility-driven scoring, and coordinated multi-robot strategies [16,17,18]. These developments confirm that frontier exploration remains an active research area, with strong emphasis on improving selection logic, coordination, and decision quality.
In contrast to studies focused on increasingly sophisticated frontier-selection models, this work takes a deployment-oriented systems perspective. A wavefront frontier detector with an early-exit policy (early-stop WFD) is implemented and evaluated, rather than relying only on standard full frontier enumeration. The method is combined with feasibility filtering and robust frontier-cluster handling to improve behavior under noisy occupancy-grid conditions. The early-exit variant is then compared against baseline full-enumeration WFD, and its effect on frontier-selection latency is quantified, indicating reduced Time-to-Closest-Viable-Frontier (TCVF) in practical exploration scenarios. The complete package is integrated into an open-source ROS 2 stack to support reproducible deployment and provide a baseline for further research.
This paper evaluates the proposed platform through experiments conducted in realistic laboratory and office environments using a commercially available quadruped robot and off-the-shelf sensing and compute components. We demonstrate end-to-end operation of mapping, autonomous navigation, and frontier-based exploration using ROS 2 and widely used open-source packages. In addition, a quantitative evaluation of the computational benefit of the proposed early-exit WFD-based frontier selection is provided, showing substantial reductions in typical frontier selection runtime and the number of grid cells examined compared to a full enumeration baseline.
Overall, this work provides a modular, open-source, ROS 2-based quadruped navigation and sensing platform built from off-the-shelf components, intended as a ready-to-deploy baseline for future research and system development where general and reusable solutions are needed. The main contribution is a reproducible stack composed of widely adopted open-source modules (including SLAM Toolbox and Nav2), chosen to maximise accessibility and ease of extension. The platform further integrates Zenoh and Foxglove to enable remote monitoring, debugging, and operator intervention.
The structure of the paper is organized as follows: Section 2 presents implementation details and choices for the hardware, navigation, SLAM, and frontier-based exploration package. Section 3 presents the performance results, including navigation, path planning, and verification and validation of the WFD package. Finally, Section 4 and Section 5 provide discussions and conclusions.

2. Methodology

This work presents a complete ROS 2-based autonomy stack for operation of a quadruped robot. The system integrates mapping and localisation with autonomous navigation, enabling the robot to move towards goals while building a map of the environment online. Navigation is executed using the ROS 2 Navigation2 framework, while mapping and localisation are provided through complementary SLAM modules for navigation. On top of these components, an autonomous exploration layer generates new navigation goals online to expand map coverage without requiring pre-defined waypoints or prior maps.

2.1. Experimental Setup

The proposed payload is intended as a modular, reproducible research platform built from off-the-shelf, ROS 2-compatible components. The main focus is a flexible integration software framework that can be replicated and adapted across different quadruped, sensor, and compute platforms, with emphasis on extensibility and repeatability rather than on an optimized industrial product.
An overview of the system is shown in Figure 1, illustrating the components used as well as their main functionality. The power supply provides power to the NVIDIA Jetson and LiDAR, while the camera is powered over USB. The Nvidia Jetson AGX Orin (NVIDIA Corporation, Santa Clara, CA, USA), ZED 2i camera (Stereolabs, San Francisco, CA, USA), and Livox Mid-360 LiDAR (Livox Technology Co., Ltd., Shenzhen, China) are mounted on the robot’s back using custom 3D-printed brackets, while the wiring and power supply are enclosed in a 3D-printed protective housing. The protective housing and brackets are made specifically for this system, but the sensors and components themselves are interchangeable, provided that the alternative sensor or component has a ROS 2 driver.
The platform used in this paper is the Boston Dynamics Spot quadruped robot (Boston Dynamics, Waltham, MA, USA). Spot has two motors in each hip and one motor in each knee, giving it twelve degrees of freedom in total [19]. It has five stereo cameras to detect obstacles and navigate in different environments, and it supports both Ethernet and Wi-Fi connectivity. Payloads can be mounted on its back with a weight limitation of 14 kg. Spot can be controlled by the included tablet or from a computer using the Spot Application Programming Interface (API).
For onboard computation, an NVIDIA Jetson AGX Orin developer kit is used, weighing 1.58 kg. This is a compact, ARM-based computer made for robotics and AI, equipped with a 12-core Arm Cortex-A78AE CPU, an NVIDIA Ampere GPU, and 64 GB LPDDR5 RAM, with 64 GB internal and an additional 1 TB external SSD for data storage. This computer runs the ROS 2 navigation stack, SLAM modules and visualisation tools.
The primary range sensor is a Livox Mid-360 LiDAR. It is a non-repetitive scanner with a rotating mirror inside, meaning that the sensor itself is stationary inside the LiDAR housing. For our application, the most relevant characteristics are its weight of 265 g, a 360° horizontal field of view, a vertical field of view from −7° to 52°, a detection range of up to approximately 40 m at 10% reflectivity (70 m at 80%), and a frame rate of 10 Hz, which together provide dense 3D coverage around the robot.
A ZED 2i stereo camera from Stereolabs is used for RGB-D perception and visual odometry experiments. It is a stereo camera made for robotics, capable of RGB-D images and video. In our setup it operates at either 2 × (1920 × 1080) at 30 fps or 2 × (1280 × 720) at 60 fps, with a horizontal field of view of 110°, a vertical field of view of 70°, and an effective depth range of approximately 0.3 m to 20 m. It weighs 230 g.
All payload electronics are powered by a MEAN WELL SD-100C-12 DC–DC converter (MEAN WELL Enterprises Co., Ltd., New Taipei City, Taiwan), a 100 W single-output module weighing 650 g. It accepts 36–72 V DC input from Spot’s internal supply and provides a regulated 12 V output with up to 8.5 A current, which is used to power the Jetson and LiDAR.
The total payload mass is 3725 g, including approximately 1000 g estimated for the 3D-printed protective housings and component wiring. This payload is well within Spot’s specified maximum payload capacity of 14 kg.

2.2. Navigation and Exploration

Navigation2 (Nav2) is the standard navigation framework for ROS 2, providing a mature and modular architecture for perception, localisation, planning, control, and visualisation [20]. Navigation is orchestrated through a behaviour tree (BT) that coordinates multiple asynchronous servers configured via plug-ins, while a lifecycle manager supervises node startup and state transitions. For correct operation, Nav2 requires the standard TF transform chain, including map ⇒ odom for global localisation, odom ⇒ base_frame for odometry, and transforms between each sensor frame and the robot base frame provided via URDF and/or static TF.
Nav2 maintains both a global and a local costmap to represent the environment. The global costmap supports long-horizon planning and is typically generated from a static map (e.g., produced by SLAM) with inflation around obstacles, whereas the local costmap updates at a higher rate around the robot to support reactive obstacle avoidance. Range sensing is integrated through layered costmap plug-ins such as obstacle/voxel layers or the Spatial–Temporal Voxel Layer (STVL) [21]. Core navigation is split between the planner and controller servers [22]: the planner computes a low-cost path to the goal using the global costmap, and the controller converts this plan into velocity commands on /cmd_vel while using the local costmap for smooth, collision-free execution. If planning or execution fails, recovery behaviours (e.g., spin, wait, or backup) may be invoked, with the BT navigator supervising goal dispatch, progress monitoring, and recovery selection [20].
In this work, Nav2 was selected as the core navigation framework due to its maturity and extensibility within the ROS 2 ecosystem [20]. The default planner and controller were replaced with the Smac Hybrid-A* planner and the Model Predictive Path Integral (MPPI) controller to better handle confined indoor environments and generate smooth, feasible motion [22,23]. Planner configuration was adapted by switching the motion model from Dubins to Reeds–Shepp to enable tighter manoeuvres and allow reverse motion when required. MPPI was further tuned by adjusting critic plug-ins to reduce oscillations, improve goal alignment near the target pose, and permit reverse motion. The costmaps were also tailored to the platform by using Spot’s footprint instead of a circular approximation, resizing the local costmap to match the robot’s dimensions, and replacing the default voxel layer with STVL to better represent dynamic obstacles and utilize multiple point cloud sources [21].

2.3. Simultaneous Localization and Mapping (SLAM)

Simultaneous localisation and mapping (SLAM) estimates the robot pose while incrementally building a map of the environment [3]. In the current configuration, the platform uses the quadruped’s internal odometry for localization and mapping. The project’s architecture allows the odometry source to be treated as interchangeable, provided it is published in a consistent frame and a known transform to the robot base frame is available.
This requirement is handled through the ROS 2 tf2 transform tree, where the necessary transforms ensure that the chosen odometry frame is correctly related to the robot’s kinematic frames. In practice, switching odometry sources primarily becomes a one-time configuration of frame transforms, rather than requiring changes to the SLAM or navigation components.
SLAM Toolbox was selected because it is a widely adopted, well-maintained ROS 2 SLAM solution that integrates cleanly with Nav2 and can be deployed with minimal platform-specific tuning [24]. Its 2D formulation provides robust mapping and localization in typical indoor and factory environments, enabling a practical and easily reproducible baseline for the quadruped sensor platform. SLAM Toolbox was configured for responsive 2D mapping by lowering the minimum motion thresholds required for map updates (0.1 rad and 0.1 m). Because the Livox sensor publishes 3D point clouds, pointcloud_to_laserscan was used to project points within a selected height band onto a 2D scan [25]. The conversion was configured for a full 360° field of view, with height limits of −1 m to 1.2 m to avoid including overhead structures (e.g., door frames) that can introduce false obstacles in the occupancy grid.
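The height-band projection described above can be illustrated with a minimal sketch. This is not the actual pointcloud_to_laserscan implementation; the function name and parameters are our own, and it simply keeps points inside a z-band, bins them by bearing, and records the closest return per bin:

```python
import math

def project_to_scan(points, z_min, z_max, n_bins=360, max_range=40.0):
    """Project 3D points within a height band onto a 2D laser scan.

    points: iterable of (x, y, z) tuples in the sensor frame.
    Returns a list of n_bins ranges (closest range per angular bin,
    inf where the bin received no valid return).
    """
    ranges = [float("inf")] * n_bins
    for x, y, z in points:
        if not (z_min <= z <= z_max):
            continue  # discard returns outside the configured height band
        r = math.hypot(x, y)
        if r == 0.0 or r > max_range:
            continue
        angle = math.atan2(y, x)  # bearing in [-pi, pi]
        bin_idx = int((angle + math.pi) / (2 * math.pi) * n_bins) % n_bins
        ranges[bin_idx] = min(ranges[bin_idx], r)
    return ranges
```

Choosing the band is the key tuning decision: too high an upper limit lets door frames and overhead pipes appear as phantom obstacles in the 2D grid, as noted above.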

2.4. ROS Middleware Interface

ROS 2 uses a distributed, networked topology to discover nodes and exchange data across processes and computers. Communication is abstracted by the ROS middleware interface (RMW), which maps the ROS 2 API onto an underlying transport; the default is the Data Distribution Service (DDS), a real-time publish–subscribe middleware [26]. DDS provides extensive Quality of Service (QoS) controls to tune reliability and latency, but it can require careful configuration and may struggle with discovery overhead and high-bandwidth streams (e.g., point clouds and video) on some WiFi networks.
Zenoh is an alternative RMW that reduces discovery traffic by optionally using a router for node discovery and bridging, enabling efficient communication across multiple machines and even over the internet. It has been shown to handle large data streams more robustly over WiFi than DDS in comparable setups [27]. In this work, Zenoh was selected to reliably transmit high-throughput sensor data from the onboard NVIDIA Jetson to remote computers over WiFi and 4G, while keeping configuration simpler than DDS through a single, unified configuration file [27].

2.5. Autonomous Frontier-Based Exploration

Autonomous exploration is performed in an initially unknown environment using a 2D occupancy grid map produced online by ROS 2 SLAM Toolbox. The map is represented as a grid where each cell is classified as free, occupied, or unknown. We adopt the classical frontier-based exploration formulation introduced by Yamauchi, where exploration proceeds by repeatedly selecting a frontier region as the next navigation target. A frontier represents the boundary between known free space and unknown space [14]. In this work, a frontier cell is defined as a free cell that has at least one neighbouring unknown cell. Connected frontier cells form a frontier cluster, which represents a candidate exploration target.
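With this grid representation, the frontier predicate can be written directly. The following is an illustrative sketch (our own naming, using ROS-style occupancy values and the same 8-connectivity as the clustering step described later):

```python
FREE, OCCUPIED, UNKNOWN = 0, 100, -1  # ROS-style occupancy grid values

def is_frontier(grid, r, c):
    """A frontier cell is a free cell with at least one unknown neighbour."""
    if grid[r][c] != FREE:
        return False
    rows, cols = len(grid), len(grid[0])
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == UNKNOWN:
                return True
    return False
```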
To detect frontiers efficiently from the occupancy grid, the Wavefront Frontier Detection (WFD) method [15] is used. WFD performs a wavefront breadth-first search (BFS) over the reachable free space starting from the robot cell and identifies frontier cells at the boundary to unknown space. For goal assignment, we apply the standard greedy nearest-frontier strategy used in classical frontier-based exploration pipelines [14].
The exploration system deployed on the robot uses a latency-oriented early-exit variant of WFD, shown in Algorithm 1. The method avoids enumerating and clustering all frontier candidates before selecting a target. Instead, frontier clustering and feasibility checking are performed on demand during the wavefront search. When a frontier cell is encountered, its connected frontier cluster is extracted and evaluated immediately. If the cluster is feasible, a navigation goal is generated and the algorithm terminates early. If the cluster is rejected, its cells are marked as processed and the wavefront search continues. The method therefore terminates as soon as a feasible frontier target is found, which in practice often reduces goal-selection latency.
To quantify the computational effect of early exit, a standard full-enumeration WFD reference method is implemented for offline benchmarking, summarised in Algorithm 2 [15]. The reference method follows the standard WFD pipeline by expanding a wavefront BFS over the reachable free space, collecting all frontier cells before clustering them into connected frontier clusters. To control for differences in clustering behaviour and to improve robustness to sensor noise in the occupancy grid, the reference method uses the same 8-connected flood fill procedure as the deployed implementation. Goal selection is performed by choosing the nearest feasible frontier cluster according to traversable grid distance. This reference method is only evaluated on recorded occupancy-grid snapshots from the exploration run and is used solely for comparison with the deployed early-exit implementation (Section 3).
For Algorithms 1 and 2, the occupancy grid is denoted by M and the robot cell by s. Frontier clustering returns a frontier cluster G, defined as a connected set of frontier cells. The feasibility check Feasible ( G , τ ) filters clusters using a minimum frontier threshold. In our implementation, τ is chosen as a minimum frontier length to reduce selection of narrow frontier fragments that are unlikely to be traversable by the robot. Rejected clusters are marked as processed to avoid repeated evaluation of the same frontier region. The distance function Dist ( s , G ) denotes traversable grid distance computed as the shortest path length through free cells from s to the closest cell in G.
Algorithm 1 Greedy WFD frontier implementation (early exit)
Require: Occupancy grid M, robot cell s, feasibility threshold τ
 1: Start wavefront BFS from s
 2: while wavefront search not finished do
 3:   if a frontier cell is encountered then
 4:     G ← ClusterFrontier(M, frontier cell)
 5:     if Feasible(G, τ) then
 6:       return G  {terminate early}
 7:     else
 8:       mark all cells in G as processed
 9:     end if
10:   end if
11: end while
12: return Home Position
Algorithm 2 Full-enumeration WFD [15]
Require: Occupancy grid M, robot cell s, feasibility threshold τ
1: F ← DetectFrontierCells(M, s)  {wavefront BFS}
2: 𝒢 ← ClusterFrontiers(F)  {8-connected flood fill}
3: 𝒢′ ← {G ∈ 𝒢 | Feasible(G, τ)}
4: return arg min_{G ∈ 𝒢′} Dist(s, G)  {traversable grid distance}
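Algorithms 1 and 2 can be sketched in Python as follows. This is an illustrative re-implementation over a list-of-lists occupancy grid, not the released package; the function names (cluster_frontier, wfd_early_exit, wfd_full) are ours, and Feasible(G, τ) is realised here as a minimum cluster size:

```python
from collections import deque

FREE, OCC, UNK = 0, 100, -1                  # ROS-style occupancy values
N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
N8 = N4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def neighbours(cell, grid, offsets):
    r, c = cell
    for dr, dc in offsets:
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]):
            yield nr, nc

def is_frontier(grid, cell):
    r, c = cell
    return grid[r][c] == FREE and any(
        grid[nr][nc] == UNK for nr, nc in neighbours(cell, grid, N8))

def cluster_frontier(grid, seed, processed):
    """8-connected flood fill collecting the frontier cluster around seed."""
    cluster, queue = set(), deque([seed])
    while queue:
        cell = queue.popleft()
        if cell in cluster or cell in processed or not is_frontier(grid, cell):
            continue
        cluster.add(cell)
        queue.extend(neighbours(cell, grid, N8))
    return cluster

def wfd_early_exit(grid, start, tau):
    """Algorithm 1: return the first feasible cluster met by the wavefront."""
    processed, visited, queue = set(), {start}, deque([start])
    while queue:
        cell = queue.popleft()
        if cell not in processed and is_frontier(grid, cell):
            cluster = cluster_frontier(grid, cell, processed)
            if len(cluster) >= tau:          # Feasible(G, tau)
                return cluster               # terminate early
            processed |= cluster             # rejected: never re-evaluate
        for nxt in neighbours(cell, grid, N4):
            if nxt not in visited and grid[nxt[0]][nxt[1]] == FREE:
                visited.add(nxt)
                queue.append(nxt)
    return None                              # no feasible frontier: go home

def wfd_full(grid, start, tau):
    """Algorithm 2: enumerate all clusters, return the nearest feasible one."""
    processed, visited = set(), {start}
    queue, feasible = deque([(start, 0)]), []   # (cell, traversable distance)
    while queue:
        cell, dist = queue.popleft()
        if cell not in processed and is_frontier(grid, cell):
            cluster = cluster_frontier(grid, cell, processed)
            processed |= cluster
            if len(cluster) >= tau:
                feasible.append((dist, cluster))
        for nxt in neighbours(cell, grid, N4):
            if nxt not in visited and grid[nxt[0]][nxt[1]] == FREE:
                visited.add(nxt)
                queue.append((nxt, dist + 1))
    return min(feasible, key=lambda t: t[0])[1] if feasible else None
```

On a small synthetic grid both variants return the same cluster; the practical difference is that the early-exit variant stops expanding the wavefront as soon as the first feasible cluster appears, whereas the full enumeration always sweeps all reachable free space before selecting.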
For accepted frontier clusters, a single navigation goal is computed and sent to Nav2. The goal is generated by computing the mean of the frontier cell coordinates and rounding to the nearest grid cell, providing a simple, deterministic representative goal for the frontier region.
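The goal computation reduces to a few lines; the sketch below uses our own function name and grid-cell tuples:

```python
def frontier_goal(cluster):
    """Representative goal for a frontier cluster: the mean of the
    frontier cell coordinates, rounded to the nearest grid cell."""
    rows = [r for r, _ in cluster]
    cols = [c for _, c in cluster]
    return (round(sum(rows) / len(rows)), round(sum(cols) / len(cols)))
```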
Real-world deployments can fail even when the selected frontier is feasible in the grid map, for example due to tight passages, dynamic obstacles, or incomplete representation of certain structures. A lightweight unreachable-goal handler is therefore included: if the robot fails to make progress for longer than a configured timeout, the goal is abandoned and the corresponding frontier is stored as rejected. Future detected frontiers are compared against this rejected set using frontier length as a simple fingerprint within a tolerance, and matching frontiers are skipped to avoid repeated goal attempts. Two additional execution features support robust exploration runs: at the start of exploration, the robot performs a short initial “wiggle” motion to trigger an early map update before frontier selection begins, and when exploration terminates, the robot returns to the recorded home pose.
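The rejected-set matching can be sketched as follows. This is an illustrative reconstruction, not the deployed package: the class name, tolerance default, and method names are ours; only the idea of matching by frontier length within a tolerance comes from the description above.

```python
class RejectedFrontiers:
    """Skip frontiers that match previously abandoned goals, using
    frontier length (cluster size) as a simple fingerprint matched
    within an absolute tolerance."""

    def __init__(self, tolerance=2):
        self.tolerance = tolerance
        self._lengths = []          # fingerprints of rejected frontiers

    def reject(self, cluster):
        """Record a frontier whose goal was abandoned as unreachable."""
        self._lengths.append(len(cluster))

    def should_skip(self, cluster):
        """True if this frontier matches any rejected fingerprint."""
        return any(abs(len(cluster) - n) <= self.tolerance
                   for n in self._lengths)
```

A length-only fingerprint is deliberately cheap; it can produce false matches between distinct frontiers of similar size, which is the trade-off of such a lightweight scheme.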

2.6. System Setup

A simplified overview of the system architecture and principal software components with topics is shown in Figure 2. The incoming LiDAR point cloud is converted to a 2D LaserScan message for use with SLAM Toolbox. SLAM Toolbox uses this scan data together with odometry from the quadruped to estimate the robot pose and build a 2D occupancy grid map, which is provided to the navigation pipeline. Nav2 uses the occupancy map, the LaserScan and point cloud observations, and the quadruped odometry for localization, planning, and control. Goal poses are generated by the autonomous navigation package and sent to Nav2. The resulting velocity commands are published as Twist messages and executed by the quadruped base controller.

2.7. Experiment Description

All experiments were conducted in the Mechatronics Innovation Lab (MIL) in Grimstad, Norway, as part of a research project in collaboration with the University of Agder. The main experimental testing of WFD was performed in an office-like area consisting of an office room with desks and cabinets leading into a larger office space with seating and tables. The environment in the lab has a central corridor with several side rooms including narrow doorways, glass walls and windows, as well as metallic structures and pipes, creating occlusions and reflections that are challenging for LiDAR-based mapping. This layout provides a representative indoor scenario for testing localisation, path planning, and frontier-based exploration.
Sensor data from a single office exploration run were recorded to support practical verification and offline benchmarking of the WFD implementations. During this run, the robot periodically stopped to select a new frontier goal. At each such decision point, the current occupancy grid was exported and stored as a CSV file, resulting in a fixed set of map snapshots. The baseline and early-exit variants were evaluated offline by running each algorithm on every CSV snapshot. Each algorithm was executed 100 times per snapshot, and the repeated runs were used to compute summary statistics for comparison. All offline algorithm testing was performed on an external PC with an Intel Core i5-12450H (up to 4.4 GHz).
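The benchmarking loop can be sketched as below. This is a generic harness, not the project's benchmarking script; the function name and the returned statistics (median and 95th percentile, matching those reported in the abstract) are our choices:

```python
import statistics
import time

def benchmark(fn, snapshots, repeats=100):
    """Run fn(grid, start) `repeats` times on each occupancy-grid
    snapshot and summarise per-call latency in milliseconds."""
    samples = []
    for grid, start in snapshots:
        for _ in range(repeats):
            t0 = time.perf_counter()
            fn(grid, start)
            samples.append((time.perf_counter() - t0) * 1e3)
    samples.sort()
    p95 = samples[int(0.95 * (len(samples) - 1))]
    return {"median_ms": statistics.median(samples), "p95_ms": p95}
```

In the paper's setup, fn would be the early-exit or full-enumeration WFD variant and snapshots the recorded CSV occupancy grids paired with the robot cell at each decision point.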

3. Results

This section presents the experimental results for SLAM mapping quality, path planning, and obstacle avoidance with Nav2, as well as the autonomous frontier-based exploration tested at the MIL office environment compared to a baseline implementation of WFD. The project repository can be accessed at https://github.com/mil-as/quadruped_navigation_ros2 (accessed on 3 May 2025).

3.1. SLAM Mapping Evaluation

The map built with SLAM Toolbox is shown in Figure 3. This is a 2D map of the lab at MIL. The LiDAR struggles to treat windows as solid walls when the glass extends beyond the laser scan conversion range. This can be seen on the rightmost side of the figure, where the map extends into the area outside the building. The three rooms at the bottom have some undefined walls because pipes running alongside the walls block the LiDAR from detecting them. In addition, there are two notable wire mesh fences. Spot did not walk inside these fences but still managed to map the fence and most of the contents inside.
In this configuration, the occupancy grid map was generated with a resolution chosen to capture walls, doors, and furniture in the MIL lab with sufficient detail for navigation. The resulting map covers the entire area of the lab used in the experiments, including the main corridor and adjacent rooms.
To compare how accurate SLAM Toolbox is in terms of mapping a building, it was compared to the floor plan of MIL’s lab. Figure 4 shows the generated map compared to the floor plan. The figure shows that the map corresponds well to the lab floor plan. The main structural elements, such as walls, rooms, and corridors, align closely with the ground truth, and the largest discrepancies occur around large windows and occluded walls due to pipes and mesh fences.
Figure 5 shows how the resulting map fails to align with the ground truth without loop closure. This stresses the need to achieve loop closure for the map to be accurate, especially when covering longer paths and returning to previously visited areas.

3.2. Navigation and Path Planning Performance

Nav2 path planning, control, and obstacle avoidance were tested in the lab at MIL. Figure 6 shows the planned path. Here, it is clear that the path planner creates a path around the obstacles. When Spot reaches the goal pose, it needs to rotate 180°. Therefore, it turns and then reverses into the goal pose. During the test, it was observed that Spot manages to navigate over larger areas, but sometimes struggles with narrow passages, such as door frames where the door is partially closed. Spot would stop, calculate a new path, and then walk through. Figure 7 shows a longer path plan across the entire lab.
Across the performed trials, Spot consistently reached the goal without collisions, and the global and local planners successfully avoided static obstacles and most dynamic ones. The main limitations observed were short pauses and replanning events in narrow doorways or partially closed doors, where the robot needed extra time to find a feasible path through the gap.

3.3. HMI in Foxglove

Figure 8 shows the HMI developed using Foxglove. The interface allows the operator to monitor live sensor data, send goal positions to Spot, and view the robot’s camera streams in real time. Battery percentage and E-stop status are displayed on all screens, including an indication of whether a software or hardware E-stop is active. When operating on an offline map, the corresponding map can be loaded and visualised in the HMI’s offline map view. The service call view provides buttons for issuing ROS 2 service calls, including Motors ON, sit, stand, and roll over for battery change.

3.4. Autonomous Exploration Results

This section first summarises the behaviour of the full autonomous exploration run in the office environment, including the initial map update motion and the sequence of selected frontiers. It then reports offline benchmarking results comparing the baseline WFD implementation with the proposed early-exit variant on five recorded map snapshots from the same run. An initial “wiggle” motion was performed before exploration began to trigger an early map update. The maps before and after the initial motion are shown in Figure 9.
The exploration resulted in five frontier selections and lasted just under two minutes before Spot returned to the recorded home pose. This corresponds to exploring essentially all accessible free space in the two-room office area. No frontiers were abandoned during this run, as all selected goals were reachable. The parameters used for this exploration were a minimum frontier length of 22, a goal tolerance of 1 m, and a timeout of 20 s for the unreachable-goal handler.
Figure 10 shows the first three stages of exploration. The leftmost map shows the state immediately after the initial motion ended. The second shows the robot leaving the first room, and the third shows most of the area explored, with two remaining frontier candidates on either side of the robot. According to the traversable grid distance used in the software, the frontier marked in blue is closer to the robot position. The last frontier was discovered after exploring the yellow frontier, as shown in Figure 11.
No frontiers were rejected by the unreachable goal handler during the office run. However, during separate feature testing, the robot consistently abandoned unreachable goals after the configured timeout. When new frontiers were detected, previously rejected frontiers were successfully identified by the frontier-length fingerprint and skipped, as described in Section 2.
Figure 12 shows the return-to-home function. After the fifth frontier was explored, no new viable frontiers remained, and the robot therefore returned to its original start position.

3.5. WFD Standard and Proposed Implementation Benchmarking

The standard WFD and the implemented variant were evaluated using the dataset collected during the office experiments. In total, five frontier goals were found. The results indicate that the proposed practical implementation reduces the average TCVF computation time from 94.43 ms to 51.08 ms, corresponding to a 1.85-times improvement, as summarised in Table 1. As shown in Table 2, this reduction is consistent with the early-exit strategy, which searches substantially fewer cells than the standard implementation. During testing of both the custom and baseline implementations, a valid frontier was successfully identified in every iteration.
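As a standalone cross-check of the summary row in Table 1, the across-scenario speedup can be reproduced from the per-goal medians; this calculation is illustrative and not part of the released platform code:

```python
from statistics import mean

def p95(samples):
    """95th percentile via the nearest-rank method on sorted samples."""
    s = sorted(samples)
    k = max(0, min(len(s) - 1, round(0.95 * len(s)) - 1))
    return s[k]

# Per-goal median latencies from Table 1 (ms).
custom_medians = [11.84, 15.52, 24.94, 198.48, 4.63]
basic_medians = [43.21, 49.74, 100.38, 183.14, 95.70]

# Summary speedup, per the Table 1 notes: ratio of the means of the
# per-scenario medians, t̄_basic / t̄_custom.
speedup = mean(basic_medians) / mean(custom_medians)
print(round(speedup, 2))  # → 1.85
```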
As shown in Table 1, Goal 4 exhibits a longer computation time than the baseline method. To contextualize this result, worst-case computation times are reported in Table 3. For Goal 4, the search is initialized from a pose that is relatively far from any viable frontier, which reduces the benefit of the proposed early-termination strategy.
To further investigate this effect, the occupancy grids associated with the third, fourth, and fifth frontier-selection stages (Figure 10 and Figure 11) were evaluated under randomly sampled starting poses. In total, 100 random starting positions were generated, and each WFD implementation was executed 10 times per position. The resulting speedups are summarized in Table 4. When the start pose is near a feasible frontier, the custom implementation provides a consistent speedup across all three grids. For the fourth and fifth grids, starting positions far from any feasible frontier yield performance comparable to the baseline.

4. Discussion

The Results section highlights the impact of loop closure on mapping accuracy. While this is a well-established principle, the experiment reinforces that loop closure is required to reduce accumulated drift. For this reason, the autonomous navigation pipeline includes a return-to-home behaviour, which sends the robot back to the start location once no new frontiers remain, increasing the likelihood of loop closure.
A limitation of the system is reduced robustness in environments with large glazed surfaces, where LiDAR returns may pass through windows and introduce spurious structure outside the building. Projecting a 3D point cloud to a 2D laser scan mitigates this effect, but performance may still degrade when glazing exceeds the effective range of the scan projection.
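The mitigation can be illustrated with a minimal, hypothetical projection routine in the spirit of pointcloud_to_laserscan: points outside a height band are discarded, and each bearing bin keeps only the closest return. The function name and parameter values are illustrative, not taken from the released configuration:

```python
import math

def project_to_scan(points, z_min=-0.1, z_max=0.5, n_bins=360, max_range=15.0):
    """Project 3D points (x, y, z) in the sensor frame to a planar scan.

    Illustrative sketch only: points outside the [z_min, z_max] height band
    are dropped, and each bearing bin keeps the closest in-range return.
    """
    ranges = [math.inf] * n_bins
    for x, y, z in points:
        if not (z_min <= z <= z_max):
            continue  # outside the height band, e.g. returns above windows
        r = math.hypot(x, y)
        if r == 0.0 or r > max_range:
            continue
        angle = math.atan2(y, x)
        b = int((angle + math.pi) / (2 * math.pi) * n_bins) % n_bins
        ranges[b] = min(ranges[b], r)
    return ranges
```

Keeping the closest return per bearing is what suppresses spurious structure behind glazing, provided a nearer legitimate return exists within the band.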
The deployed system implements a variant of the common WFD pipeline that evaluates and filters frontier clusters on demand during the wavefront expansion, rather than fully enumerating all candidates before goal selection. Offline benchmarking on occupancy-grid snapshots from the office run shows that this variant reduces the TCVF compared to a full-enumeration WFD reference method, with a mean speedup of 1.85× across five scenarios and a mean reduction in searched cells from 18,218 to 6573.6. At the same time, the results highlight an important trade-off. When the closest viable frontier lies far from the robot in traversable distance, early termination provides limited benefit, and the additional on-demand processing can erase the latency advantage. This behaviour is observed for Goal 4 in Table 1: the early-exit variant searched 20,575 cells while the baseline searched 23,747, as shown in Table 2. Goal 4 therefore approximates a near-worst-case condition for the early-exit strategy: the first feasible frontier appears late in the wavefront search, and large rejected clusters are processed on demand before termination, so the early-exit overhead partially cancels the benefit of not enumerating all frontiers. Conversely, a noticeable speedup is observed when the search starts closer to the first valid frontier, as shown in Table 4. These findings support viewing early-exit WFD as a practical baseline variation rather than a uniformly superior frontier detector.
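The early-exit control flow can be sketched as follows, assuming a simple occupancy-grid convention (0 = free, 100 = occupied, −1 = unknown) and 4-connected BFS. The deployed implementation differs in data structures and filtering, but the key idea is the same: the wavefront search returns as soon as the first cluster meeting the length threshold is found, instead of enumerating all frontiers first:

```python
from collections import deque

FREE, OCC, UNK = 0, 100, -1  # assumed occupancy-grid cell values

def neighbors(cell, h, w):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < h and 0 <= c + dc < w:
            yield r + dr, c + dc

def is_frontier(grid, cell):
    # A frontier cell is free and borders at least one unknown cell.
    h, w = len(grid), len(grid[0])
    return grid[cell[0]][cell[1]] == FREE and any(
        grid[r][c] == UNK for r, c in neighbors(cell, h, w))

def extract_cluster(grid, seed, visited):
    # Flood-fill the connected frontier cluster containing `seed`.
    h, w = len(grid), len(grid[0])
    cluster, queue = [], deque([seed])
    visited.add(seed)
    while queue:
        cell = queue.popleft()
        cluster.append(cell)
        for n in neighbors(cell, h, w):
            if n not in visited and is_frontier(grid, n):
                visited.add(n)
                queue.append(n)
    return cluster

def closest_viable_frontier(grid, start, min_len=2):
    """BFS outward from `start` over free space; return the first frontier
    cluster meeting `min_len`, terminating the search immediately."""
    h, w = len(grid), len(grid[0])
    seen, cluster_seen = {start}, set()
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell not in cluster_seen and is_frontier(grid, cell):
            cluster = extract_cluster(grid, cell, cluster_seen)
            if len(cluster) >= min_len:
                return cluster  # early exit: closest viable frontier found
        for n in neighbors(cell, h, w):
            if n not in seen and grid[n[0]][n[1]] == FREE:
                seen.add(n)
                queue.append(n)
    return None  # no viable frontier remains
```

Because the BFS expands in traversable-distance order, the first cluster that passes the filter is the closest viable one, which is exactly the quantity TCVF measures. The worst case, as discussed above, is when this cluster is discovered late in the expansion.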
Although this work uses frontier length as the feasibility criterion, the same early-exit structure can accommodate alternative filters such as expected information gain without changing the overall pipeline. A practical extension is k-candidate early stopping, where the search terminates after collecting the first k feasible frontiers, enabling more advanced selection policies while still avoiding full frontier enumeration.
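The k-candidate extension can be sketched generically: given a detector that yields feasible frontiers in closest-first (wavefront) order, the first k candidates are collected and handed to an arbitrary scoring policy, so later frontiers are never enumerated. The function and its arguments are hypothetical, not part of the released code:

```python
from itertools import islice

def select_with_k_early_stop(feasible_frontiers, k, score):
    """Collect the first k feasible frontiers from an iterator that yields
    them in wavefront (closest-first) order, then pick the best-scoring one.

    `feasible_frontiers` stands in for the detector's output stream, and
    `score` for any selection policy (e.g. expected information gain).
    Returns None when no feasible frontier exists.
    """
    candidates = list(islice(feasible_frontiers, k))
    return max(candidates, key=score, default=None)
```

With k = 1 this degenerates to the greedy TCVF-oriented behaviour evaluated in this paper, while larger k trades some latency for a richer candidate pool.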
As a general-purpose baseline, the platform is intended to support both research and development across multiple applications. The current navigation stack uses SLAM Toolbox, which performs 2D SLAM. Consequently, mapping and navigation are constrained to mainly planar environments and are effectively limited to a single floor at a time. Although this reduces applicability in multi-level or highly uneven terrain, the 2D formulation improves portability to other quadrupeds and mobile robots by requiring less configuration and avoiding platform-specific treatment of stairs and complex terrain.
Despite this 2D limitation, the quadruped platform remains suitable for a wide range of investigations, including evaluation of advanced algorithms for path planning, SLAM accuracy, sensor fusion, and techniques for mitigating locomotion-induced disturbances in state estimation. More broadly, a reusable and deployable baseline helps enable comparative, system-level research, reducing reliance on narrowly scoped subfield contributions that are difficult to transfer across platforms and tasks.

5. Conclusions

This paper presents a modular, ROS 2-based quadruped navigation and sensing platform assembled from off-the-shelf components, addressing the identified need for general and reusable solutions in quadruped research. The platform integrates and validates widely adopted open-source components for mapping and navigation on commodity hardware, providing a reproducible and deployable stack that lowers the barrier to entry for future research and development across multiple application domains. Building on this foundation, a practical greedy wavefront frontier detection (WFD) implementation is introduced and evaluated against a standard baseline.
On this platform, a greedy WFD-based autonomous exploration strategy is implemented that prioritizes fast selection of the closest viable frontier. Experimental results on a real-robot dataset show that the implementation reduces the Time-to-Closest-Viable-Frontier (TCVF) by 1.85 times compared to a baseline WFD implementation: the average computation time decreases from 94.31 ms to 51.08 ms, and the 95th-percentile latency decreases from 109.54 ms to 56.99 ms. The results show that the proposed early-exit WFD implementation can improve frontier-selection responsiveness in many practical cases, while its benefit decreases in scenarios in which feasible frontiers appear late in the search.

Author Contributions

Conceptualization, J.S.; supervision, J.S. and I.T.; data curation, E.O.B., V.B. and E.D.; investigation, E.O.B., V.B. and E.D.; methodology, E.O.B., V.B. and E.D.; software, V.B. and E.D.; validation, E.O.B., V.B. and E.D.; visualisation, middleware configuration and hardware setup, E.O.B.; frontier exploration development, V.B.; mapping and ROS 2 navigation, E.D.; writing—original draft, E.O.B., V.B. and E.D.; writing—review and editing, all authors. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The project repository can be accessed at: https://github.com/mil-as/quadruped_navigation_ros2 (accessed on 3 May 2025). The data used for the results are available from the corresponding author upon request.

Acknowledgments

During the preparation of this manuscript/study, the authors used ChatGPT 5.2 for the purposes of grammar correction and structural enhancement. The authors have reviewed and edited the output and take full responsibility for the content of this publication. This paper is based on a collaborative project between MIL and the University of Agder, Department of Engineering Sciences.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Overview of the system components and their main functionality.
Figure 2. Simplified overview of the system architecture and principal software components.
Figure 3. SLAM map from SLAM Toolbox, including labelled artifacts caused by common environmental elements such as fences, glass windows, and clutter.
Figure 4. Floor plan of the lab overlaid with the SLAM Toolbox map; the run with loop closure is shown in blue.
Figure 5. Floor plan of the lab overlaid with the SLAM Toolbox map; the run without loop closure is shown in red.
Figure 6. Nav2-generated path for Spot to follow, published on the /path topic.
Figure 7. Longer Nav2-generated path for Spot to follow, published on the /path topic, with the local costmap shown in cyan and magenta.
Figure 8. Foxglove HMI with sensor data from Spot.
Figure 9. Map before and after the initial map-update motion.
Figure 10. Progression of the first three frontiers.
Figure 11. Progression of the last two frontiers.
Figure 12. Return-to-home function.
Table 1. Time-to-Closest-Viable-Frontier (TCVF) latency across five goal scenarios (100 repeats each). The final row is an across-scenario summary computed from per-scenario statistics.

| Scenario     | Custom Median [ms] | Custom p95 [ms] | Basic (Baseline) Median [ms] | Basic (Baseline) p95 [ms] | Speedup [×] |
|--------------|--------------------|-----------------|------------------------------|---------------------------|-------------|
| Goal 1       | 11.84              | 12.72           | 43.21                        | 45.48                     | 3.65        |
| Goal 2       | 15.52              | 26.50           | 49.74                        | 98.36                     | 3.20        |
| Goal 3       | 24.94              | 28.38           | 100.38                       | 107.96                    | 4.02        |
| Goal 4       | 198.48             | 212.35          | 183.14                       | 190.23                    | 0.92        |
| Goal 5       | 4.63               | 5.05            | 95.70                        | 105.67                    | 20.68       |
| Goal average | 51.08              | 56.99           | 94.43                        | 109.54                    | 1.85        |

Notes: Speedup in the summary row is computed as t̄_basic / t̄_custom, where t̄ is the mean of the per-scenario medians. p95 denotes the 95th percentile of the latency distribution within each scenario.
Table 2. Search effort for TCVF selection across five goal scenarios (100 repeats each). The final row summarizes per-scenario medians using a mean across scenarios.

| Scenario     | Cells Searched, Custom (Median) | Cells Searched, Basic (Median) | Frontier Cells, Custom (Median) | Frontier Cells, Basic (Median) |
|--------------|---------------------------------|--------------------------------|---------------------------------|--------------------------------|
| Goal 1       | 2151                            | 8379                           | 143                             | 464                            |
| Goal 2       | 3296                            | 10,971                         | 64                              | 322                            |
| Goal 3       | 6118                            | 22,862                         | 42                              | 465                            |
| Goal 4       | 20,575                          | 23,747                         | 697                             | 760                            |
| Goal 5       | 728                             | 25,131                         | 54                              | 118                            |
| Goal average | 6573.6                          | 18,218                         | 200                             | 425.8                          |

Notes: “Total frontier cells” is the total number of frontier cells detected in the map at the time a viable frontier is selected.
Table 3. Worst-case time to first viable frontier (TCVF) selection across five goal scenarios (100 repeats each).

| Scenario | Custom Worst-Case [ms] | Basic Worst-Case [ms] |
|----------|------------------------|-----------------------|
| Goal 1   | 15.17                  | 47.17                 |
| Goal 2   | 36.54                  | 141.23                |
| Goal 3   | 35.72                  | 112.88                |
| Goal 4   | 235.57                 | 203.17                |
| Goal 5   | 65.67                  | 122.93                |

Notes: “Worst-case time” is the maximum observed time over the 100 repeats for each goal.
Table 4. Speedup summary for three frontier-selection stages across all evaluated positions (100 randomly generated positions, 10 repeats at each position). For each frontier stage, the mean speedup, best (max) speedup, and worst (min) speedup are reported, where speedup is defined as Basic / Custom.

| Frontier Stage | Mean Speedup | Best Speedup | Worst Speedup |
|----------------|--------------|--------------|---------------|
| 3              | 9.24         | 71.94        | 2.18          |
| 4              | 1.37         | 3.30         | 0.92          |
| 5              | 2.84         | 30.50        | 0.99          |

Notes: Speedup is computed per position as Basic / Custom. “Best” and “Worst” are the maximum and minimum observed speedups across positions, respectively.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
