Article

InCrowd-VI: A Realistic Visual–Inertial Dataset for Evaluating Simultaneous Localization and Mapping in Indoor Pedestrian-Rich Spaces for Human Navigation

by Marziyeh Bamdad 1,2,*, Hans-Peter Hutter 1 and Alireza Darvishy 1
1 Institute of Computer Science, Zurich University of Applied Sciences, 8400 Winterthur, Switzerland
2 Department of Informatics, University of Zurich, 8050 Zurich, Switzerland
* Author to whom correspondence should be addressed.
Sensors 2024, 24(24), 8164; https://doi.org/10.3390/s24248164
Submission received: 21 November 2024 / Revised: 16 December 2024 / Accepted: 20 December 2024 / Published: 21 December 2024
(This article belongs to the Section Sensors and Robotics)

Abstract:
Simultaneous localization and mapping (SLAM) techniques can support navigation for visually impaired people, but the development of robust SLAM solutions for crowded spaces is limited by the lack of realistic datasets. To address this, we introduce InCrowd-VI, a novel visual–inertial dataset specifically designed for human navigation in indoor pedestrian-rich environments. Recorded using Meta Aria Project glasses, it captures realistic scenarios without environmental control. InCrowd-VI features 58 sequences totaling 5 km of trajectory and 1.5 h of recording time, including RGB images, stereo images, and IMU measurements. The dataset captures important challenges such as pedestrian occlusions, varying crowd densities, complex layouts, and lighting changes. Ground-truth trajectories, accurate to approximately 2 cm, are provided in the dataset, derived from the Meta Aria project machine perception SLAM service. In addition, a semi-dense 3D point cloud of the scene is provided for each sequence. The evaluation of state-of-the-art visual odometry (VO) and SLAM algorithms on InCrowd-VI revealed severe performance limitations in these realistic scenarios. Under challenging conditions, systems exceeded the 0.5 m localization error target and the 1% drift threshold, with classical methods showing drift of up to 5–10%. While deep learning-based approaches maintained high pose estimation coverage (>90%), they failed to achieve the real-time processing speeds necessary for walking-pace navigation. These results demonstrate the need for and value of a new dataset to advance SLAM research for visually impaired navigation in complex indoor environments.

1. Introduction

Navigation in crowded indoor public spaces presents major challenges for blind and visually impaired (BVI) individuals. Systems that support this navigation require real-time user localization, detailed environmental mapping, and enhanced spatial awareness. Robust solutions are necessary to cope with unfamiliar settings and provide safe and more independent mobility for people with visual disabilities. Simultaneous localization and mapping (SLAM) [1] offers promising capabilities for addressing these requirements. However, several hurdles must be overcome in order to make SLAM viable for visually impaired navigation, particularly in crowded public spaces. These settings are characterized by unpredictable pedestrian movements, varying lighting conditions, and reflective and transparent surfaces. Such dynamic and complex environments complicate reliable navigation significantly.
Although existing SLAM research has made significant advancements in domains such as robotics [2,3], autonomous driving [4], and aerial vehicles [5,6], these approaches do not adequately address the specific challenges of pedestrian-rich indoor navigation for the visually impaired.
The lack of realistic datasets for human navigation in crowded environments has been a significant barrier to the development of robust SLAM systems tailored for visually impaired navigation. Current datasets have several key limitations:
  • Most existing datasets focus on vehicle-based scenarios or controlled environments with limited dynamic elements.
  • Available pedestrian-focused datasets like ADVIO and BPOD lack the combination of high crowd density and accurate ground truth needed for robust SLAM evaluation.
  • Datasets that do include crowded scenes, such as NAVER LABS, are typically collected from fixed or robotic platforms that do not capture natural human motion patterns.
  • Existing datasets do not comprehensively address the challenges of indoor navigation, such as varying crowd densities, motion transitions, and complex architectural features.
To address these gaps, we introduce InCrowd-VI, a visual–inertial dataset specifically designed for SLAM research on human navigation in crowded indoor environments. Unlike existing datasets, InCrowd-VI captures sequences recorded at diverse indoor public locations, such as airports, train stations, museums, university labs, shopping malls, and libraries, representing realistic human motion patterns at typical walking speeds. The recorded sequences feature diverse settings, including varying crowd densities (from pedestrian-rich to static environments) and complex architectural layouts such as wide-open spaces, narrow corridors, (moving) ramps, staircases, and escalators. They present various challenges characteristic of real-world indoor spaces, including frequent occlusions by pedestrians, variations in lighting conditions, and the presence of highly reflective surfaces. The dataset was collected with Meta Aria Project glasses worn by a walking person in pedestrian-rich environments, and thus incorporates realistic human motion, behavior, and interaction patterns.
The dataset has a total trajectory length of 4998.17 m, with a total recording time of 1 h, 26 min, and 37 s. This dataset provides RGB images, stereo images, and IMU measurements. In addition, it includes a semi-dense 3D point cloud of scenes for further analysis. The ground-truth trajectories were provided by the Meta Aria project machine perception SLAM service [7], which offers a reliable benchmark for evaluating the accuracy of the SLAM algorithms.
To demonstrate the value of InCrowd-VI, several state-of-the-art classical and deep learning-based approaches for visual odometry (VO) and SLAM systems were evaluated. The analysis revealed the severe performance degradation of these systems in crowded scenarios, large-scale environments, and challenging light conditions, highlighting the key challenges and opportunities for future research to develop more robust SLAM solutions for visually impaired navigation.
The contributions of this paper are as follows:
  • Introduction of InCrowd-VI, a novel visual–inertial dataset specifically designed for human navigation in indoor pedestrian-rich environments, filling a critical gap in existing research resources.
  • Provision of ground-truth data, including accurate trajectories (approximately 2 cm accuracy) and semi-dense 3D point clouds for each sequence, enabling rigorous evaluation of SLAM algorithms.
  • Evaluation of state-of-the-art visual odometry and SLAM algorithms using InCrowd-VI, revealing their limitations in realistic crowded scenarios.
  • Identification of crucial areas for improvement in SLAM systems designed for visually impaired navigation in complex indoor environments.

2. Related Work

Evaluation of visual SLAM systems requires comprehensive datasets that capture the complexity and variability of real-world environments. The existing benchmark datasets for SLAM can be categorized according to their operational domains. Depending on the domain, different sensory data and varying degrees of ground-truth accuracy have been provided [8]. Various datasets have been proposed for different operational domains, each with distinct sensor platforms, settings, and challenges. This section reviews state-of-the-art datasets, highlighting their characteristics and limitations in comparison with the newly proposed InCrowd-VI dataset. Datasets from robotics and autonomous systems, as well as those focused on pedestrian odometry, are examined to assess their applicability to BVI navigation challenges. Table 1 provides an overview of these datasets, summarizing their key features and comparing them to InCrowd-VI.

2.1. Robotics and Autonomous Systems

Datasets play a crucial role in advancing SLAM research in various domains. The KITTI dataset [9], which is pivotal for autonomous vehicle research, has limited applicability to indoor pedestrian navigation because it focuses on outdoor environments. Similarly, datasets like EuRoC MAV [10], TUM VI [14], and Zurich Urban [12], although offering high-quality visual–inertial data, do not fully capture the challenges of indoor pedestrian navigation in crowded environments. The CID-SIMS dataset [20], recorded from a ground-wheeled robot, provides IMU and wheel odometer data with semantic annotations and an accurate ground truth. However, it lacks the complexity of pedestrian-rich environments. VIODE [17] offers a synthetic dataset from simulated UAV navigation in dynamic environments; however, synthetic data cannot fully replace real-world data, particularly for safety-critical applications [24,25]. The ConsInv dataset [19] evaluates SLAM systems with controlled dynamic elements. However, its controlled nature fails to capture the complexity of human-crowded public spaces. Moreover, its ground-truth method, which uses ORB-SLAM2 with masked dynamic objects, does not accurately represent the challenges faced by SLAM systems in crowded real-world settings.

2.2. Pedestrian Odometry Dataset

The Zena dataset [21] provides laser scan data and IMU measurements from helmet- and waist-mounted sensors to support research on human localization and mapping in outdoor scenarios. However, the lack of visual data limits its utility in the assessment of visual SLAM systems. The Brown Pedestrian Odometry Dataset (BPOD) [23] provides real-world data from head-mounted stereo cameras in diverse indoor and outdoor environments. Although it captures challenges such as rapid head movement and image blur, BPOD does not specifically focus on crowded indoor environments. ADVIO [22] is a dataset for benchmarking visual–inertial odometry using smartphone sensors in various indoor and outdoor paths. Although valuable for general visual–inertial odometry, the use of handheld devices limits its ability to capture natural head movements and gait patterns, which are crucial for realistic navigation scenarios. Additionally, ADVIO’s ground truth, which is based on inertial navigation with manual position fixes, may have accuracy limitations.
Although these datasets provide valuable resources for evaluating SLAM systems, they do not fully address the specific challenges of human navigation in indoor pedestrian-rich environments. InCrowd-VI fills critical gaps by providing the following:
  • Natural human motion patterns captured through head-worn sensors represent visually impaired navigation scenarios better than the data collected using robotic or handheld devices in existing datasets.
  • Comprehensive coverage of crowd densities ranging from empty to heavily crowded, allowing systematic evaluation of SLAM performance under varying levels of dynamic obstacles.
  • Realistic indoor scenarios, including challenging architectural features and environmental conditions that are critical for practical navigation applications.
  • High-quality ground-truth trajectories even in crowded scenes, enabling precise evaluation of SLAM performance in challenging dynamic environments.
  • Long-duration sequences that test system stability and drift in real-world navigation scenarios.

3. InCrowd-VI Dataset

The InCrowd-VI dataset comprises 58 sequences with a total trajectory length of 4998.17 m and a total recording time of 1 h, 26 min, and 37 s. The dataset consists of approximately 1 TB of extracted data and approximately 100 GB of raw data. Sequences were captured across diverse indoor environments, including airports, train stations, museums, shopping malls, and university buildings, labs, and libraries. The dataset is categorized by crowd density (High: >10 pedestrians per frame; Medium: 4–10 pedestrians; Low: 1–3 pedestrians; and None: no pedestrians). It includes environmental challenges, such as lighting variations and reflective surfaces. Individual sequence lengths range from 12.20 m to 348.84 m, with recording durations spanning from 34 s to 5 min and 50 s. Additional detailed descriptions of each sequence are documented on the dataset website. The dataset and associated tools are publicly available at https://incrowd-vi.cloudlab.zhaw.ch/ (accessed on 20 November 2024).
This section presents the InCrowd-VI dataset, specifically developed for evaluating SLAM in indoor pedestrian-rich environments for human navigation. The sensor framework used for data collection is first described, followed by an outline of the methodology employed to create the dataset. The process of obtaining and validating the ground-truth data is then explained. Finally, the captured sequences and the various challenges they represent are detailed.

3.1. Sensor Framework

The choice of platform for data collection was determined on the basis of the intended application of the SLAM system to be evaluated using the dataset. In the context of visually impaired navigation, wearable platforms are particularly appropriate because they can effectively capture human motion patterns during movement. Consequently, we employed a head-worn platform to collect the data. Head-mounted devices offer the advantage of capturing a forward-facing view, which is crucial for navigation, and more accurately represent how a visually impaired individual might scan their environment, including natural head movements and areas of focus. In this study, we utilized Meta Aria glasses as our sensor platform.
The Meta Aria glasses feature five cameras, including two monochrome scene cameras with minimal overlap and a large field of view, one RGB camera, and two eye-tracking (ET) cameras (https://facebookresearch.github.io/projectaria_tools/docs/tech_spec/hardware_spec, accessed on 4 September 2024). Additionally, the glasses are equipped with several non-visual sensors, including two Inertial Measurement Units (IMUs), a magnetometer, a barometer, a GPS receiver, and both Wi-Fi and Bluetooth beacons. The glasses also include a seven-channel spatial microphone array with a 48 kHz sampling rate, which can be configured to operate in stereo mode with two channels. It should be noted that the InCrowd-VI dataset includes only the RGB and monochrome camera images as well as the IMU measurements; other sensor data are not included. One of the key features of Meta Aria glasses is their support for multiple recording profiles, allowing users to select which sensors to record with and configure their settings accordingly. This flexibility makes these glasses particularly suited for diverse experimental conditions and requirements. Table 2 summarizes the specifications of the five cameras on Aria glasses.

3.1.1. Sensor Calibration

The Meta Aria glasses used in our dataset underwent rigorous factory calibration for both intrinsic and extrinsic parameters. The factory calibration provides fundamental sensor parameters:

Camera Intrinsics [27]

The Meta Aria camera intrinsic model establishes a mapping between 3D world points in camera coordinates and their corresponding 2D pixels on the sensor. For the FisheyeRadTanThinPrism model, Meta Aria bases this mapping on the polar coordinates of 3D world points. The mapping begins with a 3D world point in the device frame $P_d$, which is transformed into the camera’s local frame through the following:

$$P_c = (x, y, z) = T_{\mathrm{device}}^{\mathrm{camera}}\, P_d$$

The resulting polar coordinates $\Phi = (\theta, \phi)$ satisfy the following:

$$x/z = \tan(\theta)\cos(\phi)$$
$$y/z = \tan(\theta)\sin(\phi)$$

These coordinates then map to a 2D pixel through the projection mapping

$$p = f(\Phi)$$

with its inverse projection defined as follows:

$$\Phi = f^{-1}(p)$$

The FisheyeRadTanThinPrism model used by Meta Aria extends beyond basic projection by incorporating thin-prism distortion ($tp$) on top of the Fisheye62 model. The complete projection function takes the form:

$$u = f_x \cdot \big(u_r + t_x(u_r, v_r) + tp_x(u_r, v_r)\big) + c_x$$
$$v = f_y \cdot \big(v_r + t_y(u_r, v_r) + tp_y(u_r, v_r)\big) + c_y$$

In these projection equations, $(u_r, v_r)$ are the radially distorted coordinates, $(f_x, f_y)$ are the focal lengths, and $(c_x, c_y)$ denote the principal points. The terms $t_x, t_y$ account for tangential distortion, while $tp_x, tp_y$ represent the thin-prism distortion components.
The Meta Aria model defines the thin-prism distortion through the following:

$$tp_x(u_r, v_r) = s_0\, r(\theta)^2 + s_1\, r(\theta)^4$$
$$tp_y(u_r, v_r) = s_2\, r(\theta)^2 + s_3\, r(\theta)^4$$

where the coefficients $s_0$, $s_1$, $s_2$, and $s_3$ serve as the thin-prism distortion parameters.
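To make the projection pipeline concrete, the following Python sketch implements a simplified fisheye projection with radial, tangential, and thin-prism terms in the spirit of the equations above. The coefficient values and the truncated two-term radial polynomial are illustrative assumptions, not the Aria factory calibration (the full Fisheye62 model uses six radial coefficients).

```python
import numpy as np

def project_fisheye_rad_tan_thin_prism(P_c, fx, fy, cx, cy,
                                        k=(0.0, 0.0),             # radial coefficients (truncated for brevity)
                                        p=(0.0, 0.0),             # tangential coefficients
                                        s=(0.0, 0.0, 0.0, 0.0)):  # thin-prism coefficients s0..s3
    """Project a 3D point in the camera frame to a 2D pixel (simplified sketch)."""
    x, y, z = P_c
    # Polar angles of the viewing ray: x/z = tan(theta)cos(phi), y/z = tan(theta)sin(phi)
    theta = np.arctan2(np.hypot(x, y), z)
    phi = np.arctan2(y, x)

    # Fisheye radial model: distorted radius as a polynomial in theta
    r = theta * (1.0 + k[0] * theta**2 + k[1] * theta**4)
    u_r, v_r = r * np.cos(phi), r * np.sin(phi)

    # Tangential distortion t_x, t_y (standard Brown-Conrady form)
    r2 = u_r**2 + v_r**2
    t_x = 2.0 * p[0] * u_r * v_r + p[1] * (r2 + 2.0 * u_r**2)
    t_y = p[0] * (r2 + 2.0 * v_r**2) + 2.0 * p[1] * u_r * v_r

    # Thin-prism distortion: tp_x = s0*r^2 + s1*r^4, tp_y = s2*r^2 + s3*r^4
    tp_x = s[0] * r2 + s[1] * r2**2
    tp_y = s[2] * r2 + s[3] * r2**2

    # Final pixel coordinates
    u = fx * (u_r + t_x + tp_x) + cx
    v = fy * (v_r + t_y + tp_y) + cy
    return u, v
```

In practice, the inverse mapping $f^{-1}$ (undistortion) has no closed form and is typically computed iteratively or via lookup tables.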

IMU Calibration [28]

The Meta Aria project employs an affine model for IMU calibration, where the raw sensor readouts from both accelerometer and gyroscope are compensated to obtain the actual acceleration and angular velocity measurements. For the accelerometer measurements, the compensation is defined as follows:
$$a = M_a^{-1}(s_a - b_a)$$

Similarly, for the gyroscope measurements:

$$\omega = M_g^{-1}(s_g - b_g)$$

where $s_a$ and $s_g$ represent the raw sensor readouts from the accelerometer and gyroscope, respectively, $M_a$ and $M_g$ are scale matrices assumed to be upper triangular to maintain no global rotation from the IMU body frame to the accelerometer frame, and $b_a$ and $b_g$ are the bias vectors.
The inverse of these compensation equations defines how the sensor readouts relate to the actual physical quantities:

$$s_a = M_a\, a + b_a$$
$$s_g = M_g\, \omega + b_g$$
The Meta Aria IMU system implements specific saturation limits to handle measurement bounds. These limits vary between the left and right IMUs, with the accelerometers bounded at 4g and 8g, respectively, and the gyroscopes at 5000 and 1000 degrees per second, respectively.
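As an illustration of the affine compensation model, the sketch below applies the rectification to raw readouts. The scale matrices and bias vectors shown are placeholders standing in for the factory calibration values shipped with each device.

```python
import numpy as np

# Placeholder calibration values (the real ones come from the device's factory calibration)
M_a = np.array([[1.01, 0.002, 0.001],
                [0.0,  0.99,  0.003],
                [0.0,  0.0,   1.00]])   # accelerometer scale/misalignment (upper triangular)
b_a = np.array([0.05, -0.02, 0.01])     # accelerometer bias [m/s^2]
M_g = np.eye(3)                         # gyroscope scale matrix
b_g = np.array([0.001, 0.0, -0.002])    # gyroscope bias [rad/s]

def compensate_imu(s_a, s_g):
    """Apply the affine IMU model: a = M_a^-1 (s_a - b_a), w = M_g^-1 (s_g - b_g)."""
    a = np.linalg.solve(M_a, s_a - b_a)       # actual acceleration
    omega = np.linalg.solve(M_g, s_g - b_g)   # actual angular velocity
    return a, omega

def simulate_readout(a, omega):
    """Inverse model: s_a = M_a a + b_a, s_g = M_g w + b_g."""
    return M_a @ a + b_a, M_g @ omega + b_g
```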

Extrinsic Calibration

The Meta Aria glasses provide factory-calibrated extrinsic parameters that model the 6-DoF pose among the sensors. The extrinsic calibration expresses the relative poses between sensors using SE(3) Lie group representations [29], following the Hamilton quaternion convention. The device frame serves as the reference coordinate system and is aligned by default with a left stereo camera. For cameras, the local coordinate frame is defined based on the optical axis and the entrance pupil of the lens, with its origin at the center of the entrance pupil. For IMUs, the coordinate systems have their origins at the position of the accelerometer, oriented along the accelerometer’s sensitive axis, and orthogonalized to compensate for the sensor orthogonality error. The Meta Aria project assumes that the accelerometer and gyroscope for each IMU are co-located; thus, they share the same extrinsic parameters, which simplifies the sensor fusion process. These calibration parameters remain fixed because they are determined during factory calibration and are provided as part of the calibration data of the device. This factory-provided calibration ensures reliable spatial relationships between sensors, which is crucial for accurate visual–inertial data collection and subsequent SLAM processing.
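A minimal sketch of how such factory extrinsics can be composed is shown below: given the device-to-camera and device-to-IMU poses (each a rotation expressed as a Hamilton quaternion plus a translation), the IMU-to-camera transform follows from one inversion and one composition. The quaternion and translation values here are illustrative placeholders, not Aria calibration data.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def make_se3(quat_wxyz, t):
    """Build a 4x4 SE(3) matrix from a Hamilton quaternion (w, x, y, z) and a translation."""
    T = np.eye(4)
    # SciPy expects (x, y, z, w) ordering and normalizes the quaternion
    T[:3, :3] = R.from_quat([quat_wxyz[1], quat_wxyz[2], quat_wxyz[3], quat_wxyz[0]]).as_matrix()
    T[:3, 3] = t
    return T

# Placeholder factory extrinsics expressed in the device frame
T_device_cam = make_se3((1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 0.0))      # device frame aligned with left camera
T_device_imu = make_se3((0.999, 0.01, 0.02, 0.03), (0.02, -0.01, 0.005))

# Relative pose needed by visual-inertial pipelines: left IMU expressed in the left camera frame
T_cam_imu = np.linalg.inv(T_device_cam) @ T_device_imu
```

The same kind of composition underlies the left IMU-to-left camera transformation used in the evaluation in Section 4.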

3.2. Methodology

The data collection process was carefully designed to capture the diverse challenges of real-world indoor navigation. The strength of the dataset lies in its comprehensive representation of these challenges, which include frequent pedestrian occlusions, varying crowd densities, complex architectural layouts, wide-open spaces, narrow corridors, (moving) ramps, staircases, escalators, texture-poor scenes, lighting variations, and highly reflective surfaces. These environments range from densely populated areas with more than 10 pedestrians per frame to empty cluttered spaces, offering a wide variety of scenarios for evaluating SLAM systems.
During data collection, tactile paving (textured ground surfaces that BVI individuals can feel with their feet and cane) was followed where available, to mimic the walking patterns of visually impaired individuals and to enable later visual inspection of the accuracy of 3D point clouds, particularly in high-traffic areas. Data collection was conducted with the necessary permission from the relevant authorities, ensuring that ethical considerations were addressed.
To ensure the authenticity of the pedestrian movement patterns, we prioritized capturing organic and unscripted interactions during data collection. Rather than staging specific scenarios or providing explicit instructions to pedestrians, we adopted an observational approach that allowed for natural human behavior. Data were collected from real-world public spaces, including train stations, airports, shopping malls, and university campuses, during typical hours of pedestrian activity. Participants wearing Meta Aria glasses were instructed to navigate these spaces as they normally would, without any artificial constraints on their movement or interactions with the environment. This approach ensured that pedestrian movements were spontaneous and not artificially controlled, while the dataset captured genuine navigation challenges, including unpredictable pedestrian interactions. Walking speeds and movement patterns reflected real-world variability, with an average walking speed of 0.75 m/s, further emphasizing the natural navigation patterns observed in crowded spaces. By allowing pedestrians to move freely and capturing their interactions, we ensured that the dataset reflected the inherent complexity and dynamism of indoor human navigation.

3.3. Ground Truth

The ground-truth trajectory and 3D reconstruction of the scene were produced using the Meta Aria machine perception SLAM service [7]. This service provides device trajectories generated by state-of-the-art VIO and SLAM systems, followed by offline post-processing and refinement. It uses multiple sensors to improve accuracy and robustness, taking advantage of the precise knowledge of the sensor models, timing, and rigidity of Meta Aria devices. This allows for robust localization even under challenging real-world conditions such as fast motion, low or highly dynamic lighting, partial or temporary occlusion of cameras, and a wide range of static and dynamic environments.
Although the Meta Aria machine perception SLAM service achieves high accuracy, it does so under conditions that are not feasible for real-time visually impaired navigation: it operates offline with server-side processing, utilizes the full sensor suite, and performs extensive post-processing. By contrast, practical navigation systems for the visually impaired must operate in real time on resource-constrained wearable devices, provide immediate reliable feedback, and maintain robustness without the benefit of post-processing or cloud computation.
The trajectories produced by the Meta SLAM service have a typical global RMSE translation error of no more than 1.5 cm in room-scale scenarios. Additionally, the Meta SLAM service provides a semi-dense point cloud of the scene, accurately reconstructing the static portion of the environment, even in highly dynamic and challenging situations [7].
The accuracy of this ground truth was further validated through manual measurements in several challenging scenarios, with results indicating a mean absolute error of approximately 2 cm, aligning with the reported accuracy of the Meta SLAM service. To validate the ground truth, a method was used that leverages the core principle of SLAM systems: the joint optimization of the 3D map and camera trajectory [30]. This approach involved identifying easily recognizable landmarks across the trajectory in both the real-world environment and semi-dense map. The 3D coordinates of these landmarks were recorded from the point cloud and the distances between them were calculated using the Euclidean distance formula. These calculated distances were then compared with actual measurements taken in the real world, allowing for a direct assessment of the accuracy of the map and reliability of the estimated trajectory. Figure 1 illustrates an example of the manual measurement process.
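The distance check described above reduces to a few lines of code. The landmark coordinates and measured distances below are invented placeholders standing in for points picked from the semi-dense point cloud and the corresponding on-site measurements.

```python
import numpy as np

# Landmark pairs: 3D coordinates picked from the point cloud (meters) and the
# corresponding real-world distances measured on site (placeholder values)
landmark_pairs = [
    (np.array([1.20, 0.35, 4.10]), np.array([1.80, 0.36, 4.12]), 0.60),  # adjacent floor-tile corners
    (np.array([0.00, 1.10, 6.50]), np.array([0.92, 1.08, 6.55]), 0.90),  # door-frame width
]

errors = []
for p1, p2, measured in landmark_pairs:
    estimated = np.linalg.norm(p1 - p2)   # Euclidean distance in the reconstruction
    errors.append(abs(estimated - measured))

print(f"mean abs error: {np.mean(errors) * 100:.1f} cm, max: {np.max(errors) * 100:.1f} cm")
```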
Initially, manual measurements were conducted on selected crowded sequences. Following the evaluation of the state-of-the-art VO and SLAM systems presented in Section 4.2, we conducted additional manual measurements, specifically focusing on sequences where all systems exhibited failure or suboptimal performance, according to the metrics defined in Section 4.1. We used a variety of objects in each sequence, such as floor tiles, doors, advertising boards, and manhole covers, distributed across different spatial orientations throughout the trajectories to ensure robust validation. Figure 2 presents the relationship between real-world and measured distances from the second round of manual measurements. The strong linear correlation (indicated by the red trend line) and tight clustering of points around this line demonstrates that the Meta SLAM service maintains its accuracy even in challenging scenarios where contemporary SLAM systems struggle. The plot, incorporating measurements ranging from 30 cm to over 250 cm, shows that the reconstruction accuracy remained stable regardless of the measured distance, with deviations typically within 2 cm of the expected values.
It is important to note that the manual measurement process itself introduces some level of error owing to the challenges in precisely identifying corresponding points. Despite this, the observed errors, which are slightly higher than the reported typical error, remain within a reasonable range, suggesting that the Meta SLAM service performs adequately in this specific scenario. In addition to quantitative metrics, a qualitative visual inspection of the estimated trajectories and maps was performed. This involved assessing the alignment of landmarks, plausibility of trajectories, and removal of moving pedestrians in the scene. Figure 3 illustrates the capability of the Meta Aria machine perception SLAM service to handle dynamic objects by showcasing a scenario where a pedestrian initially appears static relative to the camera while on an escalator but subsequently becomes dynamic. The image presents the refined 3D reconstruction produced by the SLAM service, which successfully identifies and removes dynamic pedestrians from the final point cloud, leaving only static elements of the environment. The dynamic object removal process improves the accuracy of 3D map reconstruction and trajectory estimation by ensuring that moving and temporarily stationary objects are not incorporated into the static environment representation or ground truth.

3.4. Sequences and Captured Challenges

Each dataset sequence in InCrowd-VI includes timestamped RGB images at a resolution of 1408 × 1408 pixels and 30 FPS, stereo pair images at a resolution of 640 × 480 pixels and 30 FPS, two streams of IMU data at rates of 1000 and 800 Hz, semi-dense point clouds, and accurate trajectory ground truth. Although Meta Aria glasses provide data from multiple sensors, not all of them are included in the dataset, as they were not essential for the focus of this dataset. An example of the image data and the corresponding 3D map of the scene is shown in Figure 4.
The dataset sequences are organized according to different levels of pedestrian density to facilitate a thorough evaluation of the SLAM algorithms. These levels are categorized as follows: High (more than 10 pedestrians per frame), Medium (4 to 10 pedestrians per frame), Low (1 to 3 pedestrians per frame), and None (no pedestrians present in the scene). In some sequences, severe occlusion caused by overcrowding is also present. Overcrowding occurs when pedestrians occupy the camera frame to the extent that a significant portion of the scene is obscured. This situation is characterized by substantial visual obstructions, where pedestrians temporarily block more than 50% of the camera's field of view. Furthermore, the sequences also vary in environmental complexity.
This categorization allows the evaluation of SLAM systems at varying levels of human density and environmental complexity. Table 3 provides an overview of the sequences, illustrating their categorization based on pedestrian density, diversity of venues, sequence lengths, duration, and the specific challenges encountered in each scenario. Table 4 presents a detailed breakdown of the specific challenges in each sequence. The distribution of these challenges across different density levels enables comprehensive testing of SLAM systems under various combinations of human presence and environmental complexity. Figure 5 shows the distribution of various challenges, broken down by crowd density levels. The visualization reveals that reflective surfaces and challenging lighting conditions are among the most common environmental challenges, reflecting typical indoor navigation scenarios.
In addition, the trajectory length plays a crucial role in assessing the performance and robustness of visual SLAM systems. For indoor environments for visually impaired navigation, we considered sequences with lengths shorter than 40 m as short trajectories, those ranging from 40 to 100 m as medium trajectories, and those of 100 m and beyond as long trajectories. Figure 6 shows a histogram of the trajectory lengths for the InCrowd-VI dataset.
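The categorization thresholds defined above (crowd density by pedestrians per frame, trajectory length in meters) can be summarized in a small helper; the function names are ours and are not part of the dataset tools.

```python
def crowd_density_label(pedestrians_per_frame: int) -> str:
    """Map a per-frame pedestrian count to the InCrowd-VI density categories."""
    if pedestrians_per_frame > 10:
        return "High"
    if pedestrians_per_frame >= 4:
        return "Medium"
    if pedestrians_per_frame >= 1:
        return "Low"
    return "None"

def trajectory_length_label(length_m: float) -> str:
    """Short (<40 m), medium (40-100 m), or long (>=100 m) trajectory."""
    if length_m < 40:
        return "short"
    if length_m < 100:
        return "medium"
    return "long"
```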
It is important to note that the walking speeds in our dataset reflect the typical navigation patterns of visually impaired individuals. Studies have shown that people with visual impairment tend to walk slower and have reduced stride lengths during independent and guided walking compared to sighted people [31]. While the typical walking speed for sighted populations is generally between 1.11 and 1.4 m/s [31], the average walking speed in the InCrowd-VI dataset is 0.75 m/s. This lower average speed aligns with the expectations of visually impaired navigation.
The dataset captures a wide range of challenges inherent in real-world indoor navigation scenarios, including the following:
  • Dynamic obstacles: InCrowd-VI features sequences with moving pedestrians, capturing scenarios of crossing paths with groups and maneuvering around individuals moving in different directions. These sequences test the ability of the SLAM systems to handle unpredictable dynamic elements in real-world environments.
  • Crowd density variation: Sequences capture a range of crowd densities, from static to densely populated areas, testing the adaptability of SLAM systems to different levels of human activity.
  • Frequent occlusions: The dataset includes sequences with frequent occlusions caused by moving pedestrians, luggage, and infrastructure, thereby creating significant challenges for maintaining accurate mapping and tracking.
  • Reflective and transparent surfaces: The dataset includes scenes with glass and other reflective surfaces that can distort sensor readings and complicate the visual SLAM algorithms.
  • Texture-poor areas: Scenes with minimal visual features, such as plain walls, challenge feature-based SLAM systems.
  • Large-scale and complex environments: The dataset covers diverse architectural layouts, including open spaces, corridors, ramps, staircases, and escalators, to test the adaptability of SLAM to various spatial configurations.
  • Lighting variations: The dataset incorporates sequences with varying lighting conditions, from well-lit atriums to dimly lit corridors or areas with flickering lights, to test SLAM robustness under varying illumination conditions.
  • Sudden viewpoint changes: Sequences capture user perspective shifts during corner turns and level transitions, thereby challenging SLAM tracking consistency.
  • Motion transitions: Sequences include transitions between moving environments (escalators, moving ramps, and trains) and stationary areas, to test SLAM’s ability to distinguish ego-motion from environmental motion.
These challenges collectively contribute to the realism and complexity of the InCrowd-VI dataset, making it a valuable resource for evaluating and advancing SLAM systems designed for visually impaired navigation in real-world indoor environments. Figure 7 shows a selection of images from the InCrowd-VI dataset.

4. Experimental Evaluation

To assess the accuracy and robustness of current state-of-the-art visual SLAM systems on the InCrowd-VI dataset, four representative algorithms were selected: two classical approaches and two deep learning-based methods. These systems were chosen because of their prominence in the field and diverse approaches to visual SLAM. As shown in Table 5, DROID-SLAM [32] and DPV-SLAM [33] use dense optical flow-based and sparse patch-based matching approaches, respectively, both of which leverage end-to-end learning. Conversely, ORB-SLAM3 [34] and SVO [35] employ the classical feature matching and semi-direct matching approaches, respectively. This evaluation serves to demonstrate that InCrowd-VI effectively captures challenging real-world scenarios that current state-of-the-art systems struggle to handle and underscores the dataset’s value as a benchmark for advancing SLAM research in visually impaired navigation.
A selection of sequences from InCrowd-VI representing a range of difficulty levels from easy to hard was used for evaluation. These sequences were selected to represent the diverse challenges present in the dataset, including changing elevations, long trajectories, motion transitions, challenging lighting conditions, and crowds of different densities. This approach allows for a comprehensive assessment of the accuracy and robustness of SLAM systems across various scenarios while maintaining a manageable evaluation process. ORB-SLAM3 was evaluated using the left camera images and left IMU measurements. DROID-SLAM, DPV-SLAM, and SVO were evaluated using only the left camera images. Additionally, the intrinsic camera parameters were extracted, and the left IMU-to-left camera transformation was calculated using the calibration data provided by the Meta Aria project. For camera calibration, Project Aria uses a sophisticated camera model (FisheyeRadTanThinPrism)—https://facebookresearch.github.io/projectaria_tools/docs/tech_insights/camera_intrinsic_models, accessed on 4 September 2024—which includes six radial, two tangential, and four thin-prism distortion parameters. A custom tool was developed to convert the data from the VRS files recorded by the Meta Aria glasses into the format required by each VO and SLAM system, and configuration parameter files were provided for each system to ensure compatibility with the dataset. The experimental setup ran Ubuntu 20.04 and 22.04 on a Lenovo laptop equipped with a 12th Gen Intel® Core™ i9-12950HX vPro® processor, 64 GB of DDR5 RAM, and an NVIDIA RTX™ A2000 8 GB GDDR6 graphics card. ORB-SLAM3, SVO, and DPV-SLAM incorporate random processes into their operation, such as RANSAC for outlier rejection and random feature selection. DPV-SLAM, which is based on DPVO [36], adds further variability through random keypoint selection and its probabilistic depth estimation approach. By contrast, DROID-SLAM uses a deterministic model that produces identical results across multiple runs, as observed in our experiments. To account for these algorithmic differences and ensure a fair evaluation, we executed ORB-SLAM3, SVO, and DPV-SLAM five times each and report mean values, whereas DROID-SLAM required only a single execution per sequence. The results are summarized in Table 6.
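For readers who want to reproduce the extraction step, the sketch below reads RGB frames and their timestamps from a VRS recording with the projectaria_tools Python package. The exact call names reflect our reading of the projectaria_tools documentation and should be treated as assumptions, not as a description of the custom conversion tool used in this work.

```python
from projectaria_tools.core import data_provider

# Open a recording and locate the RGB stream (assumed API, see lead-in)
provider = data_provider.create_vrs_data_provider("sequence.vrs")
rgb_stream = provider.get_stream_id_from_label("camera-rgb")

# Iterate over frames, collecting images and their capture timestamps
for i in range(provider.get_num_data(rgb_stream)):
    image_data, record = provider.get_image_data_by_index(rgb_stream, i)
    frame = image_data.to_numpy_array()      # H x W (x 3) uint8 array
    t_ns = record.capture_timestamp_ns       # device capture time in nanoseconds
    # ... write `frame` and `t_ns` out in the format expected by each VO/SLAM system
```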

4.1. Evaluation Metrics

To evaluate VO and SLAM systems, we used the root mean square error (RMSE) of absolute trajectory error (ATE) [37]. This metric quantifies the accuracy of the estimated trajectory compared with the ground truth by measuring the root mean square of the differences between the estimated and true poses. For ORB-SLAM3 and SVO, the absolute trajectory error was calculated using the TUM RGB-D benchmark tool [38] and RPG trajectory evaluation tool [39], respectively. ORB-SLAM3 outputs trajectories in TUM format, which is directly compatible with the TUM RGB-D benchmark tool, whereas SVO includes the RPG evaluation tool in its package, making it the most suitable choice for evaluating SVO’s output. For DROID-SLAM and DPV-SLAM, custom scripts were developed to compute the same metrics using the EVO package [40], a flexible tool capable of handling their output formats.
Trajectory accuracy alone does not fully capture the performance of a SLAM system, particularly in challenging real-world scenarios. SLAM systems are susceptible to initialization failures and tracking loss, particularly under conditions such as motion blur, lack of visual features, or occlusions. Such disruptions lead to gaps in the estimated trajectory, thus affecting the overall reliability of the system. To address this, we introduced the pose estimation coverage (PEC) metric, calculated as (number of estimated poses/total number of frames) × 100. The PEC quantifies the system’s ability to maintain consistent pose estimation throughout a sequence, offering insights into its robustness against initialization failures and tracking losses. A notably low PEC often indicates critical failure, which can make the ATE unreliable. To further evaluate each system, we introduced a drift percentage (DP) metric, defined as (ATE/trajectory length) × 100. This metric quantifies the drift of the system as a percentage of the total distance traveled. A lower value indicates better performance, with the system maintaining more accurate localization over extended trajectories. To ensure a comprehensive evaluation that considers both accuracy and robustness, we adopted a dual-criteria approach that incorporates both ATE and PEC. A sequence is considered successful if the drift percentage (DP) value is less than 1% of the distance traveled and its PEC exceeds 90%.
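The two auxiliary metrics are straightforward to compute once the ATE is available. The sketch below assumes the estimated and ground-truth trajectories have already been associated and aligned (e.g., with the EVO package) and simply applies the PEC, DP, and success definitions given above.

```python
import numpy as np

def ate_rmse(est_xyz, gt_xyz):
    """RMSE of absolute trajectory error over already-aligned, associated poses (N x 3 arrays)."""
    return float(np.sqrt(np.mean(np.sum((est_xyz - gt_xyz) ** 2, axis=1))))

def pose_estimation_coverage(num_estimated_poses, total_frames):
    """PEC = (number of estimated poses / total number of frames) * 100, in percent."""
    return 100.0 * num_estimated_poses / total_frames

def drift_percentage(ate, trajectory_length_m):
    """DP = (ATE / trajectory length) * 100, as a percent of distance traveled."""
    return 100.0 * ate / trajectory_length_m

def sequence_successful(ate, trajectory_length_m, num_estimated_poses, total_frames):
    """Dual criterion: drift below 1% of distance traveled and PEC above 90%."""
    return (drift_percentage(ate, trajectory_length_m) < 1.0
            and pose_estimation_coverage(num_estimated_poses, total_frames) > 90.0)
```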
Beyond accuracy and robustness, real-time performance is crucial for visually impaired navigation applications because it directly influences user experience and safety. To directly relate system performance to the end-user experience, we developed a metric that compares the processing speed of SLAM systems with the user’s walking speed. The fundamental idea is that, if the system can process more distance than the user walks in a given time interval (e.g., one second), it is considered to be performing in real time. First, we calculated the frames per second (FPS) processed by the system, indicating the number of frames handled per second. Next, we calculated the distance per frame (DPF), which represents the distance between consecutive camera frames. DPF was calculated using the formula DPF = AWS/camera_fps, where AWS is the average user walking speed and camera_fps is the camera’s frame rate. Finally, we introduced the processed distance rate (PDR), which is obtained by multiplying the FPS and DPF. If the PDR meets or exceeds the average walking speed, the system is considered capable of keeping pace with the user’s movement, indicating adequate real-time performance for BVI navigation. We quantified this capability through the real-time factor (RTF), defined as the ratio of PDR to AWS, where values of 1 or greater indicate real-time processing capability.
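The real-time criterion can be expressed as below; the 0.75 m/s average walking speed and 30 FPS camera rate come from the dataset description, while the processing FPS value in the example is a placeholder.

```python
def real_time_factor(processing_fps, avg_walking_speed=0.75, camera_fps=30.0):
    """RTF = PDR / AWS, where PDR = processing_fps * DPF and DPF = AWS / camera_fps."""
    dpf = avg_walking_speed / camera_fps   # distance the user covers between camera frames [m]
    pdr = processing_fps * dpf             # distance worth of frames processed per second [m/s]
    return pdr / avg_walking_speed         # >= 1 means the system keeps pace with the user

# Example: a system processing 18 frames per second against a 30 FPS camera stream
rtf = real_time_factor(processing_fps=18.0)   # 0.6 -> slower than walking pace, not real time
```

Note that with these definitions the RTF reduces to the ratio of processing FPS to camera FPS, making it independent of the absolute walking speed.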

4.2. Evaluation Results

The evaluation results presented in Table 6 demonstrate that InCrowd-VI successfully captures challenging scenarios that push the limits of the current state-of-the-art VO and SLAM systems. Our analysis of these results is two-fold. First, we assessed the system performance against the key requirements of visually impaired navigation applications. Second, we analyzed the impact of specific environmental factors, such as crowd density, trajectory length, lighting conditions, and motion transition, on system performance. This analysis validates the effectiveness of InCrowd-VI as a benchmark, while also identifying critical areas that require further research to make SLAM viable for visually impaired navigation.
To effectively address the needs of BVI individuals during navigation, several key requirements must be satisfied, including real-time performance, high localization accuracy, and robustness. In addition, the system should maintain long-distance consistency with minimal drifts. For this study, we established a localization accuracy criterion of 0.5 m for indoor environments, which we consider suitable for BVI navigation applications. Across the sequences, ORB-SLAM3 and DROID-SLAM generally met or approached this criterion in low-density crowds. However, under more challenging conditions, both classical and learning-based systems exhibit a higher ATE, which significantly surpasses the required accuracy. Although not directly comparable to the requirement for robustness to occlusions, dynamic objects, and other challenging conditions, pose estimation coverage (PEC) provides insights into system reliability. Robust pose estimation is essential for maintaining consistent localization. BVI navigation requires minimal interruption during the operation. Deep learning-based approaches consistently achieved high PEC values (>90%) across all sequences, whereas classical approaches failed to maintain consistent tracking, particularly in scenes with lighting variations. This indicates a limitation in handling the dynamics in the environment, which is critical for real-world BVI applications.
Real-time performance requires the system to match or exceed the walking speed of the user. Our processed distance rate (PDR) metric, when compared to the average walking speed (AWS), indicates that classical approaches generally achieve real-time or near real-time performance across almost all sequences. However, deep learning-based methods fall short of real-time processing, indicating delays that could hinder real-time feedback for users. Regarding long-distance consistency, the drift percentage (DP) metric revealed that both classical and deep learning-based approaches frequently exceeded the desired 1% drift threshold, particularly for longer trajectories. Classical approaches typically show higher drift percentages (often exceeding 5–10% of the trajectory length), whereas deep learning-based methods generally maintain lower drift rates but still struggle to consistently achieve the desired threshold. It is important to note that, although some systems may approach or meet certain requirements under controlled conditions, they all face significant challenges in maintaining consistent performance across diverse real-world scenarios represented in the InCrowd-VI dataset. This highlights the gap between the current VO and SLAM capabilities and requirements for visually impaired navigation in crowded indoor environments.
Beyond these requirements, our analysis reveals how specific environmental factors affect the system performance. The evaluation results highlight the significant impact of crowd density on the performance of VO and SLAM systems. In scenarios with high crowd density, systems often exhibit a higher ATE than sequences with low or no crowd density. The trajectory length also plays a crucial role. For instance, in the Entrance_checkin1 scene, despite the medium crowd density, the system reported a lower error owing to the short trajectory length. In addition to crowd density, other challenging conditions also affect system performance. The TS116 scene, which is a typical university building environment with artificial lighting and sunlight in some areas, exemplifies the impact of lighting variations. Despite the absence of pedestrians, ORB-SLAM3 encountered difficulties during initialization and experienced frequent tracking losses owing to lighting challenges. Similarly, in the TH_loop scene, the user’s proximity to a wall causing occlusions, combined with challenging lighting conditions from large windows, caused tracking issues, even without the presence of pedestrians.
The results also demonstrated the influence of specific scenario characteristics. In scenes involving stairs, the short trajectory lengths allow the systems to handle elevation changes effectively, despite the inherent challenges. However, in scenes with motion transitions, particularly Getoff_pharmacy, where the user transitions from a moving train to a stationary platform, the systems struggle to differentiate between ego-motion and environmental motion, resulting in a poor performance. Figure 8 provides a visual representation of the impact of these challenging conditions on ATE across different systems. The charts reveal that deep learning-based methods generally maintain a lower ATE across most challenging conditions, whereas classical approaches tend to show more dramatic accuracy degradation with increasing environmental complexity.
In summary, our evaluation revealed three key findings: (1) deep learning-based methods demonstrate superior robustness but struggle with real-time processing, whereas classical approaches offer better processing speed but lack consistency in challenging conditions; (2) environmental factors, such as crowd density and lighting variations, significantly impact all systems, with performance degradation increasing in proportion to crowd density; and (3) none of the evaluated systems fully satisfied the combined requirements for reliable BVI navigation in terms of accuracy, robustness, and real-time performance in complex indoor environments. These findings underscore the need for new approaches that can better balance these competing demands while maintaining reliability across diverse real-world conditions.

5. Conclusions

This paper introduces InCrowd-VI, a novel visual–inertial dataset designed to address the challenges of SLAM in indoor pedestrian-rich environments, particularly for visually impaired navigation. Our evaluation of the state-of-the-art VO and SLAM algorithms on InCrowd-VI demonstrated significant performance limitations across both classical and deep learning approaches, validating that the dataset effectively captures challenging real-world scenarios. These systems struggle with the complexities of crowded scenes, lighting variations, and motion transitions, highlighting the gap between the current capabilities and real-world indoor navigation demands. While this study highlights significant performance limitations of SLAM and VO algorithms on the InCrowd-VI dataset, a more detailed analysis of specific failure modes (e.g., feature loss, motion blur, and occlusion) remains an important direction for future work. Such an analysis could provide researchers with actionable insights to develop algorithms that are better suited to challenging indoor navigation scenarios.
InCrowd-VI serves as a crucial benchmark for advancing SLAM research in complex, crowded indoor settings. It provides realistic, user-centric data that closely mirrors the challenges faced by visually impaired individuals navigating such environments. Future studies should focus on addressing the key challenges identified by the InCrowd-VI dataset. Although InCrowd-VI is a valuable resource for indoor SLAM research, it is important to acknowledge its limitations. The absence of depth information restricts its applicability for testing depth-based SLAM systems, and its focus on indoor environments limits its utility in outdoor and mixed-environment scenarios.

Author Contributions

Conceptualization, M.B.; methodology, M.B.; software, M.B.; validation, M.B.; formal analysis, M.B.; investigation, M.B.; resources, M.B.; data curation, M.B.; writing—original draft preparation, M.B.; writing—review and editing, M.B. and H.-P.H.; visualization, M.B.; supervision, H.-P.H. and A.D.; funding acquisition, A.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by SNSF (Swiss National Science Foundation) grant number 40B2-0_194677. The APC was funded by ZHAW Open Access, Zurich University of Applied Sciences.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The InCrowd-VI dataset and associated tools are publicly available at https://incrowd-vi.cloudlab.zhaw.ch/ (accessed on 20 November 2024). The repository includes the complete dataset, custom tools for data extraction and processing from Meta .vrs recording files, and documentation for dataset usage.

Acknowledgments

The authors would like to thank the Robotics and Perception Group at the Department of Informatics, University of Zurich, for providing the Meta Aria glasses used in this study. The authors also extend their gratitude to Giovanni Cioffi for his valuable feedback.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Durrant-Whyte, H.; Bailey, T. Simultaneous localization and mapping: Part I. IEEE Robot. Autom. Mag. 2006, 13, 99–110. [Google Scholar] [CrossRef]
  2. Cadena, C.; Carlone, L.; Carrillo, H.; Latif, Y.; Scaramuzza, D.; Neira, J.; Reid, I.; Leonard, J.J. Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age. IEEE Trans. Robot. 2016, 32, 1309–1332. [Google Scholar] [CrossRef]
  3. Pellerito, R.; Cannici, M.; Gehrig, D.; Belhadj, J.; Dubois-Matra, O.; Casasco, M.; Scaramuzza, D. Deep Visual Odometry with Events and Frames. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Abu Dhabi, United Arab Emirates, 14–18 October 2024. [Google Scholar]
  4. Bresson, G.; Alsayed, Z.; Yu, L.; Glaser, S. Simultaneous localization and mapping: A survey of current trends in autonomous driving. IEEE Trans. Intell. Veh. 2017, 2, 194–220. [Google Scholar] [CrossRef]
  5. Hanover, D.; Loquercio, A.; Bauersfeld, L.; Romero, A.; Penicka, R.; Song, Y.; Cioffi, G.; Kaufmann, E.; Scaramuzza, D. Autonomous drone racing: A survey. IEEE Trans. Robot. 2024, 40, 3044–3067. [Google Scholar] [CrossRef]
  6. Alberico, I.; Delaune, J.; Cioffi, G.; Scaramuzza, D. Structure-Invariant Range-Visual-Inertial Odometry. arXiv 2024, arXiv:2409.04633. [Google Scholar]
  7. Somasundaram, K.; Dong, J.; Tang, H.; Straub, J.; Yan, M.; Goesele, M.; Engel, J.J.; De Nardi, R.; Newcombe, R. Project Aria: A new tool for egocentric multi-modal AI research. arXiv 2023, arXiv:2308.13561. [Google Scholar]
  8. Zhang, L.; Helmberger, M.; Fu, L.F.T.; Wisth, D.; Camurri, M.; Scaramuzza, D.; Fallon, M. Hilti-oxford dataset: A millimeter-accurate benchmark for simultaneous localization and mapping. IEEE Robot. Autom. Lett. 2022, 8, 408–415. [Google Scholar] [CrossRef]
  9. Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? the kitti vision benchmark suite. In Proceedings of the 2012 IEEE conference on computer vision and pattern recognition, Providence, RI, USA, 16–21 June 2012; pp. 3354–3361. [Google Scholar]
  10. Burri, M.; Nikolic, J.; Gohl, P.; Schneider, T.; Rehder, J.; Omari, S.; Achtelik, M.W.; Siegwart, R. The EuRoC micro aerial vehicle datasets. Int. J. Robot. Res. 2016, 35, 1157–1163. [Google Scholar] [CrossRef]
  11. Pfrommer, B.; Sanket, N.; Daniilidis, K.; Cleveland, J. Penncosyvio: A challenging visual inertial odometry benchmark. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 3847–3854. [Google Scholar]
  12. Majdik, A.L.; Till, C.; Scaramuzza, D. The Zurich urban micro aerial vehicle dataset. Int. J. Robot. Res. 2017, 36, 269–273. [Google Scholar] [CrossRef]
  13. Li, W.; Saeedi, S.; McCormac, J.; Clark, R.; Tzoumanikas, D.; Ye, Q.; Huang, Y.; Tang, R.; Leutenegger, S. Interiornet: Mega-scale multi-sensor photo-realistic indoor scenes dataset. arXiv 2018, arXiv:1809.00716. [Google Scholar]
  14. Schubert, D.; Goll, T.; Demmel, N.; Usenko, V.; Stückler, J.; Cremers, D. The TUM VI benchmark for evaluating visual-inertial odometry. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1680–1687. [Google Scholar]
  15. Delmerico, J.; Cieslewski, T.; Rebecq, H.; Faessler, M.; Scaramuzza, D. Are we ready for autonomous drone racing? the UZH-FPV drone racing dataset. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 6713–6719. [Google Scholar]
  16. Ramezani, M.; Wang, Y.; Camurri, M.; Wisth, D.; Mattamala, M.; Fallon, M. The newer college dataset: Handheld lidar, inertial and vision with ground truth. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 4353–4360. [Google Scholar]
  17. Minoda, K.; Schilling, F.; Wüest, V.; Floreano, D.; Yairi, T. Viode: A simulated dataset to address the challenges of visual-inertial odometry in dynamic environments. IEEE Robot. Autom. Lett. 2021, 6, 1343–1350. [Google Scholar] [CrossRef]
  18. Lee, D.; Ryu, S.; Yeon, S.; Lee, Y.; Kim, D.; Han, C.; Cabon, Y.; Weinzaepfel, P.; Guérin, N.; Csurka, G.; et al. Large-scale localization datasets in crowded indoor spaces. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 3227–3236. [Google Scholar]
  19. Bojko, A.; Dupont, R.; Tamaazousti, M.; Borgne, H.L. Self-improving SLAM in dynamic environments: Learning when to mask. arXiv 2022, arXiv:2210.08350. [Google Scholar]
  20. Zhang, Y.; An, N.; Shi, C.; Wang, S.; Wei, H.; Zhang, P.; Meng, X.; Sun, Z.; Wang, J.; Liang, W.; et al. CID-SIMS: Complex indoor dataset with semantic information and multi-sensor data from a ground wheeled robot viewpoint. Int. J. Robot. Res. 2023, 43, 899–917. [Google Scholar] [CrossRef]
  21. Recchiuto, C.T.; Scalmato, A.; Sgorbissa, A. A dataset for human localization and mapping with wearable sensors. Robot. Auton. Syst. 2017, 97, 136–143. [Google Scholar] [CrossRef]
  22. Cortés, S.; Solin, A.; Rahtu, E.; Kannala, J. ADVIO: An authentic dataset for visual-inertial odometry. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 419–434. [Google Scholar]
  23. Charatan, D.; Fan, H.; Kimia, B. Benchmarking Pedestrian Odometry: The Brown Pedestrian Odometry Dataset (BPOD). In Proceedings of the 2022 International Conference on 3D Vision (3DV), Prague, Czech Republic, 12–15 September 2022; pp. 1–11. [Google Scholar]
  24. Liu, Y.; Fu, Y.; Chen, F.; Goossens, B.; Tao, W.; Zhao, H. Simultaneous localization and mapping related datasets: A comprehensive survey. arXiv 2021, arXiv:2102.04036. [Google Scholar]
  25. Tosi, F.; Zhang, Y.; Gong, Z.; Sandström, E.; Mattoccia, S.; Oswald, M.R.; Poggi, M. How NeRFs and 3D Gaussian Splatting are Reshaping SLAM: A Survey. arXiv 2024, arXiv:2402.13255. [Google Scholar]
  26. Meta AI Research. Project Aria Hardware Specifications. 2024. Available online: https://facebookresearch.github.io/projectaria_tools/docs/tech_spec/hardware_spec (accessed on 4 September 2024).
  27. Meta AI Research. Camera Intrinsic Models for Project Aria Devices. Available online: https://facebookresearch.github.io/projectaria_tools/docs/tech_insights/camera_intrinsic_models (accessed on 4 September 2024).
  28. Meta AI Research. Sensor Measurement Models for Project Aria Devices. Available online: https://facebookresearch.github.io/projectaria_tools/docs/tech_insights/sensor_measurement_model (accessed on 4 September 2024).
  29. Meta AI Research. 3D Coordinate Frame Conventions for Project Aria Glasses. Available online: https://facebookresearch.github.io/projectaria_tools/docs/data_formats/coordinate_convention/3d_coordinate_frame_convention (accessed on 4 September 2024).
  30. Schops, T.; Sattler, T.; Pollefeys, M. Bad slam: Bundle adjusted direct rgb-d slam. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 134–144. [Google Scholar]
  31. Bennett, H.J.; Valenzuela, K.A.; Fleenor, K.; Morrison, S.; Haegele, J.A. Walking biomechanics and energetics of individuals with a visual impairment: A preliminary report. Hum. Mov. 2019, 20, 8–18. [Google Scholar] [CrossRef]
  32. Teed, Z.; Deng, J. Droid-slam: Deep visual slam for monocular, stereo, and rgb-d cameras. Adv. Neural Inf. Process. Syst. 2021, 34, 16558–16569. [Google Scholar]
  33. Lipson, L.; Teed, Z.; Deng, J. Deep Patch Visual SLAM. arXiv 2024, arXiv:2408.01654. [Google Scholar]
  34. Campos, C.; Elvira, R.; Rodríguez, J.J.G.; Montiel, J.M.; Tardós, J.D. Orb-slam3: An accurate open-source library for visual, visual–inertial, and multimap slam. IEEE Trans. Robot. 2021, 37, 1874–1890. [Google Scholar] [CrossRef]
  35. Forster, C.; Zhang, Z.; Gassner, M.; Werlberger, M.; Scaramuzza, D. SVO: Semidirect visual odometry for monocular and multicamera systems. IEEE Trans. Robot. 2016, 33, 249–265. [Google Scholar] [CrossRef]
  36. Teed, Z.; Lipson, L.; Deng, J. Deep patch visual odometry. Adv. Neural Inf. Process. Syst. 2024, 36, 39033–39051. [Google Scholar]
  37. Sturm, J.; Engelhard, N.; Endres, F.; Burgard, W.; Cremers, D. A benchmark for the evaluation of RGB-D SLAM systems. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 573–580. [Google Scholar]
  38. Computer Vision Group, TUM School of Computation, Information and Technology, Technical University of Munich. Useful Tools for the RGB-D Benchmark. Available online: https://cvg.cit.tum.de/data/datasets/rgbd-dataset/tools (accessed on 4 September 2024).
  39. Zhang, Z.; Scaramuzza, D. A tutorial on quantitative trajectory evaluation for visual (-inertial) odometry. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 7244–7251. [Google Scholar]
  40. Grupp, M. evo: Python Package for the Evaluation of Odometry and SLAM. 2017. Available online: https://github.com/MichaelGrupp/evo (accessed on 20 December 2024).
Figure 1. Sample of manual measurement process for ground-truth validation. Left: Real-world scene with a landmark floor tile highlighted by pink rectangle. Middle: Full 3D point cloud map of the scene with four adjacent floor tiles marked in blue. Right: Zoomed view of the marked corner of the tiles in the point cloud used for measurement.
Figure 2. Correlation between real-world measurements and point-cloud-derived distances in challenging sequences, where state-of-the-art SLAM systems exhibited failure or suboptimal performance. The scatter plot demonstrates a strong linear relationship between real-world and measured distances (in centimeters), with an average error of 2.14 cm, standard deviation of 1.46 cm, and median error of 2.0 cm.
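The validation statistics summarized in Figure 2 are simple aggregates of the per-landmark measurement errors. A minimal sketch of that computation is shown below; the paired distances are placeholder values, not the published measurements.

```python
import numpy as np

# Hypothetical paired measurements (in centimeters): tape-measured distances
# between landmark tile corners and the corresponding distances read from the
# semi-dense point cloud. Values are placeholders, not the published data.
real_world_cm = np.array([60.0, 120.0, 60.0, 180.0, 240.0])
point_cloud_cm = np.array([61.8, 118.3, 62.1, 181.5, 237.9])

# Absolute per-landmark errors and the summary statistics reported in Figure 2.
errors = np.abs(point_cloud_cm - real_world_cm)
print(f"mean error:   {errors.mean():.2f} cm")
print(f"std dev:      {errors.std(ddof=1):.2f} cm")
print(f"median error: {np.median(errors):.2f} cm")
```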
Figure 3. Refined 3D reconstruction demonstrating the removal of dynamic pedestrians that initially appeared static relative to the camera on the escalator.
Figure 4. Example of image data and corresponding 3D map from a dataset sequence: The top-left image shows the RGB frame, and the top-middle and top-right images represent the left and right images of a stereo pair. The bottom image shows the 3D map of the scene.
Figure 5. Distribution of challenges across sequences in the InCrowd-VI dataset, categorized by crowd density levels (High: >10 pedestrians per frame; Medium: 4–10 pedestrians; Low: 1–3 pedestrians; None: no pedestrians). The x-axis represents the different types of challenges, and the y-axis indicates the total number of sequences. Note that the sequences may contain multiple challenges simultaneously.
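The crowd-density categories used in Figure 5 (and again in Table 3) follow fixed per-frame thresholds. The helper below is purely illustrative: it encodes the caption's thresholds, and its name is a placeholder rather than part of the dataset tooling.

```python
def crowd_density(pedestrians_per_frame: int) -> str:
    """Map a per-frame pedestrian count to the density categories used in Figure 5 and Table 3."""
    if pedestrians_per_frame > 10:
        return "High"
    if pedestrians_per_frame >= 4:
        return "Medium"
    if pedestrians_per_frame >= 1:
        return "Low"
    return "None"

# Example: 7 pedestrians in view maps to the "Medium" category.
print(crowd_density(7))
```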
Figure 6. Histogram of trajectory length.
Figure 7. Example scenes from the InCrowd-VI dataset demonstrating various challenges: (a) high pedestrian density, (b) varying lighting conditions, (c) texture-poor surfaces, (d) reflective surfaces, (e) narrow aisles, and (f) stairs.
Figure 8. ATE comparison of evaluated SLAM systems under challenging conditions, with the x-axis depicting sequences categorized by crowd density: high, medium, low, and none.
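Figure 8 and Table 6 report ATE values computed against the ground-truth trajectories. Since the cited evaluation tooling includes the TUM benchmark scripts [37,38] and evo [40], a minimal sketch of an ATE computation with evo's Python API might look as follows; the file names are placeholders, and the alignment settings (here SE(3) alignment with scale correction) are illustrative rather than a restatement of the exact configuration used.

```python
from evo.core import metrics, sync
from evo.tools import file_interface

# Placeholder file names: ground-truth and estimated trajectories in TUM format
# (timestamp tx ty tz qx qy qz qw).
traj_ref = file_interface.read_tum_trajectory_file("groundtruth.txt")
traj_est = file_interface.read_tum_trajectory_file("estimate.txt")

# Associate poses by timestamp, then align the estimate to the ground truth
# before computing the absolute trajectory error of the translation part.
traj_ref, traj_est = sync.associate_trajectories(traj_ref, traj_est)
traj_est.align(traj_ref, correct_scale=True)

ape = metrics.APE(metrics.PoseRelation.translation_part)
ape.process_data((traj_ref, traj_est))
print("ATE RMSE [m]:", ape.get_statistic(metrics.StatisticsType.rmse))
```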
Table 1. Comparison of representative datasets.
| Dataset | Environment | Carrier | Sensors | Crowd Density | Ground Truth | # Sequences |
| --- | --- | --- | --- | --- | --- | --- |
| Robotics and autonomous systems | | | | | | |
| KITTI [9] | Outdoor | Car | High-resolution cameras, LiDAR, GPS/IMU | Low | GPS/IMU | 22 |
| EuRoC MAV [10] | Indoor | Drone | Stereo cameras, IMU | Static | Motion capture | 11 |
| PennCOSYVIO [11] | Indoor/Outdoor | Handheld | Stereo cameras, IMU | Low | Fiducial markers | 4 |
| Zurich Urban [12] | Outdoor | Quadrotor | High-resolution camera, GPS, and IMU | Not mentioned | Photogrammetric 3D reconstruction | 2 km |
| InteriorNet [13] | Indoor | Simulated cameras | Synthetic images, IMU | None | Synthetic | 15K |
| TUM VI [14] | Indoor/Outdoor | Handheld | Stereo cameras, IMU | Low | Partial motion capture | 28 |
| UZH-FPV [15] | Indoor/Outdoor | Quadrotor | Event and RGB cameras, IMU | None | Leica Nova MS60 TotalStation | 27+ |
| Newer College [16] | Outdoor | Handheld | Stereoscopic–inertial camera, LiDAR | None | 6DOF ICP localization | 4 |
| VIODE [17] | Indoor/Outdoor | Simulated quadrotor UAV | Synthetic RGB cameras, IMU | High | Synthetic | 12 |
| NAVER LABS [18] | Indoor | A dedicated mapping platform | Cameras, laser scanners | Medium | LiDAR SLAM & SFM | 5 datasets |
| ConsInv [19] | Indoor/Outdoor | Not mentioned | Monocular and stereo camera | Low | ORB-SLAM2 | 159 |
| Hilti-Oxford [8] | Indoor/Outdoor | Handheld | Stereo cameras, IMU, LiDAR | Low | Survey-grade scanner | 16 |
| CID-SIMS [20] | Indoor | Robot/Handheld | RGB-D camera, IMU, wheel odometry | Low | GeoSLAM | 22 |
| Pedestrian odometry datasets | | | | | | |
| Zena [21] | Outdoor | Head-mounted | 2D laser scanner, IMU | High | Step estimation process using IMU | A 35 min dataset |
| ADVIO [22] | Indoor/Outdoor | Handheld | Smartphone cameras, IMU | High | IMU-based + manual position fixes | 23 |
| BPOD [23] | Indoor/Outdoor | Head-mounted | Stereo cameras, laser distance meter | Not mentioned | Marker-based | 48 |
| InCrowd-VI (ours) | Indoor | Head-worn | RGB and stereo cameras, IMU | High | Meta Aria Project SLAM service | 58 |
Table 2. Specifications of the cameras on Meta Aria glasses [26], with the horizontal field of view (HFOV) in degrees, vertical field of view (VFOV) in degrees, instantaneous field of view (IFOV) in degrees per pixel, and maximum resolution in pixels. FPS represents the maximum frame rate. Note that InCrowd-VI only includes data from the RGB and Mono cameras.
| Camera | HFOV (°) | VFOV (°) | IFOV (°/pixel) | Max Resolution | FPS | Shutter |
| --- | --- | --- | --- | --- | --- | --- |
| Mono (×2) | 150 | 120 | 0.26 | 640 × 480 | 30 | global |
| RGB (×1) | 110 | 110 | 0.038 | 2880 × 2880 | 30 | rolling |
| ET (×2) | 64 | 48 | 0.2 | 640 × 480 | 90 | global |
Table 3. InCrowd-VI sequences. Density categories are defined as follows: High: >10 pedestrians per frame; Medium: 4–10 pedestrians per frame; Low: 1–3 pedestrians per frame; None: No pedestrians present in the scene. Sequences marked with an asterisk (*) in the “Sequence name” column include severe occlusion caused by overcrowding.
| Density | Sequence Name | Venue | Length (m) | Span (mm:ss) | Main Challenges |
| --- | --- | --- | --- | --- | --- |
| High | Oerlikon_G7 | Train station | 133.36 | 02:04 | Challenging light, ramp |
| High | G6_exit | Train station | 60.21 | 01:27 | Stairs, flickering light |
| High | G6_loop * | Train station | 103.74 | 02:07 | Challenging light |
| High | Getoff_pharmacy * | Train station | 159.50 | 03:06 | Motion transition |
| High | Service_center | Airport | 131.89 | 02:18 | Reflection, open area |
| High | Ochsner_sport * | Airport | 70.57 | 01:14 | Reflection |
| High | Toward_gates | Airport | 165.46 | 02:57 | Reflection, open area |
| High | Arrival2 * | Airport | 74.96 | 01:23 | Crowd |
| High | Checkin2_loop | Airport | 348.84 | 05:22 | Long trajectory |
| High | Shopping_open * | Shopping mall | 182.51 | 03:10 | Challenging light, open area |
| High | Turn180 | Airport | 41.95 | 00:46 | Reflection |
| High | Cafe_exit * | Main train station | 121.40 | 01:51 | Reflection, open area |
| High | G8_cafe * | Main train station | 294.05 | 04:14 | Long trajectory |
| High | Ground_53 * | Main train station | 32.40 | 01:01 | Escalator (motion transition) |
| High | Kiko_loop * | Shopping mall | 314.68 | 05:50 | Long trajectory |
| High | Reservation_office | Main train station | 95.15 | 01:28 | Reflection, open area |
| High | Orell_fussli * | Shopping mall | 57.98 | 01:16 | Crowd |
| High | Reservation_G17 * | Main train station | 97.30 | 01:50 | Challenging light, open area |
| High | Short_Loop_BH * | Main train station | 104.94 | 02:29 | Reflection, open area |
| High | Shopping_loop * | Shopping mall | 151.91 | 02:36 | Challenging light |
| Medium | UZH_stairs * | Museum | 13.16 | 00:41 | Stairs |
| Medium | Migros | Shopping mall | 84.18 | 01:40 | Reflection, open area |
| Medium | TS_exam_loop | Exam location | 28.81 | 01:34 | Glass wall, stationary people |
| Medium | Museum_loop | Museum | 66.99 | 02:02 | Challenging light |
| Medium | Ramp_checkin2 | Airport | 191.77 | 03:40 | Moving ramp, open area |
| Medium | Airport_shop | Shopping mall | 99.49 | 01:46 | Reflection, open area |
| Medium | Towards_checkin1 | Airport | 55.89 | 01:02 | Reflection, open area, glass wall |
| Medium | Entrance_checkin1 | Airport | 35.70 | 00:39 | Reflection |
| Medium | Towards_circle | Airport | 127.46 | 02:06 | Challenging light, repetitive structure |
| Low | AND_floor51 | Library | 40.23 | 00:53 | Narrow aisle, stationary people |
| Low | AND_floor52 | Library | 39.13 | 01:10 | Narrow aisle, stationary people |
| Low | AND_liftAC | University building | 71.15 | 01:33 | Open area, glass wall |
| Low | ETH_HG | University building | 99.00 | 01:56 | Repetitive structure |
| Low | ETH_lab | Laboratory | 56.70 | 01:24 | Reflection |
| Low | Kriegsstr_pedestrian | Public building | 31.97 | 00:59 | Texture-poor |
| Low | Kriegsstr_same_dir | Public building | 31.17 | 00:56 | Texture-poor |
| Low | TH_entrance | University building | 41.85 | 00:53 | Challenging light, stationary people |
| Low | TS_entrance | University building | 32.06 | 00:42 | Reflection |
| Low | TS_exam | University building | 44.93 | 01:01 | Narrow corridor, texture-poor |
| Low | UZH_HG | University building | 142.12 | 03:09 | Long trajectory, repetitive structure |
| Low | Museum_1 | Museum | 69.30 | 01:51 | Challenging light |
| Low | Museum_dinosaur | Museum | 44.62 | 01:16 | Challenging light |
| Low | Museum_up | Museum | 12.20 | 00:34 | Stairs |
| Low | Short_loop | Airport | 36.92 | 00:46 | Open area |
| None | AND_Lib | Office building | 52.19 | 01:18 | Reflection, narrow corridor |
| None | Hrsaal1B01 | Academic building | 74.55 | 01:50 | Challenging light, narrow corridor |
| None | ETH_FT2 | Museum | 40.97 | 00:56 | Challenging light, reflection |
| None | ETH_FTE | Museum | 59.00 | 01:35 | Challenging light, open area |
| None | ETH_lab2 | Laboratory | 58.08 | 01:26 | Texture-poor, reflection |
| None | Habsburgstr_dark | Public building | 36.08 | 01:05 | Stairs, dimly lit |
| None | Habsburgstr_light | Public building | 87.99 | 02:46 | Stairs |
| None | IMS_lab | Laboratory | 15.23 | 00:43 | Cluttered scene |
| None | IMS_TE21 | Laboratory | 42.43 | 01:23 | Cluttered scene, challenging light |
| None | IMS_LEA | Laboratory | 19.35 | 00:43 | Cluttered scene |
| None | Kriegsstr | Public building | 31.18 | 00:54 | Texture-poor |
| None | TH_loop | Office building | 154.74 | 03:55 | Reflection |
| None | TS116 | University building | 59.11 | 01:24 | Challenging light, reflection, glass wall |
| None | TS_stairs | University building | 27.67 | 00:52 | Stairs, challenging light, glass wall |
Table 4. Challenges present in each sequence in InCrowd-VI. A checkmark (✓) indicates the presence of a specific challenge in the sequence.
Density | Sequence | Overcrowding | Challenging Light | Ramp/Stairs | Motion Transition | Reflection | Open Area | Glass Wall | Stationary People | Narrow Aisle/Corridor | Repetitive Structure | Texture-Poor | Cluttered Scene | Long Trajectory
High
Oerlikon_G7
G6_exit
G6_loop
Getoff_pharmacy
Service_center
Ochsner_sport
Toward_gates
Arrival2
Checkin2_loop
Shopping_open
Turn180
Cafe_exit
G8_cafe
Ground_53
Kiko_loop
Reservation_office
Orell_fussli
Reservation_G17
Short_Loop_BH
Shopping_loop
Medium
UZH_stairs
Migros
TS_exam_loop
Museum_loop
Ramp_checkin2
Airport_shop
Towards_checkin1
Entrance_checkin1
Towards_circle
Low
AND_floor51
AND_floor52
AND_liftAC
ETH_HG
ETH_lab
Kriegsstr_pedestrian
Kriegsstr_same_dir
TH_entrance
TS_entrance
TS_exam
UZH_HG
Museum_1
Museum_dinosaur
Museum_up
Short_loop
None
AND_Lib
Hrsaal1B01
ETH_FT2
ETH_FTE
ETH_lab2
Habsburgstr_dark
Habsburgstr_light
IMS_lab
IMS_TE21
IMS_LEA
Kriegsstr
TH_loop
TS116
TS_stairs
Table 5. Characteristics of selected SLAM systems.
| SLAM System | Matching Approach | Approach |
| --- | --- | --- |
| DROID-SLAM | Dense optical flow-based | End-to-end |
| DPV-SLAM | Sparse patch-based | End-to-end |
| ORB-SLAM3 | Feature-based | Classical SLAM |
| SVO | Semi-direct | Classical VO |
Table 6. Quantitative results of the evaluated SLAM systems across various challenging scenarios, including different crowd densities, lighting conditions, long trajectories, elevation changes, and motion transitions. The results include the absolute trajectory error (ATE) in meters, drift percentage (DP) expressing system drift as a percentage of the total trajectory length, pose estimation coverage (PEC) as a percentage, frames per second (FPS), i.e., the number of frames processed by the system per second, and real-time factor (RTF). The RTF is a ratio indicating how much faster the system processes data compared with the user’s walking speed. The table also lists the trajectory length (Trj. Len.) in meters and the average walking speed (AWS) in meters per second for each sequence. The crowd density of each sequence (high, medium, low, or none) is given in parentheses after the sequence name. Failed estimates are denoted by ×.
Classical systems: ORB-SLAM3 and SVO. Deep learning-based systems: DROID-SLAM and DPV-SLAM. Each system column lists ATE (m) / DP (%) / PEC (%) / FPS / RTF.
| Sequence (Density) | Trj. Len. (m) | AWS (m/s) | ORB-SLAM3 | SVO | DROID-SLAM | DPV-SLAM |
| --- | --- | --- | --- | --- | --- | --- |
| Crowd Density | | | | | | |
| Orell_fussli (High) | 57.98 | 0.76 | 2.95 / 5.08 / 90 / 23 / 0.80 | × / × / × / 174 / 5.82 | 1.07 / 1.84 / 100 / 16 / 0.53 | 0.21 / 0.36 / 98 / 10 / 0.36 |
| Entrance_checkin1 (Medium) | 35.70 | 0.92 | 0.77 / 2.15 / 92 / 25 / 0.82 | 6.78 / 18.99 / 80 / 128 / 4.27 | 0.08 / 0.22 / 100 / 16 / 0.53 | 0.08 / 0.22 / 97 / 11 / 0.39 |
| Short_loop (Low) | 36.92 | 0.80 | 0.22 / 0.59 / 96 / 24 / 0.83 | 3.71 / 10.04 / 70 / 134 / 4.51 | 0.04 / 0.10 / 100 / 15 / 0.51 | 0.32 / 0.86 / 97 / 11 / 0.38 |
| IMS_lab (None) | 15.23 | 0.35 | 0.06 / 0.39 / 94 / 24 / 0.82 | 2.49 / 16.34 / 71 / 149 / 5.05 | 0.02 / 0.13 / 100 / 16 / 0.54 | 0.04 / 0.26 / 97 / 14 / 0.48 |
| Lighting Variations | | | | | | |
| Reservation_G17 (High) | 97.30 | 0.88 | 7.45 / 7.65 / 95 / 26 / 0.87 | 15.23 / 15.65 / 74 / 156 / 5.25 | 1.14 / 1.17 / 100 / 19 / 0.65 | 1.95 / 2.00 / 99 / 12 / 0.40 |
| Toward_circle (Medium) | 127.46 | 1.01 | × / × / 97 / 25 / 0.86 | × / × / × / 175 / 5.86 | 0.88 / 0.69 / 100 / 15 / 0.50 | 0.59 / 0.46 / 99 / 10 / 0.29 |
| Museum_1 (Low) | 69.30 | 0.62 | 0.34 / 0.49 / 98 / 25 / 0.87 | 3.82 / 5.51 / 36 / 171 / 5.75 | 6.21 / 8.96 / 100 / 18 / 0.61 | 8.06 / 11.63 / 99 / 10 / 0.34 |
| TS116 (None) | 59.11 | 0.70 | × / × / 57 / 21 / 0.71 | 3.99 / 6.75 / 31 / 159 / 5.32 | 3.77 / 6.37 / 100 / 15 / 0.52 | 1.08 / 1.82 / 98 / 10 / 0.34 |
| Long Trajectory | | | | | | |
| Kiko_loop (High) | 314.68 | 0.90 | 22.81 / 7.24 / 98 / 26 / 0.87 | 20.12 / 6.39 / 59 / 153 / 5.11 | 2.51 / 0.79 / 100 / 17 / 0.56 | 26.79 / 8.51 / 99 / 8 / 0.28 |
| Shopping_loop (High) | 151.91 | 0.97 | 3.02 / 1.98 / 97 / 25 / 0.85 | 21.41 / 14.09 / 89 / 138 / 4.63 | 2.52 / 1.65 / 100 / 16 / 0.55 | 12.21 / 8.03 / 99 / 9 / 0.31 |
| UZH_HG (Low) | 142.12 | 0.75 | 30.60 / 21.53 / 94 / 26 / 0.88 | 16.11 / 11.33 / 47 / 170 / 5.70 | 2.86 / 2.01 / 100 / 17 / 0.58 | 11.45 / 8.05 / 99 / 9 / 0.32 |
| TH_loop (None) | 154.74 | 0.66 | × / × / 88 / 22 / 0.72 | 0.01 / 0.006 / 0.9 / 131 / 4.37 | 4.53 / 2.92 / 100 / 18 / 0.62 | 6.71 / 4.33 / 99 / 9 / 0.33 |
| Changing Elevation (Stairs) - Short Trajectories | | | | | | |
| UZH_stairs (Medium) | 13.16 | 0.32 | 0.06 / 0.45 / 77 / 23 / 0.78 | 1.52 / 11.55 / 37 / 138 / 4.62 | 0.04 / 0.30 / 100 / 22 / 0.78 | 0.05 / 0.37 / 97 / 15 / 0.5 |
| G6_exit (High) | 60.21 | 0.69 | 6.67 / 11.07 / 97 / 26 / 0.86 | × / × / × / 153 / 5.11 | 0.41 / 0.68 / 100 / 17 / 0.57 | 1.91 / 3.17 / 97 / 12 / 0.40 |
| Museum_up (Low) | 12.20 | 0.36 | 0.42 / 3.44 / 82 / 22 / 0.75 | 0.03 / 0.24 / 6.2 / 164 / 5.44 | 0.03 / 0.24 / 100 / 18 / 0.61 | 0.02 / 0.16 / 98 / 12 / 0.41 |
| TS_stairs (None) | 27.67 | 0.53 | 0.26 / 0.93 / 94 / 25 / 0.84 | 2.43 / 8.78 / 41 / 158 / 5.30 | 0.21 / 0.75 / 100 / 16 / 0.56 | 0.41 / 1.48 / 96 / 12 / 0.39 |
| Motion Transition | | | | | | |
| Getoff_pharmacy (High) | 159.50 | 0.86 | 81.01 / 50.78 / 85 / 25 / 0.83 | 34.91 / 21.88 / 48 / 153 / 5.10 | 27.95 / 17.52 / 100 / 17 / 0.59 | 27.55 / 17.27 / 98 / 10 / 0.33 |
| Ground_53 (High) | 32.40 | 0.53 | × / × / 96 / 25 / 0.84 | 4.54 / 14.01 / 79 / 136 / 4.54 | 3.30 / 10.18 / 100 / 19 / 0.64 | 3.84 / 11.85 / 98 / 13 / 0.45 |
| Ramp_checkin2 (Medium) | 191.77 | 0.87 | 7.31 / 3.81 / 98 / 26 / 0.87 | 46.95 / 24.48 / 97 / 142 / 4.74 | 1.82 / 0.94 / 100 / 20 / 0.67 | 0.77 / 0.40 / 99 / 10 / 0.34 |
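For clarity, the derived columns in Table 6 can be related to the raw measurements as follows. The sketch below is illustrative only: it assumes that DP is the ATE expressed as a percentage of the trajectory length and that RTF compares the system's processing rate with the cameras' 30 FPS capture rate at walking pace; the function names are placeholders rather than the authors' code.

```python
def drift_percentage(ate_m: float, trajectory_length_m: float) -> float:
    """Drift as a percentage of the total trajectory length (assumed reading of DP)."""
    return 100.0 * ate_m / trajectory_length_m


def real_time_factor(processing_fps: float, capture_fps: float = 30.0) -> float:
    """Processing rate relative to an assumed 30 FPS capture rate while walking."""
    return processing_fps / capture_fps


# Worked example with the Orell_fussli / ORB-SLAM3 row of Table 6:
# an ATE of 2.95 m over a 57.98 m trajectory corresponds to roughly 5.1% drift,
# and 23 processed frames per second yields an RTF below 1 (slower than real time).
print(f"DP  = {drift_percentage(2.95, 57.98):.2f} %")
print(f"RTF = {real_time_factor(23):.2f}")
```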
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
