Article

Lidar-Based Detection and Analysis of Serendipitous Collisions in Shared Indoor Spaces

1 Department of Geography, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061, USA
2 Department of Geography, Binghamton University, Binghamton, NY 13902, USA
3 Department of Mathematics, Virginia Polytechnic Institute and State University, Blacksburg, VA 24060, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(18), 3236; https://doi.org/10.3390/rs17183236
Submission received: 9 July 2025 / Revised: 7 September 2025 / Accepted: 13 September 2025 / Published: 18 September 2025

Highlights

What are the main findings?
  • Unplanned social interactions between people are detectable in spatio-temporal lidar streams with 86.1% precision.
  • These social interactions are related to but spatially and temporally distinct from simple measures of occupancy.
What is the implication of the main finding?
  • These methods can be used for post-occupancy evaluation of indoor spaces designed to facilitate social interaction.

Abstract

Indoor environments significantly influence human interaction, collaboration, and well-being, yet evaluating how architectural designs actually perform in fostering social connections remains challenging. This study demonstrates the use of 11 static-mounted lidar sensors to detect serendipitous encounters—collisions—between people in a shared common space of a mixed academic–residential university building. A novel collision detection algorithm achieved 86.1% precision and detected 14,022 interactions over 115 days (67 million person-seconds) of an academic semester. While occupancy strongly predicted collision frequency overall (R2 ≥ 0.74), significant spatiotemporal variations revealed the complex relationship between co-presence and social interaction. Key findings include the following: (1) collision frequency peaked early in the semester then declined by ~25% by mid-semester; (2) temporal lags between occupancy and collision peaks of 2–3 h in the afternoon indicate that social interaction differs from physical presence; (3) collisions per occupancy peaked on the weekend, with Saturday showing 52% higher rates than the weekly average; and (4) collisions clustered at key transition zones (elevator areas, stair bases), with an additional “friction effect”, where proximity to seating increased interaction rates (>30%) compared to open corridors. This methodology establishes a scalable framework for post-occupancy evaluation, enabling evidence-based assessment of design effectiveness in fostering the spontaneous interactions essential for creativity, innovation, and place-making in built environments.

1. Introduction

Indoor spaces play a key role in the lives of humans, and they are becoming an increasingly important focus area of study in geography, geographic information science, and remote sensing [1,2,3,4]. New mapping techniques include photogrammetry and lidar-based simultaneous localization and mapping (SLAM) [5,6] and Indoor Positioning Systems (IPSs) [7,8] that provide guidance for navigation inside buildings just as GPS is used outdoors. These techniques are particularly useful within large, shared indoor environments such as shopping malls, hospitals, airports, and university common spaces [9].
First-year university students often undergo a challenging transition period when they move to campus [10]. New social relationships and the sense of community and culture within their residence halls transform unfamiliar spaces into highly valued places [11,12,13]. One way this happens is through serendipitous interactions with “weak ties,” which lead to greater happiness and feelings of belonging [14]. Through thoughtful design, architects and planners can encourage these interactions to foster collaboration and relationship development [15,16,17,18,19]. While survey, interview, and observation-based methods are frequently used to study human interactions, remote sensing-based approaches to studying human behavior indoors offer the potential for continuous measurement at fine spatiotemporal scales [20,21,22]. This study used a network of integrated lidar sensors with a custom-developed algorithm to detect these serendipitous “collisions” and examined how their spatial and temporal distributions changed over the course of a semester in a mixed academic–residential building on a university campus. This work was part of a larger project to better understand how space becomes place in indoor environments.
Much of indoor-oriented remote sensing has focused on mapping the physical space rather than observation of humans. Modern scanning techniques often rely on photogrammetric or lidar-based SLAM approaches to create a “digital twin” of the space, which can then be represented as CAD drawings, building information models (BIMs), point clouds, and textured meshes [3,9,23,24,25]. Newer approaches to modeling use radiance fields (e.g., Neural Radiance Fields (NeRFs) or Gaussian splatting) to create a high-fidelity visuo-centric representation rather than a model of the physical space itself [26]. Much of this work has focused on large, shared spaces such as hospitals, shopping malls, airports, and universities [9], but as scanners have become more affordable and accessible, digital twins are more commonly developed for smaller structures, with the real estate market serving as a major commercial driver [27,28].
Lidar is particularly useful for mapping such spaces, especially when paired with cameras to colorize the resulting point cloud: unlike photogrammetric techniques, lidar is natively spatial, works in any lighting conditions, and provides for a denser reconstruction, especially indoors where flat textures and repeating patterns make techniques using camera-based imagery alone challenging [3,29,30,31]. Hand-carried or backpack-mounted mobile lidar scanners, such as those made by GeoSLAM and Leica, are widely used for this purpose [32,33,34]. Building on developments in lidar for autonomous driving [35,36,37], static lidar installations have grown in popularity for detection and tracking of human and traffic behavior [38,39,40]. These applications of lidar fall within the growing subfield of “lidar perception,” which focuses on using lidar sensors to detect, classify, and track objects and behaviors in real-time [41,42,43].
A natural method of tracking activity indoors is through the use of cameras, often using multi-camera setups [22,44,45] and artificial intelligence [46,47]. However, lidar has several advantages over cameras. Lidar sensors do not rely on ambient light and therefore function better under dynamic lighting conditions and at night [48]. Furthermore, 3D data from lidar sensors are generally denser and less noisy than camera-derived data [49,50]. Because lidar sensors reliably cover distances from dozens to hundreds of meters, fewer units are needed, reducing both cost and maintenance time [51]. Critically, lidar sensors provide anonymity at the point of collection, as they are generally of lower resolution and capture limited spectral information compared to RGB cameras [21].
Analysis of high-spatiotemporal-resolution movement data was largely realized through GPS data, first on dedicated units [52] and then often through GPS-enabled smartphones [53,54,55,56]. The analytical framework for the resulting space–time trajectories traces to Hägerstrand [57] as “time geography” [58]. Visualization of “space–time paths,” in which two-dimensional spatial positions are plotted in three dimensions with time represented on the third axis, became a common starting point for analysis [59,60,61,62]. Discretization of these 3D paths followed via Space–Time Cubes [63]. Modern libraries such as MovingPandas [64], Trackintel [65], and Esri’s Space Time Pattern Mining toolbox [66] and GeoAnalytics Engine [67] facilitate movement-based analysis. Surpassing relatively sparse, GPS-based sampling, high-density tracking is now afforded by modern camera and lidar-based systems, in which (in increasing order of specificity) the presence, count, location, track, and unique identification of objects and people in the scene can be recorded [21].
Here, we are interested in tracking not movement as a whole, but specific interactions—these unplanned encounters or serendipitous collisions between the “weak ties” described by Sandstrom and Dunn [14]. These interactions are important because they represent the kind of encounters sought after by building designers to create opportunities for creativity and innovation [68]. They are also meaningful interactions that foster the formation of a “sense of place” [69,70,71,72]. As friends, acquaintances, or even strangers pause to interact, they do so at particular locations; these locations are thereby invested with meaning over time, until they become—individually or communally—places [13,73,74,75]. Beyond their intrinsic value, an improved sense of place and sense of community yield tangible benefits: research has demonstrated their capacity to enhance social well-being [76] and reduce loneliness among university students [77].
We present a systematic framework for using a fused network of static-mounted lidar sensors to continuously detect and quantify a human behavior of interest—in this case, serendipitous encounters in indoor spaces. Our analysis reveals a temporal displacement between co-presence and social interaction, with collision peaks lagging occupancy by 2–3 h and varying significantly by day of week, demonstrating that social timing follows different patterns than simple physical presence. At the spatial scale, we identify “friction effects” in architectural transition zones, where movement-pause boundaries generate significantly more interactions than high-traffic corridors. This methodology establishes a scalable post-occupancy evaluation framework that provides architects and planners with continuous, quantitative feedback on design effectiveness in fostering spontaneous social connections, enabling evidence-based assessment of how built environments actually perform rather than how they are intended to perform.

2. Materials and Methods

This study used a network of static-mounted lidar sensors to detect movement and interaction between occupants within the shared public space of a university building. Object-tracking software (Blickfeld Percept version 1.6.3) was applied to the lidar stream to produce space–time trajectories [59,62] of anonymized individuals as they moved through the building. The resulting tracks were used to algorithmically detect “serendipitous collisions” between people—unplanned encounters where two or more individuals pause to interact after approaching from different directions. The spatiotemporal patterns of these collisions and their association with environmental factors were then analyzed.
Data was collected in the Creativity and Innovation District Living-Learning Community building (CID) on Virginia Tech’s campus in Blacksburg, Virginia. The mixed-use academic and residential CID building opened in August 2021 and houses approximately 600 undergraduate students [78]. The building was designed to facilitate collaboration among its users and contains a variety of collaborative spaces including classrooms, study areas, meeting rooms, a makerspace, and a rehearsal and performance space. The Community Assembly space on the ground floor served as our study area due to its high potential for interactions, with individuals entering and exiting through multiple hallways, an elevator, and a large central staircase (Figure 1). The study period lasted from 21 August (first day of classes) to 13 December (last day of final examinations) in 2023 (115 days total).
Eleven Blickfeld Cube 1 (Blickfeld GmbH, Munich, Germany) forward-facing lidar sensors were mounted in the ceiling of the CID Community Assembly to continuously scan the space. These sensors have a range of 75 m and allow for configurable fields of view, scanning patterns, and update frequencies to customize data collection [79]. Scanners were configured to a 72° × 30° field of view, 230 scanlines, and an output frequency of 2.4 Hz. Sensors were connected by Ethernet to a local private network, including a central computer that collected data (155 GB/day) streamed from each sensor. Custom Python software (version 1.0.5) using Blickfeld’s API aggregated the data from each sensor into a fused point cloud via an internal building coordinate reference system [80]. This cloud was rasterized by intensity and elevation and written to a local hard drive at 1 Hz; the full point cloud was written out as an LAZ file at 0.1 Hz. A render of the point cloud for the study area is shown in Figure 2.
To focus our research efforts on collision detection rather than basic object tracking, we used Blickfeld’s Percept software version 1.6.3 for initial trajectory generation. Percept was installed on a standard workstation PC on the private network and was used to perform real-time unique object detection on the raw sensor data and log the results as space–time trajectories in JSON files [81]. Each day generated ~100 MB total of plaintext JSON, with each individual JSON file containing approximately 10 min worth of data. Percept records the position (x, y) and timestamp of each vertex in the space–time trajectory, as well as volume, velocity, and other object parameters. Each object is assigned a random unique ID and tracked through the space; if a person leaves and re-enters, they receive a new ID, preserving anonymity. In total, 67 million person-seconds of Percept data were collected over 115 days, and a sample of the Percept dataset is included as Supplementary Material. The full dataset is available by request. A visualization of sample track data from Percept over a one-hour period is shown in Figure 3. The object detection from Percept served as the starting point for processing, but additional algorithm development was required to locate the behavior of interest in the data stream.
Percept includes tuning parameters for object decay, detection thresholds, and clustering that control noise reduction and object permanence. Parameters were tuned through testing to maximize detection rates while minimizing false positives. Final parameter settings are shown in Table 1.
Analysis was conducted using a rasterized grid of the study area at 1 m resolution. A simple measure of occupancy was used to characterize activity in each cell. This was calculated as the total number of seconds within a grid cell occupied by a person (person-seconds) across a specified period of time. The number of unique IDs (i.e., a person count) in a grid cell over a period of time was a potential alternative, but this was a less reliable metric for collision potential, as someone briefly passing a grid cell would be weighted equally to someone spending an extended period of time there.
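The person-seconds measure can be sketched in a few lines. The array layout, grid shape, and cell indexing below are illustrative assumptions, not the pipeline's actual implementation:

```python
import numpy as np

def occupancy_grid(points, cell_size=1.0, dt=1.0, shape=(30, 30)):
    """Accumulate person-seconds of occupancy per grid cell.

    points: N x 2 array of tracked (x, y) positions, one row per person
    per frame; each observation adds dt seconds of occupancy to the
    grid cell containing it (hypothetical layout, not Percept's schema).
    """
    grid = np.zeros(shape)
    ix = (points[:, 0] // cell_size).astype(int)
    iy = (points[:, 1] // cell_size).astype(int)
    np.add.at(grid, (ix, iy), dt)  # unbuffered add handles repeated cells
    return grid

# One person observed for three one-second frames in the cell at (2, 3)
pts = np.array([[2.4, 3.1], [2.5, 3.2], [2.6, 3.3]])
occ = occupancy_grid(pts)
```

Note the use of `np.add.at` rather than plain indexed assignment, so that repeated visits to the same cell accumulate rather than overwrite, which is exactly why person-seconds outweighs a unique-ID count for someone lingering in one cell.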
The principal measure of interest was collisions: unplanned, serendipitous encounters between people. A custom algorithm was written to process the space–time trajectories (tracks) to determine whether a collision between two or more individuals had occurred. An example of a collision might be a person traveling through the space when they encounter another person and stop to talk. Three collision types were considered: two-way collisions (two people from different directions stop to interact, then continue separately), stationary collisions (a moving person stops to interact with someone stationary, then continues), and conjoining collisions (two people from different directions stop to interact, then continue together).
Analysis focused on two-way and conjoining collisions due to their clearer spatiotemporal signature in the data. The Percept software uses a mean-shift-based tracking algorithm [82] in which clusters of nearby moving points are grouped together as an “object.” However, if individuals remain stationary for a prolonged period of time (e.g., sitting down at a table to study), the algorithm can stop tracking them; they are reacquired when they move again and are assigned a new unique object ID. This makes accurately detecting interactions between moving and stationary people difficult.
Development of the detection algorithm started with in-person observation in the space to note the collisions, followed by a review of the Percept logs. The total observation period was 12 one-hour sessions over a period of seven weeks, with additional observations recorded opportunistically. These ground-truth observations (n = 35) were used to design and tune the parameters of the algorithm. The algorithm was written in Python and used common open-source packages, including Numpy, Pandas, Geopandas, and Scipy. The individual JSON files from Percept were converted to GeoDataFrames and merged to form a continuous record, which was then split into chunks (as shapefiles) for processing.
The collision detection algorithm applies six sequential filters to identify genuine interactions from the trajectory data. A displacement filter first groups all Percept points by unique ID and calculates the maximum distance traveled from the initial detection point, removing tracks with total displacement less than 4 m. This was designed to eliminate noise and brief tracking errors. Second, a movement filter calculates linear velocity from x and y direction components recorded by Percept and derives acceleration from instantaneous velocity changes, retaining only tracks with velocities below 0.3 m/s and negative acceleration (deceleration). This had the effect of identifying potential pauses in the trajectory. The value of 0.3 m/s was chosen to balance genuine detection against consistency: a threshold that is too low can exclude some collisions, especially those that occur “in passing,” in which only a brief interaction occurs and neither party comes to a full stop. A threshold that is too high can admit too many false positives, occasionally flagging the “collision” of two people simply passing one another, especially at times and locations of greater congestion, where velocities are generally slower.
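These first two filters can be sketched as follows, assuming a Percept-style table with illustrative columns ('id', 't', 'x', 'y', 'vx', 'vy'); the column names and layout are assumptions for the example, not the software's actual schema:

```python
import numpy as np
import pandas as pd

def pause_candidates(tracks, min_disp=4.0, v_max=0.3):
    """Displacement and movement filters over Percept-style tracks.

    tracks: DataFrame with one row per tracked position, columns
    ['id', 't', 'x', 'y', 'vx', 'vy'] (illustrative names).
    """
    # Displacement filter: drop IDs that never move >= 4 m from first fix
    first = tracks.groupby('id')[['x', 'y']].transform('first')
    disp = np.hypot(tracks['x'] - first['x'], tracks['y'] - first['y'])
    moved = disp.groupby(tracks['id']).transform('max') >= min_disp
    tracks = tracks[moved].copy()

    # Movement filter: keep decelerating points below the speed threshold
    speed = np.hypot(tracks['vx'], tracks['vy'])
    accel = speed.groupby(tracks['id']).diff() / tracks.groupby('id')['t'].diff()
    return tracks[(speed < v_max) & (accel < 0)]

# Person 'a' travels 5 m then slows to 0.2 m/s; 'b' barely moves at all
df = pd.DataFrame({
    'id': ['a', 'a', 'a', 'b', 'b'],
    't':  [0.0, 1.0, 2.0, 0.0, 1.0],
    'x':  [0.0, 3.0, 5.0, 0.0, 1.0],
    'y':  [0.0, 0.0, 0.0, 0.0, 0.0],
    'vx': [1.5, 1.0, 0.2, 0.5, 0.5],
    'vy': [0.0, 0.0, 0.0, 0.0, 0.0],
})
out = pause_candidates(df)
```

Person 'b' is removed by the displacement filter (a stationary artifact), while only person 'a's final, decelerating sub-threshold point survives as a pause candidate.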
Next, a spatiotemporal filter searches for any other tracks that come within 1.5 m, attempting to match up two or more individuals. At thresholds greater than 1.5 m, the false-positive rate was significantly higher due to the inclusion of simultaneous pauses without an apparent interaction. Thresholds below 1.5 m tended to exclude collisions lacking physical contact (a handshake, a hug, etc.), which must also be captured. If a person’s unique ID was within 1.5 m of another’s at the same recorded timestamp, these tracks were kept; all others were removed. A heading filter is then used to ensure that individuals’ headings differ by at least 90 degrees, signifying that they are approaching from different directions. If two people enter the space together, walk to a table, and stop to sit down together, they technically satisfy the requirements described above: sustained movement, a simultaneous slowdown, and close proximity to one another. But this is a planned interaction and should not be counted as a collision. Similar patterns of movement and pause can occur immediately outside classrooms, the elevator, or other spatial chokepoints where movement slows as people funnel into a smaller space. The purpose of the heading filter is to capture spontaneity by requiring that all people eligible for a collision also be traveling in semi-opposite directions. This step greatly reduces false positives by removing potential “collisions” that would otherwise be detected only because of chokepoints and natural impediments to general movement. During algorithm development, multiple angle thresholds were tested; 90 degrees best minimized false positives and corresponded to the orientation of the major traffic corridors.
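The proximity and heading filters might be sketched as below. The exact-timestamp self-join and the column names are simplifying assumptions for illustration (in practice, timestamps would be matched at the frame level):

```python
import numpy as np
import pandas as pd

def collision_pairs(pauses, radius=1.5, min_angle=90.0):
    """Spatiotemporal and heading filters: pair pause points from
    different people that share a timestamp, lie within `radius`
    metres, and approach from headings >= `min_angle` degrees apart.

    pauses: DataFrame with ['id', 't', 'x', 'y', 'heading'] (degrees);
    illustrative columns, not Percept's actual schema.
    """
    pairs = pauses.merge(pauses, on='t', suffixes=('_a', '_b'))
    pairs = pairs[pairs['id_a'] < pairs['id_b']]  # each pair once, no self-matches
    dist = np.hypot(pairs['x_a'] - pairs['x_b'], pairs['y_a'] - pairs['y_b'])
    dtheta = np.abs(pairs['heading_a'] - pairs['heading_b']) % 360
    dtheta = np.minimum(dtheta, 360 - dtheta)  # wrap difference into [0, 180]
    return pairs[(dist <= radius) & (dtheta >= min_angle)]

# 'a' and 'b' meet head-on 1 m apart; 'c' pauses elsewhere at the same time
pauses = pd.DataFrame({
    'id': ['a', 'b', 'c'],
    't': [5.0, 5.0, 5.0],
    'x': [0.0, 1.0, 10.0],
    'y': [0.0, 0.0, 0.0],
    'heading': [0.0, 180.0, 90.0],
})
hits = collision_pairs(pauses)
```

Wrapping the heading difference into [0, 180] matters: headings of 350° and 10° differ by 20°, not 340°, so a naive absolute difference would wrongly pass the 90° test.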
Lastly, to remove subsequent interactions between the same two individuals and to ensure that group interactions are not counted multiple times, a duplication filter is applied to remove all collisions within 2 m and 5 s of an initial collision. If not corrected for, group collisions can artificially inflate the number of collisions. For example, if three people enter the study space together and collide with another group of three, their interaction could produce nine unique recorded collisions, disregarding intra-group collisions due to the heading filter. For this study, we treat this as one collision—a single, spontaneous interaction—even though it involves more than two people.
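One way to implement this deduplication is a greedy pass over time-sorted detections; the tuple format below is an illustrative assumption, not the study's actual data structure:

```python
import math

def deduplicate(detections, r=2.0, dt=5.0):
    """Duplication filter: a detection within r metres and dt seconds
    of an already-kept collision is folded into that collision, so a
    group encounter counts once.

    detections: iterable of (t, x, y) tuples.
    """
    kept = []
    for t, x, y in sorted(detections):
        near = any(abs(t - t0) <= dt and math.hypot(x - x0, y - y0) <= r
                   for t0, x0, y0 in kept)
        if not near:
            kept.append((t, x, y))
    return kept

# Nine pairwise detections from two groups of three colliding at once,
# plus one genuinely separate event later
cluster = [(100.0 + 0.1 * i, 5.0, 5.0) for i in range(9)]
events = deduplicate(cluster + [(300.0, 5.0, 5.0)])
```

The nine near-simultaneous pairwise detections collapse into a single collision, matching the paper's three-on-three example, while the later event survives as its own collision.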
A flowchart depicting the algorithm’s steps can be found in Figure 4. Thresholds were empirically determined through iterative comparison with an initial set of ground-truth observations. These values were not fully optimized, but intended to be consistent with considerations such as average adult walking velocities and typical magnitudes for interpersonal distances. The resulting dataset of collisions is included as Supplementary Materials.
To evaluate the performance of the collision detection algorithm, detections were overlaid onto orthographic renders of the rasterized lidar data as animations, which helped to show both false positives and false negatives. Playback and review of these animations was used both during the development of the algorithm and for a performance assessment. Performance was formally validated by manually reviewing 127 detected collisions across three one-hour periods of varying occupancy (low, medium, and high activity) for true positives, false positives, and ambiguous cases. Manual validation focused on detected collisions, as missed interactions could not be systematically identified from the animations. A 10 min example animation is included as a Supplementary File, and Figure 5 shows a true-positive detection of two people interacting in the space.
Temporal analysis examined how collisions related to occupancy and varied throughout the semester. Data were grouped by week, day of week, and hour to identify patterns. Hourly data were cyclical and were therefore transformed to sine and cosine components as is recommended practice [83]. For each of these intervals, figures were generated showing overall trends, and a multivariate ordinary least squares (OLS) regression tested relationships between collisions, occupancy, and week of semester.
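The sine–cosine transformation for the cyclical hourly data can be expressed as follows (the function name is ours; only the encoding itself is from the text):

```python
import numpy as np

def cyclical_features(hours):
    """Encode hour-of-day (0-23) as sine/cosine components so that
    23:00 and 00:00 are treated as adjacent rather than 23 h apart,
    as recommended for cyclical predictors in regression."""
    theta = 2.0 * np.pi * np.asarray(hours, dtype=float) / 24.0
    return np.column_stack([np.sin(theta), np.cos(theta)])

X = cyclical_features([0, 6, 12, 18, 23])
# On the unit circle, hour 23 lands next to hour 0...
gap_23_0 = np.linalg.norm(X[4] - X[0])
# ...while hour 12 sits diametrically opposite hour 0
gap_12_0 = np.linalg.norm(X[2] - X[0])
```

Feeding both components into the OLS model lets a linear regression capture the smooth daily cycle without an artificial discontinuity at midnight.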
Spatial analysis used generalized linear regression (GLR) to examine relationships between occupancy and collisions across a rasterized 1 m grid of the study area. Residual analysis identified areas with higher or lower collision rates than expected given traffic levels. A geographically weighted regression (GWR) was also applied to account for spatial autocorrelation. To examine collision patterns near movement-pause transitions, analysis focused on the upper-left corridor in Figure 6. Horizontal gridlines (10 cm) were positioned across the hallway, with variables calculated for each: occupancy sum, average velocity, and collision count. This corridor was selected because one side has seating areas while the other serves as a throughway, allowing examination of how spatial attributes—in this case the “friction” associated with the seating area—influence collision patterns.
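The residual analysis can be illustrated with a simplified stand-in: an ordinary least squares fit of collisions on occupancy over synthetic cell values (not the study's actual GLR model or data), where positive residuals mark cells with more collisions than their traffic alone predicts:

```python
import numpy as np

def collision_residuals(occupancy, collisions):
    """Fit collisions ~ occupancy across grid cells by least squares
    and return residuals; positive values flag cells that generate
    more interactions than expected for their traffic level.

    occupancy, collisions: 1-D arrays over the nonzero grid cells.
    """
    X = np.column_stack([np.ones_like(occupancy), occupancy])
    beta, *_ = np.linalg.lstsq(X, collisions, rcond=None)
    return collisions - X @ beta

# Synthetic cells: the last one collides far more than its traffic suggests
occ = np.array([10.0, 20.0, 30.0, 40.0])
col = np.array([1.0, 2.0, 3.0, 10.0])
res = collision_residuals(occ, col)
```

In the study, mapping these residuals over the 1 m grid is what distinguishes, for example, the elevator area (large positive residual) from a busy but low-interaction corridor (negative residual); the GWR extends this by letting the fitted coefficients vary spatially.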

3. Results

This analysis focused on three key aspects of “collision” detection and spatiotemporal patterns within our study area—a shared-space commons on the ground floor of a hybrid academic–residential building designed to facilitate community. We define collisions as unplanned encounters where individuals pause to interact. First, we evaluated our custom collision detection algorithm’s performance to establish dataset reliability. Second, we examined temporal patterns of collisions and occupancy across multiple scales to understand when interactions occur. Finally, we analyzed spatial distributions to identify where interactions cluster and how building features influence collision frequency.

3.1. Algorithm Performance

During the study period, approximately 67 million person-seconds of Percept data were collected over 115 days, from which the algorithm detected 14,022 collisions (averaging 121.9 per day; 5.1 per hour). To assess false-positive rates, the collision algorithm was manually validated against three hours (10,800 individual frames) of lidar data representing different occupancy scenarios (low, medium, and high use). For this evaluation, collision detections were overlaid onto orthographically rendered lidar frames as animations. The process was intensive and required repeated viewings, during which the time and location of collisions apparent in the data were carefully noted. While the validation sample size is limited, the manual review was thorough and systematic: each detected collision was evaluated against the 10,800 frames across varying occupancy conditions, prioritizing accuracy over sample size. The resulting precision metrics therefore reflect confident assessments of algorithm performance rather than estimates based on large but potentially unreliable samples. Detected collisions were categorized as “True Positives,” “Unclear,” or “False Positives.” Of 127 detected collisions in the validation set, 68.5% were true positives, 11.0% were false positives, and 20.5% were unclear. Based on confirmed cases only, the algorithm achieved 86.1% precision. No spatial clustering was observed in the false positives.

3.2. Temporal Analysis

Occupancy and collisions were both highest during the first few weeks of the semester and generally declined as the semester progressed (Figure 7). Daily collisions peaked in mid-September before dropping off (Figure 7a). There was a notable increase in collisions between the Thanksgiving and winter breaks. Major troughs corresponded to semester breaks (Fall Break in Week 6, Thanksgiving Break in Week 13, and end of semester in Week 16; Figure 7b). Weekly patterns showed occupancy highest during weekdays, while collisions peaked on Fridays and Saturdays (Figure 7c). A notable temporal lag between occupancy and collisions was evident in the hourly data (Figure 7d). Occupancy peaked during typical working hours (9:00–20:00) at 15:00, while collisions increased throughout the day to an evening peak from 17:00 to 20:00.
Occupancy and collisions were strongly related when fit with second-order polynomials across all temporal scales: hourly (R2 = 0.76, F(2,2672) = 4128.97, p < 0.001), daily (R2 = 0.74, F(2,112) = 161.97, p < 0.001), and weekly (R2 = 0.92, F(2,14) = 81.22, p < 0.001) intervals (Figure 8). Log-log transformations revealed scale-dependent relationships, with daily (n = 1.17) and weekly (n = 1.15) intervals suggesting slightly super-linear relationships, while the hourly interval (n = 0.78) suggested a sub-linear trend. A multiple regression on the sine-cosine transformed hourly data showed a strong relationship between collisions and occupancy (R2 = 0.99, F(3,21) = 801.1, p < 0.001).
A one-way ANOVA examining day-of-week effects on collisions per thousand occupancy showed that day of the week significantly influences collision frequency (p < 0.001), even when normalized by occupancy. Relative to the weekly average, Friday (+24.53%) and Saturday (+51.90%) had much higher collision rates than Monday (−17.13%), Tuesday (−32.37%), and Wednesday (−12.07%). This day-of-week effect was consistent throughout the semester (Figure 8).

3.3. Spatial Analysis

Figure 9 shows total collisions and occupancy for the Fall 2023 semester summarized into a 1 m grid over the study area. The occupancy pattern (Figure 9a) shows that the corridors, base of stairs, seating areas, and the elevator were the most used areas in the study space. The left corridor, which exits towards campus, was used more than the right, which exits towards the downtown area. The most popular seating area was in the center of the study space, where large tables and accessible floor outlets encourage occupants to stay for extended periods of time. Generally, transition areas between hallways and seating, and edge spaces closer to walls, were least used. Collision counts (Figure 9b) were high at the base of the stairs, around seating areas in the study space, and the area around the elevator, which contained over 2000 of the semester’s 14,022 collisions. Collision counts were lowest in corridors, edge spaces (e.g., near walls), and by exits (excluding the elevator).
To identify locations with more or fewer collisions than expected based on occupancy, a generalized linear regression (GLR) was applied to nonzero grid cells, with residuals mapped to show deviations from expected collision rates (Figure 10). This revealed a significant relationship between occupancy and collisions (R2 = 0.598, F(3,785) = 390.02, p < 0.001). A geographically weighted regression, which accounts for spatial autocorrelation, corroborated these patterns with improved model fit (R2 = 0.84). The left and center corridors and the stairs were sites of fewer collisions than expected, while the right hallway had about as many as were expected. Notably, the right hallway houses two classrooms and often exhibits student artwork, while the left hallway does not. Additionally, the study/seating areas with the highest occupancy had fewer collisions than expected. However, many other areas of seating experienced a higher number of collisions than expected. The base of the stairs, a major point of intersection, and the center of the Community Assembly space, often used during events, both experienced more collisions than expected. The most extreme collision-to-occupancy distribution occurred at the elevator, where waiting naturally concentrates people in the space.
To examine movement friction effects on collisions, a 10 cm grid was created to capture fine-scale patterns in a primary throughway (Figure 11), and average velocity, total occupancy, and total collisions were calculated for each cell. Velocity and occupancy both peaked at the hallway center, but velocity decreased more steeply on the seating side than the non-seating side. Collisions were bimodally distributed on either side of the high-velocity center “channel,” with more collisions on the seating side of the throughway. Occupancy remained higher on the seating side and lowest on the non-seating side. Collisions were minimal in the center of the hallway and at the very edge of the seating area.

4. Discussion

The collision detection algorithm successfully identified 14,022 serendipitous interactions from 67 million person-seconds of lidar data over the academic semester, achieving 86.1% precision and providing a substantial dataset for analysis. While not comprehensive, this dataset represents a large sample from which to derive insights about aggregate spatial behavior patterns. Based on the manual review of a portion of the dataset, the false-positive rate was 13.9% among clear cases, a rate low enough to give confidence that the observed patterns are generally not spurious artifacts. While our collision detection algorithm employs established spatial-temporal analysis techniques [60,61,62,63], the contribution lies in demonstrating that this specific combination of filters reliably identifies unplanned social interactions from trajectory data.
The number of collisions generally declined over the semester, then peaked again between Thanksgiving and winter breaks. Several explanations could account for this decline. Students may have a larger social circle and "cast a wider net" at the beginning of the semester; as the semester goes on, a subset of these relationships evolves into stronger friendships. While these relationships play a major role in placemaking [69,70,71,72,75], the number of people each person is willing to stop and talk to ("weak ties" [14]) decreases as the semester goes on, despite knowing more people in total. Alternatively, as people become more connected with one another, their collisions may become more subtle and thus more difficult to detect algorithmically. Finally, collisions may shift from open, public spaces to other locations (dorm rooms, hallways, etc.). Distinguishing between these possibilities requires additional data beyond collision frequency, and future work will use qualitative data and interviews to directly answer this question.
As expected, occupancy strongly predicted collisions overall, but examining different temporal scales revealed important nuances in this relationship, a result in line with Cöltekin et al. [84], who extended the Modifiable Areal Unit Problem [85] to temporal analysis. Collisions (relative to occupancy) are more common toward the end of the week and less common at the beginning. They are shifted in time toward late afternoon/early evening and away from the morning. This temporal displacement suggests that while physical co-presence enables collisions, social timing determines when they actually occur. These findings align with Harari et al. [86], who found that conversation frequency among young adults is highest on Fridays and Saturdays and lowest on Tuesdays, and that conversations are more likely in the afternoon and evening hours than at night or in the morning.
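The displacement between occupancy and collision peaks can be estimated from the two hourly series with a normalized cross-correlation. This is a generic sketch, not the lag-estimation method used in the paper; it assumes two equal-length hourly series.

```python
import numpy as np

def peak_lag_hours(occupancy, collisions):
    """Estimate the lag (in hours) at which hourly collision counts best
    track hourly occupancy, via normalized cross-correlation.
    A positive result means collisions trail occupancy."""
    o = np.asarray(occupancy, float)
    c = np.asarray(collisions, float)
    o = (o - o.mean()) / o.std()
    c = (c - c.mean()) / c.std()
    # Full cross-correlation covers lags -(N-1) .. N-1.
    lags = np.arange(-(len(o) - 1), len(o))
    xcorr = np.correlate(c, o, mode="full")
    return int(lags[np.argmax(xcorr)])
```

Applied to the building's hourly occupancy and collision counts, a result of 2 or 3 would correspond to the afternoon displacement reported above.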
Collisions occurred both in expected locations that validate the algorithm (e.g., outside the elevator) and at locations explicitly designed in this building to facilitate interaction, such as the bottom of the main staircase. Both transit corridors and seating areas generated interactions, but transit areas produced fewer collisions than expected given their occupancy levels, while some exits (e.g., toward downtown) generated more interactions than others (e.g., toward campus). At a finer resolution, analysis of a main corridor revealed flow patterns resembling a river: the center channel had the highest average velocity, while collisions occurred primarily at the periphery. Further, the "friction" associated with the seating side was more likely to generate collisions than the open side of the corridor. These results indicate that collisions occur more frequently on the boundaries between movement and pause zones, rather than in one specific zone, and they align with Whyte's foundational work on public space usage, which identified the social magnetism of amenities and the importance of adding stimuli that induce people to pause and interact naturally [19]. From this perspective, our study provides quantitative evidence that architects and designers should seek to capitalize on these zones of high friction to facilitate collisions. Specifically, our findings suggest placing seating areas adjacent to major circulation corridors rather than isolating them in separate zones, as the boundary between movement and pause spaces generates significantly more interactions than either zone alone. Elevator waiting areas could be designed as social spaces rather than purely functional ones, given their exceptional collision potential.
Additionally, designers should consider that social interactions peak in evening hours rather than during maximum occupancy periods, suggesting that lighting, programming, and amenities should be optimized for late afternoon and early evening use.
This research demonstrates a fundamental advancement in our ability to empirically measure the social outcomes of architectural design decisions. By quantifying how specific design features—from seating placement to circulation patterns—translate into measurable interaction, this methodology bridges the gap between design intent and behavioral reality. The temporal and spatial variations we observed suggest that “design for interaction” is highly context-sensitive, influenced not only by fixed architectural elements but by dynamic factors such as seasonal patterns and temporary displays (e.g., artwork). These departures from simple occupancy-based predictions underscore the need for similar observational studies across diverse building types, user populations, and environmental conditions to develop a more nuanced understanding of social space design. As computational urban studies continue to evolve, automated collision detection represents a scalable approach to post-occupancy evaluation that could fundamentally transform how we assess and iterate on built environments. Rather than relying solely on surveys, interviews, or observations, architects and planners could access continuous, remote sensing-generated data about how their design decisions actually perform in practice, enabling evidence-based refinements that enhance social connectivity in our increasingly important shared spaces.

Limitations and Future Work

We developed a rule-based, parameterized algorithm to detect collisions. These parameters were iteratively tuned during development until the algorithm reliably identified the collision behavior of interest, though they were not fully optimized and represent one reasonable configuration among many possible approaches. However, from our validation efforts, we know that many collisions, especially those involving an already stationary individual, go uncounted, as Percept can lose track of people who are stationary for prolonged periods of time. Ref. [47] explored YOLO-based methods [87] for object tracking that offer several advantages over Percept, yet have disadvantages as well (e.g., frame-based processing fails to use the trajectory context that Percept handles well). Future work will explore the use of Vision Language Models [26], using the samples generated here as training data, which may be able both to track objects and to identify collisions. Systems that are better able to treat either the underlying lidar data or representations of it as a continuous dataset are more likely to accurately count stationary individuals. If these collisions were more accurately counted, we would expect an even steeper gradient of collisions across the principal axis of movement in high-"friction" areas, as this would include interactions between seated individuals and passersby. However, nothing in our analysis suggests that the temporal patterns of these kinds of collisions would be different from those we did observe.
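To make the rule-based approach concrete, a minimal pairwise collision test over two time-aligned trajectories might combine proximity, low-speed, and duration thresholds. The threshold values and sampling rate below are illustrative placeholders, not the tuned parameters from the study.

```python
import numpy as np

# Illustrative thresholds: 1.5 m proximity, 0.3 m/s speed, 10 s duration.
PROX_M, SPEED_MS, MIN_S = 1.5, 0.3, 10.0

def detect_collision(traj_a, traj_b, dt=1.0):
    """Return True if two (N, 2) position arrays, sampled every `dt`
    seconds, show a sustained period of mutual proximity with both
    parties near-stationary."""
    a, b = np.asarray(traj_a, float), np.asarray(traj_b, float)
    dist = np.linalg.norm(a - b, axis=1)
    # Per-step speeds; pad the first sample so the mask has length N.
    sa = np.linalg.norm(np.diff(a, axis=0), axis=1) / dt
    sb = np.linalg.norm(np.diff(b, axis=0), axis=1) / dt
    slow = np.r_[True, (sa < SPEED_MS) & (sb < SPEED_MS)]
    ok = (dist < PROX_M) & slow
    # Longest run of consecutive qualifying samples.
    run = best = 0
    for flag in ok:
        run = run + 1 if flag else 0
        best = max(best, run)
    return best * dt >= MIN_S
```

As noted above, a filter of this kind inherently misses encounters in which the tracker has already dropped a stationary party, since one of the two trajectories simply does not exist.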
One of the other major challenges to collision detection was the quantification of collisions during planned social events and other high-density periods. In an empty space, a two-way collision is relatively easy both to detect algorithmically and to validate manually using animation. During periods of high density, however, the algorithm appeared to perform worse, and validation was much more difficult due to the frequent and brief interactions at these times. The event filter in the algorithm was designed to limit the effects of these events on the entire dataset, but by better classifying collisions as occurring during events or outside of them, a clearer classification of space use could be derived. Future work could explore these events in a much more targeted way, and secondary, supplementary algorithms may need to be designed to detect these interactions of a substantially different character.
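A simple stand-in for an event filter is to flag hours whose occupancy is a statistical outlier for the series, so collisions during planned events can be classified separately. The z-score rule below is an illustrative assumption, not the filter actually used in the study.

```python
import numpy as np

def flag_event_periods(hourly_occupancy, z=2.0):
    """Flag hours whose occupancy exceeds the series mean by more than
    `z` standard deviations, marking likely planned events or other
    high-density periods for separate treatment."""
    occ = np.asarray(hourly_occupancy, float)
    thresh = occ.mean() + z * occ.std()
    return occ > thresh
```

Collisions timestamped inside flagged hours could then be labeled "event" interactions and analyzed with different, density-appropriate criteria.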
Strictly speaking, the collision detection algorithm does not capture interaction; it captures pause and proximity in a lidar data stream. Intensive in-person observations and animation-based validation confirm that most detected events represent true social interactions. However, in some places, such as the elevator, the algorithm registers movement and pause as a collision when individuals are likely simply co-located rather than interacting socially. To address this limitation, future work will explore the use of acoustic sensors and frequency-specific spatial sound intensity algorithms, paired with the lidar information, to gauge whether talking is occurring while still maintaining anonymity.
Ultimately, broader data collection is needed to understand how diverse indoor environments influence collision patterns. While this four-year project will continue gathering data from our study area, expanding to other building types and common spaces is essential for developing generalizable insights. Future work should include short-term deployments (several weeks) in buildings not explicitly designed for social interaction, potentially using mobile hemispherical lidar sensors for more efficient data collection across diverse architectural contexts.
Long-term monitoring across multiple academic cycles will reveal seasonal and annual variations in social interaction patterns. Key questions include whether collision patterns differ systematically between spring and fall semesters, and whether these seasonal effects remain consistent across years. Such temporal analysis will help distinguish between universal patterns of social interaction and those specific to particular environmental or institutional contexts.
Beyond observational studies, this methodology enables experimental approaches to space design. Building on Vroman and Lagrange’s [88] findings that visual and physical obstructions alter established movement patterns, future research could systematically test design interventions. Controlled experiments might involve strategically placing visual attractions (artwork, displays) or reconfiguring furniture layouts within monitored spaces, then measuring resulting changes in collision frequency and distribution. Such experimental validation could provide evidence-based guidance for architects and planners designing large, open indoor environments, moving beyond intuitive design principles toward data-driven optimization of social spaces.
This remote-sensing-based framework represents a significant advancement in post-occupancy evaluation, offering quantitative feedback on design effectiveness that can usefully augment surveys, interviews, and in-person observation. The true power of this approach lies not in replacing traditional evaluation methods, but in creating meaningful fusion between objective behavioral data and subjective user experiences. While automated collision detection reveals where and when interactions occur with high spatiotemporal precision, surveys and interviews provide essential context about why certain spaces facilitate meaningful connections and how users experience these encounters.

5. Conclusions

This study demonstrated the use of a static-mounted lidar network to detect and analyze the anonymized use, movement, and interaction patterns of individuals within a shared indoor space. A novel collision detection algorithm was developed to parse tabular object-tracking data; it achieved 86.1% precision and captured a large sample of 14,022 collisions over the course of a semester. The system enabled detailed analysis of spatiotemporal dynamics, revealing that while occupancy strongly predicted collisions overall, significant temporal and spatial variations emerged that deviated from simple occupancy-based expectations.
Collisions were highest during the first weeks of the semester before dropping off, suggesting that students' social circles may narrow and tighten after the beginning of the semester, that collisions may become more subtle, or that collisions move elsewhere in the building. Collisions per occupancy generally increased throughout the week, with Friday and Saturday having the most collisions per occupancy. Similarly, on an hourly basis, occupancy peaked during working hours, but collisions peaked later in the evening, tracking hourly occupancy patterns with a notable 2–3 h lag. This suggests that social interactions occur not simply when spaces are most occupied, but when social conditions are most conducive.
Spatial patterns revealed collision clustering at key transition zones—elevator areas, stair bases, and boundaries between movement and pause spaces—while high-use transit corridors generated fewer interactions than would be expected given occupancy. A “friction effect” was observed, where seating-adjacent areas produced more collisions than open corridors, and, within corridors, a river-like pattern emerged with minimal collisions in the high-velocity center and concentrated interactions at the edges where movement naturally slows.
This work establishes a scalable framework for post-occupancy evaluation that generates continuous, quantitative data on design effectiveness for generating serendipitous encounters. This approach provides objective, real-time measurement of social interactions that can meaningfully complement surveys and interviews. The methodology enables experimental testing of environmental changes (e.g., adding seating near transit corridors, hanging artwork, rearranging furniture), allowing practitioners to measure the actual impact of modifications. This methodology thus provides the empirical tools needed to assess whether designs foster the spontaneous interactions that are essential for the development of social bonds that transform institutional spaces into meaningful places.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/rs17183236/s1, Video S1: Collision Algorithm Animation. File S2: A sample Percept dataset. File S3: Collisions dataset.

Author Contributions

Conceptualization, A.H.F., T.J.P., T.D.B., S.K. and N.A.; Data curation, A.H.F. and T.J.P.; Formal analysis, A.H.F., T.J.P. and T.D.B.; Funding acquisition, T.J.P., T.D.B. and N.A.; Investigation, A.H.F., T.J.P., T.D.B., S.K. and N.A.; Methodology, A.H.F., T.J.P., T.D.B., S.K. and N.A.; Project administration, T.J.P. and T.D.B.; Resources, A.H.F., T.J.P. and T.D.B.; Software, A.H.F., T.J.P. and S.K.; Supervision, T.J.P. and T.D.B.; Validation, A.H.F. and T.J.P.; Visualization, A.H.F., T.J.P. and T.D.B.; Writing—original draft, A.H.F.; Writing—review and editing, A.H.F., T.J.P., T.D.B., S.K. and N.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the US National Science Foundation, grant number BCS-2149229.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Zlatanova, S.; Sithole, G.; Nakagawa, M.; Zhu, Q. Problems in Indoor Mapping and Modelling. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2013, XL-4/W4, 63–68. [Google Scholar] [CrossRef]
  2. Sithole, G.; Zlatanova, S. Position, Location, Place and Area: An Indoor Perspective. ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci. 2016, III–4, 89–96. [Google Scholar] [CrossRef]
  3. Chen, J.; Clarke, K.C. Indoor Cartography. Cartogr. Geogr. Inf. Sci. 2020, 47, 95–109. [Google Scholar] [CrossRef]
  4. Villarreal, M.; Baird, T.D.; Tarazaga, P.A.; Kniola, D.J.; Pingel, T.J.; Sarlo, R. Shared Space and Resource Use within a Building Environment: An Indoor Geography. Geogr. J. 2025, 191, e12604. [Google Scholar] [CrossRef]
  5. Chan, T.H.; Hesse, H.; Ho, S.G. LiDAR-Based 3D SLAM for Indoor Mapping. In Proceedings of the 2021 7th International Conference on Control, Automation and Robotics (ICCAR), Singapore, 23–26 April 2021; pp. 285–289. [Google Scholar]
  6. Ding, Y.; Zheng, X.; Zhou, Y.; Xiong, H.; Gong, J. Low-Cost and Efficient Indoor 3D Reconstruction through Annotated Hierarchical Structure-from-Motion. Remote Sens. 2018, 11, 58. [Google Scholar] [CrossRef]
  7. Mendoza-Silva, G.M.; Torres-Sospedra, J.; Huerta, J. A Meta-Review of Indoor Positioning Systems. Sensors 2019, 19, 4507. [Google Scholar] [CrossRef]
  8. Al-Ammar, M.A.; Alhadhrami, S.; Al-Salman, A.; Alarifi, A.; Al-Khalifa, H.S.; Alnafessah, A.; Alsaleh, M. Comparative Survey of Indoor Positioning Technologies, Techniques, and Algorithms. In Proceedings of the 2014 International Conference on Cyberworlds, Santander, Cantabria, Spain, 6–8 October 2014; pp. 245–252. [Google Scholar]
  9. Karam, S.; Lehtola, V.; Vosselman, G. Strategies to Integrate IMU and Lidar SLAM for Indoor Mapping. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, V-1–2020, 223–230. [Google Scholar] [CrossRef]
  10. Temple, P. From Space to Place: University Performance and Its Built Environment. High Educ. Policy 2009, 22, 209–223. [Google Scholar] [CrossRef]
  11. Cassidy, C.; Trew, K. Identity Change in Northern Ireland: A Longitudinal Study of Students’ Transition to University. J. Soc. Issues 2004, 60, 523–540. [Google Scholar] [CrossRef]
  12. Chow, K.; Healey, M. Place Attachment and Place Identity: First-Year Undergraduates Making the Transition from Home to University. J. Environ. Psychol. 2008, 28, 362–372. [Google Scholar] [CrossRef]
  13. Tuan, Y.-F. Place: An Experiential Perspective. Geogr. Rev. 1975, 65, 151. [Google Scholar] [CrossRef]
  14. Sandstrom, G.M.; Dunn, E.W. Social Interactions and Well-Being: The Surprising Power of Weak Ties. Pers. Soc. Psychol. Bull. 2014, 40, 910–922. [Google Scholar] [CrossRef]
  15. Irving, G.L.; Ayoko, O.B.; Ashkanasy, N.M. Collaboration, Physical Proximity and Serendipitous Encounters: Avoiding Collaboration in a Collaborative Building. Organ. Stud. 2020, 41, 1123–1146. [Google Scholar] [CrossRef]
  16. Björneborn, L. Three Key Affordances for Serendipity: Toward a Framework Connecting Environmental and Personal Factors in Serendipitous Encounters. J. Doc. 2017, 73, 1053–1081. [Google Scholar] [CrossRef]
  17. Brown, C.; Efstratiou, C.; Leontiadis, I.; Quercia, D.; Mascolo, C. Tracking Serendipitous Interactions: How Individual Cultures Shape the Office. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, Baltimore, MD, USA, 15–19 February 2014; ACM: New York, NY, USA, 2014; pp. 1072–1081. [Google Scholar]
  18. Pennington, K.E. Hey, Maybe You Can Help Me with This: Chance Encounters, Geographic Proximity, and Innovative Collaboration. Ph.D. Thesis, University of Minnesota, Minneapolis, MN, USA, 2021. [Google Scholar]
  19. Whyte, W.H. The Social Life of Small Urban Spaces; 7. Print; Project for Public Spaces: New York, NY, USA, 2010; ISBN 978-0-9706324-1-8. [Google Scholar]
  20. Otsuka, K.; Mukawa, N. Multiview Occlusion Analysis for Tracking Densely Populated Objects Based on 2-D Visual Angles. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004, Washington, DC, USA, 27 June–2 July 2004; Volume 1, pp. 90–97. [Google Scholar]
  21. Teixeira, T.; Dublon, G.; Savvides, A. A Survey of Human-Sensing: Methods for Detecting Presence, Count, Location, Track, and Identity; ENALAB, Yale University: New Haven, CT, USA, 2010; pp. 1–41. [Google Scholar]
  22. He, Y.; Wei, X.; Hong, X.; Shi, W.; Gong, Y. Multi-Target Multi-Camera Tracking by Tracklet-to-Target Assignment. IEEE Trans. Image Process. 2020, 29, 5191–5205. [Google Scholar] [CrossRef]
  23. Durrant-Whyte, H.; Bailey, T. Simultaneous Localization and Mapping: Part I. IEEE Robot. Automat. Mag. 2006, 13, 99–110. [Google Scholar] [CrossRef]
  24. Zou, Q.; Sun, Q.; Chen, L.; Nie, B.; Li, Q. A Comparative Analysis of LiDAR SLAM-Based Indoor Navigation for Autonomous Vehicles. IEEE Trans. Intell. Transport. Syst. 2022, 23, 6907–6921. [Google Scholar] [CrossRef]
  25. Tiozzo Fasiolo, D.; Maset, E.; Scalera, L.; Macaulay, S.O.; Gasparetto, A.; Fusiello, A. Combing Lidar SLAM and Deep Learning-Based People Detection for Autonomous Indoor Mapping in a Crowded Environment. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2022, XLIII-B1-2022, 447–452. [Google Scholar] [CrossRef]
  26. Zhou, Y.; Zeng, Z.; Chen, A.; Zhou, X.; Ni, H.; Zhang, S.; Li, P.; Liu, L.; Zheng, M.; Chen, X. Evaluating Modern Approaches in 3D Scene Reconstruction: NeRF vs. Gaussian-Based Methods. In Proceedings of the 2024 6th International Conference on Data-driven Optimization of Complex Systems (DOCS), Hangzhou, China, 16–18 August 2024; pp. 926–931. [Google Scholar]
  27. Ying, Y.; Koeva, M.; Kuffer, M.; Zevenbergen, J. Toward 3D Property Valuation—A Review of Urban 3D Modelling Methods for Digital Twin Creation. ISPRS Int. J. Geo-Inf. 2022, 12, 2. [Google Scholar] [CrossRef]
  28. Attaran, M.; Celik, B.G. Digital Twin: Benefits, Use Cases, Challenges, and Opportunities. Decis. Anal. J. 2023, 6, 100165. [Google Scholar] [CrossRef]
  29. Kang, Z.; Yang, J.; Yang, Z.; Cheng, S. A Review of Techniques for 3D Reconstruction of Indoor Environments. ISPRS Int. J. Geo-Inf. 2020, 9, 330. [Google Scholar] [CrossRef]
  30. Rogers, S.R.; Manning, I.; Livingstone, W. Comparing the Spatial Accuracy of Digital Surface Models from Four Unoccupied Aerial Systems: Photogrammetry Versus LiDAR. Remote Sens. 2020, 12, 2806. [Google Scholar] [CrossRef]
  31. Storch, M.; Kisliuk, B.; Jarmer, T.; Waske, B.; De Lange, N. Comparative Analysis of UAV-Based LiDAR and Photogrammetric Systems for the Detection of Terrain Anomalies in a Historical Conflict Landscape. Sci. Remote Sens. 2025, 11, 100191. [Google Scholar] [CrossRef]
  32. Borrega, P.L.; Felisilda, Y.A.; Sarmiento, C.J.; Tamondong, A. 3D Reconstruction of the Fort Santiago Dungeons Using Handheld Laser Scanning Method. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2024, XLVIII-4/W8-2023, 69–75. [Google Scholar] [CrossRef]
  33. Antova, G. Portable Laser Scanning Solutions for 3D Modelling of Large Buildings. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2024, XLVIII-4/W10-2024, 13–19. [Google Scholar] [CrossRef]
  34. Kelly, C.; Mao, O.; Gamlich, V.; Kirkpatrick, R. Assessment of Slam Lidar—An Accuracy Assessment and Drift Analysis of the Leica BLK2GO. In Proceedings of the IGARSS 2024—2024 IEEE International Geoscience and Remote Sensing Symposium, Athens, Greece, 7–12 July 2024; pp. 6404–6407. [Google Scholar]
  35. Royo, S.; Ballesta-Garcia, M. An Overview of Lidar Imaging Systems for Autonomous Vehicles. Appl. Sci. 2019, 9, 4093. [Google Scholar] [CrossRef]
  36. Li, Y.; Ibanez-Guzman, J. Lidar for Autonomous Driving: The Principles, Challenges, and Trends for Automotive Lidar and Perception Systems. IEEE Signal Process. Mag. 2020, 37, 50–61. [Google Scholar] [CrossRef]
  37. Zhao, X.; Sun, P.; Xu, Z.; Min, H.; Yu, H. Fusion of 3D LIDAR and Camera Data for Object Detection in Autonomous Vehicle Applications. IEEE Sens. J. 2020, 20, 4901–4913. [Google Scholar] [CrossRef]
  38. Vitols, G.; Bumanis, N.; Arhipova, I.; Meirane, I. LiDAR and Camera Data for Smart Urban Traffic Monitoring: Challenges of Automated Data Capturing and Synchronization. In Applied Informatics; Florez, H., Pollo-Cattaneo, M.F., Eds.; Communications in Computer and Information Science; Springer International Publishing: Cham, Switzerland, 2021; Volume 1455, pp. 421–432. ISBN 978-3-030-89653-9. [Google Scholar]
  39. Zhang, Z.; Zheng, J.; Xu, H.; Wang, X. Vehicle Detection and Tracking in Complex Traffic Circumstances with Roadside LiDAR. Transp. Res. Rec. J. Transp. Res. Board 2019, 2673, 62–71. [Google Scholar] [CrossRef]
  40. Gómez, J.; Aycard, O.; Baber, J. Efficient Detection and Tracking of Human Using 3D LiDAR Sensor. Sensors 2023, 23, 4720. [Google Scholar] [CrossRef] [PubMed]
  41. Triess, L.T.; Dreissig, M.; Rist, C.B.; Marius Zollner, J. A Survey on Deep Domain Adaptation for LiDAR Perception. In Proceedings of the 2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops), Nagoya, Japan, 11–17 July 2021; pp. 350–357. [Google Scholar]
  42. Dreissig, M.; Scheuble, D.; Piewak, F.; Boedecker, J. Survey on LiDAR Perception in Adverse Weather Conditions. In Proceedings of the 2023 IEEE Intelligent Vehicles Symposium (IV), Anchorage, AK, USA, 4–7 June 2023; pp. 1–8. [Google Scholar]
  43. Li, X.; Zhou, Y.; Hua, B. Study of a Multi-Beam LiDAR Perception Assessment Model for Real-Time Autonomous Driving. IEEE Trans. Instrum. Meas. 2021, 70, 1–15. [Google Scholar] [CrossRef]
  44. Marrón-Romera, M.; García, J.C.; Sotelo, M.A.; Pizarro, D.; Mazo, M.; Cañas, J.M.; Losada, C.; Marcos, Á. Stereo Vision Tracking of Multiple Objects in Complex Indoor Environments. Sensors 2010, 10, 8865–8887. [Google Scholar] [CrossRef]
  45. Benezeth, Y.; Emile, B.; Laurent, H.; Rosenberger, C. Vision-Based System for Human Detection and Tracking in Indoor Environment. Int. J. Soc. Robot. 2010, 2, 41–52. [Google Scholar] [CrossRef]
  46. Kadam, P.; Fang, G.; Zou, J.J. Object Tracking Using Computer Vision: A Review. Computers 2024, 13, 136. [Google Scholar] [CrossRef]
  47. Karki, S.; Pingel, T.J.; Baird, T.D.; Flack, A.; Ogle, T. Enhancing Digital Twins with Human Movement Data: A Comparative Study of Lidar-Based Tracking Methods. Remote Sens. 2024, 16, 3453. [Google Scholar] [CrossRef]
  48. Gunter, A.; Boker, S.; Konig, M.; Hoffmann, M. Privacy-Preserving People Detection Enabled by Solid State LiDAR. In Proceedings of the 2020 16th International Conference on Intelligent Environments (IE), Madrid, Spain, 20–23 July 2020; pp. 1–4. [Google Scholar]
  49. Obanawa, H.; Yoshitoshi, R.; Watanabe, N.; Sakanoue, S. Portable LiDAR-Based Method for Improvement of Grass Height Measurement Accuracy: Comparison with SfM Methods. Sensors 2020, 20, 4809. [Google Scholar] [CrossRef] [PubMed]
  50. Zhou, T.; Hasheminasab, S.M.; Habib, A. Tightly-Coupled Camera/LiDAR Integration for Point Cloud Generation from GNSS/INS-Assisted UAV Mapping Systems. ISPRS J. Photogramm. Remote Sens. 2021, 180, 336–356. [Google Scholar] [CrossRef]
  51. Meng, X.; Wang, L.; Silván-Cárdenas, J.L.; Currit, N. A Multi-Directional Ground Filtering Algorithm for Airborne LIDAR. ISPRS J. Photogramm. Remote Sens. 2009, 64, 117–124. [Google Scholar] [CrossRef]
  52. Mountain, D.; Raper, J. Modelling Human Spatio-Temporal Behaviour: A Challenge for Location-Based Services. In Proceedings of the 6th International Conference on GeoComputation, Brisbane, Australia, 24–26 September 2001; pp. 24–26. [Google Scholar]
  53. Doyle-Baker, P.K.; Ladle, A.; Rout, A.; Galpern, P. Smartphone GPS Locations of Students’ Movements to and from Campus. ISPRS Int. J. Geo-Inf. 2021, 10, 517. [Google Scholar] [CrossRef]
  54. Wang, Y.; Huang, C.; Shan, J. An Initial Study on College Students’ Daily Activities Using GPS Trajectories. In Proceedings of the 2015 23rd International Conference on Geoinformatics, Wuhan, China, 19–21 June 2015; pp. 1–6. [Google Scholar]
  55. Zenk, S.N.; Schulz, A.J.; Matthews, S.A.; Odoms-Young, A.; Wilbur, J.; Wegrzyn, L.; Gibbs, K.; Braunschweig, C.; Stokes, C. Activity Space Environment and Dietary and Physical Activity Behaviors: A Pilot Study. Health Place 2011, 17, 1150–1161. [Google Scholar] [CrossRef]
  56. Korpilo, S.; Virtanen, T.; Lehvävirta, S. Smartphone GPS Tracking—Inexpensive and Efficient Data Collection on Recreational Movement. Landsc. Urban Plan. 2017, 157, 608–617. [Google Scholar] [CrossRef]
  57. Hägerstraand, T. What about People in Regional Science? Pap. Reg. Sci. 1970, 24, 7–21. [Google Scholar] [CrossRef]
  58. Goulias, K.G. Travel Behavior Models. In Handbook of Behavioral and Cognitive Geography; Montello, D.R., Ed.; Edward Elgar Publishing: Cheltenham, UK, 2018; pp. 56–73. ISBN 978-1-78471-754-4. [Google Scholar]
  59. Miller, H.J. Modelling Accessibility Using Space-Time Prism Concepts within Geographical Information Systems. Int. J. Geogr. Inf. Syst. 1991, 5, 287–301. [Google Scholar] [CrossRef]
  60. Kwan, M.-P. Gender and Individual Access to Urban Opportunities: A Study Using Space–Time Measures. Prof. Geogr. 1999, 51, 211–227. [Google Scholar] [CrossRef]
  61. Kwan, M.-P. Interactive Geovisualization of Activity-Travel Patterns Using Three-Dimensional Geographical Information Systems: A Methodological Exploration with a Large Data Set. Transp. Res. Part C Emerg. Technol. 2000, 8, 185–203. [Google Scholar] [CrossRef]
  62. Kwan, M. Gis Methods in Time-geographic Research: Geocomputation and Geovisualization of Human Activity Patterns. Geogr. Ann. Ser. B Hum. Geogr. 2004, 86, 267–280. [Google Scholar] [CrossRef]
  63. Kraak, M.-J. The Space-Time Cube Revisited from a Geovisualization Perspective. In Proceedings of the 21st international Cartographic Conference, ICC 2003: Cartographic Renaissance, Durban, South Africa, 10–16 August 2003; International Cartographic Association: Bern, Switzerland, 2003; pp. 1988–1996. [Google Scholar]
  64. Graser, A. MovingPandas: Efficient Structures for Movement Data in Python. GIForum 2019, 1, 54–68. [Google Scholar] [CrossRef]
  65. Martin, H.; Hong, Y.; Wiedemann, N.; Bucher, D.; Raubal, M. Trackintel: An Open-Source Python Library for Human Mobility Analysis. Comput. Environ. Urban Syst. 2023, 101, 101938. [Google Scholar] [CrossRef]
Figure 1. The ground floor of the Community Assembly space of the CID at Virginia Tech. The extent of lidar data collection is outlined in red.
Figure 2. Combined point cloud from 11 lidar sensors for the study area.
Figure 3. One hour of Percept-generated tracks in the study area rendered as a 2D view. Dominant throughways are clearly visible in the data.
Figure 4. Flowchart depicting the order of steps and general process of the collision detection algorithm.
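The flowchart in Figure 4 describes proximity-based detection on tracked pedestrians. The paper's exact thresholds are not reproduced in this excerpt, so the following is a minimal sketch only: the function name `detect_collisions` and the `dist_thresh` and `min_dwell` parameters are illustrative assumptions, flagging a pair of tracks as a collision when they remain within a distance threshold for a minimum number of consecutive frames.

```python
import numpy as np

def detect_collisions(tracks, dist_thresh=1.5, min_dwell=5):
    """Flag pairs of tracks that stay within dist_thresh meters of
    each other for at least min_dwell consecutive frames.

    tracks: dict of id -> (T, 2) array of x/y positions, all tracks
    sampled on a common frame clock. Thresholds are illustrative,
    not the study's published values.
    """
    collisions = []
    ids = sorted(tracks)
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            a, b = tracks[ids[i]], tracks[ids[j]]
            d = np.linalg.norm(a - b, axis=1)  # per-frame separation
            close = d < dist_thresh
            # Length of the longest run of consecutive "close" frames.
            run = best = 0
            for c in close:
                run = run + 1 if c else 0
                best = max(best, run)
            if best >= min_dwell:
                collisions.append((ids[i], ids[j]))
    return collisions
```

The dwell requirement is what separates a genuine stop-and-interact event (Figure 5) from two people simply passing each other in a corridor.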
Figure 5. Two orthographic views of maximum-intensity-rendered lidar frames overlaid with output from the collision detection algorithm. In frame (a), two people are approaching one another (dashed box). In frame (b), they stop and interact; the algorithm successfully detected this collision.
Figure 6. Section of study area featuring a throughway (arrows) with seating (a) and open space (b) on either side to examine the effect of “friction” on collisions.
Figure 7. (a) Total collisions by day, (b) total collisions and occupancy by week of the semester, (c) normalized collisions and occupancy averaged by day of week, and (d) mean (line) and upper/lower quartiles (shaded region) of collisions and occupancy by hour of day.
Figure 8. Collisions vs. occupancy, differently symbolized by day of week, fitted with a quadratic trendline.
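A quadratic trendline like the one in Figure 8 can be fit with ordinary least squares on paired occupancy and collision totals. The sketch below uses hypothetical values (not the study's data) to show the fit and the coefficient of determination reported alongside it.

```python
import numpy as np

# Illustrative daily totals; the study's actual values are not shown here.
occupancy = np.array([120, 250, 400, 560, 700, 820, 950], dtype=float)
collisions = np.array([15, 40, 80, 130, 190, 255, 330], dtype=float)

# Quadratic least-squares fit, as in Figure 8's trendline.
coeffs = np.polyfit(occupancy, collisions, deg=2)
pred = np.polyval(coeffs, occupancy)

# Coefficient of determination (R^2) for the fitted curve.
ss_res = np.sum((collisions - pred) ** 2)
ss_tot = np.sum((collisions - collisions.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

An R² at or above the 0.74 reported in the abstract would indicate that occupancy alone explains most, but not all, of the variation in collision counts.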
Figure 9. Occupancy (a) and collisions (b) within the CID over the course of the Fall 2023 academic semester.
Figure 10. GLR results show that hallways (1) and some seating areas (2) experienced fewer collisions than expected given occupancy. The elevator (3), base of the stairs (4), and center of Community Assembly (5) all experienced more than expected.
Figure 11. Normalized and smoothed average velocity, occupancy, and collisions along a primary hallway in the study space. The x values in panel (a) correspond with the left side of the hallway in panel (b).
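The "normalized and smoothed" profiles in Figure 11 can be sketched as min-max scaling followed by a centered moving average. The window width and zero-padded edge handling below are assumptions for illustration, not the paper's stated choices.

```python
import numpy as np

def normalize_and_smooth(values, window=5):
    """Min-max normalize a 1D profile (assumed non-constant) to [0, 1],
    then apply a centered moving average of the given width."""
    v = np.asarray(values, dtype=float)
    v = (v - v.min()) / (v.max() - v.min())  # rescale to [0, 1]
    kernel = np.ones(window) / window
    # mode="same" keeps the output aligned with positions along the
    # hallway; zero padding at the boundaries damps the endpoints.
    return np.convolve(v, kernel, mode="same")
```

Applying the same transform to velocity, occupancy, and collision counts puts all three curves on a common [0, 1] axis so their spatial patterns along the hallway can be compared directly.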
Table 1. Parameters for Percept object tracking software.

| Parameter                    | Setting              |
|------------------------------|----------------------|
| Dynamic Background           | Mixture of Gaussians |
| Initialization Frames        | 10                   |
| Exponential Decay            | 0.005                |
| Minimum Weight Threshold     | 0.17                 |
| Minimum # of Neighbor Points | 3                    |
| Neighbor Radius              | 0.5 m                |
| Point Clustering Method      | Mean Shift           |
| Minimum Points for a Cluster | 30                   |
| Average Radius of Objects    | 0.33                 |
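Table 1 specifies mean-shift clustering with a 0.5 m neighbor radius and a 30-point minimum cluster size. Percept's implementation is proprietary, so the following is only a generic flat-kernel mean-shift sketch wired to those two parameters; the mode-merging tolerance of half the radius is an illustrative assumption.

```python
import numpy as np

def mean_shift(points, radius=0.5, min_cluster_size=30, n_iter=50):
    """Minimal flat-kernel mean shift: each point is iteratively moved
    to the centroid of its neighbors within `radius`; points whose
    modes converge together are grouped, and groups smaller than
    min_cluster_size are left unlabeled (-1)."""
    pts = np.asarray(points, dtype=float)
    modes = pts.copy()
    for _ in range(n_iter):
        for i in range(len(modes)):
            d = np.linalg.norm(pts - modes[i], axis=1)
            neighbors = pts[d <= radius]
            if len(neighbors):
                modes[i] = neighbors.mean(axis=0)
    labels = -np.ones(len(pts), dtype=int)
    next_label = 0
    for i in range(len(pts)):
        if labels[i] >= 0:
            continue
        # Merge points whose modes landed within half a radius (assumed
        # tolerance) and keep the group only if it is large enough.
        same = np.linalg.norm(modes - modes[i], axis=1) <= radius / 2
        same &= labels < 0
        if same.sum() >= min_cluster_size:
            labels[same] = next_label
            next_label += 1
    return labels
```

In the tracking pipeline, each surviving cluster of foreground lidar points would correspond to one tracked person per frame, with the 30-point minimum suppressing sensor noise and partial returns.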
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Flack, A.H.; Pingel, T.J.; Baird, T.D.; Karki, S.; Abaid, N. Lidar-Based Detection and Analysis of Serendipitous Collisions in Shared Indoor Spaces. Remote Sens. 2025, 17, 3236. https://doi.org/10.3390/rs17183236
