Article

A Comparison of Tourists’ Spatial–Temporal Behaviors Between Location-Based Service Data and Onsite GPS Tracks

by Colby Parkinson 1, Bing Pan 1,*, Sophie A. Morris 2, William L. Rice 2, B. Derrick Taff 1, Guangqing Chi 3 and Peter Newman 1

1 Department of Park, Recreation, and Tourism Management, Pennsylvania State University, University Park, PA 16802, USA
2 Department of Society and Conservation, University of Montana, Missoula, MT 59801, USA
3 Department of Agricultural Economics, Sociology, and Education, Pennsylvania State University, University Park, PA 16802, USA
* Author to whom correspondence should be addressed.
Sustainability 2025, 17(2), 391; https://doi.org/10.3390/su17020391
Submission received: 15 November 2024 / Revised: 23 December 2024 / Accepted: 29 December 2024 / Published: 7 January 2025
(This article belongs to the Section Tourism, Culture, and Heritage)

Abstract

Tourism and recreation managers rely on spatial–temporal data to measure visitors’ behavior for gauging carrying capacity and sustainable management. Location-based service (LBS) data, which passively record location data from mobile devices, may enable managers to measure behaviors while overcoming constraints in labor, logistics, and cost associated with in-person data collection. However, further validation of LBS data at more refined spatial and temporal scales within tourism attractions is needed. We compared observations of salient spatial–temporal measures from a stratified sample of onsite visitors’ GPS traces in a popular U.S. National Park during peak season over two years with a sample of visitors’ traces collected during the same period by a third-party LBS data provider. We described trip characteristics and behaviors within 34 points of interest (POIs) and then pre-processed both datasets into weighted, directed networks that treated POIs as nodes and flow between POIs as edges. Both datasets reported similar proportions of day-use visitors (~79%) and had moderate-to-strong correlations across networks depicting visitor flow (r = 0.72–0.85, p < 0.001). However, relative to the onsite data, LBS data underestimated the number of POIs that visitors stopped at and differed in their ranking of popular POIs, underestimating the length of time visitors spent in POIs (z = 1, p ≤ 0.001) and overestimating visitation to the most popular POIs (z = 180, p = 0.044). Our findings suggest that LBS data may be helpful for identifying trends or tracking tourist movement in aggregate and at crude spatial and temporal scales, but they are too sparse and noisy to reliably measure exact movement patterns, visitation rates, and stay time within attractions.

1. Introduction

Tracking tourists’ movement is critical for measuring their economic, social, and environmental impacts aligned with the United Nations and World Tourism Organization’s sustainability aims [1,2,3]. Measuring movement permits tourism and recreation-destination stakeholders and managers to understand (a) tourists’ spatial–temporal behaviors, including visitation rate and time spent at visitor amenities, services, and attractions, which can serve as benchmarks for management and tracking capacity [4,5], (b) market segmentation based on tourists’ behaviors, which may explain different impacts [6,7], and (c) tourists’ experiences and physical outcomes from the built and natural environment [8,9].
Through much of the 20th century, however, measuring tourists’ movements at more refined spatial–temporal scales relied on high-burden and possibly unreliable methods, such as time diaries and recall [3]. These methods were expensive to implement and out of reach for many tourism and recreation-destination managers [10]. Over the past two decades, legislation and technology shifted to permit individually owned high-accuracy personal global positioning system (GPS) devices [3,11,12]. That shift culminated in widespread access to the Global Navigation Satellite System (GNSS) through devices in people’s pockets, wrists, vehicles, and more. GNSS is an umbrella term for GPS, cellular, internet, and other satellite-derived services in devices that capture and provide location and time information. Today, recreation and tourism-destination managers can purchase years of location data from millions of tourists’ GNSS-enabled devices through third-party location-based service (LBS) data providers, also referred to as mobile- or cellular-device data [13,14,15].
LBS data make tracking tourists more accessible than traditional research approaches because they are scalable, cost- and time-effective, and often complemented by intuitive online dashboards [2,16,17]. Further, there is less concern over the response bias that arises from more traditional survey-intercept methods [2,3,16,18], because single-source providers or third-party LBS companies collect location data passively. Those data are obtained through private or partnered ad exchanges and device applications and subsequently aggregated, anonymized, and cleaned for customers. The result is data that seemingly provide unparalleled accuracy and depth for examining tourists’ behaviors from a macro inter-destination level to a more micro intra-attraction level [19,20,21].
Several studies have used LBS data to measure overall visitation to destinations [15,21,22,23,24,25,26], track movement between destinations [27,28] and within destinations [14], and describe visitation to amenities based on visitors’ attributes [13,29]. Broadly, these studies contribute to understanding core aspects of visitor behavior: where they go, where they come from, and how long they stay [4]. Measurements of these behaviors serve as indicators of visitor use [30].
More importantly, the potential overuse, and, by extension, unsustainable overburdening of the social and ecological resources of tourism destinations by humans has been a concern among scientists and practitioners for nearly 100 years [31,32]. However, measurement of human behavior in nature-based tourism destinations is often underemphasized relative to ecological monitoring [10]. The advent of big data has led to a rapid expansion of the ways that visitor behaviors can be measured, with LBS data becoming one of the most prominent tourist tracking methods [2,3,19,33,34,35]. LBS data’s expanding application reflects its ability to measure dimensions of visitor and tourist use salient to destination management [4,36], including visitation rate and the amount of time or days spent at destinations across spatial and temporal scales [2,16,21]. The implications of these metrics are broad, with LBS data contributing to understanding of tourists’ demand on infrastructure (e.g., roads, public transit, parking, trails) [14,23], permitting market segmentation based on visitor attributes and behavior [13,18,29], and tracking changes in behavior in response to adverse events [24,25,26].
Theoretically, concerns over the overburdening of social and ecological resources have been examined through the lens of the overtourism phenomenon and carrying capacity research [5,32,37,38,39,40]. Overtourism occurs when tourists cause a negative impact to a tourist destination’s home community or environment [5,37]. Carrying capacity represents the tipping point of the rate of tourist or visitor use at a destination before unsustainable or undesirable demand on social, ecological, and cultural resources causes visitor or resident dissatisfaction or unsustainable ecological harm [32,40]. Nature-based tourism destinations are more often examined through the lens of carrying capacity because, unlike more urban destinations, they are usually not heavily populated by residents. The overtourism and carrying-capacity concepts require deeper understandings of place-based social and ecological phenomena [5,32,37,38,40]. With that in mind, practical frameworks for measuring carrying capacity are all built on the foundation of understanding visitor-use trends, as well as ongoing monitoring of visitor use to ensure sustainability goals are met [30,41,42,43,44,45,46]. These frameworks typically involve developing zones for tracking visitation numbers in processes that are spatially (i.e., geographic boundaries) and temporally bound (i.e., rates based on time of day or year) and reflect their anticipated and preferred uses and use levels. Therefore, understanding these trends requires high quality spatial and temporal data, which provides the justification for research on LBS data’s utility and reliability.
Despite promising contemporary research and practical applications using LBS data, there are substantial methodological gaps in understanding the data’s reliability, especially at the more refined intra-attraction level [16,20]. Existing research has documented biases in LBS data while presenting methods for improving their reliability by using calibration with existing data or accounting for sociodemographics [14,15,22,26,29,47]. However, these existing studies have not sufficiently examined LBS data’s reliability in measuring movement and behaviors within a destination [35]. These limitations warrant further examination before LBS data can be integrated and adapted into destination management strategies and more advanced research that moves beyond descriptives [3]. One study found that LBS data could reflect cyclists’ and pedestrians’ rates of use on a path in a popular recreation area when compared with GPS and operational data, but its emphasis was not on trip-level characteristics and its inferences were limited to visitation density. Beyond counts and visitor density, LBS data could theoretically provide salient baseline spatial–temporal visitor data within attractions, including stay time at the trip level and at specific locations [4]. More advanced understanding of tourist behavior could also be derived from describing and examining aggregated visitor trajectories between locations at the intra-attraction level [9,48], referred to as visitor flows [49,50]. Flows within a destination provide insight into visitors’ typical trip patterns and sequence of visitation, permitting improved investments in staffing, resources, or technology for monitoring, as well as infrastructure for transportation and enjoyment [50]. The primary contribution of the present study is in comparing LBS data’s ability to measure these important behaviors (intra-attraction visitation, stay time, and flow) to existing approaches.
LBS data’s potential unreliability, like all big data, is likely due to biases in sampling, skewed data contribution among users, measurement error, and platform selection [20,51]. The sample of users accessible to an LBS platform, the types, quality, and ethics of the data collected, and variations in availability, privacy, and how users use devices may all influence observations of visitor characteristics and behaviors [35,52]. At more refined levels, factors such as mobile phone service, GPS connectivity, and data collection and cleaning methodology may cause biases in visitor movement [20,50]. Those limitations are potentially compounded when measuring refined behaviors like stay time or flow.
The current “gold standard” for reliably measuring intra-attraction visitor behavior involves intercepting a stratified random sample of visitors at an attraction’s entrances and asking them to carry a portable GPS device that records their location throughout their trip [12,19,53]. Compared to more traditional intercept-survey approaches using time diaries and recall, the application of GPS devices provides high internal validity on visitors’ spatial–temporal behavior by preventing recall bias and reducing visitor burden [12]. Compared to LBS data and other big data methods (e.g., social media, voluntary geographic information), pairing GPS with intercept-survey methods leads to more reliable sampling procedures, continuously collected data with minimal error, and highly ethical collection of subject data [2,8,12]. Therefore, it is the most valid practical benchmark for determining LBS data’s utility for intra-attraction research.
Despite the overall reliability and accuracy of intercept-survey methods, these approaches can suffer from potential non-participation bias [54] and logistical constraints that lead to sampling at a limited number of locations and times of day and year [55]. Because these methods are logistically expensive and difficult to scale, the majority of intra-attraction research has remained single-site [2,19,53]. With proper validation and possible weighting, LBS data may overcome some of the weaknesses in traditional sampling approaches by including more limited, hard-to-reach visitors at any time of day or season. LBS platforms also have the potential to provide a cost- and time-effective alternative to onsite GPS sampling methodology, which may permit sites with smaller budgets, sparser visitation, and more porous layouts to have access to spatial–temporal management tools historically reserved for major tourism destinations.
We propose that the best approach to address current gaps in the literature regarding the reliability of LBS data for tracking salient metrics of tourist behavior is to use descriptive and network analysis. These analyses permit comparisons of representative observations of visitor movement recorded on handed-out GPS devices with observations from an LBS dataset. Network analysis has had limited applications for studying visitor use and flow in nature-based tourism settings [56,57,58,59] and is an established method for tourism research within cities [27,60] and at finer spatial scales [28]. These studies have generated networks that treated points of interest (POIs) at attractions or destinations as nodes and visitor flows between POIs as weighted, directed edges. Network analysis does not require independence of observation and permits direct comparisons between two partially paired samples, making it an appropriate methodology to draw comparisons for spatially informed research questions and potentially paired analyses [49,61]. These relaxed assumptions are indispensable for comparing big data to onsite data during the same period, because it is likely that both samples contain some of the same individuals.
Therefore, we compared spatial–temporal visitor data observed from a stratified onsite sample of visitors who carried GPS devices with a sample from the third-party LBS platform Near Intelligence Inc. (now rebranded as Azira LLC) during the same period. Azira, which has continued to offer LBS data since rebranding, is a global marketing and operational intelligence company specializing in tourism, hospitality, retail and real estate (Azira, 2024) [62]. The comparison was drawn between two datasets obtained from visitors to the South Rim section of Grand Canyon National Park (GRCA), the second busiest U.S. National Park [63] and a UNESCO World Heritage Site. The South Rim was selected due to its relative popularity and cultural significance, climate, infrastructure, and geography. Its spatial configuration promotes structured behavior and inhibits dispersed use; its documented recreation, health, safety, and ecological management concerns justify investment in solutions for monitoring visitor use [64].
The validation of LBS data for research on, and management of, visitor use within destinations helps with assessing overtourism and carrying capacity. These comparisons can inform both datasets’ strengths and weaknesses. The observations for this study are specific to a single nature-based tourism site and a single prominent LBS platform, similar to prior research [14,29]. However, our findings have broader implications for measuring human behavior using LBS data because underlying data collection methods across LBS platforms are similar, even if pre-processing and observed behaviors can vary [65]. We compared the two datasets and asked two research questions. First, do both samples depict similar trip characteristics for visitors? Trip characteristics include the proportion of visitors who stayed for one day versus multiple days, stay time in the park, and rate of visitation to specific POIs. Second, are the spatial–temporal characteristics of visitation to POIs similar between the two data samples? Those characteristics included visitation, stay time, and flow.

2. Materials and Methods

2.1. Sampling

Onsite data collection was approved by the Institutional Review Board of the authors’ institution. All participants provided oral consent and had to be at least 18 years old, speak English, and arrive at the park in a personal vehicle. Visitors were intercepted at the beginning of their trip at either entrance to the South Rim, from May to June, over two years: the Main Entrance in 2022 (n = 315) and the Desert View Entrance in 2023 (n = 224). Up to eight visitors were intercepted per hour on both weekends and weekdays, using a temporal stratification method [55]. Consenting visitors completed a brief oral survey and carried a portable GPS device with them throughout the duration of their visit. They were asked to drop the devices off at drop boxes located at either entrance as they exited.
The GPS devices were Garmin eTrex 10s (Garmin Ltd., Olathe, KS, USA). These devices are common in contemporary nature-based tourism and recreation research [57], reflecting their affordability, durability, and accuracy. The devices recorded waypoints every 15 s, a standard for meeting accuracy, storage, and processing-power needs [12].
In contrast, the third-party LBS data were downloaded via the Near data platform, which integrates GNSS data from user devices with Near’s internally developed applications, second-party partner applications, or ad exchanges (see Supplementary File S1 for more information). The waypoints are generated by unique devices if each of the following criteria is met: (a) device location services are enabled, (b) the device user has granted applications permission to track their location and, on some operating systems, provide it to third parties, and (c) the user used a GNSS-enabled application on their device (e.g., a photo application with GNSS metadata enabled). Location is only provided to the platform when a device has an internet or mobile phone connection or when the data are recorded and later uploaded via the internet. The waypoints collected by the platform are then anonymized, securely stored, and organized by unique device identifications. The sample from Near was collected by creating a polygon depicting the study site. Waypoints that appeared within that polygon during the sampling periods in both years were then downloaded for pre-processing and analysis.

2.2. Study Location and Points of Interest

The study site was the South Rim of GRCA, located in the southwestern region of the United States, approximately 325 km (202 miles) north of Phoenix, Arizona. GRCA is 4931 square kilometers (1904 square miles) in size and geographically and managerially divided into north and south sections by the Colorado River, which flows approximately one mile below the arid canyon rim. To extract spatial–temporal data for analyses, 34 POIs were generated at significant tourist attractions and amenities (Figure 1 and Table 1).
POIs were defined in cooperation with park staff and based on research on areas of management concern [64]. POIs were generated by creating 60 m buffers at points designating each area. Buffers for seventeen POIs were manually altered based on our observed GPS tracks, to ensure they represented the boundaries of their empirical usage. Some initial buffers were too small to encapsulate all visitor behavior (e.g., Main Village Visitor Center) or so large they may overestimate visitation, due to surrounding features like roads (e.g., Hermits Rest Bus Transfer). No POI boundaries overlapped with publicly accessible major roadways, to reduce noise in the data, but they did overlap with parking facilities and roadways designated for accessing parking areas.
The POIs were categorized into five non-mutually exclusive groups relating to three functional aspects: transit accessibility, recreational opportunities, and services. Two categories related to transit accessibility, road and bus, based on whether the POI had parking and access via the public roadway or a bus stop. Empirical evidence suggests transit infrastructure is associated with heightened visitor use and that network analysis informed by transit can improve understanding of visitor flow and demand [56,66,67,68]. Two categories related to recreational opportunities, trail and overlook. Trails were POIs at the start of, or along, a paved or unpaved trail. Overlooks were designated areas for overlooking the Grand Canyon. These categorizations, depicting recreational affordances and opportunities, are consistent with prior network analysis research in nature-based tourism destinations [58,59,69]. Lastly, service POIs were locations that offered additional visitor services, including museums, markets or gift shops, and information centers. No POIs were generated in locations where visitor services were privately owned or where visitors may stay overnight, which included areas for lodging, food and beverage service, and gas stations [34].
All POIs were divided into four spatial park sections: (i) Desert View is on the east of the park and accessible exclusively by roadway, (ii) Hermits Rest is on the west end and accessible in the summer only by bus, paved trail, or special permit by vehicle, (iii) Main Village is where most visitor services and popular amenities are located, and (iv) South Kaibab and Bright Angel Trail are the two most popular recreational trail systems and areas of high concern to management [64].

2.3. Pre-Processing

The samples from GPS devices and the LBS dataset were pre-processed over five steps (see Figure 2). ArcGIS Pro 3.1.2, Python 3.9 with ArcPy, NumPy, SciPy, Pandas [70], and NetworkX [71], and UCINET 6.772 [72] were used for pre-processing, descriptive analysis, and network analysis.
In the first step, waypoints were excluded if they were (a) located somewhere physically impossible to reach or out of scope from the project (i.e., outside the park or on the North Rim), (b) duplicates, or (c) too far from preceding data points (i.e., points that would require traveling speeds far above reasonable speed limits). GPS devices should record waypoints continuously, but device, user, and researcher error can cause missing or misrecorded data; therefore, waypoints within onsite visitors’ trips with a greater than 16 h separation were examined manually to determine if they warranted removal. Data were retained if the gap appeared to reflect observable visitor behavior (e.g., the visitor left the park and returned later). Because of the inconsistent, sometimes long intervals between waypoints, referred to as data sparsity, we did not manually examine intervals in the LBS data. Instead, we accounted for visitors who may exit the park and return later for a separate trip by treating waypoints that took place before and after a 48 h interval as two unique visits. The cleaned onsite sample had a sample size of 539 with 815,839 waypoints (M = 1513.6 per visitor) and the LBS sample consisted of 39,799 devices with 1,996,369 waypoints (M = 30.1 per device). This cleaning process was informed by prior research [12] and the code was made publicly available (source code available at https://doi.org/10.5281/zenodo.13850813, accessed on 28 December 2024).
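The study’s actual cleaning code is archived at the Zenodo link above; the following is only a minimal pandas sketch of the logic described in this step, with illustrative column names (device_id, timestamp, lat, lon) and an assumed speed ceiling rather than the authors’ exact thresholds.

```python
import numpy as np
import pandas as pd

# Assumed thresholds for illustration only; the study describes the criteria qualitatively.
MAX_SPEED_KMH = 120      # points implying implausibly fast travel from the prior point are dropped
VISIT_GAP_HOURS = 48     # LBS waypoints separated by more than 48 h are treated as separate visits


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between consecutive waypoints."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = np.sin((lat2 - lat1) / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))


def clean_waypoints(df: pd.DataFrame) -> pd.DataFrame:
    """Drop duplicate and implausible waypoints, then split traces into visits at long gaps.

    Expects illustrative columns: device_id, timestamp (datetime64), lat, lon.
    """
    df = (df.drop_duplicates(subset=["device_id", "timestamp", "lat", "lon"])
            .sort_values(["device_id", "timestamp"]))
    hours = df.groupby("device_id")["timestamp"].diff().dt.total_seconds() / 3600.0
    dist = haversine_km(df["lat"], df["lon"],
                        df.groupby("device_id")["lat"].shift(),
                        df.groupby("device_id")["lon"].shift())
    speed = dist / hours
    df = df[speed.isna() | (speed <= MAX_SPEED_KMH)].copy()   # keep the first point of each trace
    gap = df.groupby("device_id")["timestamp"].diff().dt.total_seconds() / 3600.0
    df["visit_id"] = (gap > VISIT_GAP_HOURS).groupby(df["device_id"]).cumsum()
    return df
```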
In the second step of pre-processing, we used the ArcGIS Pro spatial join feature to merge waypoints with POIs and retain only the waypoints within POI polygons. Establishing a minimum stay time in POIs can be helpful for deciphering visitor behavior and experience [12,20], but this was not feasible for the LBS sample, due to data sparsity. As a result, a minimum stay time was not established for determining visitation to or stay time in a POI. The onsite sample was reduced to 518 visitors with 213,980 waypoints (M = 413.1 per visitor), while the LBS sample was reduced to 20,143 devices with 281,254 waypoints (M = 14.0 per device).
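The spatial join itself was performed in ArcGIS Pro; for illustration only, a roughly equivalent open-source operation using GeoPandas might look like the sketch below, where the shapefile path, the 'poi_name' field, and the cleaned-waypoints dataframe from the previous sketch are assumptions.

```python
import geopandas as gpd
import pandas as pd


def join_waypoints_to_pois(cleaned: pd.DataFrame, poi_path: str = "pois.shp") -> gpd.GeoDataFrame:
    """Keep only waypoints falling inside one of the 34 POI polygons (illustrative file/field names)."""
    pois = gpd.read_file(poi_path)                      # POI polygons with a 'poi_name' field
    points = gpd.GeoDataFrame(
        cleaned,
        geometry=gpd.points_from_xy(cleaned["lon"], cleaned["lat"]),
        crs="EPSG:4326",
    ).to_crs(pois.crs)
    # An inner spatial join mirrors step two: waypoints outside every polygon are dropped.
    return gpd.sjoin(points, pois[["poi_name", "geometry"]], how="inner", predicate="within")
```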
In the third step, we further analyzed behaviors in POIs and created the nodes for measuring the network structure of visitor flows. Each visitor’s initial waypoint in every POI was retained for their first visit and, as applicable, revisit (i.e., if a visitor consecutively visited POIs A, B, and A, two points were recorded in A and one point was recorded in B). Retaining the initial waypoint and eliminating all other consecutive waypoints before reaching another POI reduced the data size to permit faster processing. Both samples retained the same sample size, but waypoints were reduced and pre-processed into POI visits. As a result, the onsite sample recorded 3545 POI visits (M = 6.8 per visitor) and the LBS sample recorded 44,017 POI visits (M = 2.2 per device).
In the fourth step, the edges for the network were generated by aggregating trajectories between POI visits for each visitor (i.e., consecutively visited POIs A, B, and A became “A to B” and “B to A”). These aggregations are referred to as visitor flow [9,48,56]. Visitors who did not visit more than one POI were excluded from network analysis, which reduced the sample size and number of POI visits recorded by both samples. The onsite sample recorded 463 visitors with 3027 POI visits (M = 6.5 per visitor), while the LBS sample reduced to 9666 devices with 23,874 POI visits (M = 2.5 per device).
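A hedged pandas sketch of steps three and four is shown below, continuing the column conventions assumed in the earlier sketches: consecutive waypoints inside the same POI are collapsed into single visits, directed transitions between consecutive visits are aggregated into weighted edges, and visitors who recorded only one POI drop out of the edge list automatically.

```python
import pandas as pd


def poi_visits(waypoints: pd.DataFrame) -> pd.DataFrame:
    """Step three: keep the first waypoint of each run of consecutive points in the same POI."""
    w = waypoints.sort_values(["device_id", "visit_id", "timestamp"])
    prev_poi = w.groupby(["device_id", "visit_id"])["poi_name"].shift()
    return w[w["poi_name"] != prev_poi]               # e.g., A, A, B, A -> visits A, B, A


def flow_edges(visits: pd.DataFrame) -> pd.DataFrame:
    """Step four: aggregate directed transitions between consecutive POI visits into weighted edges."""
    v = visits.sort_values(["device_id", "visit_id", "timestamp"]).copy()
    v["next_poi"] = v.groupby(["device_id", "visit_id"])["poi_name"].shift(-1)
    edges = (v.dropna(subset=["next_poi"])             # visitors with a single POI drop out here
               .groupby(["poi_name", "next_poi"])
               .size()
               .reset_index(name="weight"))
    return edges
```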
In the fifth step, we created five unique subnetworks to examine network similarity after POI removal to identify potential sources of bias in the LBS data. The subnetworks were generated systematically, iteratively repeating steps three and four of pre-processing exclusively with POIs of certain criteria, and recalculating node and edge weights at each step. The first four networks were the trail, road, bus, and overlook networks, which, respectively, only included POIs characterized by those categories. A fifth network, referred to as services excluded, excluded any POIs categorized as services based on the assumption that they may skew LBS data due to their high level of visitation, better cellphone and internet coverage, and longer stay times. These subnetworks were generated to test the resilience of observations across datasets and improve interpretations of the results. For example, they could help determine if flow between POIs on trails was more accurately recorded in LBS data, potentially reflecting that visitors may take more pictures when they are hiking compared to when they are driving or busing. Relative to the full networks, each subnetwork recorded a smaller sample size, number of POI visits, and mean number of POI visits.

2.4. Analysis and Methods

Frequency case weights were applied to the onsite cleaned sample during the first step of pre-processing, specifically weighting the sample by entrance to reflect the proportion of visitors who used each entrance according to the official count of entries at both entrances to GRCA. Weighting was not applied to the LBS data. Case weights were applied to make analyses using the onsite sample proportionally reflect official counts of visitors from those entrances in May and July in 2022 and 2023 (482,274 visitors for the Main Entrance and 115,509 visitors for the Desert View Entrance) [63]. The frequency case-weighting formula resulted in a weight with a value of approximately 0.34 applied to the 2023 Desert View Entrance sample and an unadjusted weight, or value of 1.0, applied to the 2022 Main Entrance sample (w = weight; s1 = Main Entrance sample; s2 = Desert View Entrance sample; x1 = Main Entrance visitors; x2 = Desert View Entrance visitors):
w = (s1 × (x2/x1))/s2
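As a worked check using the sample sizes and official counts reported above (s1 = 315, s2 = 224, x1 = 482,274, x2 = 115,509), the formula yields w = (315 × (115,509/482,274))/224 ≈ 0.34, matching the weight applied to the Desert View Entrance sample.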

2.4.1. Descriptive Characteristics

The trip characteristics included the proportion of visitors who visited the park for more than one day, described as overnight versus day-use visitors, and the average number of POIs they visited. Total stay time in the park was also determined by measuring the difference between the first and last recorded waypoint for each visitor.
Three metrics of POI visitation were compared: volume of visitation, proportional visitation, and stay time. Volume of visitation was determined by counting the number of visitors’ unique entries to POIs, and was described as a percentage of total volume. Proportional visitation was determined based on the number of unique visitors who went to each POI, described as a percentage of the total sample. Mean stay time in POIs was calculated by subtracting the time of each visitor’s first waypoint from that of the last waypoint for each unique entry to every POI. Stay times greater than three hours for both LBS and onsite samples were excluded because they were assumed to reflect outliers in user behavior, based on the observed standard deviations of the onsite data. All three characteristics of POIs were compared between samples, with a repeated measures approach using the Wilcoxon signed-rank test. This test is traditionally used for paired samples and, for our partially paired sample, would serve as a conservative estimate of whether the distribution of rankings among these three characteristics differed between two samples.
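As an illustration, the paired POI-level comparison can be run with SciPy’s implementation of the Wilcoxon signed-rank test. The three stay-time pairs below are taken from Table A1 purely as a truncated example; the study compared values for all 34 POIs.

```python
from scipy.stats import wilcoxon

# Paired per-POI values, ordered identically for both samples. The three pairs below are
# mean stay times (minutes) from Table A1 (Main Village Visitor Center, Hermits Rest Bus
# Transfer, Mather Point), used only as a truncated illustration.
onsite_stay = [25.1, 6.5, 14.4]
lbs_stay = [5.4, 1.3, 2.4]

stat, p_value = wilcoxon(onsite_stay, lbs_stay)   # Wilcoxon signed-rank test on paired POI values
print(f"W = {stat}, p = {p_value:.3f}")
```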

2.4.2. Spatial Network Analysis

We first aggregated directed movements between POIs made by each visitor. These counts were then converted into adjacency matrices that treated POIs as nodes and weighted representations of directed flows as edges.
Descriptive holistic network measures were computed based on the adjacency matrices using summary statistics; the quadratic assignment procedure (QAP) Pearson correlation between two samples was performed for each respective network. The summary statistics included density, transitivity, and diameter. Density describes the ratio of the number of actual edges in a network to the number of total possible edges. Transitivity indicates clustering by measuring the fraction of total possible triads completed within a network [73]. Diameter describes the shortest path between the furthest two nodes within a network. The QAP Pearson correlation provides a measure of similarity between networks’ adjacency matrices. QAP was computed over 5000 permutations with correlations computed between the two sample’s comparable, respective networks (i.e., the onsite trail network was correlated with the LBS trail network).
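The sketch below illustrates the holistic network measures with NetworkX and a simple QAP-style permutation correlation in NumPy. The study computed QAP in UCINET over 5000 permutations, so this is an approximation of that procedure rather than a reproduction of it, and the function names are illustrative.

```python
import networkx as nx
import numpy as np


def network_summary(edge_list):
    """Density, transitivity, and diameter for a weighted, directed POI network.

    edge_list: iterable of (origin_poi, destination_poi, flow_count) tuples. Transitivity
    and diameter are computed on the undirected skeleton and assume a connected network.
    """
    G = nx.DiGraph()
    G.add_weighted_edges_from(edge_list)
    und = G.to_undirected()
    return nx.density(G), nx.transitivity(und), nx.diameter(und)


def qap_correlation(A, B, n_perm=5000, seed=0):
    """Pearson correlation between two POI adjacency matrices with a QAP-style permutation test.

    Rows and columns of A are relabeled together in each permutation; the diagonal is ignored.
    """
    rng = np.random.default_rng(seed)
    off_diag = ~np.eye(A.shape[0], dtype=bool)
    r_obs = np.corrcoef(A[off_diag], B[off_diag])[0, 1]
    r_perm = np.empty(n_perm)
    for i in range(n_perm):
        p = rng.permutation(A.shape[0])
        r_perm[i] = np.corrcoef(A[p][:, p][off_diag], B[off_diag])[0, 1]
    p_value = float((np.abs(r_perm) >= abs(r_obs)).mean())
    return r_obs, p_value
```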

3. Results

3.1. Trip Characteristics

Trip characteristics for both samples were described (see Table 2). Both years of the LBS sample described similar trip characteristics (absolute difference between metrics is depicted as Δ) across the proportion of day-use and overnight visitors (Δ = 3.7%), POI visitation rates by day-use and overnight visitors (Δ = 0.0–0.6) and stay time among day-use visitors (Δ = 00:39:34). While there was a larger difference between mean stay time for overnight visitors observed between the two LBS samples (Δ = 2 days 08:44:11), due to the general similarity across all other metrics, we deemed combining the two samples for all other analyses appropriate.
Comparing LBS data with onsite GPS data, the proportion of day-use and overnight visitors was nearly equal (Δ = 1%). However, the number of POIs visited by visitors from the onsite sample was on average more than double those from the LBS sample. Day-use visitors in the onsite sample also recorded trips that were on average twice as long as the LBS sample, whereas overnight visitors in the LBS sample stayed on average twice as long as visitors in the onsite sample.

3.2. Point-of-Interest Visitation and Behavior

Visitation metrics and visitor stay time for each POI were described visually, with POI heights depicting normalized weights based on the highest values (Figure 3 and Appendix A, Table A1). The two POIs with the largest absolute difference in visitation between the two samples were both recorded as high volume in the LBS sample: the Main Village Visitor Center (Δ = 12.3%) and Yavapai Point/Geology Museum (Δ = 5.2%). However, the third and fourth POIs with the largest absolute difference were recorded as higher volume in the onsite sample: Hermits Rest Bus Transfer (Δ = 4.5%) and Desert View Market and Giftshop (Δ = 4.1%). No other POIs recorded higher than 3% absolute difference in visitation volume. The Wilcoxon signed-rank test indicated a statistically significant difference in the ranking of POIs between the two samples (z = 180, p = 0.044).
Other than Shoshone Overlook, each POI was visited by a higher proportion in the onsite sample than in the LBS sample. The range of stay time observed in the LBS sample across POIs (1.0–6.6 min) was much narrower than that of the onsite sample (0.6–48.4 min). The onsite sample recorded longer average stay times than the LBS sample in each POI except the Backcountry Office. Both proportional visitation and stay time had statistically significant differences in rank distribution, based on the Wilcoxon signed-rank test (z = 1, p ≤ 0.001).

3.3. Flow Between Points of Interest

Flow between POIs was described using geographic symbology for all six networks (see Figure 4). To reduce noise, edges were only displayed if their flow was above the mean for their respective networks. For example, the mean count of movements between POIs in the LBS full network was 33; therefore, all edges with a volume below 33 were not displayed. Symbol colors correspond to volumes, ranging from the network’s mean volume to three standard deviations above the mean.
Descriptively, the flows revealed that the trail and overlook subnetworks resulted in higher mean and maximum volumes for both samples than the full network, whereas the bus, road, and services-excluded networks’ mean and maximum volume were lower. Additionally, the LBS networks consistently had more edges above average and volume distributions with longer tails than the onsite networks. Differences between the datasets, both visually and across network metrics, should be interpreted considering LBS data quantity and quality simultaneously. Specifically, LBS data’s large sample size may mean that the networks capture more rare behaviors (i.e., going directly from POI A to B is rare, but will be recorded with enough observations). On the other hand, data sparsity and unequal intervals in data recordings may result in fictitious observations that would not happen using GPS devices that record data at refined equal intervals (i.e., going directly from POI A to B does not happen, and was only recorded because the dataset failed to record visits to other POIs in between those POIs).
Table 3 describes the summary statistics for the full network and all five subnetworks, as well as the QAP Pearson correlation results. Density differed across both samples for all networks, with the LBS networks having much higher density, corresponding to their higher number of edges and larger sample size. Relatively higher transitivity within the networks, regardless of dataset, suggested that behavior between POIs was interconnected and visitation was not consistently structured in the same sequence (i.e., visitors often went from POI A to B to C and A to C). With that in mind, higher transitivity recorded across LBS networks suggests observations of flow using that dataset would lead to conclusions that there is greater interconnection between POIs. Similarly, diameter was lower across LBS networks than onsite networks, further reinforcing the higher interconnectivity recorded in the LBS dataset. However, there was much more variation in diameter size for the onsite networks, with the road and bus subnetworks (the two networks with the fewest nodes and potentially more structure because of their transit accessibility) having the lowest diameters. The number of nodes and relative structure of the road and bus subnetworks may also explain why they were generally characterized by higher density and transitivity.
The QAP Pearson correlation comparing flow for the full network of the two samples was strong (r = 0.82, p < 0.001). This statistically significant positive relationship suggests that both datasets recorded similar observations of flow between POIs (i.e., if movement between POI A and B was high in one network, it was high in the other). The overlook and services-excluded subnetworks had near-identical statistically significant correlation coefficients to the full network. The trail (r = 0.85, p < 0.001) and road subnetworks’ (r = 0.80, p < 0.001) correlations were also both statistically significant and strong, overall. The bus subnetwork was the only subnetwork with a statistically significant moderate correlation (r = 0.72, p < 0.001), and had the largest relative difference in strength compared to the full network. This means that while the observed relationship in flow between the datasets was still positive in the bus subnetwork, the magnitude of flow observations varied more.

4. Discussion

GNSS-enabled technology expanded rapidly in the 21st century, with technologies for measuring visitors’ spatial–temporal behaviors and patterns growing at a pace that potentially exceeds researchers’ and practitioners’ ability to understand their validity and reliability. Despite this concern, a prominent and growing number of nature-based tourism destinations, including the U.S. National Park Service [74], are using LBS data to understand visitor use and travel patterns. To contribute to a deeper understanding of the utility of LBS data for tracking tourists at the intra-attraction level, we compared measures of visitors’ intra-attraction spatial traces from an onsite stratified sample of visitors who carried GPS devices to those derived from LBS data from a third-party data aggregator, Near, at the same time and place. The measures we compared are relevant to improving destination management and sustainable planning through the frameworks of overtourism and carrying capacity. While we observed that individual-level and POI-specific temporal and proportional visitation metrics differed between the LBS and onsite samples, high-level trip characteristics and aggregated visitation metrics were more similar. The following discussion details how the results may inform the validity of metrics from LBS data to track tourists and inform sustainability strategies, including length of stay, visitation to POIs, and movement within an attraction.

4.1. Visitors’ Trip Characteristics

We observed that LBS data provided comparable proportions of overnight and day-use visitors compared to our onsite sample, suggesting face validity for this metric. Therefore, LBS data may provide this metric for recreation and tourism destinations that have no formal lodging reservation systems or where lodging and accommodations are located outside of the destination. The proportion of day-use versus overnight visitors could provide guidance on expected visitor use of amenities and services [16]. Our results demonstrated that overnight visitors across both samples went to more POIs than day-use visitors, suggesting that their impact on infrastructure and the environment was also higher.
Despite similarities in high-level trends, the number of POIs that the LBS sample visited was less than half of those observed in the onsite sample. These underestimates are concerning for measuring carrying capacity, as they may cause misleading conclusions about the actual impact of individual visitors. Stay time in the park was less consistent, with day-use visitors from the LBS data recording shorter average stay times than the onsite day-use visitors; the opposite relationship was observed for overnight visitors. Underestimates in POI visitation and day-use stay time likely reflect data sparsity, corresponding to the substantially lower number of recorded waypoints per visitor in the LBS sample. The average number of waypoints per visitor in the cleaned onsite sample was more than fifty times larger than in the LBS sample. Sparsity was not a limitation for determining day-use and overnight visitor proportions because the LBS platform’s underlying methodology likely results in at least one waypoint recorded per calendar day. Consistent with that interpretation, the longer stay time recorded in the LBS sample among overnight visitors may reflect onsite under-sampling of multi-night visitors and logistical constraints (e.g., the GPS device battery could run out on the second or third day). Our interpretation also suggests that LBS data’s ability to track tourists’ temporal behavior at refined spatial and temporal scales is limited, whereas less refined temporal scales are more reliable.

4.2. Visitors’ Behaviors at Points of Interest

Results on visitation and stay time within POIs reinforced that LBS data’s limitations primarily occur when they are used to examine averages across individual visitors. Stay times within POIs were consistently shorter, and the proportions of the sample who went to each POI consistently smaller, in the LBS data. These limitations and our interpretation of the data sparsity issue mirror the lower average number of POIs recorded by the LBS sample. On the other hand, using normalized volume of visitation to POIs resulted in LBS data over-representing high-volume POIs compared to the onsite sample. These results suggest that using LBS data to measure visitors’ use to understand site-specific crowding could lead to misleading overestimates of visitation to popular amenities and underestimates at less-popular amenities. However, while exact values varied and we observed differences in rank at the cusp of statistical significance, the trends among this aggregated measure between samples yielded a more similar understanding of visitor use regarding POI popularity. For example, seven out of ten POIs with the highest volume of visitation overlapped in both samples, with the under-represented POIs in the LBS dataset all categorized as services (i.e., bus transfer, gift shop). Therefore, LBS data may provide helpful high-level trends on aggregated visitation volume to POIs, even if they do not precisely depict rank or proportion. That means LBS data could be an effective tool for determining where to invest in more rigorous, reliable monitoring. Prior research using other big data sources like social media and voluntary geographic information has identified similar uses [75,76,77,78], revealing an opportunity to create integrative visitor use monitoring plans using multiple data sources.
We observed moderate-to-strong correlations in directed flow of visitor volume between POIs across the full network and each subnetwork. The bus subnetwork had the weakest correlation between samples. Visitors may use GPS-enabled mobile devices and applications (e.g., for obtaining directions) at lower rates while riding buses than while driving or hiking. Therefore, their locations during brief entries to bus stops were not captured at the same rate as by the continuously recording GPS devices. That interpretation means LBS data may be a less reliable tool for tracking public transportation infrastructure that can be critical for reducing carbon emissions for sustainable destination management [79]. The LBS networks had more edges above the mean volume, higher transitivity and density, and smaller diameters, reflecting the large sample size that may pick up more rare behaviors and the noisiness of the data caused by non-continuous measurement. Nonetheless, most of the above-average edges in the onsite sample were recorded as above average in the LBS sample. Furthermore, trends in decreases and increases in network volume averages and the observed maximum volume of movements between POIs across networks were consistent for both datasets, despite systematic POI removal. While we had hypothesized a lower correlation between the services-excluded networks, the relationship and observed consistency across this subnetwork and others was instead a testament to the resilient relationship between samples for measuring visitor flow. Therefore, reasonable uses of LBS data for sustainable management include understanding aggregated movement and relationships between POIs, such as where visitors generally go from a POI, and demand on trails and roadways, consistent with other research [14,27].

4.3. Contributions

LBS data were consistently more similar to GPS data when metrics were aggregated or measured behaviors at less-refined spatial and temporal scales. These metrics included visitation volume, aggregated volume of movement between POIs, and day-use and overnight visitor proportions. On the other hand, metrics at the individual level and at more refined spatial and temporal scales were less reliable. Conceptually, these observations contribute to the scientific literature regarding when and how to incorporate LBS data, with implications for other big datasets that may have similar limitations [33,34,35,80]. Specifically, data reliability and the scale of the unit of analysis have an inverse relationship. For LBS data, there are practical reasons to question the reliability at the individual level, because contribution from individual devices can vary [20], and, as we observed, under-represent individual-level rates of POI visitation. However, by aggregating individual-level data to study visitation to POIs and flow between POIs, the data’s quality and utility increases. This explains why the dataset was more reliable for measuring visitor density in other studies [14]. Furthermore, because LBS-device data are not recorded at equal intervals [20], they do not accurately reflect refined spatial–temporal information. However, increasing the unit of measurement to the full destination and from minutes to days improved accuracy, which is why we observed relative similarity in proportions of overnight and day-use visitation consistent with related research [21]. As researchers and practitioners continue to adopt LBS data, as well as other existing and emerging big data for research and monitoring, they should keep the inverse relationship of data reliability and unit of analysis in mind. Future research reviewing or analyzing multiple big data sources could also further examine this principle across sources.
From an applied perspective based on existing frameworks for carrying capacity, our findings suggest that when onsite observation methods are not feasible due to cost, logistics, or the porousness of the management area, LBS data could provide insights into high-level visitor trends. More traditional approaches for systematic or informal observation by researchers, managers, or onsite operational devices (e.g., counters, cameras, RFID) at attractions could provide a reasonable estimate of POI visitation [30,50,53]. Those traditional approaches, however, would not reliably yield valuable spatial–temporal information about the visitors being counted, such as their trip characteristics and stay time within POIs [4]. LBS data’s ability to overcome traditional approaches and provide salient spatial–temporal information at the intra-attraction level is limited, with the exception of its ability to measure general flow from one POI to another. Instead, LBS data could help substitute or supplement other data to determine relatively busier areas to inform staffing, maintenance, and safety procedures [81], develop approaches for reducing ecological harm and enhancing sustainable tourism [82], and understand visitor activities associated with various natural and cultural amenities [7]. Researchers and practitioners seeking to use it for those applications should, however, be aware that LBS data may over-represent some high-volume POIs and under-represent visitation to less-visited amenities and services.
From a methodological perspective, we contributed to recreation and tourism management by detailing how to pre-process GPS and third-party LBS data with insight into its effects on the respective samples [67]. While existing intra-attraction research has used network analysis to describe behavior at nature-based tourism attractions [58,59,69], we detailed the effects of this approach on sample size and waypoints recorded per person for an emerging visitor monitoring tool. Each step of pre-processing yielded insight regarding LBS data’s relative sparsity compared to onsite data. For example, sample size reduction from the cleaned sample to the full network sample resulted in a loss of three-quarters of the LBS sample but only 14% of the onsite sample. The application of network analysis to examine potentially paired big samples with field benchmarking data permitted us to relax the assumption of independence of events and avoid over-reliance on conservative paired-samples tests when assessing flow.

4.4. Limitations and Future Research

National parks and nature-based tourist destinations serve as unique testing grounds for understanding human spatial–temporal behavior because they are often structured, self-contained, and offer diverse opportunities to measure a variety of activities and modes of mobility [59]. However, our findings come from only one popular U.S. National Park, limiting external validity, especially considering evidence that LBS data are more accurate at higher-visitation national parks [22]. National parks in wilderness settings differ from attractions in more urban areas, due to a combination of their poor cellphone coverage and high proportion of international visitors who may not have cellphone connections. Future research could seek to replicate our methodology in similarly structured urban tourism and recreation areas or even more remote recreation destinations to ensure external validity. Despite general methodological similarities across LBS data providers, future research should also examine if our results are consistent among different data providers, because some platforms may have greater or poorer accuracy depending on their underlying data and cleaning procedures [65]. Additionally, researching how visitors use devices through systematic observations in nature-based tourism settings or qualitative interviews among visitors could help evaluate LBS platform bias by examining individuals’ desire to escape technology in nature-based tourism destinations [83], determining if popular destinations are over-represented due to social media-driven behavior [84], and describing visitors’ use of spatial-tracking applications for navigation, health, and fitness [85].
Our onsite sampling and measurement approach used the best available methods to ensure representativeness, including spatially and temporally stratified sampling, large sample sizes relative to the population size, and the application of field-tested GPS devices [8,55]. However, despite these strengths, we could not sample all types of visitors (e.g., tour-based visitors, visitors arriving in large vehicles), we did not account for group size in vehicles because individuals may behave differently from their group, and we only sampled during the peak season. LBS data, on the other hand, do not discern visitors based on their arrival method and could record waypoints for multiple individuals within vehicles, as well as the same individual multiple times if they carry several GNSS-enabled devices. Therefore, consistent with related research [22,29], we cannot absolutely discern which sample more accurately measures real-life behavior, only the ways that LBS data differ from the “gold-standard” approach. Future research may consider methods for discerning the effect of sampling differences and develop approaches for using LBS data to measure populations that traditional methods cannot adequately capture. Opportunities for further research using LBS data’s strengths to study visitor impacts include researching visitors who arrive during off-peak hours, stay for periods beyond the battery life of a GPS device, and visit during off-peak seasons when onsite data collection would be less feasible.
LBS data offer substantial potential to conduct research across sites at the intra-attraction level at scales that traditional methods could never practically permit. Future research can build on our findings on reliable metrics associated with LBS data by studying visitor flow and volume at POIs across sites. Existing research has already used LBS data to study network motifs at the intra-destination level [28]. Using a similar design, and building on our findings and methods for developing visitor networks, research at the intra-attraction level could identify common network motifs across sites to understand more generalized principles of visitor movement based on tourist flow. Alternatively, researchers could take a more theoretically driven approach and empirically examine tourists’ typical movement patterns in aggregate (e.g., hub-and-spoke, circle tour, single-destination) [86,87,88]. Conducting these analyses across sites could permit deeper theoretical and practical understanding of how structural, environmental, and place-based factors influence tourists’ typical movements [89]. A more generalized understanding of those patterns could help improve destination management and design, informing the development of sustainable transportation infrastructure and enhanced social- and ecological-impact monitoring.

5. Conclusions

We observed that LBS data were a reasonable tool for describing visitor use across metrics at aggregate levels and at less-refined spatial and temporal scales. These findings reflect the inverse relationship of data reliability and unit of analysis, and offer generalized insight for how to work with LBS data in other contexts and other current and emerging big-data sources. LBS data’s relatively large sample size, capability of measuring behavior among hard-to-reach populations, and universal coverage present significant potential for attractions to gain access to valuable information for sustainable destination management [16,20,74]. Those opportunities are especially important for improving management strategies using carrying capacity or overtourism frameworks at destinations that historically could not afford to commission field research or places that are structurally too porous or dispersed for traditional approaches for measuring visitation and flow. Our results also suggest that LBS data could help address other human spatial–temporal behavior questions with implications for social, economic, and environmental sustainability. Specifically, cities and recreation areas can improve their understanding of demand and potential overuse by examining the number of days people spend in pre-defined areas, the relative number of people who use amenities, and spillover from amenities to other areas. We anticipate that aggregations of movement between POIs could support a more nuanced understanding of the trends in tourists’ travel motifs and patterns (and perhaps other human mobility trends and patterns), especially if LBS platforms across these settings are assessed using consistent methods and techniques. However, our unique contribution includes observing that as the spatial, temporal, and sampling scale for tracking tourists becomes more refined, LBS data appear to become less reliable. Thus, we encourage practitioners and researchers to exercise caution when using LBS data and integrate them with multiple datasets and sources of observation when possible, consistent with prior research [2,15,29].

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/su17020391/s1, File S1: Understanding Mobile Location Data (Near).

Author Contributions

Conceptualization, C.P., B.P., S.A.M. and W.L.R.; methodology, C.P., B.P., S.A.M., W.L.R. and B.D.T.; software, B.P. and W.L.R.; validation, C.P. and B.P.; formal analysis, C.P.; investigation, C.P.; resources, C.P., B.P., W.L.R., B.D.T., G.C. and P.N.; data curation, C.P. and W.L.R.; writing—original draft preparation, C.P.; writing—review and editing, C.P., B.P., S.A.M., W.L.R., B.D.T., G.C. and P.N.; visualization, C.P.; supervision, B.P.; project administration, B.P., W.L.R., B.D.T. and P.N.; funding acquisition, C.P., B.P., W.L.R., B.D.T., G.C. and P.N. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Grand Canyon National Park through a Cooperative Ecosystem Studies Unit Agreement [# P22AM01414] and Penn State University’s Center for Social Data Analytics Accelerator Award Grant [no grant number].

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of the authors’ institutions (STUDY00014774) on 20 March 2020.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The datasets presented in this article are not readily available, due to ethical concerns over publicly sharing human subject spatial–behavioral data and restrictions for sharing third-party data.

Acknowledgments

We would like to acknowledge the support of staff at Grand Canyon National Park, Suzie and Allen Martin, and Penn State University’s Center for Social Data Analytics.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A

Table A1 provides the exact values visualized in Figure 3 in the main body of the text for tourists’ behaviors within POIs for both datasets, including visitation volume, proportional visitation, and stay time.
Table A1. Descriptive characteristics of spatial–temporal behavior at points of interest by dataset.

| Label | Onsite Visitation Volume (% of Total Visits) | LBS Visitation Volume (% of Total Visits) | Onsite Proportional Visitation (% of Sample) | LBS Proportional Visitation (% of Sample) | Onsite Stay Time (Mean Minutes) | LBS Stay Time (Mean Minutes) |
| Main Village Visitor Center | 20.4 | 32.7 | 74.3 | 58.7 | 25.1 | 5.4 |
| Hermits Rest Bus Transfer | 7.5 | 3.0 | 25.9 | 5.7 | 6.5 | 1.3 |
| Mather Point | 6.6 | 9.3 | 37.5 | 19.6 | 14.4 | 2.4 |
| Market Plaza | 5.2 | 6.9 | 28.8 | 13.0 | 17.5 | 6.2 |
| Yavapai Point/Geology Museum | 4.4 | 9.7 | 27.8 | 20.0 | 26.4 | 6.6 |
| Grandview Point | 4.7 | 5.6 | 30.5 | 12.0 | 17.1 | 5.5 |
| Kolb Studio/Lookout Terrace | 3.5 | 2.8 | 16.2 | 5.6 | 8.2 | 3.2 |
| Desert View Market and Giftshop | 5.2 | 1.1 | 20.1 | 2.2 | 7.0 | 2.2 |
| Pima Point | 3.4 | 1.0 | 13.2 | 2.0 | 6.4 | 2.7 |
| Duck on a Rock Overlook | 3.1 | 2.9 | 20.5 | 6.2 | 7.4 | 2.2 |
| Moran Point | 3.6 | 3.6 | 23.4 | 7.9 | 12.7 | 4.1 |
| Lipan Point | 3.0 | 3.3 | 19.3 | 7.0 | 14.3 | 4.7 |
| Hopi Point | 2.1 | 1.2 | 12.7 | 2.6 | 16.3 | 3.9 |
| Mohave Point | 2.2 | 0.7 | 12.9 | 1.5 | 14.0 | 3.1 |
| Bright Angel Trailhead | 2.1 | 1.1 | 10.7 | 2.4 | 8.2 | 1.3 |
| Navajo Point | 3.2 | 3.0 | 21.4 | 6.5 | 8.8 | 3.6 |
| Powell Point | 2.1 | 0.8 | 12.2 | 1.8 | 11.0 | 3.8 |
| South Kaibab Overlook/Trailhead | 2.1 | 1.5 | 9.8 | 3.0 | 11.3 | 2.8 |
| Verkamp Visitor Center | 2.0 | 1.8 | 12.2 | 3.7 | 11.2 | 3.2 |
| Hermits Rest | 1.8 | 0.6 | 11.9 | 1.2 | 19.6 | 2.7 |
| Desert View Watchtower | 3.3 | 2.2 | 20.6 | 4.7 | 18.8 | 3.6 |
| Backcountry Office | 1.6 | 0.7 | 10.1 | 1.5 | 0.6 | 1 |
| Trailview Overlook | 1.4 | 1.0 | 8.6 | 2.1 | 7.2 | 2.4 |
| Yaki Point | 1.1 | 1.0 | 7.4 | 2.1 | 15.8 | 3.7 |
| Maricopa Point | 1.1 | 0.6 | 7.1 | 1.2 | 9.3 | 1.8 |
| Mile and a Half Resthouse | 0.7 | 0.6 | 3.7 | 1.3 | 18.2 | 3.4 |
| Ooh Aah Point | 0.8 | 0.2 | 4.5 | 0.4 | 6.6 | 1.6 |
| Shoshone Trailhead | 0.6 | 0.1 | 3.3 | 0.2 | 18.4 | 4.1 |
| Tusayan Ruins and Museum | 0.5 | 0.2 | 3.3 | 0.3 | 19.1 | 6 |
| Three Mile Resthouse | 0.3 | 0.2 | 1.3 | 0.5 | 21.1 | 4.6 |
| McKee Amphitheater | 0.2 | 0.2 | 1.1 | 0.4 | 7.5 | 5 |
| Cedar Ridge | 0.1 | 0.2 | 1.0 | 0.5 | 22.2 | 3.9 |
| Havasupai Gardens | 0.1 | 0.1 | 0.5 | 0.2 | 48.4 | 4.2 |
| Shoshone Overlook | 0.1 | 0.3 | 0.4 | 0.6 | 32.4 | 5.6 |

References

  1. WTO. Making Tourism More Sustainable—A Guide for Policy Makers (English Version); World Tourism Organization: Geneva, Switzerland, 2005; ISBN 92-844-0821-0.
  2. Hardy, A. Tracking Tourists: Movement and Mobility; Goodfellow Publishers Ltd.: Oxford, UK, 2020; ISBN 1-911635-40-9. [Google Scholar]
  3. Shoval, N.; Isaacson, M.; Chhetri, P. GPS, Smartphones, and the Future of Tourism Research. In The Wiley Blackwell Companion to Tourism; Wiley: Hoboken, NJ, USA, 2024; pp. 145–159. [Google Scholar]
  4. Peterson, B.; Brownlee, M.; Hallo, J.C.; Beeco, J.A.; White, D.; Sharp, R.; Cribbs, T. Spatiotemporal Variables to Understand Visitor Travel Patterns: A Management-Centric Approach. J. Outdoor Recreat. Tour. 2020, 31, 100316. [Google Scholar] [CrossRef]
  5. Capocchi, A.; Vallone, C.; Pierotti, M.; Amaduzzi, A. Overtourism: A Literature Review to Assess Implications and Future Perspectives. Sustainability 2019, 11, 3303. [Google Scholar] [CrossRef]
  6. Abkarian, H.; Tahlyan, D.; Mahmassani, H.; Smilowitz, K. Characterizing Visitor Engagement Behavior at Large-Scale Events: Activity Sequence Clustering and Ranking Using GPS Tracking Data. Tour. Manag. 2022, 88, 104421. [Google Scholar] [CrossRef]
  7. Kidd, A.M.; D’Antonio, A.; Monz, C.; Heaslip, K.; Taff, D.; Newman, P. A GPS-Based Classification of Visitors’ Vehicular Behavior in a Protected Area Setting. J. Park Recreat. Adm. 2018, 36, 69–89. [Google Scholar] [CrossRef]
  8. D’Antonio, A.; Taff, B.D.; Baker, J.; Rice, W.L.; Newton, J.N.; Miller, Z.D.; Newman, P.; Monz, C.; Freeman, S. Integrating Aspatial and Spatial Data to Improve Visitor Management: Pairing Visitor Questionnaires with Multiple Spatial Methodologies in Grand Teton National Park, WY, USA. J. Park Recreat. Adm. 2021, 39, 67–84. [Google Scholar] [CrossRef]
  9. Liu, Y.; Lu, A.; Yang, W.; Tian, Z. Investigating Factors Influencing Park Visit Flows and Duration Using Mobile Phone Signaling Data. Urban For. Urban Green. 2023, 85, 127952. [Google Scholar] [CrossRef]
  10. Cole, D.N. Visitor and Recreation Impact Monitoring: Is It Lost in the Gulf between Science and Management? In George Wright Forum; JSTOR: Ann Arbor, MI, USA, 2006; Volume 23, pp. 11–16. [Google Scholar]
  11. Hallo, J.C.; Manning, R.E.; Valliere, W.; Budruk, M. A Case Study Comparison of Visitor Self-Reported and GPS Recorded Travel Routes. In Proceedings of the Northeastern Recreation Research Symposium, Bolton Landing, NY, USA, 10–12 April 2005; USDA Forest Service, Northeastern Research Station: Newtown Square, PA, USA, 2005; pp. 172–177. [Google Scholar]
  12. Hallo, J.C.; Beeco, J.A.; Goetcheus, C.; McGee, J.; McGehee, N.G.; Norman, W.C. GPS as a Method for Assessing Spatial and Temporal Use Distributions of Nature-Based Tourists. J. Travel Res. 2012, 51, 591–606. [Google Scholar] [CrossRef]
  13. Monz, C.; Creany, N.; Nesbitt, J.; Mitrovich, M. Mobile Device Data Analysis to Determine the Demographics of Park Visitors. J. Park Recreat. Adm. 2020, 39, 2–9. [Google Scholar] [CrossRef]
  14. Creany, N.E.; Monz, C.A.; D’Antonio, A.; Sisneros-Kidd, A.; Wilkins, E.J.; Nesbitt, J.; Mitrovich, M. Estimating Trail Use and Visitor Spatial Distribution Using Mobile Device Data: An Example from the Nature Reserve of Orange County, California USA. Environ. Chall. 2021, 4, 100171. [Google Scholar] [CrossRef]
  15. Merrill, N.H.; Atkinson, S.F.; Mulvaney, K.K.; Mazzotta, M.J.; Bousquin, J. Using Data Derived from Cellular Phone Locations to Estimate Visitation to Natural Areas: An Application to Water Recreation in New England, USA. PLoS ONE 2020, 15, e0231863. [Google Scholar] [CrossRef]
  16. Whitney, P.; Rice, W.L.; Sage, J.; Thomsen, J.M.; Wheeler, I.; Freimund, W.; Bigart, E. Developments in Big Data for Park Management: A Review of Mobile Phone Location Data for Visitor Use Management. Landsc. Res. 2023, 48, 758–776. [Google Scholar] [CrossRef]
  17. Ahas, R.; Mark, Ü. Location Based Services—New Challenges for Planning and Public Administration? Futures 2005, 37, 547–561. [Google Scholar] [CrossRef]
  18. Kovalcsik, T.; Elekes, Á.; Boros, L.; Könnyid, L.; Kovács, Z. Capturing Unobserved Tourists: Challenges and Opportunities of Processing Mobile Positioning Data in Tourism Research. Sustainability 2022, 14, 13826. [Google Scholar] [CrossRef]
  19. Riungu, G.K.; Peterson, B.A.; Beeco, J.A.; Brown, G. Understanding Visitors’ Spatial Behavior: A Review of Spatial Applications in Parks. In Tourism Spaces; Routledge: London, UK, 2021; pp. 65–89. ISBN 978-1-003-15245-3. [Google Scholar]
  20. Wang, Z.; He, S.Y.; Leung, Y. Applying Mobile Phone Data to Travel Behaviour Research: A Literature Review. Travel Behav. Soc. 2018, 11, 141–155. [Google Scholar] [CrossRef]
  21. Schmücker, D.; Reif, J. Measuring Tourism with Big Data? Empirical Insights from Comparing Passive GPS Data and Passive Mobile Data. Ann. Tour. Res. Empir. Insights 2022, 3, 100061. [Google Scholar] [CrossRef]
  22. Tsai, W.-L.; Merrill, N.H.; Neale, A.C.; Grupper, M. Using Cellular Device Location Data to Estimate Visitation to Public Lands: Comparing Device Location Data to US National Park Service’s Visitor Use Statistics. PLoS ONE 2023, 18, e0289922. [Google Scholar] [CrossRef]
  23. Monz, C.; Mitrovich, M.; D’Antonio, A.; Sisneros-Kidd, A. Using Mobile Device Data to Estimate Visitation in Parks and Protected Areas: An Example from the Nature Reserve of Orange County, California. J. Park Recreat. Adm. 2019, 37, 92. [Google Scholar] [CrossRef]
  24. Alba, C.; Pan, B.; Yin, J.; Rice, W.L.; Mitra, P.; Lin, M.S.; Liang, Y. COVID-19’s Impact on Visitation Behavior to US National Parks from Communities of Color: Evidence from Mobile Phone Data. Sci. Rep. 2022, 12, 13398. [Google Scholar] [CrossRef]
  25. Rice, W.L.; Pan, B. Understanding Changes in Park Visitation during the COVID-19 Pandemic: A Spatial Application of Big Data. Wellbeing Space Soc. 2021, 2, 100037. [Google Scholar] [CrossRef]
  26. Kupfer, J.A.; Li, Z.; Ning, H.; Huang, X. Using Mobile Device Data to Track the Effects of the COVID-19 Pandemic on Spatiotemporal Patterns of National Park Visitation. Sustainability 2021, 13, 9366. [Google Scholar] [CrossRef]
  27. Park, S.; Zhong, R.R. Pattern Recognition of Travel Mobility in a City Destination: Application of Network Motif Analytics. J. Travel Res. 2022, 61, 1201–1216. [Google Scholar] [CrossRef]
  28. Ding, D.; Zheng, Y.; Zhang, Y.; Liu, Y. Understanding Attractions’ Connection Patterns Based on Intra-Destination Tourist Mobility: A Network Motif Approach. Humanit. Soc. Sci. Commun. 2024, 11, 636. [Google Scholar] [CrossRef]
  29. Liang, Y.; Yin, J.; Pan, B.; Lin, M.; Miller, L.; Taff, B.D.; Chi, G. Assessing the Validity of Mobile Device Data for Estimating Visitor Demographics and Visitation Patterns in Yellowstone National Park. J. Environ. Manag. 2021, 317, 115410. [Google Scholar] [CrossRef]
  30. Watson, A.E.; Cordell, H.K.; Manning, R.; Martin, S. The Evolution of Wilderness Social Science and Future Research to Protect Experiences, Resources, and Societal Benefits. J. For. 2016, 114, 329–338. [Google Scholar] [CrossRef]
  31. Sumner, E.L. Special Report on a Wildlife Study of the High Sierra in Sequoia and Yosemite National Parks and Adjacent Territory; US Department of the Interior, National Park Service: Washington, DC, USA, 1936. [Google Scholar]
  32. Stankey, G.H.; McCool, S.F. Carrying Capacity in Recreational Settings: Evolution, Appraisal, and Application. Leis. Sci. 1984, 6, 453–473. [Google Scholar] [CrossRef]
  33. Padrón-Ávila, H.; Hernández-Martín, R. How Can Researchers Track Tourists? A Bibliometric Content Analysis of Tourist Tracking Techniques. Eur. J. Tour. Res. 2020, 26, 2601. [Google Scholar] [CrossRef]
  34. Li, J.; Xu, L.; Tang, L.; Wang, S.; Li, L. Big Data in Tourism Research: A Literature Review. Tour. Manag. 2018, 68, 301–323. [Google Scholar] [CrossRef]
  35. Xu, F.; Nash, N.; Whitmarsh, L. Big Data or Small Data? A Methodological Review of Sustainable Tourism. J. Sustain. Tour. 2020, 28, 144–163. [Google Scholar] [CrossRef]
  36. Hollenhorst, S.; Whisman, S.; Ewert, A. Monitoring Visitor Use in Backcountry Wilderness. In USDA Forest Service General Technical Report, PSW—GTR-134; US Department of Agriculture, Pacific Southwest Research Station, Forest Service: Albany, CA, USA, 1992. [Google Scholar]
  37. Koens, K.; Postma, A.; Papp, B. Is Overtourism Overused? Understanding the Impact of Tourism in a City Context. Sustainability 2018, 10, 4384. [Google Scholar] [CrossRef]
  38. Kuentzel, W.F.; Heberlein, T.A.; McCollum, D.W. Why Do Normative Encounter Standards Change? The Social Evolution of Recreational Crowding. J. Leis. Res. 2022, 53, 357–376. [Google Scholar] [CrossRef]
  39. Manning, R.E. Studies in Outdoor Recreation: Search and Research for Satisfaction; Oregon State University Press: Corvallis, OR, USA, 2011; ISBN 0-87071-621-2. [Google Scholar]
  40. Saveriades, A. Establishing the Social Tourism Carrying Capacity for the Tourist Resorts of the East Coast of the Republic of Cyprus. Tour. Manag. 2000, 21, 147–156. [Google Scholar] [CrossRef]
  41. McCool, S.F.; Cole, D.N. Proceedings--Limits of Acceptable Change and Related Planning Processes: Progress and Future Directions: From a Workshop Held at the University of Montana’s Lubrecht Experimental Forest; Rocky Mountain Research Station: Fort Collins, CO, USA, 1997; Volume 371. [Google Scholar]
  42. Stankey, G.H. The Recreation Opportunity Spectrum and the Limits of Acceptable Change Planning Systems: A Review of Experiences and Lessons. In Ecosystem Management; CRC Press: Boca Raton, FL, USA, 2020; pp. 173–188. [Google Scholar]
  43. Stankey, G.H.; Cole, D.N.; Lucas, R.C.; Petersen, M.E.; Frissell, S.S. The Limits of Acceptable Change (LAC) System for Wilderness Planning; Gen. Tech. Rep. INT-GTR-176; US Department of Agriculture, Forest Service, Intermountain Forest and Range Experiment Station: Ogden, UT, USA, 1985; Volume 176, 37p. [Google Scholar]
  44. Clark, R.N.; Stankey, G.H. The Recreation Opportunity Spectrum: A Framework for Planning, Management, and Research; Department of Agriculture, Forest Service, Pacific Northwest Research Station: Portland, OR, USA, 1979; Volume 98. [Google Scholar]
  45. Interagency Visitor Use Management Council. Visitor Use Management Framework: A Guide to Providing Sustainable Outdoor Recreation; US Department of the Interior, National Park Service: Washington, DC, USA, 2016.
  46. Tokarchuk, O.; Gabriele, R.; Maurer, O. Estimating Tourism Social Carrying Capacity. Ann. Tour. Res. 2021, 86, 102971. [Google Scholar] [CrossRef]
  47. Blanc, B.; Figliozzi, M.; Clifton, K. How Representative of Bicycling Populations Are Smartphone Application Surveys of Travel Behavior? Transp. Res. Rec. 2016, 2587, 78–89. [Google Scholar] [CrossRef]
  48. Li, D.; Deng, L.; Cai, Z. Statistical Analysis of Tourist Flow in Tourist Spots Based on Big Data Platform and DA-HKRVM Algorithms. Pers. Ubiquitous Comput. 2020, 24, 87–101. [Google Scholar] [CrossRef]
  49. Andrienko, N.; Andrienko, G. Visual Analytics of Movement: An Overview of Methods, Tools and Procedures. Inf. Vis. 2013, 12, 3–24. [Google Scholar] [CrossRef]
  50. Cole, D.N.; Daniel, T.C. The Science of Visitor Management in Parks and Protected Areas: From Verbal Reports to Simulation Models. J. Nat. Conserv. 2003, 11, 269–277. [Google Scholar] [CrossRef]
  51. Shen, X.; Pan, B.; Hu, T.; Chen, K.; Qiao, L.; Zhu, J. Beyond Self-Selection: The Multilayered Online Review Biases at the Intersection of Users, Platforms and Culture. J. Hosp. Tour. Insights 2021, 4, 77–97. [Google Scholar] [CrossRef]
  52. Oksanen, J.; Bergman, C.; Sainio, J.; Westerholm, J. Methods for Deriving and Calibrating Privacy-Preserving Heat Maps from Mobile Sports Tracking Application Data. J. Transp. Geogr. 2015, 48, 135–144. [Google Scholar] [CrossRef]
  53. Shoval, N.; Ahas, R. The Use of Tracking Technologies in Tourism Research: The First Decade. Tour. Geogr. 2016, 18, 587–606. [Google Scholar] [CrossRef]
  54. Snider, A.G.; Hill, J.; Simmons, S.; Herstine, J. A General Framework for Gathering Data to Quantify Annual Visitation. J. Park Recreat. Adm. 2018, 36, 1. [Google Scholar] [CrossRef]
  55. Vaske, J. Survey Research and Analysis; Sagamore Publishing, LLC: Urbana, IL, USA, 2019; ISBN 978-1-57167-943-7. [Google Scholar]
  56. Hinterberger, B.; Arnberger, A.; Muhar, A. GIS-Supported Network Analysis of Visitor Flows in Recreational Areas. In Proceedings of the Monitoring and Management of Visitor Flows in Recreational and Protected Areas, Vienna, Austria, 30 January 2002; pp. 28–32. [Google Scholar]
  57. D’Antonio, A.; Monz, C.; Lawson, S.; Newman, P.; Pettebone, D.; Courtemanch, A. GPS-Based Measurements of Backcountry Visitors in Parks and Protected Areas: Examples of Methods and Applications from Three Case Studies. J. Park Recreat. Adm. 2010, 28, 42–60. [Google Scholar]
  58. Taczanowska, K.; Bielański, M.; González, L.-M.; Garcia-Massó, X.; Toca-Herrera, J. Analyzing Spatial Behavior of Backcountry Skiers in Mountain Protected Areas Combining GPS Tracking and Graph Theory. Symmetry 2017, 9, 317. [Google Scholar] [CrossRef]
  59. Taczanowska, K.; González, L.-M.; Garcia-Massó, X.; Muhar, A.; Brandenburg, C.; Toca-Herrera, J.-L. Evaluating the Structure and Use of Hiking Trails in Recreational Areas Using a Mixed GPS Tracking and Graph Theory Approach. Appl. Geogr. 2014, 55, 184–192. [Google Scholar] [CrossRef]
  60. Liu, B.; Huang, S.; Fu, H. An Application of Network Analysis on Tourist Attractions: The Case of Xinjiang, China. Tour. Manag. 2017, 58, 132–141. [Google Scholar] [CrossRef]
  61. Scott, N.; Baggio, R.; Cooper, C. Network Analysis and Tourism: From Theory to Practice; Aspects of Tourism; Channel View Publications: Bristol, UK, 2008; ISBN 978-1-84541-089-6. [Google Scholar]
  62. Azira. Azira’s About. Available online: https://azira.com/about/ (accessed on 28 December 2024).
  63. National Park Service. Annual Park Ranking Report for Recreation Visits in 2023; National Park Service: Washington, DC, USA, 2024.
  64. Rice, W.L.; Taff, B.D.; Newman, P.; Zipp, K.Y.; Pan, B. Identifying Recreational Ecosystem Service Areas of Concern in Grand Canyon National Park: A Participatory Mapping Approach. Appl. Geogr. 2020, 125, 102353. [Google Scholar] [CrossRef]
  65. Hsu, C.-W.; Liu, C.; Nguyen, K.M.; Chien, Y.-H.; Mostafavi, A. Do Human Mobility Network Analyses Produced from Different Location-Based Data Sources Yield Similar Results across Scales? Comput. Environ. Urban Syst. 2024, 107, 102052. [Google Scholar] [CrossRef]
  66. Stamberger, L.; van Riper, C.J.; Keller, R.; Brownlee, M.; Rose, J. A GPS Tracking Study of Recreationists in an Alaskan Protected Area. Appl. Geogr. 2018, 93, 92–102. [Google Scholar] [CrossRef]
  67. Baggio, R. Network Science and Tourism–the State of the Art. Tour. Rev. 2017, 72, 120–131. [Google Scholar] [CrossRef]
  68. Newton, J.N.; Newman, P.; Taff, B.D.; D’Antonio, A.; Monz, C. Spatial Temporal Dynamics of Vehicle Stopping Behavior along a Rustic Park Road. Appl. Geogr. 2017, 88, 94–103. [Google Scholar] [CrossRef]
  69. Bielański, M.; Taczanowska, K.; Muhar, A.; Adamski, P.; González, L.-M.; Witkowski, Z. Application of GPS Tracking for Monitoring Spatially Unconstrained Outdoor Recreational Activities in Protected Areas—A Case Study of Ski Touring in the Tatra National Park, Poland. Appl. Geogr. 2018, 96, 51–65. [Google Scholar] [CrossRef]
  70. McKinney, W. Data Structures for Statistical Computing in Python. In Proceedings of the 9th Python in Science Conference, Austin, TX, USA, 28 June–3 July 2010; pp. 56–61. [Google Scholar]
  71. Hagberg, A.; Swart, P.J.; Schult, D.A. Exploring Network Structure, Dynamics, and Function Using NetworkX; Report Number: LA-UR-08-05495; LA-UR-08-5495; Research Organization, Los Alamos National Laboratory (LANL): Los Alamos, NM, USA, 1 January 2008. [Google Scholar]
  72. Borgatti, S.P.; Everett, M.G.; Freeman, L.C. Ucinet for Windows: Software for Social Network Analysis; Analytic Technologies: Harvard, MA, USA, 2002; Volume 6, pp. 12–15. [Google Scholar]
  73. Wasserman, S.; Faust, K. Social Network Analysis: Methods and Applications; Cambridge University Press: Cambridge, UK, 1994; ISBN 978-0-511-81547-8. [Google Scholar]
  74. National Park Service. Regional Transportation System Usage Analysis for National Parks in Colorado; Department of the Interior, National Park Service: Washington, DC, USA, 2022.
  75. Rice, W.L.; Mueller, J.; Graefe, A.; Taff, D. Detailing an Approach for Cost-Effective Visitor-Use Monitoring Using Crowdsourced Activity Data. J. Park Recreat. Adm. 2019, 37, 144–154. [Google Scholar] [CrossRef]
  76. Korpilo, S.; Virtanen, T.; Lehvävirta, S. Smartphone GPS Tracking—Inexpensive and Efficient Data Collection on Recreational Movement. Landsc. Urban Plan. 2017, 157, 608–617. [Google Scholar] [CrossRef]
  77. Kim, J.; Thapa, B.; Jang, S. GPS-Based Mobile Exercise Application: An Alternative Tool to Assess Spatio-Temporal Patterns of Visitors’ Activities in a National Park. J. Park Recreat. Adm. 2019, 37, 124–134. [Google Scholar] [CrossRef]
  78. Liang, Y.; Kirilenko, A.P.; Stepchenkova, S.O.; Ma, S. Using Social Media to Discover Unwanted Behaviours Displayed by Visitors to Nature Parks: Comparisons of Nationally and Privately Owned Parks in the Greater Kruger National Park, South Africa. Tour. Recreat. Res. 2020, 45, 271–276. [Google Scholar] [CrossRef]
  79. Le-Klähn, D.-T.; Hall, C.M.; Gerike, R. Promoting Public Transport Use in Tourism. In Understanding and Governing Sustainable Tourism Mobility: Psychological and Behavioural Approaches; Routledge: London, UK, 2014; pp. 208–222. [Google Scholar]
  80. Zhang, J. Big Data and Tourism Geographies–an Emerging Paradigm for Future Study? In Tourism Spaces; Routledge: London, UK, 2021; pp. 131–136. [Google Scholar]
  81. Rice, W.L.; Park, S.Y.; Pan, B.; Newman, P. Forecasting Campground Demand in US National Parks. Ann. Tour. Res. 2019, 75, 424–438. [Google Scholar] [CrossRef]
  82. Edwards, D.; Griffin, T. Understanding Tourists’ Spatial Behaviour: GPS Tracking as an Aid to Sustainable Destination Management. J. Sustain. Tour. 2013, 21, 580–595. [Google Scholar] [CrossRef]
  83. Berto, R. The Role of Nature in Coping with Psycho-Physiological Stress: A Literature Review on Restorativeness. Behav. Sci. 2014, 4, 394–409. [Google Scholar] [CrossRef]
  84. Fisher, D.M.; Wood, S.A.; Roh, Y.-H.; Kim, C.-K. The Geographic Spread and Preferences of Tourists Revealed by User-Generated Information on Jeju Island, South Korea. Land 2019, 8, 73. [Google Scholar] [CrossRef]
  85. Russell, H.C.; Potts, C.; Nelson, E. “If It’s Not on Strava It Didn’t Happen”: Perceived Psychosocial Implications of Strava Use in Collegiate Club Runners. Recreat. Sports J. 2023, 47, 15–25. [Google Scholar] [CrossRef]
  86. Smallwood, C.B.; Beckley, L.E.; Moore, S.A. An Analysis of Visitor Movement Patterns Using Travel Networks in a Large Marine Park, North-Western Australia. Tour. Manag. 2012, 33, 517–528. [Google Scholar] [CrossRef]
  87. Lew, A.; McKercher, B. Modeling Tourist Movements: A Local Destination Analysis. Ann. Tour. Res. 2006, 33, 403–423. [Google Scholar] [CrossRef]
  88. Mckercher, B.; Lau, G. Movement Patterns of Tourists within a Destination. Tour. Geogr. 2008, 10, 355–374. [Google Scholar] [CrossRef]
  89. Birenboim, A.; Anton-Clavé, S.; Russo, A.P.; Shoval, N. Structure Versus Agency: Which Best Explains Tourist Activity in a Destination? In Regional Science Perspectives on Tourism and Hospitality; Springer Nature: Cham, Switzerland, 2021; pp. 355–373. ISBN 1430-9602. [Google Scholar]
Figure 1. Grand Canyon National Park points of interest with identification by park section.
Figure 2. Consecutive steps of pre-processing and analysis.
Figure 3. Visitor spatial–temporal behavior at points of interest as 3D weights.
Figure 4. Above-average flows across networks by attribute and sample.
Table 1. Grand Canyon National Park point of interest categorization and section description.

| Section | Label * | Categorization Type | Section Description |
| Desert View | Shoshone Trailhead | Road/Trail | The eastern-most section of the park, characterized by miles of driving with several scenic overlooks, backcountry trails, an entrance, and visitor amenities area with a giftshop, campground, visitor center and market. |
| | Duck on a Rock Overlook | Overlook/Road | |
| | Shoshone Overlook | Overlook/Trail | |
| | Grandview Point | Overlook/Road/Trail | |
| | Moran Point | Overlook/Road | |
| | Tusayan Ruins and Museum | Road/Service | |
| | Lipan Point | Overlook/Road/Trail | |
| | Navajo Point | Overlook/Road | |
| | Desert View Watchtower | Overlook/Road | |
| | Desert View Market and Giftshop | Overlook/Road | |
| Hermits Rest | Hermits Rest | Bus/Overlook/Service/Trail | The western-most section of the park with scenic overlooks and a backcountry trail that is accessible in the summer only by bus, bike, or walking trail, without a special permit. |
| | Pima Point | Bus/Overlook/Trail | |
| | Mohave Point | Bus/Overlook/Trail | |
| | Hopi Point | Bus/Overlook/Trail | |
| | Powell Point | Bus/Overlook/Trail | |
| | Maricopa Point | Bus/Overlook/Trail | |
| | Trailview Overlook | Bus/Overlook/Trail | |
| Main Village | Hermits Rest Bus Transfer | Bus/Overlook/Trail | The only section with a complex roadway that is characterized by many concentrated visitor amenities including lodging, camping, two bus routes, two visitor centers, a market, and access to the two most popular backcountry trails. |
| | Backcountry Office | Bus/Service/Road | |
| | Kolb Studio/Lookout Terrace | Overlook/Trail | |
| | Verkamp Visitor Center | Service/Trail | |
| | McKee Amphitheater | Service/Trail | |
| | Market Plaza | Bus/Road/Service | |
| | Yavapai Point/Geology Museum | Overlook/Road/Service/Trail | |
| | Main Village Visitor Center | Bus/Overlook/Road/Service/Trail | |
| | Mather Point | Overlook/Trail | |
| | South Kaibab Overlook/Trailhead | Bus/Overlook/Trail | |
| | Yaki Point | Bus/Overlook | |
| South Kaibab and Bright Angel Trails | Bright Angel Trailhead | Trail | The two most popular backcountry trails on the South Rim that exit from the Main Village and meet at the Colorado River to form a loop. |
| | Mile and a Half Resthouse | Overlook/Trail | |
| | Three Mile Resthouse | Overlook/Trail | |
| | Havasupai Gardens | Trail | |
| | Cedar Ridge | Overlook/Trail | |
| | Ooh Aah Point | Overlook/Trail | |

* Note: POI labels are ordered within park section from west to east.
Table 2. Visitor descriptive characteristics by sample, year, and length of stay.

| Year | Stay Type | Onsite n (%) | Onsite POIs Visited (M) | Onsite Stay Time (M) | LBS n (%) | LBS POIs Visited (M) | LBS Stay Time (M) |
| 2022 | Day-use | 246 (78.1) | 5.9 | 3:51:03 | 11,693 (77.2) | 1.9 | 1:01:27 |
| 2022 | Overnight | 69 (21.9) | 8.2 | 31:37:11 | 3,461 (22.8) | 3.6 | 94:39:02 |
| 2023 | Day-use | 180 (80.6) | 6.7 | 3:38:08 | 19,933 (80.9) | 1.9 | 1:41:11 |
| 2023 | Overnight | 44 (19.6) | 10.8 | 26:46:22 | 4,712 (19.1) | 3 | 37:54:51 |
| Combined | Day-use | 307 (78.5) | 6 | 3:48:30 | 31,626 (79.5) | 1.9 | 1:26:30 |
| Combined | Overnight | 84 (21.5) | 8.6 | 30:45:46 | 8,173 (20.5) | 3.2 | 61:56:25 |
Table 3. Summary statistics and QAP correlation for all networks.

| Network Type (Nodes) | Density, Onsite | Density, LBS | Transitivity, Onsite | Transitivity, LBS | Diameter, Onsite | Diameter, LBS | QAP Correlation, r | QAP Correlation, STD |
| Full (34) | 0.22 | 0.64 | 0.58 | 0.74 | 6 | 2 | 0.82 *** | 0.04 |
| Trail (25) | 0.22 | 0.66 | 0.55 | 0.74 | 6 | 2 | 0.85 *** | 0.06 |
| Road (13) | 0.71 | 0.87 | 0.77 | 0.90 | 3 | 2 | 0.80 *** | 0.11 |
| Bus (13) | 0.40 | 0.88 | 0.62 | 0.90 | 3 | 2 | 0.72 *** | 0.10 |
| Overlook (27) | 0.25 | 0.73 | 0.63 | 0.79 | 5 | 2 | 0.82 *** | 0.05 |
| Services-excluded (23) | 0.25 | 0.69 | 0.62 | 0.80 | 6 | 3 | 0.82 *** | 0.06 |

Note: *** denotes p < 0.001.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
